WorldWideScience

Sample records for methods including time

  1. Force measuring valve assemblies, systems including such valve assemblies and related methods

    Science.gov (United States)

    DeWall, Kevin George [Pocatello, ID]; Garcia, Humberto Enrique [Idaho Falls, ID]; McKellar, Michael George [Idaho Falls, ID]

    2012-04-17

    Methods of evaluating a fluid condition may include stroking a valve member and measuring a force acting on the valve member during the stroke. Methods of evaluating a fluid condition may include measuring a force acting on a valve member in the presence of fluid flow over a period of time and evaluating at least one of the frequency of changes in the measured force over the period of time and the magnitude of the changes in the measured force over the period of time to identify the presence of an anomaly in a fluid flow and, optionally, its estimated location. Methods of evaluating a valve condition may include directing a fluid flow through a valve while stroking a valve member, measuring a force acting on the valve member during the stroke, and comparing the measured force to a reference force. Valve assemblies and related systems are also disclosed.
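
    The frequency-and-magnitude evaluation of force changes described above can be sketched as follows. This is an illustrative sketch only; the sampling rate, threshold, and signal are invented assumptions, not values from the patent:

    ```python
    import numpy as np

    def force_anomaly_stats(force, dt, threshold):
        """Summarize changes in a sampled valve-stem force signal.

        Returns the rate of threshold-exceeding changes (per second) and
        the mean magnitude of those changes. `threshold` separates normal
        fluctuation from changes worth counting.
        """
        diffs = np.diff(force)                 # sample-to-sample force changes
        big = np.abs(diffs) > threshold        # changes large enough to count
        duration = dt * (len(force) - 1)
        rate = big.sum() / duration            # frequency of changes (per second)
        magnitude = np.abs(diffs[big]).mean() if big.any() else 0.0
        return rate, magnitude

    # A steady 50 N signal with two sudden jumps, sampled at 100 Hz:
    t = np.linspace(0, 1, 101)
    force = np.full_like(t, 50.0)
    force[30:] += 5.0
    force[70:] -= 5.0
    rate, mag = force_anomaly_stats(force, dt=0.01, threshold=1.0)
    ```

    A high rate or large magnitude of changes would then be compared against baseline behavior to flag an anomaly.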

  2. Fast-timing methods for semiconductor detectors

    International Nuclear Information System (INIS)

    Spieler, H.

    1982-03-01

    The basic parameters are discussed which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin surface-barrier detectors. The discussion focuses on properties of the detector, low-noise amplifiers, trigger circuits, and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers, and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise, and residual jitter.

  3. The time domain triple probe method

    International Nuclear Information System (INIS)

    Meier, M.A.; Hallock, G.A.; Tsui, H.Y.W.; Bengtson, R.D.

    1994-01-01

    A new Langmuir probe technique based on the triple probe method is being developed to provide simultaneous measurement of plasma temperature, potential, and density with the temporal and spatial resolution required to accurately characterize plasma turbulence. When the conventional triple probe method is used in an inhomogeneous plasma, local differences in the plasma measured at each probe introduce significant error in the estimation of turbulence parameters. The Time Domain Triple Probe method (TDTP) uses high-speed switching of Langmuir probe potential, rather than spatially separated probes, to gather the triple probe information, thus avoiding these errors. Analysis indicates that plasma response times and recent electronics technology meet the requirements to implement the TDTP method. Data reduction techniques for TDTP data will include linear and higher-order correlation analysis to estimate fluctuation-induced particle and thermal transport, as well as energy relationships between temperature, density, and potential fluctuations.

  4. Winter Holts Oscillatory Method: A New Method of Resampling in Time Series.

    Directory of Open Access Journals (Sweden)

    Muhammad Imtiaz Subhani

    2016-12-01

    The core proposition behind this research is to create innovative methods of bootstrapping that can be applied to time series data. In order to find new methods of bootstrapping, various existing methods were reviewed. Data on automotive sales, market shares, and net exports of the top 10 countries, which include China, Europe, the United States of America (USA), Japan, Germany, South Korea, India, Mexico, Brazil, Spain, and Canada, from 2002 to 2014 were collected from various sources, including UN Comtrade, Index Mundi, and the World Bank. The findings of this paper confirmed that bootstrapping for resampling through winter forecasting by the Oscillation and Average methods gives more robust results than winter forecasting by general methods.

  5. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
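
    The cycle-estimation step can be illustrated with a toy scoring function. The circular-concentration score, candidate range, and observation times below are assumptions for illustration and are not the patented scoring function:

    ```python
    import numpy as np

    def estimate_cycle(transition_times, candidates):
        """Pick the cycle length whose phase best concentrates the observed
        transition times (probe vehicles starting to move after a green).

        Each candidate cycle is scored by the mean resultant length of the
        transition times wrapped onto that cycle: a perfect cycle puts all
        transitions at the same phase and scores 1.0.
        """
        t = np.asarray(transition_times, dtype=float)
        best_c, best_score = None, -1.0
        for c in candidates:
            phases = 2 * np.pi * (t % c) / c
            score = np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(t)
            if score > best_score:
                best_c, best_score = c, score
        return best_c, best_score

    # Vehicles observed starting ~2 s after each green of a 90 s cycle:
    times = [2.0, 92.1, 181.9, 272.0, 362.2]
    cycle, score = estimate_cycle(times, candidates=range(60, 121))
    ```

    Restricting candidates to a plausible range (here 60 to 120 s) avoids the harmonic ambiguity in which divisors of the true cycle score equally well.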

  6. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien

    2015-12-30

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  7. Verifying Real-Time Systems using Explicit-time Description Methods

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions, and tools based on them, have been presented. Explicit-time description methods, on the other hand, aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates the use of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experimental results show that our new method, with better modularity, is comparable to Lamport's method with respect to time and memory efficiency.

  8. Explicit time marching methods for the time-dependent Euler computations

    International Nuclear Information System (INIS)

    Tai, C.H.; Chiang, D.C.; Su, Y.P.

    1997-01-01

    Four explicit-type time marching methods, including one proposed by the authors, are examined. The TVD conditions of this method are analyzed with the linear conservation law as the model equation. The performance of these methods when applied to the Euler equations is numerically tested. Seven examples are tested; the main concern is the performance of the methods when discontinuities of different strengths are encountered. As the discontinuity becomes stronger, spurious oscillations show up for the three existing methods, while the method proposed by the authors consistently gives satisfactory results. The effect of the limiter is also investigated. To put these methods on the same basis for comparison, the same spatial discretization is used: Roe's solver is used to evaluate the fluxes at the cell interface, and spatially second-order accuracy is achieved by MUSCL reconstruction. 19 refs., 8 figs.

  9. Damped time advance methods for particles and EM fields

    International Nuclear Information System (INIS)

    Friedman, A.; Ambrosiano, J.J.; Boyd, J.K.; Brandon, S.T.; Nielsen, D.E. Jr.; Rambo, P.W.

    1990-01-01

    Recent developments in the application of damped time advance methods to plasma simulations include the synthesis of implicit and explicit "adjustably damped" second-order accurate methods for particle motion and electromagnetic field propagation. This paper discusses these methods.

  10. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study.

    Science.gov (United States)

    Dechartres, Agnes; Trinquart, Ludovic; Atal, Ignacio; Moher, David; Dickersin, Kay; Boutron, Isabelle; Perrodeau, Elodie; Altman, Douglas G; Ravaud, Philippe

    2017-06-08

    Objective  To examine how poor reporting and inadequate methods for key methodological features in randomised controlled trials (RCTs) have changed over the past three decades. Design  Mapping of trials included in Cochrane reviews. Data sources  Data from RCTs included in all Cochrane reviews published between March 2011 and September 2014 reporting an evaluation of the Cochrane risk of bias items: sequence generation, allocation concealment, blinding, and incomplete outcome data. Data extraction  For each RCT, we extracted consensus on risk of bias made by the review authors and identified the primary reference to extract publication year and journal. We matched journal names with Journal Citation Reports to get 2014 impact factors. Main outcome measures  We considered the proportions of trials rated by review authors at unclear and high risk of bias as surrogates for poor reporting and inadequate methods, respectively. Results  We analysed 20 920 RCTs (from 2001 reviews) published in 3136 journals. The proportion of trials with unclear risk of bias was 48.7% for sequence generation and 57.5% for allocation concealment; the proportion of those with high risk of bias was 4.0% and 7.2%, respectively. For blinding and incomplete outcome data, 30.6% and 24.7% of trials were at unclear risk and 33.1% and 17.1% were at high risk, respectively. Higher journal impact factor was associated with a lower proportion of trials at unclear or high risk of bias. The proportion of trials at unclear risk of bias decreased over time, especially for sequence generation, which fell from 69.1% in 1986-1990 to 31.2% in 2011-2014, and for allocation concealment (70.1% to 44.6%). After excluding trials at unclear risk of bias, use of inadequate methods also decreased over time: from 14.8% to 4.6% for sequence generation and from 32.7% to 11.6% for allocation concealment. Conclusions  Poor reporting and inadequate methods have decreased over time, especially for sequence generation.

  11. A hybrid method combining the Time-Domain Method of Moments, the Time-Domain Uniform Theory of Diffraction and the FDTD

    Directory of Open Access Journals (Sweden)

    A. Becker

    2007-06-01

    In this paper a hybrid method combining the Time-Domain Method of Moments (TD-MoM), the Time-Domain Uniform Theory of Diffraction (TD-UTD) and the Finite-Difference Time-Domain method (FDTD) is presented. When applying this new hybrid method, thin-wire antennas are modelled with the TD-MoM, inhomogeneous bodies are modelled with the FDTD and large perfectly conducting plates are modelled with the TD-UTD. All inhomogeneous bodies are enclosed in a so-called FDTD-volume, and the thin-wire antennas can either be embedded in this volume or lie outside it. The latter avoids simulating the white space between antennas and inhomogeneous bodies. If the antennas are positioned inside the FDTD-volume, their discretization does not need to agree with the FDTD grid. By using the TD-UTD, large perfectly conducting plates can be treated efficiently in the solution procedure. This hybrid method thus allows time-domain simulations of problems that include very different classes of objects, applying the most appropriate numerical technique to each object.

  12. Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics

    Science.gov (United States)

    Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.

    2018-02-01

    Time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.

  13. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard untimed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were proposed; both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state space therefore grows relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high-performance computing environment show that this new method significantly reduces the state space and improves both time and memory efficiency.
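
    The leaping-Tick idea can be sketched with a toy scheduler. This is only a sketch of the concept, not the DIVINE implementation; the timers and horizon are invented for illustration:

    ```python
    def run(timers, horizon):
        """Explicit-time execution where Tick leaps to the next deadline.

        `timers` maps event names to their absolute firing times. A
        unit-step Tick would visit every integer instant up to `horizon`;
        the leaping Tick visits only instants where some timer fires.
        Returns the visited instants and fired events, in order.
        """
        now, visited, fired = 0, [], []
        while True:
            pending = {e: t for e, t in timers.items() if now < t <= horizon}
            if not pending:
                break
            now = min(pending.values())   # leap many time units in one tick
            visited.append(now)
            fired += sorted(e for e, t in pending.items() if t == now)
        return visited, fired

    visited, fired = run({"a": 10, "b": 250, "c": 250}, horizon=1000)
    # Two tick steps instead of 250 unit ticks.
    ```

    The state space thus scales with the number of distinct deadlines rather than with the magnitude of the time parameters.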

  14. Inferring time derivatives including cell growth rates using Gaussian processes

    Science.gov (United States)

    Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta

    2016-12-01

    Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
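
    A minimal version of derivative inference with a Gaussian process is sketched below using a squared-exponential kernel: differentiating the cross-covariance with respect to the test input gives the posterior mean of the derivative. The kernel choice, hyperparameters, and test function are assumptions for illustration, not the authors' code:

    ```python
    import numpy as np

    def gp_derivative_mean(x_train, y_train, x_star, length=1.0, noise=1e-4):
        """Posterior mean of f and f' at x_star under a squared-exponential
        GP prior, via the derivative of the cross-covariance vector."""
        X = np.asarray(x_train)[:, None]
        d = X - X.T
        K = np.exp(-0.5 * d**2 / length**2) + noise * np.eye(len(X))
        alpha = np.linalg.solve(K, np.asarray(y_train))
        ds = x_star - X.ravel()
        k = np.exp(-0.5 * ds**2 / length**2)
        dk = -ds / length**2 * k        # d/dx* of the RBF cross-covariance
        return k @ alpha, dk @ alpha

    # Noise-free samples of f(x) = x^2; at x = 1 the derivative should be ~2.
    xs = np.linspace(0, 2, 21)
    f, df = gp_derivative_mean(xs, xs**2, x_star=1.0, length=0.5)
    ```

    The same linear-algebra trick extends to second derivatives and to error bars via the posterior covariance, which is what makes the approach attractive for growth-rate estimation.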

  15. Fast timing methods for semiconductor detectors. Revision

    International Nuclear Information System (INIS)

    Spieler, H.

    1984-10-01

    This tutorial paper discusses the basic parameters which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin surface-barrier detectors. The discussion focuses on properties of the detector, low-noise amplifiers, trigger circuits, and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers, and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise, and residual jitter.

  16. Methods of producing adsorption media including a metal oxide

    Science.gov (United States)

    Mann, Nicholas R; Tranter, Troy J

    2014-03-04

    Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.

  17. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types, it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software.

  18. Microfluidic devices and methods including porous polymer monoliths

    Science.gov (United States)

    Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V

    2014-04-22

    Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.

  19. A method for including external feed in depletion calculations with CRAM and implementation into ORIGEN

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Wieselquist, W.A.

    2015-01-01

    Highlights: • A method for handling external feed in depletion calculations with CRAM. • The source term can have polynomial or exponentially decaying time dependence. • CRAM with source term and adjoint capability implemented in ORIGEN in SCALE. • The new solver is faster and more accurate than the original solver of ORIGEN. - Abstract: A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Furthermore, in most cases, the new solver is up to several times faster because it does not require the same substepping as the original solver.
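
    The standard trick for absorbing a source term into a matrix-exponential depletion solve is to append the feed as an extra column of an augmented matrix. The sketch below shows this for a constant feed; a dense eigendecomposition stands in for CRAM, and the one-nuclide data are invented for illustration:

    ```python
    import numpy as np

    def expm(M):
        """Matrix exponential via eigendecomposition; a stand-in for CRAM,
        adequate for the small diagonalizable matrices used here."""
        w, V = np.linalg.eig(M)
        return ((V * np.exp(w)) @ np.linalg.inv(V)).real

    def deplete_with_feed(A, n0, feed, t):
        """Solve dn/dt = A n + feed (constant feed rate) over time t.

        The augmented matrix [[A, feed], [0, 0]] acting on [n0, 1] yields
        the exact solution with the source term folded into one exponential.
        """
        n = len(n0)
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = A
        M[:n, n] = feed            # constant-in-time source column
        state = np.append(n0, 1.0)
        return (expm(M * t) @ state)[:n]

    # One nuclide decaying (lambda = 0.1 /s) with a constant feed of 2 /s:
    lam, feed, t = 0.1, 2.0, 30.0
    n = deplete_with_feed(np.array([[-lam]]), np.array([5.0]), np.array([feed]), t)
    # Analytic check: n(t) = n0*exp(-lam*t) + (feed/lam)*(1 - exp(-lam*t))
    ```

    Polynomial feed of degree p extends the same construction with p+1 extra rows and columns forming a nilpotent block that integrates the polynomial.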

  20. Evaluation and comparison of multiple test methods, including real-time PCR, for Legionella detection in clinical specimens.

    Directory of Open Access Journals (Sweden)

    Adriana Peci

    2016-08-01

    Legionella is a gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture and PCR test methods and to determine if sputum is an alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories (PHOL) from January 1, 2010 to April 30, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8% and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7% and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine, when BAL submission is not practical, from patients being tested for Legionella.

  1. Evaluation and Comparison of Multiple Test Methods, Including Real-time PCR, for Legionella Detection in Clinical Specimens

    Science.gov (United States)

    Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.

    2016-01-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine if sputum is an acceptable alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Out of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yield similar results regardless of testing method (Fisher exact p-values = 1.0, for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical from patients being tested for Legionella. PMID:27630979

  2. Methods for determining time of death.

    Science.gov (United States)

    Madea, Burkhard

    2016-12-01

    Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be established empirically, by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as ¹H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
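
    The two-exponential cooling model behind the nomogram can be sketched as below. The constants are the commonly cited Henssge standard-conditions values (ambient up to roughly 23 °C) and should be treated as assumptions here; the sketch is no substitute for the nomogram and its corrective factors:

    ```python
    import math

    def henssge_time_since_death(t_rectal, t_ambient, mass_kg, t0=37.2):
        """Invert the two-exponential (Henssge) cooling model
            Q(t) = 1.25*exp(B*t) - 0.25*exp(5*B*t),
            B    = -1.2815 * mass_kg**-0.625 + 0.0284
        for the time since death t (hours) by bisection, where
        Q = (T_rectal - T_ambient) / (T0 - T_ambient)."""
        q_target = (t_rectal - t_ambient) / (t0 - t_ambient)
        b = -1.2815 * mass_kg ** -0.625 + 0.0284
        q = lambda t: 1.25 * math.exp(b * t) - 0.25 * math.exp(5 * b * t)
        lo, hi = 0.0, 72.0
        for _ in range(100):              # Q decreases with t, so bisect
            mid = 0.5 * (lo + hi)
            if q(mid) > q_target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Example: 70 kg body, rectal 30 C, ambient 18 C (illustrative numbers):
    hours = henssge_time_since_death(30.0, 18.0, 70.0)
    ```

    In practice the body mass is multiplied by empirical corrective factors for clothing, wind, and immersion before B is computed.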

  3. A Blade Tip Timing Method Based on a Microwave Sensor

    Directory of Open Access Journals (Sweden)

    Jilong Zhang

    2017-05-01

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.
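
    Whatever the sensor, the core of blade tip timing is converting arrival times at a casing sensor into tip deflections. The sketch below shows that conversion for a single sensor and rigid-body reference arrival times; the rotor geometry and timings are invented for illustration:

    ```python
    import math

    def tip_deflections(arrival_times, rev_period, radius, n_blades):
        """Convert blade arrival times at a casing sensor into tip
        deflections: a vibrating blade arrives early or late relative to
        its rigid-rotation arrival time, and that time offset multiplied
        by the tip speed gives the circumferential deflection."""
        tip_speed = 2 * math.pi * radius / rev_period
        out = []
        for k, t in enumerate(arrival_times):
            expected = k * rev_period / n_blades   # rigid-body arrival of blade k
            out.append((t - expected) * tip_speed)
        return out

    # 4 blades, 100 Hz rotor (10 ms period), 0.2 m radius;
    # blade 2 arrives 1 microsecond late:
    d = tip_deflections([0.0, 0.0025, 0.005001, 0.0075], 0.01, 0.2, 4)
    ```

    With multiple sensors and many revolutions, these per-pass deflections are fitted to recover vibration amplitude and frequency.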

  4. Time delayed Ensemble Nudging Method

    Science.gov (United States)

    An, Zhe; Abarbanel, Henry

    Optimal nudging methods based on time-delayed embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend their application and promote practical implementation, a new nudging assimilation method based on the time-delayed embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of numerical prediction, making it a potential alternative in the field of data assimilation for large geophysical models.

  5. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification; this has the purpose of illustrating that the design can be used in practice.

  6. Membrane for distillation including nanostructures, methods of making membranes, and methods of desalination and separation

    KAUST Repository

    Lai, Zhiping; Huang, Kuo-Wei; Chen, Wei

    2016-01-01

    In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.

  7. Membrane for distillation including nanostructures, methods of making membranes, and methods of desalination and separation

    KAUST Repository

    Lai, Zhiping

    2016-01-21

    In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.

  8. Generalized Time-Limited Balanced Reduction Method

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza; Shaker, Fatemeh

    2013-01-01

    In this paper, a new method for model reduction of bilinear systems is presented. The proposed technique is from the family of gramian-based model reduction methods. The method uses time-interval generalized gramians in the reduction procedure rather than the ordinary generalized gramians, and in this way improves the accuracy of the approximation within the time interval in which the method is applied. The time-interval generalized gramians are the solutions to the generalized time-interval Lyapunov equations. The conditions for these equations to be solvable are derived and an algorithm
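
    For the ordinary linear special case, a time-interval gramian and the Lyapunov-type equation it satisfies can be illustrated numerically. The system matrices below are invented for illustration, and simple quadrature stands in for a dedicated Lyapunov solver:

    ```python
    import numpy as np

    def expm(M):
        """Eigendecomposition matrix exponential (fine for diagonalizable M)."""
        w, V = np.linalg.eig(M)
        return ((V * np.exp(w)) @ np.linalg.inv(V)).real

    def interval_gramian(A, B, t1, t2, steps=2000):
        """Time-interval controllability gramian
            W(t1, t2) = integral_{t1}^{t2} exp(A t) B B^T exp(A^T t) dt
        by the trapezoidal rule. Restricting the integral to [t1, t2] is
        what lets the reduction target accuracy inside that window."""
        ts = np.linspace(t1, t2, steps + 1)
        dt = (t2 - t1) / steps
        W = np.zeros((A.shape[0], A.shape[0]))
        prev = None
        for t in ts:
            E = expm(A * t) @ B
            cur = E @ E.T
            if prev is not None:
                W += 0.5 * dt * (prev + cur)   # trapezoidal slice
            prev = cur
        return W

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable: eigenvalues -1, -2
    B = np.array([[0.0], [1.0]])
    W = interval_gramian(A, B, 0.0, 1.0)
    # W satisfies A W + W A^T = exp(A t) B B^T exp(A^T t) evaluated at t2
    # minus at t1, i.e. a time-interval Lyapunov equation.
    ```

    For bilinear systems the generalized gramians add series terms for the bilinear coupling, but the interval restriction works the same way.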

  9. Accessible methods for the dynamic time-scale decomposition of biochemical systems.

    Science.gov (United States)

    Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula

    2009-11-01

    The growing complexity of biochemical models calls for means to rationally dissect networks into meaningful and rather independent subnetworks. Such a dissection should ensure an understanding of the system without resort to heuristics. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. To achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. The methods are included in COPASI, which is free for academic use and available at www.copasi.org. Contact: irina.surovtsova@bioquant.uni-heidelberg.de. Supplementary data are available at Bioinformatics online.
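
    The classical separation step can be illustrated as follows. This is not COPASI's implementation; the example Jacobian and cutoff are invented, and the sketch only shows the eigenvalue-based classification of modes:

    ```python
    import numpy as np

    def split_time_scales(jacobian, cutoff):
        """Classical time-scale separation: eigenvalues of the Jacobian set
        the characteristic time scales (tau = 1/|Re(lambda)|). Modes with
        tau below `cutoff` are 'fast' and can be relaxed to quasi-steady
        state; the rest are 'slow' and keep their dynamics."""
        w, V = np.linalg.eig(jacobian)
        tau = 1.0 / np.abs(w.real)
        fast = tau < cutoff
        return w[fast], w[~fast]

    # A stiff 2-variable system: one mode relaxes in ~0.001 time units,
    # the other in ~10.
    J = np.array([[-1000.0, 0.0], [0.0, -0.1]])
    fast, slow = split_time_scales(J, cutoff=1.0)
    ```

    Projecting the state onto the slow eigenvectors then yields the reduced subnetwork that the full model can be dissected into.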

  10. Beyond the sticker price: including and excluding time in comparing food prices.

    Science.gov (United States)

    Yang, Yanliang; Davis, George C; Muth, Mary K

    2015-07-01

    An ongoing debate in the literature is how to measure the price of food. Most analyses have not considered the value of time in measuring the price of food. Whether or not the value of time is included in measuring the price of a food may have important implications for classifying foods based on their relative cost. The purpose of this article is to compare prices that exclude time (time-exclusive price) with prices that include time (time-inclusive price) for 2 types of home foods: home foods using basic ingredients (home recipes) vs. home foods using more processed ingredients (processed recipes). The time-inclusive and time-exclusive prices are compared to determine whether the time-exclusive prices in isolation may mislead in drawing inferences regarding the relative prices of foods. We calculated the time-exclusive price and time-inclusive price of 100 home recipes and 143 processed recipes and then categorized them into 5 standard food groups: grains, proteins, vegetables, fruit, and dairy. We then examined the relation between the time-exclusive prices and the time-inclusive prices and dietary recommendations. For any food group, the processed food time-inclusive price was always less than the home recipe time-inclusive price, even if the processed food's time-exclusive price was more expensive. Time-inclusive prices for home recipes were especially higher for the more time-intensive food groups, such as grains, vegetables, and fruit, which are generally underconsumed relative to the guidelines. Focusing only on the sticker price of a food and ignoring the time cost may lead to different conclusions about relative prices and policy recommendations than when the time cost is included. © 2015 American Society for Nutrition.
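The time-inclusive price described above reduces to a simple calculation once preparation time is valued at a wage rate; a minimal sketch (the function name and single-wage assumption are illustrative, not from the article):

```python
def time_inclusive_price(ingredient_cost, prep_minutes, wage_per_hour):
    """Sticker (time-exclusive) price plus the opportunity cost of
    preparation time, valued at an hourly wage."""
    return ingredient_cost + prep_minutes / 60.0 * wage_per_hour

# A $3.00 home recipe needing 30 minutes at a $20/h wage costs $13.00
# time-inclusive, more than a $10.00 processed recipe needing no time.
```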

  11. The time-dependent density matrix renormalisation group method

    Science.gov (United States)

    Ma, Haibo; Luo, Zhen; Yao, Yao

    2018-04-01

    Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method in the recent 15 years is reviewed in this paper. By integrating the time evolution with the sweep procedures in density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of the quantum dynamics for one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate the nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in pyrazine molecule.

  12. Time-domain Green's Function Method for three-dimensional nonlinear subsonic flows

    Science.gov (United States)

    Tseng, K.; Morino, L.

    1978-01-01

    The Green's Function Method for linearized 3D unsteady potential flow (embedded in the computer code SOUSSA P) is extended to include the time-domain analysis as well as the nonlinear term retained in the transonic small disturbance equation. The differential-delay equations in time, as obtained by applying the Green's Function Method (in a generalized sense) and the finite-element technique to the transonic equation, are solved directly in the time domain. Comparisons are made with both linearized frequency-domain calculations and existing nonlinear results.

  13. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

Full Text Available This article focuses on the problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and size ordering formed among integers to reflect the relationships among multi-scale time segments: order, inclusion/containment, intersection, etc., and thereby achieves a unified integer coding process for multi-scale time. On this foundation, the research also studies computing methods for evaluating the time relationships of MTSIC codes, to support efficient calculation and query based on time segments, and preliminarily discusses the application methods and prospects of MTSIC. Tests indicated that the implementation of MTSIC is convenient and reliable, that conversion between it and the traditional method is straightforward, and that it achieves very high efficiency in query and calculation.
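The abstract does not disclose the published coding scheme, but a heap-style integer labeling of dyadic time segments is one plausible sketch of how a tree of multi-scale segments maps to integers whose ordering encodes containment (`encode` and `contains` are assumptions for illustration, not the MTSIC scheme itself):

```python
def encode(level, index):
    """Map a dyadic time segment (level, index) to one integer,
    heap-style: the whole span is 1, its two halves 2 and 3, the
    four quarters 4..7, and so on."""
    return (1 << level) + index

def contains(code_a, code_b):
    """True if segment a contains segment b: ancestor test in the
    implicit binary tree (halving a code yields its parent)."""
    while code_b > code_a:
        code_b >>= 1
    return code_a == code_b
```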

  14. State Space Methods for Timed Petri Nets

    DEFF Research Database (Denmark)

    Christensen, Søren; Jensen, Kurt; Mailund, Thomas

    2001-01-01

We present two recently developed state space methods for timed Petri nets. The two methods reconcile state space methods and time concepts based on the introduction of a global clock and the association of time stamps with tokens. The first method is based on an equivalence relation on states which makes it possible to condense the usually infinite state space of a timed Petri net into a finite condensed state space without losing analysis power. The second method supports on-the-fly verification of certain safety properties of timed systems. We discuss the application of the two methods in a number...

  15. A high-order time-accurate interrogation method for time-resolved PIV

    International Nuclear Information System (INIS)

    Lynch, Kyle; Scarano, Fulvio

    2013-01-01

In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and with no increase of the truncation error. The increase of velocity dynamic range scales more than linearly with the number of frames included for the analysis, which supersedes by one order of magnitude the pair correlation by window deformation. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion throughout the trajectory. (paper)

  16. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    OpenAIRE

    Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...

  17. Time series analysis methods and applications for flight data

    CERN Document Server

    Zhang, Jianye

    2017-01-01

    This book focuses on different facets of flight data analysis, including the basic goals, methods, and implementation techniques. As mass flight data possesses the typical characteristics of time series, the time series analysis methods and their application for flight data have been illustrated from several aspects, such as data filtering, data extension, feature optimization, similarity search, trend monitoring, fault diagnosis, and parameter prediction, etc. An intelligent information-processing platform for flight data has been established to assist in aircraft condition monitoring, training evaluation and scientific maintenance. The book will serve as a reference resource for people working in aviation management and maintenance, as well as researchers and engineers in the fields of data analysis and data mining.

  18. Interval-Censored Time-to-Event Data Methods and Applications

    CERN Document Server

    Chen, Ding-Geng

    2012-01-01

Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, biopharmaceutical industries, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data.

  19. Time-dependent shock acceleration of energetic electrons including synchrotron losses

    International Nuclear Information System (INIS)

    Fritz, K.; Webb, G.M.

    1990-01-01

    The present investigation of the time-dependent particle acceleration problem in strong shocks, including synchrotron radiation losses, solves the transport equation analytically by means of Laplace transforms. The particle distribution thus obtained is then transformed numerically into real space for the cases of continuous and impulsive injections of particles at the shock. While in the continuous case the steady-state spectrum undergoes evolution, impulsive injection is noted to yield such unpredicted features as a pile-up of high-energy particles or a steep power-law with time-dependent spectral index. The time-dependent calculations reveal varying spectral shapes and more complex features for the higher energies which may be useful in the interpretation of outburst spectra. 33 refs

  20. Unsteady panel method for complex configurations including wake modeling

    CSIR Research Space (South Africa)

    Van Zyl, Lourens H

    2008-01-01

    Full Text Available implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...

  1. Another method of dead time correction

    International Nuclear Information System (INIS)

    Sabol, J.

    1988-01-01

A new method for correcting counting losses caused by the non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
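For orientation, the textbook correction for non-extended (non-paralyzable) dead time that such methods refine is n = m / (1 - m·τ), where m is the measured rate and τ the dead time; a minimal sketch (not the interval-distribution method of the paper):

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Correct an observed count rate m for non-extended dead time tau:
    true rate n = m / (1 - m * tau)."""
    if measured_rate * dead_time >= 1.0:
        raise ValueError("measured rate inconsistent with dead time")
    return measured_rate / (1.0 - measured_rate * dead_time)

# 90,000 counts/s observed with a 1 microsecond dead time corresponds
# to roughly 98,901 counts/s of true events.
rate = true_rate_nonparalyzable(9.0e4, 1.0e-6)
```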

  2. Comparative study of on-line response time measurement methods for platinum resistance thermometer

    International Nuclear Information System (INIS)

    Zwingelstein, G.; Gopal, R.

    1979-01-01

This study deals with the in situ determination of the response time of platinum resistance sensors. In the first part of this work, two methods furnishing the reference response time of the sensors are studied. In the second part, two methods for obtaining the response time without dismounting the sensor are studied. A comparative study of the performance of these methods is included for fluid velocities varying from 0 to 10 m/sec, in both laboratory and plant conditions

  3. Crystal timing offset calibration method for time of flight PET scanners

    Science.gov (United States)

    Ye, Jinghan; Song, Xiyun

    2016-03-01

    In time-of-flight (TOF) positron emission tomography (PET), precise calibration of the timing offset of each crystal of a PET scanner is essential. Conventionally this calibration requires a specially designed tool just for this purpose. In this study a method that uses a planar source to measure the crystal timing offsets (CTO) is developed. The method uses list mode acquisitions of a planar source placed at multiple orientations inside the PET scanner field-of-view (FOV). The placement of the planar source in each acquisition is automatically figured out from the measured data, so that a fixture for exactly placing the source is not required. The expected coincidence time difference for each detected list mode event can be found from the planar source placement and the detector geometry. A deviation of the measured time difference from the expected one is due to CTO of the two crystals. The least squared solution of the CTO is found iteratively using the list mode events. The effectiveness of the crystal timing calibration method is evidenced using phantom images generated by placing back each list mode event into the image space with the timing offset applied to each event. The zigzagged outlines of the phantoms in the images become smooth after the crystal timing calibration is applied. In conclusion, a crystal timing calibration method is developed. The method uses multiple list mode acquisitions of a planar source to find the least squared solution of crystal timing offsets.
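The least-squares step can be sketched as a linear system in which each coincidence between crystals i and j contributes one equation d = o_i - o_j, with d the measured-minus-expected time difference (the dense direct solve below is an illustrative assumption; the paper's iterative list-mode solver is not reproduced):

```python
import numpy as np

def solve_offsets(events, n_crystals):
    """events: iterable of (i, j, d) with d = measured - expected time
    difference for a coincidence between crystals i and j, modeled as
    d = o_i - o_j.  Returns offsets normalized to zero sum, since a
    common shift of all offsets is unobservable."""
    A = np.zeros((len(events) + 1, n_crystals))
    b = np.zeros(len(events) + 1)
    for row, (i, j, d) in enumerate(events):
        A[row, i], A[row, j], b[row] = 1.0, -1.0, d
    A[-1, :] = 1.0          # gauge constraint: sum of offsets = 0
    offsets, *_ = np.linalg.lstsq(A, b, rcond=None)
    return offsets
```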

  4. Catalyst support structure, catalyst including the structure, reactor including a catalyst, and methods of forming same

    Science.gov (United States)

    Van Norman, Staci A.; Aston, Victoria J.; Weimer, Alan W.

    2017-05-09

    Structures, catalysts, and reactors suitable for use for a variety of applications, including gas-to-liquid and coal-to-liquid processes and methods of forming the structures, catalysts, and reactors are disclosed. The catalyst material can be deposited onto an inner wall of a microtubular reactor and/or onto porous tungsten support structures using atomic layer deposition techniques.

  5. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

Full Text Available The study of dynamic equations on time scales is a new area of mathematics. Time scale theory builds a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scale theory and obtained the coefficients of the model. Here, there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. The difference corresponds to half the total vertical deviation between the observed values and the regression equations of the forward and backward jump operators. We also estimated coefficients for the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We believe that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.
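For reference, the classical ordinary least squares fit that the time-scale variants modify has a simple closed form (this is the standard estimator, not the delta/nabla estimators of the paper):

```python
def ols(x, y):
    """Ordinary least squares fit y ~ a + b*x via the closed-form
    estimators b = S_xy / S_xx, a = mean(y) - b * mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# A perfectly linear sample recovers intercept 1 and slope 2.
a, b = ols([1, 2, 3, 4], [3, 5, 7, 9])
```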

  6. Advances in Time Estimation Methods for Molecular Data.

    Science.gov (United States)

    Kumar, Sudhir; Hedges, S Blair

    2016-04-01

    Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In the first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In the second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. The third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation-process and enable the inclusion of uncertainty in clock calibrations. The fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or the specification of speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. 
Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
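The first-generation strict-clock approach described above is a one-line calculation: two lineages diverge at twice the per-lineage rate, so the split time is distance over twice the rate (illustrative only; later generations replace the constant rate with statistical rate models):

```python
def divergence_time(genetic_distance, rate):
    """Strict molecular clock: two lineages accumulate pairwise
    distance at 2 * rate substitutions/site/year, so T = d / (2r)."""
    return genetic_distance / (2.0 * rate)

# 0.2 substitutions/site at 1e-9 substitutions/site/year -> 100 My
t = divergence_time(0.2, 1.0e-9)
```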

  7. Time-efficient multidimensional threshold tracking method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten

    2015-01-01

Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of another...

  8. Multiple time scale methods in tokamak magnetohydrodynamics

    International Nuclear Information System (INIS)

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B 2 /2μ 0 , which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed

  9. Software Design Methods for Real-Time Systems

    Science.gov (United States)

    1989-12-01

This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and...

  10. Flexible barrier film, method of forming same, and organic electronic device including same

    Science.gov (United States)

    Blizzard, John; Tonge, James Steven; Weidner, William Kenneth

    2013-03-26

    A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1.times.10.sup.-2 g/m.sup.2/day at 22.degree. C. and 47% relative humidity. The flexible barrier film is formed from a composition, which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.

  11. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu

    2016-02-15

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
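The backward differentiation formula mentioned above can be sketched at second order: the derivative at the newest time level is a weighted difference of the three most recent values (the generic BDF2 stencil, not DeCART's implementation):

```python
def bdf2_derivative(s_n, s_nm1, s_nm2, dt):
    """Second-order backward differentiation formula for dS/dt at t_n
    on a uniform grid: (3*S^n - 4*S^(n-1) + S^(n-2)) / (2*dt).
    Exact for polynomials up to second degree."""
    return (3.0 * s_n - 4.0 * s_nm1 + s_nm2) / (2.0 * dt)

# For S(t) = t**2 sampled at t = 1.0, 0.9, 0.8 the exact derivative
# at t = 1.0 is 2.0, which BDF2 reproduces.
d = bdf2_derivative(1.0, 0.81, 0.64, 0.1)
```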

  12. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    International Nuclear Information System (INIS)

    Hoffman, Adam J.; Lee, John C.

    2016-01-01

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.

  13. Including mixed methods research in systematic reviews: Examples from qualitative syntheses in TB and malaria control

    Science.gov (United States)

    2012-01-01

    Background Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. Methods We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Results Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Conclusions Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research. PMID:22545681

  14. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
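A minimal sketch in the same spirit as the time-clustering algorithm: treat every pair of event times as a candidate flare window and rank windows by a Poisson log-likelihood ratio against the background expectation (the scoring and function below are illustrative assumptions, not the authors' multi-flare likelihood):

```python
import math

def best_flare_window(times, bg_rate):
    """Scan every pair of event times as a candidate flare window and
    return (score, (t_start, t_end)) for the largest Poisson excess
    over the background expectation mu = bg_rate * duration."""
    times = sorted(times)
    best = (0.0, None)
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            mu = bg_rate * (times[j] - times[i])
            if mu <= 0.0:
                continue
            n_obs = j - i + 1
            # Poisson log-likelihood ratio, zero unless an excess exists
            llr = n_obs * math.log(n_obs / mu) - (n_obs - mu) if n_obs > mu else 0.0
            if llr > best[0]:
                best = (llr, (times[i], times[j]))
    return best
```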

  15. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  16. Including mixed methods research in systematic reviews: examples from qualitative syntheses in TB and malaria control.

    Science.gov (United States)

    Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen

    2012-04-30

    Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.

  17. Numerical simulation of electromagnetic waves in Schwarzschild space-time by finite difference time domain method and Green function method

    Science.gov (United States)

    Jia, Shouqing; La, Dongsheng; Ma, Xuelian

    2018-04-01

    The finite difference time domain (FDTD) algorithm and Green function algorithm are implemented into the numerical simulation of electromagnetic waves in Schwarzschild space-time. FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and Green function code. The methods developed in this paper offer a tool to solve electromagnetic scattering problems.
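A minimal flat-space 1D FDTD (Yee) loop illustrates the leapfrog update that the curved space-time version generalizes by substituting an equivalent inhomogeneous medium for vacuum (normalized units; all names and the soft Gaussian source are illustrative):

```python
import math

def fdtd_1d(steps, n=200, courant=0.5):
    """Leapfrog E/H updates on a staggered 1D grid in normalized units
    (the curved-space case would scale these updates with per-cell
    equivalent permittivity and permeability)."""
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for k in range(n - 1):
            hy[k] += courant * (ez[k + 1] - ez[k])
        for k in range(1, n):
            ez[k] += courant * (hy[k] - hy[k - 1])
        ez[n // 2] += math.exp(-((t - 30) / 10.0) ** 2)  # soft source
    return ez
```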

  18. A simple method for one-loop renormalization in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@uis.no [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)

    2013-08-01

    We present a simple method for deriving the renormalization counterterms from the components of the energy-momentum tensor in curved space-time. This method allows control over the finite parts of the counterterms and provides explicit expressions for each term separately. As an example, the method is used for the self-interacting scalar field in a Friedmann-Robertson-Walker metric in the adiabatic approximation, where we calculate the renormalized equation of motion for the field and the renormalized components of the energy-momentum tensor to fourth adiabatic order while including interactions to one-loop order. Within this formalism the trace anomaly, including contributions from interactions, is shown to have a simple derivation. We compare our results to those obtained by two standard methods, finding agreement with the Schwinger-DeWitt expansion but disagreement with adiabatic subtractions for interacting theories.

  19. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    Science.gov (United States)

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
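The fusion step named in the abstract uses Dempster-Shafer evidence theory. A minimal sketch of Dempster's rule of combination on a two-element frame follows; this is not the paper's link/path fusion, which operates on travel time distributions, but it shows the combination and conflict-normalization mechanics.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets over a common frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, p), (b, q) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q              # mass assigned to the empty set
    # Normalize by the non-conflicting mass 1 - K.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A, B = frozenset({"A"}), frozenset({"B"})
theta = A | B                              # the full frame of discernment
m1 = {A: 0.6, B: 0.3, theta: 0.1}
m2 = {A: 0.5, B: 0.2, theta: 0.3}
fused = dempster_combine(m1, m2)
print(round(fused[A], 4))                  # → 0.726
```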

  20. Time-Dependent Close-Coupling Methods for Electron-Atom/Molecule Scattering

    International Nuclear Information System (INIS)

    Colgan, James

    2014-01-01

    The time-dependent close-coupling (TDCC) method centers on an accurate representation of the interaction between two outgoing electrons moving in the presence of a Coulomb field. It has been extensively applied to many problems of electrons, photons, and ions scattering from light atomic targets. Theoretical Description: The TDCC method centers on a solution of the time-dependent Schrödinger equation for two interacting electrons. The advantages of a time-dependent approach are two-fold; one treats the electron-electron interaction essentially in an exact manner (within numerical accuracy) and a time-dependent approach avoids the difficult boundary condition encountered when two free electrons move in a Coulomb field (the classic three-body Coulomb problem). The TDCC method has been applied to many fundamental atomic collision processes, including photon-, electron- and ion-impact ionization of light atoms. For application to electron-impact ionization of atomic systems, one decomposes the two-electron wavefunction in a partial wave expansion and represents the subsequent two-electron radial wavefunctions on a numerical lattice. The number of partial waves required to converge the ionization process depends on the energy of the incoming electron wavepacket and on the ionization threshold of the target atom or ion.

  1. FREEZING AND THAWING TIME PREDICTION METHODS OF FOODS II: NUMERICAL METHODS

    Directory of Open Access Journals (Sweden)

    Yahya TÜLEK

    1999-03-01

    Full Text Available Freezing is one of the excellent methods for the preservation of foods. If the freezing and thawing processes and frozen storage are carried out correctly, the original characteristics of the food can remain almost unchanged over an extended period of time. It is very important to determine the freezing and thawing times of foods, as they strongly influence both the quality of the food material and the productivity and economy of the process. For a mathematical model to be simple and effective in use, fewer process parameters and physical properties should enter the calculations. But it is difficult to have all of these in one prediction method. For this reason, various freezing and thawing time prediction methods have been proposed in the literature, and research studies are ongoing.
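As one example of the simple prediction methods this article surveys, Plank's classical equation for the freezing time of a slab can be evaluated directly. The property values below are illustrative (loosely typical of lean fish), not taken from the article.

```python
def plank_freezing_time(rho, L, T_f, T_a, a, h, k, P=0.5, R=0.125):
    """Plank's equation for the freezing time (s) of an infinite slab.
    rho: density (kg/m^3), L: latent heat (J/kg), T_f: initial freezing
    point (deg C), T_a: freezing-medium temperature (deg C), a: slab
    thickness (m), h: surface heat-transfer coefficient (W/m^2 K),
    k: frozen thermal conductivity (W/m K). P, R are slab shape factors."""
    return (rho * L / (T_f - T_a)) * (P * a / h + R * a ** 2 / k)

# Illustrative property values; not from the record.
t = plank_freezing_time(rho=1050, L=250e3, T_f=-1.0, T_a=-30.0,
                        a=0.04, h=20.0, k=1.6)
print(t / 3600.0)   # freezing time in hours
```

As expected from the equation, doubling the thickness increases the predicted freezing time more than linearly, since the internal-conduction term scales with a squared.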

  2. Composite materials and bodies including silicon carbide and titanium diboride and methods of forming same

    Science.gov (United States)

    Lillo, Thomas M.; Chu, Henry S.; Harrison, William M.; Bailey, Derek

    2013-01-22

    Methods of forming composite materials include coating particles of titanium dioxide with a substance including boron (e.g., boron carbide) and a substance including carbon, and reacting the titanium dioxide with the substance including boron and the substance including carbon to form titanium diboride. The methods may be used to form ceramic composite bodies and materials, such as, for example, a ceramic composite body or material including silicon carbide and titanium diboride. Such bodies and materials may be used as armor bodies and armor materials. Such methods may include forming a green body and sintering the green body to a desirable final density. Green bodies formed in accordance with such methods may include particles comprising titanium dioxide and a coating at least partially covering exterior surfaces thereof, the coating comprising a substance including boron (e.g., boron carbide) and a substance including carbon.

  3. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

    There is an increasing demand for satellite remote sensing data with both high spatial and temporal resolution in many applications. But it is still a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, much R&D effort has gone on for years and has led to successes in roughly two areas. The first includes super-resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and generate good visual effects, but hardly preserve spectral signatures and thus yield limited analytical value. The second, time interpolation, is a straightforward way to increase temporal frequency, but in fact it adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution data. Our method starts with a pair of high and low resolution data sets, and a spatial registration is then done by introducing an LDA model to map high and low resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only; usage of costly high resolution data can be reduced as much as possible, presenting a highly effective way to build an economically operational monitoring solution for agriculture, forest, and land use investigation.

  4. Initiation devices, initiation systems including initiation devices and related methods

    Energy Technology Data Exchange (ETDEWEB)

    Daniels, Michael A.; Condit, Reston A.; Rasmussen, Nikki; Wallace, Ronald S.

    2018-04-10

    Initiation devices may include at least one substrate, an initiation element positioned on a first side of the at least one substrate, and a spark gap electrically coupled to the initiation element and positioned on a second side of the at least one substrate. Initiation devices may include a plurality of substrates where at least one substrate of the plurality of substrates is electrically connected to at least one adjacent substrate of the plurality of substrates with at least one via extending through the at least one substrate. Initiation systems may include such initiation devices. Methods of igniting energetic materials include passing a current through a spark gap formed on at least one substrate of the initiation device, passing the current through at least one via formed through the at least one substrate, and passing the current through an explosive bridge wire of the initiation device.

  5. Comparison of methods for determining the hydrologic recovery time after forest disturbance

    Science.gov (United States)

    Oda, T.; Green, M.; Ohte, N.; Urakawa, R.; Endo, I.; Scanlon, T. M.; Sebestyen, S. D.; McGuire, K. J.; Katsuyama, M.; Fukuzawa, K.; Tague, C.; Hiraoka, M.; Fukushima, K.; Giambelluca, T. W.

    2013-12-01

    Forest hydrology changes after forest disturbance vary among catchments. Although studies have summarized the initial runoff changes following forest disturbance, estimates of the long-term recovery time are less frequently reported. To understand the mechanisms of long-term recovery processes and to predict long-term changes in streamflow after forest disturbance, it is important to compare recovery times after disturbance. However, there is no clear consensus regarding the best methodology for such research, especially for watershed studies that were not designed as paired watersheds. We compared methods of determining the hydrologic recovery time to determine whether there is a common method for sites in any hydroclimatic setting. We defined the hydrologic recovery time as the time from disturbance to when hydrological factors first recovered to pre-disturbance levels. We acquired long-term rainfall and runoff data at 16 sites in the northeastern USA and Japan that had at least 10 years (and up to 50 years) of post-disturbance data. The types of disturbance include harvesting, disease, and insect damage. We compared multiple indices of hydrological response, including annual runoff, annual runoff ratio (annual runoff/annual rainfall), annual loss (annual rainfall minus annual runoff), fiftieth-percentile annual flow, and seasonal water balance. The results showed that comparing annual runoff to a reference site was most robust at constraining the recovery time, followed by using pre-disturbance data as reference data and calculating the differences in annual runoff from pre-disturbance levels. However, in the case of small disturbances at sites without reference data or long-term pre-disturbance data, the inter-annual variation of rainfall makes the effect of disturbance unclear. We found that annual loss had smaller inter-annual variation, and defining recovery time with annual loss was best in terms of matching the results from paired watersheds.
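The indices compared in the study are simple to compute from annual series. A sketch with hypothetical rainfall/runoff numbers (made up, not from the study) illustrates the key observation that annual loss can vary less between years than runoff does:

```python
import numpy as np

# Hypothetical annual rainfall/runoff series (mm/yr); indices from the record.
rainfall = np.array([1800.0, 1650.0, 2000.0, 1750.0, 1900.0])
runoff   = np.array([ 900.0,  760.0, 1100.0,  840.0,  980.0])

runoff_ratio = runoff / rainfall          # annual runoff ratio
annual_loss  = rainfall - runoff          # annual loss (evapotranspiration proxy)

# The record's observation: loss often varies less between years than runoff.
cv = lambda x: np.std(x) / np.mean(x)     # coefficient of variation
print(cv(annual_loss) < cv(runoff))
```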

  6. Method for Determining the Time Parameter

    Directory of Open Access Journals (Sweden)

    K. P. Baslyk

    2014-01-01

    Full Text Available This article proposes a method for calculating one of the characteristics that define the flight program of the first stage of a ballistic rocket, namely the time parameter of the attack-angle program. In simulating payload insertion for the first stage, a flight program is used that consists of three segments: a vertical climb, a segment of programmed reversal by attack angle, and a segment of gravitational reversal with zero angle of attack. The programmed reversal by attack angle is modeled as a rapidly decreasing and then increasing function. This function depends on the attack-angle amplitude, time, and the time parameter. If the design and ballistic parameters and the amplitude of the attack angle are given, this coefficient is calculated from the constraint that the rocket velocity equals 0.8 times the speed of sound (0.264 km/s) when the angle of attack becomes zero. This constraint is transformed into a nonlinear equation, which can be solved using Newton's method. The attack-angle amplitude is unknown at the design-analysis stage. Exceeding some maximum admissible value of this parameter may lead to excessive trajectory collapsing (foreshortening), which can be identified as an emerging negative trajectory angle. It is therefore necessary to compute the maximum value of the attack-angle amplitude under the following constraints: the trajectory angle is positive during the entire first-stage flight, and the rocket velocity equals 0.264 km/s by the end of the attack-angle program. The problem can be formulated as a nonlinear programming task, minimization of the modified Lagrange function, which is solved using the method of multipliers. If the multipliers and the penalty parameter are held constant, an unconstrained optimization problem results. Using the coordinate descent method then allows solving the unconstrained minimization problem for the modified Lagrange function with fixed
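The velocity constraint described above reduces to a scalar nonlinear equation, which Newton's method solves in a few iterations. The velocity profile below is purely hypothetical; the article's v(t) comes from integrating the first-stage equations of motion, which are not reproduced here.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method for a scalar nonlinear equation f(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Hypothetical monotone velocity profile v(t) in km/s (illustrative only).
v = lambda t: 0.02 * t + 1e-4 * t ** 2
target = 0.264                             # 0.8 x speed of sound, km/s
t_star = newton(lambda t: v(t) - target,   # constraint v(t) = 0.264
                lambda t: 0.02 + 2e-4 * t, # its derivative
                x0=10.0)
print(round(t_star, 3))
```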

  7. Methods for determining unimpeded aircraft taxiing time and evaluating airport taxiing performance

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2017-04-01

    Full Text Available The objective of this study is to improve the methods of determining unimpeded (nominal) taxiing time, which is the reference time used for estimating taxiing delay, a widely accepted performance indicator of airport surface movement. After reviewing the existing methods widely used by different air navigation service providers (ANSPs), new methods relying on computer software and statistical tools, and econometric regression models, are proposed. Regression models are highly recommended because they require less detailed data and can serve the needs of general performance analysis of airport surface operations. The proposed econometric model outperforms existing ones by introducing more explanatory variables, in particular taking aircraft passing and over-passing into account in the queue length calculation and including runway configuration, ground delay program, and weather factors. The length of the aircraft queue in the taxiway system and the interaction between queues are major contributors to long taxi-out times. The proposed method provides a consistent and more accurate way of calculating taxiing delay, and it can be used for ATM-related performance analysis and international comparison.
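An econometric regression of the kind recommended above can be sketched with ordinary least squares on synthetic data. The variable names mirror the record's explanatory factors; the coefficients and data are made up.

```python
import numpy as np

# OLS sketch of a taxi-out-time model on synthetic data:
# taxi_out = b0 + b1*queue_length + b2*passing + noise.
rng = np.random.default_rng(7)
n = 500
queue_length = rng.integers(0, 20, n).astype(float)  # aircraft in queue
passing = rng.integers(0, 5, n).astype(float)        # passing/over-passing events
taxi_out = 8.0 + 1.5 * queue_length + 2.0 * passing + rng.normal(0, 1.0, n)

X = np.column_stack([np.ones(n), queue_length, passing])
beta, *_ = np.linalg.lstsq(X, taxi_out, rcond=None)  # [b0, b1, b2] estimates
print(np.round(beta, 2))
```

On this synthetic data the fit recovers the planted coefficients closely; in the study, additional dummy variables (runway configuration, ground delay program, weather) enter the same design matrix.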

  8. Time-dependent problems and difference methods

    CERN Document Server

    Gustafsson, Bertil; Oliger, Joseph

    2013-01-01

    Praise for the First Edition: ". . . fills a considerable gap in the numerical analysis literature by providing a self-contained treatment . . . this is an important work written in a clear style . . . warmly recommended to any graduate student or researcher in the field of the numerical solution of partial differential equations." -SIAM Review. Time-Dependent Problems and Difference Methods, Second Edition continues to provide guidance for the analysis of difference methods for computing approximate solutions to partial differential equations for time-de

  9. R package imputeTestbench to compare imputations methods for univariate time series

    OpenAIRE

    Bokde, Neeraj; Kulat, Kishore; Beck, Marcus W; Asencio-Cortés, Gualberto

    2016-01-01

    This paper describes the R package imputeTestbench that provides a testbench for comparing imputation methods for missing data in univariate time series. The imputeTestbench package can be used to simulate the amount and type of missing data in a complete dataset and compare filled data using different imputation methods. The user has the option to simulate missing data by removing observations completely at random or in blocks of different sizes. Several default imputation methods are includ...
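The testbench idea (simulate missingness at random, then score each imputation method against the withheld truth) can be sketched outside R as well. Here is a minimal Python analogue comparing mean imputation with linear interpolation under RMSE; it is an illustration of the workflow, not a port of the package.

```python
import numpy as np

rng = np.random.default_rng(0)
true_series = np.sin(np.linspace(0, 6 * np.pi, 300))

# Simulate data missing completely at random, as imputeTestbench can.
mask = rng.random(true_series.size) < 0.2
observed = true_series.copy()
observed[mask] = np.nan

def impute_mean(y):
    out = y.copy()
    out[np.isnan(out)] = np.nanmean(out)
    return out

def impute_interp(y):
    out = y.copy()
    idx = np.arange(y.size)
    out[np.isnan(y)] = np.interp(idx[np.isnan(y)], idx[~np.isnan(y)], y[~np.isnan(y)])
    return out

# Score each method only on the withheld (masked) positions.
rmse = lambda est: float(np.sqrt(np.mean((est[mask] - true_series[mask]) ** 2)))
print(rmse(impute_mean(observed)), rmse(impute_interp(observed)))
```

For a smooth series like this, interpolation should beat mean imputation by a wide margin; the package automates exactly this kind of comparison across methods and missingness patterns.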

  10. The Method of Lines Solution of the Regularized Long-Wave Equation Using Runge-Kutta Time Discretization Method

    Directory of Open Access Journals (Sweden)

    H. O. Bakodah

    2013-01-01

    Full Text Available A method of lines approach to the numerical solution of nonlinear wave equations typified by the regularized long wave (RLW) equation is presented. The method developed uses a finite difference discretization in space. The resulting system is solved by applying a fourth-order Runge-Kutta time discretization method. Using von Neumann stability analysis, it is shown that the proposed method is marginally stable. To test the accuracy of the method, some numerical experiments on test problems are presented. Test problems including solitary wave motion, two-solitary-wave interaction, and the temporal evolution of a Maxwellian initial pulse are studied. The accuracy of the present method is tested with the L2 and L∞ error norms and the conservation properties of mass, energy, and momentum under the RLW equation.
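The method-of-lines recipe (discretize space by finite differences, then advance the resulting ODE system with classical fourth-order Runge-Kutta) can be sketched on a simpler model problem than the RLW equation. The example below uses the heat equation purely for illustration.

```python
import numpy as np

# Method of lines: central differences in space, classical RK4 in time,
# shown for u_t = u_xx on [0,1] with u = 0 at both walls.
nx = 51
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)                      # initial condition

def rhs(u):
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    return du

dt = 0.2 * dx ** 2                         # well inside RK4's stability region
t_end = 0.05
for _ in range(int(round(t_end / dt))):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

exact = np.exp(-np.pi ** 2 * t_end) * np.sin(np.pi * x)
print(float(np.max(np.abs(u - exact))))
```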

  11. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    Directory of Open Access Journals (Sweden)

    Chaoyang Shi

    2017-12-01

    Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  12. A novel technique for including surface tension in PLIC-VOF methods

    Energy Technology Data Exchange (ETDEWEB)

    Meier, M.; Yadigaroglu, G. [Swiss Federal Institute of Technology, Nuclear Engineering Lab. ETH-Zentrum, CLT, Zurich (Switzerland); Smith, B. [Paul Scherrer Inst. (PSI), Villigen (Switzerland). Lab. for Thermal-Hydraulics

    2002-02-01

    Various versions of Volume-of-Fluid (VOF) methods have been used successfully for the numerical simulation of gas-liquid flows with an explicit tracking of the phase interface. Of these, Piecewise-Linear Interface Construction (PLIC-VOF) appears to be a fairly accurate, although somewhat more involved, variant. Including effects due to surface tension remains a problem, however. The most prominent methods, the Continuum Surface Force (CSF) method of Brackbill et al. and the method of Zaleski and co-workers (both referenced later), both induce spurious or 'parasitic' currents and achieve only moderate accuracy with regard to determining the curvature. We present here a new method to determine curvature accurately using an estimator function, which is tuned with a least-squares fit against reference data. Furthermore, we show how spurious currents may be drastically reduced using the reconstructed interfaces from the PLIC-VOF method. (authors)

  13. Highly comparative time-series analysis: the empirical structure of time series and their methods.

    Science.gov (United States)

    Fulcher, Ben D; Little, Max A; Jones, Nick S

    2013-06-06

    The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
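The reduced representations described here amount to mapping each time series to a feature vector on which standard classification or retrieval can operate. A toy sketch with three hand-picked features (a tiny stand-in for the paper's library of thousands of algorithms) might be:

```python
import numpy as np

def features(y):
    """A tiny feature vector in the spirit of highly comparative
    time-series analysis: mean, standard deviation, lag-1 autocorrelation."""
    y = np.asarray(y, dtype=float)
    z = y - y.mean()
    ac1 = float(np.dot(z[:-1], z[1:]) / np.dot(z, z))  # lag-1 autocorrelation
    return np.array([y.mean(), y.std(), ac1])

rng = np.random.default_rng(1)
noise = rng.normal(size=500)                      # white noise: ac1 near 0
smooth = np.sin(np.linspace(0, 8 * np.pi, 500))   # oscillation: ac1 near 1
print(features(noise)[2], features(smooth)[2])
```

Even this minimal representation separates the two classes cleanly by the autocorrelation feature, which is the basic mechanism behind feature-based organization of time-series datasets.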

  14. Laser-induced electron dynamics including photoionization: A heuristic model within time-dependent configuration interaction theory.

    Science.gov (United States)

    Klinkusch, Stefan; Saalfrank, Peter; Klamroth, Tillmann

    2009-09-21

    We report simulations of laser-pulse driven many-electron dynamics by means of a simple, heuristic extension of the time-dependent configuration interaction singles (TD-CIS) approach. The extension allows for the treatment of ionizing states as nonstationary states with a finite, energy-dependent lifetime to account for above-threshold ionization losses in laser-driven many-electron dynamics. The extended TD-CIS method is applied to the following specific examples: (i) state-to-state transitions in the LiCN molecule which correspond to intramolecular charge transfer, (ii) creation of electronic wave packets in LiCN including wave packet analysis by pump-probe spectroscopy, and, finally, (iii) the effect of ionization on the dynamic polarizability of H(2) when calculated nonperturbatively by TD-CIS.

  15. A method for investigating relative timing information on phylogenetic trees.

    Science.gov (United States)

    Ford, Daniel; Matsen, Frederick A; Stadler, Tanja

    2009-04-01

    In this paper, we present a new way to describe the timing of branching events in phylogenetic trees. Our description is in terms of the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots which consider diversification in aggregate. The method can be applied to look for evidence of diversification happening in lineage-specific "bursts", or the opposite, where diversification between 2 clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we discuss 2 classes of neutral models on trees with relative timing information and develop a statistical framework for testing these models. These model classes include both the coalescent with ancestral population size variation and global rate speciation-extinction models. We end the paper with 2 example applications: first, we show that the evolution of the hepatitis C virus deviates from the coalescent with arbitrary population size. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.

  16. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
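The core of the CI method is evaluating a convolution (Duhamel) integral of the numerical substructure's impulse response with the force history. A minimal discrete sketch for a single-degree-of-freedom substructure follows; the parameters are illustrative, not the paper's two-story frame, and the discrete spike stands in for a measured force signal.

```python
import numpy as np

# Discrete Duhamel integral for a damped SDOF oscillator:
# x(t) = integral of h(t - tau) F(tau) dtau, h = unit-impulse response.
m, c, k = 1.0, 0.4, 100.0                 # illustrative mass, damping, stiffness
wn = np.sqrt(k / m)                       # natural frequency
zeta = c / (2 * m * wn)                   # damping ratio
wd = wn * np.sqrt(1 - zeta ** 2)          # damped frequency

dt = 0.001
t = np.arange(0.0, 5.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)   # impulse response

F = np.zeros_like(t)
F[0] = 1.0 / dt                           # unit impulse as a discrete spike

x = np.convolve(F, h)[: t.size] * dt      # response via the convolution integral
print(float(np.max(np.abs(x - h))))       # impulse input should recover h
```

Because the impulse response is precomputed, each real-time step only adds one more term to the running convolution, which is what makes the approach insensitive to the size of the numerical model.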

  17. Methods optimization for the first time core critical

    International Nuclear Information System (INIS)

    Yan Liang

    2014-01-01

    The PWR reactor core commissioning program specifies the content of the first-criticality reactor physics experiment and describes the physical test methods. However, the methods used are not all exactly the same, though each is effective. This article aims to enhance safety in the process of taking the reactor critical for the first time, to shorten the overall duration of the first-criticality physical tests, and to improve the integrity and accuracy of the first-criticality physical test data, ultimately improving the economic benefit of plant operation, by adopting improved physical test methods such as sectional dilution, power feedback for Doppler-point improvement, and so on. (author)

  18. Improved method for considering PMU’s uncertainty and its effect on real-time stability assessment methods based on Thevenin equivalent

    DEFF Research Database (Denmark)

    Perez, Angel; Jóhannsson, Hjörtur; Østergaard, Jacob

    2015-01-01

    This article experimentally characterizes the relation between phase and magnitude errors from Phasor Measurement Units (PMUs) in steady state and studies its effect on real-time stability assessment methods. This is achieved by a set of laboratory tests applied to four different devices, where a bivariate Gaussian mixture distribution was used to represent the error, obtained experimentally, and later included in the synthesized PMU measurement using the Monte Carlo method. Two models for including uncertainty are compared, and the results show that taking into account the correlation between

  19. Asymptotic equilibrium diffusion analysis of time-dependent Monte Carlo methods for grey radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2004-01-01

    The equations of nonlinear, time-dependent radiative transfer are known to yield the equilibrium diffusion equation as the leading-order solution of an asymptotic analysis when the mean-free path and mean-free time of a photon become small. We apply this same analysis to the Fleck-Cummings, Carter-Forest, and N'kaoua Monte Carlo approximations for grey (frequency-independent) radiative transfer. Although Monte Carlo simulation usually does not require the discretizations found in deterministic transport techniques, Monte Carlo methods for radiative transfer require a time discretization due to the nonlinearities of the problem. If an asymptotic analysis of the equations used by a particular Monte Carlo method yields an accurate time-discretized version of the equilibrium diffusion equation, the method should generate accurate solutions if a time discretization is chosen that resolves temperature changes, even if the time steps are much larger than the mean-free time of a photon. This analysis is of interest because in many radiative transfer problems, it is a practical necessity to use time steps that are large compared to a mean-free time. Our asymptotic analysis shows that: (i) the N'kaoua method has the equilibrium diffusion limit, (ii) the Carter-Forest method has the equilibrium diffusion limit if the material temperature change during a time step is small, and (iii) the Fleck-Cummings method does not have the equilibrium diffusion limit. We include numerical results that verify our theoretical predictions

  20. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    International Nuclear Information System (INIS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful
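A sketch of the RKL1 superstep for the 1D heat equation follows, assuming the Legendre-recurrence coefficients μ_j = (2j-1)/j and ν_j = -(j-1)/j of the construction described above. This is a first-order illustration only; the RKL2 variant and careful boundary handling are omitted.

```python
import numpy as np

# RKL1 super-time-stepping sketch for u_t = nu * u_xx on [0,1], u = 0 at walls.
# With s sub-stages, one superstep can cover up to (s^2 + s)/2 explicit
# Euler steps.
nu, nx = 1.0, 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)

def L(u):                                  # discrete diffusion operator
    out = np.zeros_like(u)
    out[1:-1] = nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    return out

s = 5
dt_expl = dx ** 2 / (2.0 * nu)             # explicit Euler stability limit
dt = 0.8 * (s ** 2 + s) / 2.0 * dt_expl    # 12x the explicit limit
t_end = 10 * dt

for _ in range(10):
    # Stage recursion: Y_0 = u, Y_1 = Y_0 + mu~_1 dt L(Y_0), then the
    # three-term Legendre-style recurrence up to Y_s.
    y_prev, y = u, u + (2.0 / (s ** 2 + s)) * dt * L(u)
    for j in range(2, s + 1):
        mu, nu_j = (2 * j - 1) / j, -(j - 1) / j
        y, y_prev = (mu * y + nu_j * y_prev
                     + mu * (2.0 / (s ** 2 + s)) * dt * L(y)), y
    u = y

exact = np.exp(-nu * np.pi ** 2 * t_end) * np.sin(np.pi * x)
print(float(np.max(np.abs(u - exact))))
```

Despite taking time steps well beyond the explicit limit, the superstep remains stable and tracks the analytic decay of the sine mode.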

  1. Time-domain simulation of constitutive relations for nonlinear acoustics including relaxation for frequency power law attenuation media modeling

    Science.gov (United States)

    Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.

    2015-10-01

    We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-difference scheme. The proposed physical model thus overcomes the limitations of the one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, it also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power law attenuation and dispersion, as observed in biological media, the relaxation parameters are fitted both to exact frequency power law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited to practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.

  2. Adding Timing Requirements to the CODARTS Real-Time Software Design Method

    DEFF Research Database (Denmark)

    Bach, K.R.

    The CODARTS software design method considers how concurrent, distributed and real-time applications can be designed. Although accounting for the important issues of task and communication, the method does not provide means for expressing the timeliness of the tasks and communication directly...

  3. A negative-norm least-squares method for time-harmonic Maxwell equations

    KAUST Repository

    Copeland, Dylan M.

    2012-04-01

    This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.

  4. Predicting Taxi-Out Time at Congested Airports with Optimization-Based Support Vector Regression Methods

    Directory of Open Access Journals (Sweden)

    Guan Lian

    2018-01-01

    Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operational efficiency of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive greenhouse gas emissions. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three of those conventional methods (Generalized Linear Model, Softmax Regression Model, and Artificial Neural Network) and two improved Support Vector Regression (SVR) approaches based on swarm intelligence algorithm optimization: Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor at congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the two proposed SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only outperform the representative forecast models in fit and accuracy but can also achieve better predictive performance when dealing with abnormal taxi-out time states.

  5. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
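
    As a minimal illustration of the simplest family in this comparison, simple exponential smoothing produces a flat multi-step-ahead forecast equal to the final smoothed level (the smoothing parameter α below is a hypothetical fixed value; the study selects such parameters automatically):

```python
def ses_forecast(series, alpha=0.3):
    # simple exponential smoothing: level_t = alpha*x_t + (1-alpha)*level_{t-1};
    # the multi-step-ahead forecast is flat at the final level
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1.0 - alpha) * level
    return level

history = [20.1, 19.8, 20.5, 20.0, 19.9, 20.3]   # e.g. monthly temperatures
forecast = ses_forecast(history)                  # same value for every future month
```

    On a monthly series, the returned level would stand in as the forecast for each of the 48 test months.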

  6. A Comparison of Iterative 2D-3D Pose Estimation Methods for Real-Time Applications

    DEFF Research Database (Denmark)

    Grest, Daniel; Krüger, Volker; Petersen, Thomas

    2009-01-01

    This work compares iterative 2D-3D pose estimation methods for use in real-time applications. The compared methods are publicly available as C++ code. One method is part of the openCV library, namely POSIT. Because POSIT is not applicable to planar 3D point configurations, we include the planar P...

  7. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  8. Simulation of three-dimensional, time-dependent, incompressible flows by a finite element method

    International Nuclear Information System (INIS)

    Chan, S.T.; Gresho, P.M.; Lee, R.L.; Upson, C.D.

    1981-01-01

    A finite element model has been developed for simulating the dynamics of problems encountered in atmospheric pollution and safety assessment studies. The model is based on solving the set of three-dimensional, time-dependent, conservation equations governing incompressible flows. Spatial discretization is performed via a modified Galerkin finite element method, and time integration is carried out via the forward Euler method (pressure is computed implicitly, however). Several cost-effective techniques (including subcycling, mass lumping, and reduced Gauss-Legendre quadrature) which have been implemented are discussed. Numerical results are presented to demonstrate the applicability of the model

  9. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create large biases in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Count Per Second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead times on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
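
    The core of the technique can be illustrated on synthetic data (all rates and dead-time values below are invented): increasingly large artificial dead times are imposed on the recorded event stream, and the reciprocal count rate, which for a non-paralyzing model is approximately linear in the imposed dead time, is extrapolated back to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 1.0e4        # true count rate in CPS (assumed for the sketch)
tau_sys = 2.0e-6         # unknown, non-paralyzing system dead time (assumed)

def apply_dead_time(times, tau):
    # non-paralyzing model: keep only events at least tau after
    # the last *accepted* event
    kept = [times[0]]
    for t in times[1:]:
        if t - kept[-1] >= tau:
            kept.append(t)
    return np.asarray(kept)

# Poisson arrivals, then the detector's own dead time
arrivals = np.cumsum(rng.exponential(1.0 / true_rate, size=200_000))
recorded = apply_dead_time(arrivals, tau_sys)
T = recorded[-1]

# impose increasingly large artificial dead times (all > tau_sys) ...
taus = np.linspace(4.0e-6, 20.0e-6, 9)
inv_cps = [T / len(apply_dead_time(recorded, tau)) for tau in taus]

# ... and extrapolate 1/CPS linearly back to zero imposed dead time;
# the intercept estimates the dead-time-free reciprocal rate
slope, intercept = np.polyfit(taus, inv_cps, 1)
corrected_rate = 1.0 / intercept
naive_rate = len(recorded) / T     # biased low by the system dead time
```

    On this synthetic stream the naive rate underestimates the true 10,000 CPS by about 2%, while the extrapolated intercept recovers it to within the fit tolerance, without any prior knowledge of tau_sys.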

  10. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem, and a TV constraint was added to the linear equation to robustly optimize the timing resolution. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, the merge component was used to obtain the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.

  11. Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves

    Science.gov (United States)

    Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua

    2017-09-01

    In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix keeps unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-Mckee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.

  12. Scattering in an intense radiation field: Time-independent methods

    International Nuclear Information System (INIS)

    Rosenberg, L.

    1977-01-01

    The standard time-independent formulation of nonrelativistic scattering theory is here extended to take into account the presence of an intense external radiation field. In the case of scattering by a static potential the extension is accomplished by the introduction of asymptotic states and intermediate-state propagators which account for the absorption and induced emission of photons by the projectile as it propagates through the field. Self-energy contributions to the propagator are included by a systematic summation of forward-scattering terms. The self-energy analysis is summarized in the form of a modified perturbation expansion of the type introduced by Watson some time ago in the context of nuclear-scattering theory. This expansion, which has a simple continued-fraction structure in the case of a single-mode field, provides a generally applicable successive approximation procedure for the propagator and the asymptotic states. The problem of scattering by a composite target is formulated using the effective-potential method. The modified perturbation expansion which accounts for self-energy effects is applicable here as well. A discussion of a coupled two-state model is included to summarize and clarify the calculational procedures

  13. A Novel Time Synchronization Method for Dynamic Reconfigurable Bus

    Directory of Open Access Journals (Sweden)

    Zhang Weigong

    2016-01-01

    Full Text Available UM-BUS is a novel dynamically reconfigurable high-speed serial bus for embedded systems. It can achieve fault tolerance by detecting the channel status in real time and reconfiguring dynamically at run-time. The bus supports direct interconnections between up to eight master nodes and multiple slave nodes. In order to solve the time synchronization problem among master nodes, this paper proposes a novel time synchronization method that can meet the time-precision requirement of UM-BUS. In the proposed method, time is first broadcast in time-broadcast packets. The transmission delay and time deviations are then worked out via three handshakes during link self-checking and channel detection, following the IEEE 1588 protocol, and each node calibrates its own time according to the broadcasted time. The proposed method has been proved to meet the requirement of real-time synchronization. Experimental results show that the synchronization precision achieves a bias of less than 20 ns.
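
    The offset and delay computation borrowed from IEEE 1588 can be sketched with the standard four-timestamp exchange (the nanosecond values below are invented):

```python
def ptp_offset_delay(t1, t2, t3, t4):
    # t1: master send, t2: slave receive, t3: slave send, t4: master receive.
    # Assumes a symmetric path delay, as in IEEE 1588.
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t4 - t1) - (t3 - t2)) / 2.0    # one-way transmission delay
    return offset, delay

# synthetic exchange: slave clock runs 100 ns ahead, link delay is 40 ns
t1 = 0
t2 = t1 + 40 + 100       # arrival stamped by the (offset) slave clock
t3 = 1000                # slave reply, stamped by the slave clock
t4 = (t3 - 100) + 40     # arrival stamped by the master clock
offset, delay = ptp_offset_delay(t1, t2, t3, t4)   # -> (100.0, 40.0)
```

    The cancellation of the symmetric path delay is what lets a handshake exchange recover both quantities from timestamps alone.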

  14. A Time Series Forecasting Method

    Directory of Open Access Journals (Sweden)

    Wang Zhao-Yu

    2017-01-01

    Full Text Available This paper proposes a novel time series forecasting method based on a weighted self-constructing clustering technique. The weighted self-constructing clustering processes all the data patterns incrementally. If a data pattern is not similar enough to an existing cluster, it forms a new cluster of its own. However, if a data pattern is similar enough to an existing cluster, it is removed from the cluster it currently belongs to and added to the most similar cluster. During the clustering process, weights are learned for each cluster. Given a series of time-stamped data up to time t, we divide it into a set of training patterns. By using the weighted self-constructing clustering, the training patterns are grouped into a set of clusters. To estimate the value at time t + 1, we find the k nearest neighbors of the input pattern and use these k neighbors to decide the estimation. Experimental results are shown to demonstrate the effectiveness of the proposed approach.
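
    The nearest-neighbour estimation step can be sketched as follows (window length, k, and the inverse-distance weighting are illustrative assumptions; the paper's weighted clusters are not reproduced here):

```python
import numpy as np

def knn_estimate(series, window=3, k=2):
    # build training patterns of length `window`, each paired with the value
    # that followed it, then estimate the next value from the k stored
    # patterns closest to the most recent window
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    query = series[-window:]
    dist = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dist)[:k]
    weights = 1.0 / (dist[nearest] + 1e-12)     # inverse-distance weighting
    return float(np.dot(weights, y[nearest]) / weights.sum())
```

    On a purely periodic series the stored patterns match the query exactly and the estimate reproduces the periodic continuation.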

  15. DRK methods for time-domain oscillator simulation

    NARCIS (Netherlands)

    Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.

    2006-01-01

    This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.

  16. Systems and Methods for Fabricating Structures Including Metallic Glass-Based Materials Using Low Pressure Casting

    Science.gov (United States)

    Hofmann, Douglas C. (Inventor); Kennett, Andrew (Inventor)

    2018-01-01

    Systems and methods to fabricate objects including metallic glass-based materials using low-pressure casting techniques are described. In one embodiment, a method of fabricating an object that includes a metallic glass-based material includes: introducing molten alloy into a mold cavity defined by a mold using a low enough pressure such that the molten alloy does not conform to features of the mold cavity that are smaller than 100 microns; and cooling the molten alloy such that it solidifies, the solid including a metallic glass-based material.

  17. Financial time series analysis based on information categorization method

    Science.gov (United States)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper mainly applies the information categorization method to analyze financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and we report the similarity of the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the two stock markets differs across time periods and that their similarity became larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, showing that the method can distinguish different areas' markets from the phylogenetic trees. The results show that satisfactory information can be extracted from financial markets by this method. The information categorization method can be used not only on physiologic time series but also on financial time series.

  18. Solar cells, structures including organometallic halide perovskite monocrystalline films, and methods of preparation thereof

    KAUST Repository

    Bakr, Osman; Peng, Wei; Wang, Lingfei

    2017-01-01

    Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making

  19. Inverse methods for estimating primary input signals from time-averaged isotope profiles

    Science.gov (United States)

    Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.

    2005-08-01

    Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
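
    The inversion step can be illustrated with a toy averaging matrix standing in for the real amelogenesis-and-sampling model (which the paper builds from tooth-specific data); here each "measured" value is simply a moving average of the input signal, and numpy's lstsq supplies the minimum-length solution of Am = d:

```python
import numpy as np

n, width = 30, 6
# A[i, j]: contribution of input value j to measured sample i; here each
# measurement simply averages the `width` most recent input values
A = np.zeros((n, n))
for i in range(n):
    j0 = max(0, i - width + 1)
    A[i, j0:i + 1] = 1.0 / (i + 1 - j0)

m_true = np.sin(np.linspace(0.0, 2.0 * np.pi, n))   # hypothetical seasonal input
d = A @ m_true                                      # time-averaged "measured" profile

# minimum-length least-squares solution of A m = d, as in the paper's formulation
m_est, *_ = np.linalg.lstsq(A, d, rcond=None)
```

    Because every measured sample is a convex combination of input values, the averaged profile can never exceed the range of the input, which is exactly the damping the inversion undoes.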

  20. Comparative Evaluations of Four Specification Methods for Real-Time Systems

    Science.gov (United States)

    1989-12-01

    December 1989. David P. Wood; William G. Wood. Abstract: A number of methods have been proposed in the last decade for the specification of system and software requirements...and software specification for real-time systems. Our process for the identification of methods that meet the above criteria is described in greater...

  1. Electrode assemblies, plasma apparatuses and systems including electrode assemblies, and methods for generating plasma

    Science.gov (United States)

    Kong, Peter C; Grandy, Jon D; Detering, Brent A; Zuck, Larry D

    2013-09-17

    Electrode assemblies for plasma reactors include a structure or device for constraining an arc endpoint to a selected area or region on an electrode. In some embodiments, the structure or device may comprise one or more insulating members covering a portion of an electrode. In additional embodiments, the structure or device may provide a magnetic field configured to control a location of an arc endpoint on the electrode. Plasma generating modules, apparatus, and systems include such electrode assemblies. Methods for generating a plasma include covering at least a portion of a surface of an electrode with an electrically insulating member to constrain a location of an arc endpoint on the electrode. Additional methods for generating a plasma include generating a magnetic field to constrain a location of an arc endpoint on an electrode.

  2. Immersed Boundary-Lattice Boltzmann Method Using Two Relaxation Times

    Directory of Open Access Journals (Sweden)

    Kosuke Hayashi

    2012-06-01

    Full Text Available An immersed boundary-lattice Boltzmann method (IB-LBM) using a two-relaxation-time (TRT) model is proposed. The collision operator in the lattice Boltzmann equation is modeled using two relaxation times: one sets the fluid viscosity and the other is chosen for numerical stability and accuracy. A direct-forcing method is utilized for treatment of the immersed boundary. A multi-direct forcing method is also implemented to precisely satisfy the boundary conditions at the immersed boundary. Circular Couette flows between a stationary cylinder and a rotating cylinder are simulated for validation of the proposed method. The method is also validated through simulations of circular and spherical falling particles. Effects of the functional forms of the direct-forcing term and of the smoothed delta function, which interpolates the fluid velocity to the immersed boundary and distributes the forcing term to fixed Eulerian grid points, are also examined. As a result, the following conclusions are obtained: (1) the proposed method does not cause a non-physical velocity distribution in circular Couette flows even at high relaxation times, whereas the single-relaxation-time (SRT) model causes a large non-physical velocity distortion at a high relaxation time; (2) the multi-direct forcing reduces the errors in the velocity profile of a circular Couette flow at a high relaxation time; (3) the two-point delta function is better than the four-point delta function at low relaxation times, but worse at high relaxation times; (4) the functional form of the direct-forcing term does not affect predictions; and (5) circular and spherical particles falling in liquids are well predicted by the proposed method in both two-dimensional and three-dimensional cases.
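
    A minimal sketch of the two-relaxation-time idea, following the common TRT convention from the lattice Boltzmann literature (not code from the paper): the symmetric relaxation time is fixed by the viscosity, while the antisymmetric one is set through a free "magic" parameter Λ = (τ⁺ − ½)(τ⁻ − ½):

```python
def trt_relaxation_rates(nu, magic=0.25, cs2=1.0 / 3.0):
    # symmetric relaxation time set by the kinematic viscosity nu
    # (lattice units); antisymmetric time set via the magic parameter:
    #   magic = (tau_plus - 1/2) * (tau_minus - 1/2)
    tau_plus = nu / cs2 + 0.5
    tau_minus = magic / (tau_plus - 0.5) + 0.5
    return 1.0 / tau_plus, 1.0 / tau_minus

omega_plus, omega_minus = trt_relaxation_rates(nu=0.1)
```

    Keeping Λ fixed as the viscosity (and hence τ⁺) varies is one common way to suppress the high-relaxation-time artifacts the abstract attributes to the SRT model.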

  3. [A new measurement method of time-resolved spectrum].

    Science.gov (United States)

    Shi, Zhi-gang; Huang, Shi-hua; Liang, Chun-jun; Lei, Quan-sheng

    2007-02-01

    A new method for measuring time-resolved spectra (TRS) is put forward. A program written in assembly language controlled the microcontroller (AT89C51), and a peripheral circuit constituted the drive circuit, which drove the stepping motor running the monochromator, so that light of each desired wavelength could be obtained. The optical signal was converted to an electrical signal by a photomultiplier tube (Hamamatsu 1P28). The electrical spectrum signal was transmitted to an oscillograph and, by connecting the RS232 serial interfaces of the oscillograph and a computer, the spectrum data could be transferred to the computer, where software drew the attenuation curve and time-resolved spectrum of the sample. The method features parallel measurement on the time scale but serial measurement on the wavelength scale. The time-resolved spectrum and integrated emission spectrum of Tb3+ in the sample Tb(o-BBA)3 phen were measured using this method and, compared with the real time-resolved spectrum, the method was validated to be feasible, credible and convenient. The 3D fluorescence intensity-wavelength-time spectra and the integrated spectrum of the sample Tb(o-BBA)3 phen are given.

  4. Determination of beta attenuation coefficients by means of timing method

    International Nuclear Information System (INIS)

    Ermis, E.E.; Celiktas, C.

    2012-01-01

    Highlights: ► Beta attenuation coefficients of absorber materials were found in this study. ► For this process, a new method (timing method) was suggested. ► The obtained beta attenuation coefficients were compatible with the results from the traditional one. ► The timing method can be used to determine beta attenuation coefficient. - Abstract: Using a counting system with plastic scintillation detector, beta linear and mass attenuation coefficients were determined for bakelite, Al, Fe and plexiglass absorbers by means of timing method. To show the accuracy and reliability of the obtained results through this method, the coefficients were also found via conventional energy method. Obtained beta attenuation coefficients from both methods were compared with each other and the literature values. Beta attenuation coefficients obtained through timing method were found to be compatible with the values obtained from conventional energy method and the literature.
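
    Whichever counting method supplies the intensities, the attenuation coefficient itself follows from the absorption law I = I0·exp(−μx) by a straight-line fit to ln I versus absorber thickness. A sketch with invented, aluminium-like numbers:

```python
import numpy as np

# hypothetical net count rates vs absorber thickness (illustrative only)
thickness_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
counts = 1000.0 * np.exp(-0.9 * thickness_mm)      # I = I0 * exp(-mu * x)

# linear attenuation coefficient from a straight-line fit to ln(I) vs x
slope, ln_I0 = np.polyfit(thickness_mm, np.log(counts), 1)
mu = -slope                                        # 1/mm
mass_mu = mu * 10.0 / 2.70   # cm^2/g, using the Al density 2.70 g/cm^3
```

    With real data the fitted slope carries counting statistics, so the two methods in the abstract are compared through the μ values their count rates yield.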

  5. Probabilistic real-time contingency ranking method

    International Nuclear Information System (INIS)

    Mijuskovic, N.A.; Stojnic, D.

    2000-01-01

    This paper describes a real-time contingency ranking method based on a probabilistic index, the expected energy not supplied. In this way it is possible to take into account the stochastic nature of electric power system equipment outages. This approach enables a more comprehensive ranking of contingencies and makes it possible to form reliability cost values that can serve as the basis for hourly spot price calculations. The electric power system of Serbia is used as an example of the proposed method. (author)

  6. Finite element method for time-space-fractional Schrodinger equation

    Directory of Open Access Journals (Sweden)

    Xiaogang Zhu

    2017-07-01

    Full Text Available In this article, we develop a fully discrete finite element method for the nonlinear Schrodinger equation (NLS) with time- and space-fractional derivatives. The time-fractional derivative is described in Caputo's sense and the space-fractional derivative in Riesz's sense. Stability is rigorously derived, and the convergence estimate is discussed by means of an orthogonal operator. We also extend the method to the two-dimensional time-space-fractional NLS and, to avoid iterative solvers at each time step, a linearized scheme is further constructed. Several numerical examples are finally implemented, which confirm the theoretical results as well as illustrate the accuracy of our methods.
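
    As a hedged illustration of the time-fractional ingredient (the widely used L1 formula, one standard discretization of the Caputo derivative; the paper's own scheme couples such a discretization with the spatial finite element method), the following sketch checks the approximation against the exact Caputo derivative of f(t) = t:

```python
import math
import numpy as np

def caputo_l1(f_vals, dt, alpha):
    # L1 approximation of the Caputo derivative of order 0 < alpha < 1,
    # evaluated at the final grid point t_n:
    #   D^alpha f(t_n) ~ dt^(-alpha)/Gamma(2-alpha)
    #                    * sum_j b_{n-1-j} * (f_{j+1} - f_j)
    # with weights b_k = (k+1)^(1-alpha) - k^(1-alpha)
    n = len(f_vals) - 1
    b = np.array([(k + 1) ** (1 - alpha) - k ** (1 - alpha) for k in range(n)])
    return dt ** (-alpha) / math.gamma(2.0 - alpha) * np.dot(b[::-1], np.diff(f_vals))

alpha, dt = 0.5, 0.01
t = np.arange(0.0, 1.0 + dt / 2, dt)
approx = caputo_l1(t, dt, alpha)                        # f(t) = t
exact = 1.0 ** (1 - alpha) / math.gamma(2.0 - alpha)    # t^(1-alpha)/Gamma(2-alpha) at t = 1
```

    The L1 formula is exact for linear f, so the sketch reproduces t^(1−α)/Γ(2−α) at t = 1 up to rounding.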

  7. Method and system for real-time analysis of biosensor data

    Science.gov (United States)

    Greenbaum, Elias; Rodriguez, Jr., Miguel

    2014-08-19

    A method of biosensor-based detection of toxins includes the steps of providing a fluid to be analyzed having a plurality of photosynthetic organisms therein, wherein chemical, biological or radiological agents alter a nominal photosynthetic activity of the photosynthetic organisms. At a first time a measured photosynthetic activity curve is obtained from the photosynthetic organisms. The measured curve is automatically compared to a reference photosynthetic activity curve to determine differences therebetween. The presence of chemical, biological or radiological agents, or precursors thereof, in the fluid is then identified using these differences.

  8. The inverse method parametric verification of real-time embedded systems

    CERN Document Server

    André, Etienne

    2013-01-01

    This book introduces state-of-the-art verification techniques for real-time embedded systems, based on the inverse method for parametric timed automata. It reviews popular formalisms for the specification and verification of timed concurrent systems and, in particular, timed automata as well as several extensions such as timed automata equipped with stopwatches, linear hybrid automata and affine hybrid automata.The inverse method is introduced, and its benefits for guaranteeing robustness in real-time systems are shown. Then, it is shown how an iteration of the inverse method can solv

  9. Estimating evolutionary rates using time-structured data: a general comparison of phylogenetic methods.

    Science.gov (United States)

    Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W

    2016-11-15

    In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Contact: sduchene@unimelb.edu.au or garzonsebastian@hotmail.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
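
    Of the three approaches, root-to-tip regression is the simplest: the substitution rate is the slope of a straight-line fit of root-to-tip divergence against sampling date, and the x-intercept estimates the age of the root. A minimal sketch with invented dates and divergences:

```python
import numpy as np

# hypothetical tip sampling dates and root-to-tip genetic distances (subs/site)
dates = np.array([2000.0, 2002.0, 2005.0, 2008.0, 2011.0, 2014.0])
divergence = np.array([0.010, 0.014, 0.020, 0.026, 0.032, 0.038])

rate, intercept = np.polyfit(dates, divergence, 1)   # slope = subs/site/year
tmrca = -intercept / rate                            # date where divergence hits zero
```

    A positive, well-determined slope is also the usual informal check that a data set carries enough temporal signal for the model-based methods.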

  10. Solar cells, structures including organometallic halide perovskite monocrystalline films, and methods of preparation thereof

    KAUST Repository

    Bakr, Osman M.

    2017-03-02

    Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making organometallic halide perovskite monocrystalline film, and the like.

  11. Seismic assessment of a site using the time series method

    International Nuclear Information System (INIS)

    Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.

    1997-01-01

    To increase the safety of an NPP located on a seismic site, the seismic acceleration level to which the NPP is qualified must be as representative as possible for that site, with a conservative but not excessive margin of safety. Treating the seismic events affecting the site as independent events and using statistical methods to define safety levels with very low annual occurrence probability (10^-4) may exaggerate the required seismic safety level. Imposing very high seismic accelerations derived from such hazard analyses may lead to very costly technical solutions that complicate plant operation and increase maintenance costs. Treating the seismic events as a time series with dependence among the events may yield a more representative assessment of the seismic activity at an NPP site and, consequently, a prognosis of the seismic levels for which the NPP should be ensured throughout its life-span. That prognosis should take into account the actual seismic activity (including small earthquakes in real time) of the foci that affect the plant site. The paper proposes the application of autoregressive time series to produce a prognosis of the seismic activity of a focus and presents an analysis, by this method, of the Vrancea focus that affects the NPP Cernavoda site. The paper also presents how the activity of a focus is analysed under the new approach and assesses the maximum seismic acceleration that may affect NPP Cernavoda throughout its life-span (∼ 30 years). Development and application of new mathematical analysis methods, for both long and short time intervals, may contribute significantly to forecasting future seismic events. (authors)
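The autoregressive treatment described can be sketched as follows: fit an AR(p) model to a series of seismic observations by least squares and use it for a one-step-ahead prognosis. The series below is entirely hypothetical, and this is a generic AR fit, not the authors' model of the Vrancea focus:

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model x_t = c + a_1*x_{t-1} + ... + a_p*x_{t-p}
    by ordinary least squares; returns (c, coeffs)."""
    x = np.asarray(series, dtype=float)
    rows = [x[i:i + p][::-1] for i in range(len(x) - p)]   # most recent value first
    A = np.column_stack([np.ones(len(rows)), np.array(rows)])
    beta, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return beta[0], beta[1:]

def forecast_one_step(series, c, coeffs):
    """One-step-ahead AR prognosis from the most recent p values."""
    recent = np.asarray(series[-len(coeffs):], dtype=float)[::-1]
    return c + recent @ coeffs

# Entirely hypothetical "seismic activity" series with AR(2) structure
rng = np.random.default_rng(1)
x = [5.0, 5.2]
for _ in range(500):
    x.append(2.6 + 0.3 * x[-1] + 0.2 * x[-2] + rng.normal(0, 0.1))

c, a = fit_ar(x, p=2)
x_next = forecast_one_step(x, c, a)   # prognosis for the next value
```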

  12. Time domain contact model for tyre/road interaction including nonlinear contact stiffness due to small-scale roughness

    Science.gov (United States)

    Andersson, P. B. U.; Kropp, W.

    2008-11-01

    Rolling resistance, traction, wear, excitation of vibrations, and noise generation are all factors to consider in the optimisation of the interaction between automotive tyres and the wearing courses of roads. The key to understanding and describing this interaction is to include a wide range of length scales in the description of the contact geometry. This means including scales on the order of micrometres that have been neglected in previous tyre/road interaction models. A time domain contact model for the tyre/road interaction that includes interfacial details is presented. The contact geometry is discretised into multiple elements forming pairs of matching points. The dynamic response of the tyre is calculated by convolving the contact forces with pre-calculated Green's functions. The smaller length scales are included by using constitutive interfacial relations, i.e. nonlinear contact springs, for each pair of contact elements. The method is presented for normal (out-of-plane) contact, and a method is suggested for assessing the stiffness of the nonlinear springs based on detailed geometry and elastic data of the tread. The governing equations of the nonlinear contact problem are solved with the Newton-Raphson iterative scheme. Relations between force, indentation, and contact stiffness are calculated for a single tread block in contact with a road surface. The calculated results have the same character as measurement results found in the literature. Comparison with traditional contact formulations shows that the effect of the small-scale roughness is large; the contact stiffness is only up to half of the stiffness that would result if contact were made over the whole element directly to the bulk of the tread. It is concluded that the suggested contact formulation is a suitable model for including more details of the contact interface. Further, the presented result for the tread block in contact with the road is suitable input for a global tyre/road interaction model.
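The normal-contact step (a nonlinear spring per element pair, solved with the Newton-Raphson scheme) can be illustrated as below. The Hertz-like force law k*(delta - g)^1.5 and all numerical values are assumptions made for this sketch, not the constitutive relation derived in the paper:

```python
import numpy as np

def contact_indentation(gaps, k, f_target, tol=1e-10, max_iter=100):
    """Newton-Raphson solution of the force balance for a block pressed
    onto a rough surface: element i engages once the rigid indentation
    delta exceeds its gap g_i and then carries k*(delta - g_i)**1.5.
    Solves sum_i k*(delta - g_i)_+**1.5 = f_target for delta."""
    g = np.asarray(gaps, dtype=float)
    delta = g.min() + 1e-6                       # start barely in contact
    for _ in range(max_iter):
        pen = np.clip(delta - g, 0.0, None)      # per-element penetration
        residual = k * np.sum(pen ** 1.5) - f_target
        slope = 1.5 * k * np.sum(np.sqrt(pen))   # d(residual)/d(delta)
        step = residual / slope
        delta -= step
        if abs(step) < tol:
            break
    return delta

# Hypothetical element gaps (m) and spring constant k (N/m^1.5)
gaps = np.array([0.0, 2e-4, 5e-4, 9e-4])
delta = contact_indentation(gaps, k=1e9, f_target=50.0)
```

The overall contact stiffness grows with indentation as more elements engage, qualitatively reproducing the progressive stiffening described above.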

  13. A variable-order time-dependent neutron transport method for nuclear reactor kinetics using analytically-integrated space-time characteristics

    International Nuclear Information System (INIS)

    Hoffman, A. J.; Lee, J. C.

    2013-01-01

    A new time-dependent neutron transport method based on the method of characteristics (MOC) has been developed. Whereas most spatial kinetics methods treat time dependence through temporal discretization, this new method treats time dependence by defining the characteristics to span space and time. In this implementation regions are defined in space-time where the thickness of the region in time fulfills an analogous role to the time step in discretized methods. The time dependence of the local source is approximated using a truncated Taylor series expansion with high order derivatives approximated using backward differences, permitting the solution of the resulting space-time characteristic equation. To avoid a drastic increase in computational expense and memory requirements due to solving many discrete characteristics in the space-time planes, the temporal variation of the boundary source is similarly approximated. This allows the characteristics in the space-time plane to be represented analytically rather than discretely, resulting in an algorithm comparable in implementation and expense to one that arises from conventional time integration techniques. Furthermore, by defining the boundary flux time derivative in terms of the preceding local source time derivative and boundary flux time derivative, the need to store angularly-dependent data is avoided without approximating the angular dependence of the angular flux time derivative. The accuracy of this method is assessed through implementation in the neutron transport code DeCART. The method is employed with variable-order local source representation to model a TWIGL transient. The results demonstrate that this method is accurate and more efficient than the discretized method. (authors)
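The local-source treatment (a truncated Taylor series in time with derivatives approximated by backward differences) can be illustrated on a scalar quantity; this generic sketch is not tied to the DeCART implementation:

```python
import math
import numpy as np

def backward_diff_derivatives(history, dt, order):
    """Approximate the time derivatives of a quantity at the newest point
    of `history` (values at equally spaced earlier times) using backward
    differences: k-th derivative ~ (k-th backward difference) / dt**k."""
    d = np.asarray(history, dtype=float)
    derivs = [d[-1]]
    for k in range(1, order + 1):
        d = d[1:] - d[:-1]                 # next-order difference table
        derivs.append(d[-1] / dt ** k)
    return derivs

def taylor_extrapolate(derivs, s):
    """Truncated Taylor expansion q(t+s) ~ sum_k q^(k)(t) * s**k / k!."""
    return sum(q * s ** k / math.factorial(k) for k, q in enumerate(derivs))

# q(t) = t^2 sampled at t = 0.1 ... 0.5; extrapolate to t = 0.6
history = [0.01, 0.04, 0.09, 0.16, 0.25]
derivs = backward_diff_derivatives(history, dt=0.1, order=2)
prediction = taylor_extrapolate(derivs, 0.1)   # exact value is 0.36
```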

  14. Comparative Analysis of Neural Network Training Methods in Real-time Radiotherapy

    Directory of Open Access Journals (Sweden)

    Nouri S.

    2017-03-01

    Full Text Available. Background: Motion of the body and tumor in regions such as the chest during radiotherapy treatment is a major concern in protecting normal tissues against high doses. By using the real-time radiotherapy technique, it is possible to increase the accuracy of the dose delivered to the tumor region by tracking markers on the patient's body. Objective: This study evaluates the accuracy of several artificial intelligence methods, including a neural network and its combinations with a genetic algorithm and with particle swarm optimization (PSO), in estimating tumor positions in real-time radiotherapy. Method: One hundred recorded signals from three external markers were used as input data. The signals from the 3 markers through 10 breathing cycles of a patient treated via a cyber-knife for a lung tumor were used as data input. The neural network method and its combinations with the genetic and PSO algorithms were then applied to determine the tumor locations using the MATLAB© software program. Results: Accuracies of 0.8%, 12% and 14% were obtained for the neural network, genetic and particle swarm optimization algorithms, respectively. Conclusion: The internal target volume (ITV) should be determined based on the neural network algorithm applied during the training steps.

  15. Embedded and real time system development a software engineering perspective concepts, methods and principles

    CERN Document Server

    Saeed, Saqib; Darwish, Ashraf; Abraham, Ajith

    2014-01-01

    Nowadays embedded and real-time systems contain complex software. The complexity of embedded systems is increasing, and the amount and variety of software in the embedded products are growing. This creates a big challenge for embedded and real-time software development processes and there is a need to develop separate metrics and benchmarks. “Embedded and Real Time System Development: A Software Engineering Perspective: Concepts, Methods and Principles” presents practical as well as conceptual knowledge of the latest tools, techniques and methodologies of embedded software engineering and real-time systems. Each chapter includes an in-depth investigation regarding the actual or potential role of software engineering tools in the context of the embedded system and real-time system. The book presents state-of-the art and future perspectives with industry experts, researchers, and academicians sharing ideas and experiences including surrounding frontier technologies, breakthroughs, innovative solutions and...

  16. Analysis of Longitudinal Studies With Repeated Outcome Measures: Adjusting for Time-Dependent Confounding Using Conventional Methods.

    Science.gov (United States)

    Keogh, Ruth H; Daniel, Rhian M; VanderWeele, Tyler J; Vansteelandt, Stijn

    2018-05-01

    Estimation of causal effects of time-varying exposures using longitudinal data is a common problem in epidemiology. When there are time-varying confounders affected by prior exposure, which may include past outcomes, standard regression methods can lead to bias. Methods such as inverse probability weighted estimation of marginal structural models have been developed to address this problem. However, in this paper we show how standard regression methods can be used, even in the presence of time-dependent confounding, to estimate the total effect of an exposure on a subsequent outcome by controlling appropriately for prior exposures, outcomes, and time-varying covariates. We refer to the resulting estimation approach as sequential conditional mean models (SCMMs), which can be fitted using generalized estimating equations. We outline this approach and describe how including propensity score adjustment is advantageous. We compare the causal effects being estimated using SCMMs and marginal structural models, and we compare the two approaches using simulations. SCMMs enable more precise inferences, with greater robustness against model misspecification via propensity score adjustment, and easily accommodate continuous exposures and interactions. A new test for direct effects of past exposures on a subsequent outcome is described.
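The core SCMM idea (regress the current outcome on the current exposure while conditioning on prior exposure, prior outcome, and the time-varying covariates) can be sketched on simulated data. Plain pooled least squares stands in here for the GEE fit described in the paper, and the data-generating model is invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, true_effect = 5000, 4, 0.5

# Hypothetical data-generating model: confounder L is affected by the
# previous exposure A; outcome Y depends on the current A and L.
L = rng.normal(0.0, 1.0, (n, T))
A = np.zeros((n, T))
Y = np.zeros((n, T))
for t in range(T):
    if t > 0:
        L[:, t] += 0.4 * A[:, t - 1]        # time-varying confounding
    A[:, t] = 0.6 * L[:, t] + rng.normal(0.0, 1.0, n)
    Y[:, t] = true_effect * A[:, t] + 0.8 * L[:, t] + rng.normal(0.0, 1.0, n)

# SCMM idea: for each t >= 1, regress Y_t on A_t while adjusting for
# L_t, the prior exposure A_{t-1} and the prior outcome Y_{t-1}; pool over t.
X_rows, y_rows = [], []
for t in range(1, T):
    X_rows.append(np.column_stack([A[:, t], L[:, t], A[:, t - 1], Y[:, t - 1],
                                   np.ones(n)]))
    y_rows.append(Y[:, t])
X, y = np.vstack(X_rows), np.concatenate(y_rows)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
effect_estimate = beta[0]               # coefficient on the current exposure
```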

  17. Nonlinear system identification NARMAX methods in the time, frequency, and spatio-temporal domains

    CERN Document Server

    Billings, Stephen A

    2013-01-01

    Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains describes a comprehensive framework for the identification and analysis of nonlinear dynamic systems in the time, frequency, and spatio-temporal domains. This book is written with an emphasis on making the algorithms accessible so that they can be applied and used in practice. Includes coverage of: The NARMAX (nonlinear autoregressive moving average with exogenous inputs) modelThe orthogonal least squares algorithm that allows models to be built term by
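The term-by-term model building the blurb refers to can be approximated with greedy forward selection over a dictionary of candidate lagged and nonlinear terms; ranking candidates by residual-sum-of-squares reduction is a simplified stand-in for the error reduction ratio used in the orthogonal least squares algorithm:

```python
import numpy as np

def forward_select(candidates, names, y, n_terms):
    """Greedy forward selection: repeatedly add the candidate regressor
    whose inclusion most reduces the residual sum of squares (a simplified
    stand-in for the error-reduction-ratio ranking of orthogonal least squares)."""
    chosen = []
    for _ in range(n_terms):
        best_j, best_rss = None, np.inf
        for j in range(candidates.shape[1]):
            if j in chosen:
                continue
            X = candidates[:, chosen + [j]]
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
    return [names[j] for j in chosen]

# Toy nonlinear system: y_k = 0.5*y_{k-1} + u_{k-1}^2 + small noise
rng = np.random.default_rng(3)
u = rng.uniform(-1.0, 1.0, 400)
y = np.zeros(400)
for k in range(1, 400):
    y[k] = 0.5 * y[k - 1] + u[k - 1] ** 2 + rng.normal(0.0, 0.01)

yk, y1, u1 = y[1:], y[:-1], u[:-1]
candidates = np.column_stack([y1, u1, u1 ** 2, y1 * u1, y1 ** 2])
names = ["y[k-1]", "u[k-1]", "u[k-1]^2", "y[k-1]*u[k-1]", "y[k-1]^2"]
selected = forward_select(candidates, names, yk, n_terms=2)
```

With this toy system the two true model terms are recovered from the five candidates.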

  18. Earthquake analysis of structures including structure-soil interaction by a substructure method

    International Nuclear Information System (INIS)

    Chopra, A.K.; Guttierrez, J.A.

    1977-01-01

    A general substructure method for analysing the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum, for example as a viscoelastic halfspace, or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods lead to equivalent results, but the latter provides the better alternative. Treating the free-field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumptions (regarding the type and direction of earthquake waves) required in the direct method. (Auth.)

  19. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H

    2012-01-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc., takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
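The separation the abstract describes (a Sequence that samples time values, plus per-quantity Time Features) can be sketched in a few lines. The feature names and functional forms below are hypothetical illustrations, not TOPAS parameter syntax:

```python
import math
import random

# Hypothetical "Time Features": each time-dependent quantity is a plain
# function of t (names and forms are illustrative, not TOPAS syntax).
time_features = {
    "wheel_angle_deg": lambda t: (360.0 * t / 0.1) % 360.0,   # 0.1 s period
    "beam_current_nA": lambda t: 2.0 + 1.5 * math.sin(2.0 * math.pi * t / 0.1),
    "water_column_cm": lambda t: 10.0 + 5.0 * t,              # linear drift
}

def run(total_time, n_samples, mode="sequential", seed=0):
    """The Sequence: sample time values either sequentially at equal
    increments or randomly from a uniform distribution, then evaluate
    every Time Feature at each sampled time."""
    rng = random.Random(seed)
    if mode == "sequential":
        times = [total_time * (i + 0.5) / n_samples for i in range(n_samples)]
    else:
        times = [rng.uniform(0.0, total_time) for _ in range(n_samples)]
    return [{name: f(t) for name, f in time_features.items()} for t in times]

history = run(total_time=1.0, n_samples=4)
```

Because every quantity is a function of the same sampled time, adding another time-dependent quantity costs one dictionary entry rather than a new simulation pass.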

  20. A quasi-static algorithm that includes effects of characteristic time scales for simulating failures in brittle materials

    KAUST Repository

    Liu, Jinxing

    2013-04-24

    When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of Telem, the characteristic releasing time of the internal forces of the broken elements, and Tlattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with Tload, the characteristic loading period. The load-unload (L-U) method is used for one extreme, Telem << Tlattice, whereas the force-release (F-R) method is used for the other, Telem >> Tlattice. For cases between these two extremes, we develop a new algorithm by combining the L-U and F-R trial displacement fields to construct a new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of Telem to Tlattice by adjusting their contributions in the trial displacement field. The material dependence of the snap-back instabilities is thereby implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, the effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different values of γ. Such a same-specimen-based study shows how the lattice behaves as the ratio of the L-U and F-R components changes.
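The combined trial field can be written as a convex combination of the two limiting trial displacement fields, with γ the snap-back parameter. Which extreme γ = 1 corresponds to is an assumption here (the abstract only states that the fields are linearly combined), and the subsequent re-equilibration of the lattice is omitted:

```python
import numpy as np

def combined_trial_field(u_lu, u_fr, gamma):
    """Convex combination of the load-unload and force-release trial
    displacement fields.  Here gamma = 1 is taken as the pure L-U limit
    and gamma = 0 as the pure F-R limit (an assumed convention; the
    lattice would subsequently be re-equilibrated from this trial state)."""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("gamma must lie in [0, 1]")
    return gamma * np.asarray(u_lu, dtype=float) + \
           (1.0 - gamma) * np.asarray(u_fr, dtype=float)

# Hypothetical nodal displacements after breaking an element
u_lu = np.array([0.0, 1.2, 2.5])
u_fr = np.array([0.0, 1.0, 2.0])
u_mid = combined_trial_field(u_lu, u_fr, gamma=0.5)
```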

  1. Time-frequency energy density precipitation method for time-of-flight extraction of narrowband Lamb wave detection signals

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y., E-mail: thuzhangyu@foxmail.com; Huang, S. L., E-mail: huangsling@tsinghua.edu.cn; Wang, S.; Zhao, W. [State Key Laboratory of Power Systems, Department of Electrical Engineering, Tsinghua University, Beijing 100084 (China)

    2016-05-15

    The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert–Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
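The processing chain (short-time Fourier transform, energy density at the excitation centre frequency, peak-time extraction) can be sketched with a numpy-only STFT on a synthetic toneburst. Peak picking of the energy-density curve replaces the interpolation and least-squares fitting steps of the paper:

```python
import numpy as np

def tof_from_energy_density(sig, fs, f0, win_len=256, hop=8):
    """Hann-windowed short-time Fourier transform evaluated at the
    excitation centre frequency f0; the peak time of the resulting
    energy-density curve is taken as the arrival time."""
    win = np.hanning(win_len)
    kernel = np.exp(-2j * np.pi * f0 * np.arange(win_len) / fs)
    starts = np.arange(0, len(sig) - win_len, hop)
    energy = np.array([abs(np.dot(sig[s:s + win_len] * win, kernel)) ** 2
                       for s in starts])
    centres = (starts + win_len / 2.0) / fs       # window-centre times
    return centres[np.argmax(energy)]

# Synthetic narrowband arrival: Gaussian-windowed 200 kHz toneburst
fs, f0, arrival = 5e6, 200e3, 150e-6
t = np.arange(0.0, 400e-6, 1.0 / fs)
sig = np.sin(2.0 * np.pi * f0 * (t - arrival)) * \
      np.exp(-((t - arrival) / 10e-6) ** 2)
tof = tof_from_energy_density(sig, fs, f0)
```

With these window parameters the time resolution of the energy-density curve is the hop interval (1.6 µs); the paper's interpolation to the exact centre frequency and least-squares fit refine this further.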

  2. Time-frequency energy density precipitation method for time-of-flight extraction of narrowband Lamb wave detection signals

    International Nuclear Information System (INIS)

    Zhang, Y.; Huang, S. L.; Wang, S.; Zhao, W.

    2016-01-01

    The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert–Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.

  3. Prospects of Frequency-Time Correlation Analysis for Detecting Pipeline Leaks by Acoustic Emission Method

    International Nuclear Information System (INIS)

    Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E

    2014-01-01

    In the current work, the relevance of developing nondestructive test methods for pipeline leak detection is considered. It is shown that acoustic emission testing is currently one of the most widespread leak detection methods. The main disadvantage of this method is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application to leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of a time-frequency correlation function is proposed. This function represents the correlation between the spectral components of the analyzed signals. In this work, the technique for calculating the time-frequency correlation function is described. Experimental data are presented that demonstrate the obvious advantage of the time-frequency correlation function over the simple correlation function: it is more effective in suppressing the noise components in the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying time-frequency correlation analysis to leak detection problems is the great number of calculations required, which may further increase pipeline inspection time. However, this drawback can be partially mitigated by the development and implementation of efficient algorithms (including parallel ones) for computing the fast Fourier transform using the computer's central processing unit and graphics processing unit.
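A simplified stand-in for the time-frequency correlation idea is to correlate only the spectral components inside the band occupied by the useful (leak) signal, so out-of-band noise never enters the correlation whose maximum locates the delay. All signal parameters below are invented:

```python
import numpy as np

def band_limited_delay(x, y, fs, f_lo, f_hi):
    """Estimate the delay of y relative to x by correlating only the
    spectral components inside [f_lo, f_hi]: the band-limited
    cross-spectrum is inverse-transformed and the lag of its maximum
    is returned."""
    n = len(x)
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    f = np.fft.rfftfreq(n, 1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    cross = np.zeros_like(X)
    cross[band] = np.conj(X[band]) * Y[band]   # band-limited cross-spectrum
    cc = np.fft.irfft(cross, n)
    lag = int(np.argmax(cc))
    if lag > n // 2:                           # map wrap-around to negative lags
        lag -= n
    return lag / fs

# Hypothetical leak signal confined to 0.5-1.5 kHz, seen by two noisy sensors
rng = np.random.default_rng(4)
fs, n, delay_samples = 10_000, 8192, 40
W = np.fft.rfft(rng.normal(0.0, 1.0, n))
f = np.fft.rfftfreq(n, 1.0 / fs)
W[(f < 500) | (f > 1500)] = 0.0                # band-limit the leak source
leak = np.fft.irfft(W, n)
x = leak + 0.5 * rng.normal(0.0, 1.0, n)
y = np.roll(leak, delay_samples) + 0.5 * rng.normal(0.0, 1.0, n)
delay = band_limited_delay(x, y, fs, 500.0, 1500.0)
```

The delay estimate, combined with the known propagation speed, gives the leak position between the two sensors, as in the classical correlation method.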

  4. Method and apparatus for real-time measurement of fuel gas compositions and heating values

    Science.gov (United States)

    Zelepouga, Serguei; Pratapas, John M.; Saveliev, Alexei V.; Jangale, Vilas V.

    2016-03-22

    An exemplary embodiment can be an apparatus for real-time, in situ measurement of gas compositions and heating values. The apparatus includes a near infrared sensor for measuring concentrations of hydrocarbons and carbon dioxide, a mid infrared sensor for measuring concentrations of carbon monoxide and a semiconductor based sensor for measuring concentrations of hydrogen gas. A data processor having a computer program for reducing the effects of cross-sensitivities of the sensors to components other than target components of the sensors is also included. Also provided are corresponding or associated methods for real-time, in situ determination of a composition and heating value of a fuel gas.

  5. A novel weight determination method for time series data aggregation

    Science.gov (United States)

    Xu, Paiheng; Zhang, Rong; Deng, Yong

    2017-09-01

    Aggregation in time series is of great importance in time series smoothing, prediction, and other time series analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced to the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.

  6. Fault detection of gearbox using time-frequency method

    Science.gov (United States)

    Widodo, A.; Satrijo, Dj.; Prahasto, T.; Haryanto, I.

    2017-04-01

    This research deals with fault detection and diagnosis of a gearbox using vibration signatures. In this work, fault detection and diagnosis are approached by employing a time-frequency method, and the results are compared with cepstrum analysis. Experimental work was conducted to acquire vibration signals through a self-designed gearbox test rig. This test rig is able to demonstrate normal and faulty gearbox conditions, i.e., wear and tooth breakage. Three accelerometers were used to acquire vibration signals from the gearbox, and an optical tachometer was used to measure shaft rotation speed. The results show that frequency domain analysis using the fast Fourier transform was less sensitive to the wear and tooth breakage conditions. However, the short-time Fourier transform method was able to monitor the faults in the gearbox. The wavelet transform (WT) method also showed good performance in gearbox fault detection from vibration signals after employing time synchronous averaging (TSA).
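The time synchronous averaging step mentioned at the end can be sketched directly: cut the vibration record into one-revolution segments (assumed here to be already resampled to a fixed number of samples per revolution using the tachometer signal) and average them:

```python
import numpy as np

def tsa(signal, samples_per_rev):
    """Time synchronous averaging: reshape the record into one-revolution
    segments and average them, attenuating all components that are not
    synchronous with shaft rotation."""
    n_rev = len(signal) // samples_per_rev
    segments = np.reshape(signal[:n_rev * samples_per_rev],
                          (n_rev, samples_per_rev))
    return segments.mean(axis=0)

# Synthetic gear-mesh vibration: 16 "teeth" per revolution buried in noise
rng = np.random.default_rng(5)
spr, n_rev = 200, 120
phase = 2.0 * np.pi * np.arange(spr * n_rev) / spr
signal = np.sin(16.0 * phase) + 1.5 * rng.normal(0.0, 1.0, spr * n_rev)
averaged = tsa(signal, spr)
```

Averaging 120 revolutions reduces the asynchronous noise by roughly a factor of sqrt(120), so the gear-mesh pattern that was invisible in the raw record stands out in the averaged revolution.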

  7. A time-delayed method for controlling chaotic maps

    International Nuclear Information System (INIS)

    Chen Maoyin; Zhou Donghua; Shang Yun

    2005-01-01

    Combining the repetitive learning strategy and the optimality principle, this Letter proposes a time-delayed method to control chaotic maps. This method can effectively stabilize unstable periodic orbits within chaotic attractors in the sense of least mean square. Numerical simulations of some chaotic maps verify the effectiveness of this method
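A time-delayed feedback for maps can be sketched on the logistic map. The control term K*(x_{n-1} - x_n) vanishes on a period-1 orbit, so the scheme stabilizes the map's own unstable fixed point without shifting it; the gain K = -0.7 is chosen here so the linearized controlled map is stable at r = 3.8. This Pyragas-type sketch illustrates the general idea of time-delayed control, not the Letter's specific least-mean-square scheme:

```python
def logistic(x, r=3.8):
    return r * x * (1.0 - x)

def controlled_orbit(x0, k=-0.7, r=3.8, n=500):
    """Iterate x_{n+1} = f(x_n) + k*(x_{n-1} - x_n).  The feedback term
    vanishes on a period-1 orbit, so the controller stabilizes the map's
    own unstable fixed point rather than an artificially shifted one."""
    xs = [x0, logistic(x0, r)]
    for _ in range(n):
        xs.append(logistic(xs[-1], r) + k * (xs[-2] - xs[-1]))
    return xs

x_star = 1.0 - 1.0 / 3.8     # unstable period-1 orbit of the logistic map
orbit = controlled_orbit(x_star + 0.05)
```

With k = -0.7 the linearization around x_star has eigenvalues of modulus sqrt(0.7), roughly 0.84, so a trajectory started near the orbit is captured; without control the same orbit wanders chaotically.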

  8. Testing the multi-configuration time-dependent Hartree-Fock method

    International Nuclear Information System (INIS)

    Zanghellini, Juergen; Kitzler, Markus; Brabec, Thomas; Scrinzi, Armin

    2004-01-01

    We test the multi-configuration time-dependent Hartree-Fock method as a new approach towards the numerical calculation of dynamical processes in multi-electron systems using the harmonic quantum dot and one-dimensional helium in strong laser pulses as models. We find rapid convergence for quantities such as ground-state population, correlation coefficient and single ionization towards the exact results. The method converges, where the time-dependent Hartree-Fock method fails qualitatively

  9. Transformation-cost time-series method for analyzing irregularly sampled data.

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, with associated costs, to transform the time series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.
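The transformation-cost construction can be sketched as follows; the operation set and costs used here (in-order point matching plus a fixed insert/delete penalty) are a simplified stand-in for those defined in the paper:

```python
import numpy as np

def segment_cost(seg_a, seg_b, lam=1.0):
    """Simplified transformation cost between two time-series segments,
    each given as (time, value) pairs with times relative to the segment
    start.  Points are matched in order; each matched pair costs its time
    shift plus its amplitude shift, and surplus points in the longer
    segment cost a fixed insert/delete penalty lam.  (The operation set
    and costs in the TACTS paper are richer; this is an illustrative
    stand-in.)"""
    a, b = np.asarray(seg_a, dtype=float), np.asarray(seg_b, dtype=float)
    m = min(len(a), len(b))
    shift_cost = np.abs(a[:m] - b[:m]).sum()
    return shift_cost + lam * abs(len(a) - len(b))

# Three consecutive segments of an irregularly sampled record
segments = [
    [(0.0, 1.0), (0.4, 1.2), (1.1, 0.9)],
    [(0.1, 1.1), (0.5, 1.3), (1.0, 1.0), (1.4, 0.8)],
    [(0.0, 1.0), (0.5, 1.2), (0.9, 1.1), (1.3, 0.9)],
]
# The transformation-cost series is regularly "sampled": one value per
# segment transition, ready for standard time-series analysis.
cost_series = [segment_cost(s, t) for s, t in zip(segments, segments[1:])]
```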

  10. Transformation-cost time-series method for analyzing irregularly sampled data

    Science.gov (United States)

    Ozken, Ibrahim; Eroglu, Deniz; Stemler, Thomas; Marwan, Norbert; Bagci, G. Baris; Kurths, Jürgen

    2015-06-01

    Irregular sampling of data sets is one of the challenges often encountered in time-series analysis, since traditional methods cannot be applied and the frequently used interpolation approach can corrupt the data and bias the subsequent analysis. Here we present the TrAnsformation-Cost Time-Series (TACTS) method, which allows us to analyze irregularly sampled data sets without degenerating the quality of the data set. Instead of using interpolation we consider time-series segments and determine how close they are to each other by determining the cost needed to transform one segment into the following one. Using a limited set of operations, with associated costs, to transform the time series segments, we determine a new time series: our transformation-cost time series. This cost time series is regularly sampled and can be analyzed using standard methods. While our main interest is the analysis of paleoclimate data, we develop our method using numerical examples like the logistic map and the Rössler oscillator. The numerical data allows us to test the stability of our method against noise and for different irregular samplings. In addition we provide guidance on how to choose the associated costs based on the time series at hand. The usefulness of the TACTS method is demonstrated using speleothem data from the Secret Cave in Borneo that is a good proxy for paleoclimatic variability in the monsoon activity around the maritime continent.

  11. Palmprint Verification Using Time Series Method

    Directory of Open Access Journals (Sweden)

    A. A. Ketut Agung Cahyawan Wiranatha

    2013-11-01

    The use of biometrics as an automatic recognition system is growing rapidly in solving security problems, and the palmprint is one of the most frequently used biometrics. This paper uses a two-step center-of-mass moment method for region of interest (ROI) segmentation and applies the time series method combined with a block window method as the feature representation. Normalized Euclidean distance is used to measure the similarity of two palmprint feature vectors. System testing was done using 500 palm samples, with 4 samples as reference images and 6 samples as test images. Experimental results show that the system can achieve high performance, with a success rate of about 97.33% (FNMR = 1.67%, FMR = 1.00%, T = 0.036).

  12. 20 CFR 617.35 - Time and method of payment.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Time and method of payment. 617.35 Section 617.35 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TRADE ADJUSTMENT ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Job Search Allowances § 617.35 Time and method...

  13. Non-linear shape functions over time in the space-time finite element method

    Directory of Open Access Journals (Sweden)

    Kacprzyk Zbigniew

    2017-01-01

    Full Text Available This work presents a generalisation of the space-time finite element method proposed by Kączkowski in his seminal works of the 1970s and early 1980s. Kączkowski used linear shape functions in time. The recurrence formula obtained by Kączkowski was conditionally stable. In this paper, non-linear shape functions in time are proposed.

  14. Singular perturbation methods for nonlinear dynamic systems with time delays

    International Nuclear Information System (INIS)

    Hu, H.Y.; Wang, Z.H.

    2009-01-01

    This review article surveys the recent advances in the dynamics and control of time-delay systems, with emphasis on singular perturbation methods such as the method of multiple scales, the method of averaging, and two newly developed methods, the energy analysis and the pseudo-oscillator analysis. Some examples are given to demonstrate the advantages of the methods. Comparisons with other methods show that these methods lead to easier computations and more accurate predictions of the local dynamics of time-delay systems near a Hopf bifurcation.

  15. Real-Time Pore Pressure Detection: Indicators and Improved Methods

    Directory of Open Access Journals (Sweden)

    Jincai Zhang

    2017-01-01

    Full Text Available High uncertainties may exist in the predrill pore pressure prediction in new prospects and deepwater subsalt wells; therefore, real-time pore pressure detection is highly needed to reduce drilling risks. The methods for pore pressure detection (the resistivity, sonic, and corrected d-exponent methods) are improved using the depth-dependent normal compaction equations to adapt to the requirements of the real-time monitoring. A new method is proposed to calculate pore pressure from the connection gas or elevated background gas, which can be used for real-time pore pressure detection. The pore pressure detection using the logging-while-drilling, measurement-while-drilling, and mud logging data is also implemented and evaluated. Abnormal pore pressure indicators from the well logs, mud logs, and wellbore instability events are identified and analyzed to interpret abnormal pore pressures for guiding real-time drilling decisions. The principles for identifying abnormal pressure indicators are proposed to improve real-time pore pressure monitoring.
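    The resistivity branch of such methods commonly builds on Eaton's equation with a normal-compaction trend; a sketch in gradient form (the exponent 1.2 is the customary choice, and the trend constants r0 and b are illustrative assumptions, not values from the paper):

```python
import math

def eaton_resistivity_pp(obg, pn, r_measured, r_normal, n=1.2):
    # Eaton's resistivity equation for the pore pressure gradient:
    # Pp = OBG - (OBG - Pn) * (R / Rn)^n
    return obg - (obg - pn) * (r_measured / r_normal) ** n

def normal_resistivity(depth, r0=1.0, b=0.0002):
    # Depth-dependent normal-compaction trend Rn(Z) = R0 * exp(b * Z).
    return r0 * math.exp(b * depth)
```

A measured resistivity below the normal trend then maps to an elevated pore pressure gradient.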

  16. A comparison of three time-domain anomaly detection methods

    Energy Technology Data Exchange (ETDEWEB)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).
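    Of the three methods, the sequential probability ratio test is the reported winner; a minimal sketch for detecting an increase in the residual standard deviation (the alternative sigma and the alarm threshold are illustrative settings, not values from the paper):

```python
import math

def sprt_std_change(residuals, sigma0=1.0, sigma1=1.5, alarm_threshold=5.0):
    """Sequential probability ratio test for a change in the standard
    deviation of (assumed zero-mean Gaussian) AR-model residuals.
    Returns the sample index of the first alarm, or None."""
    llr = 0.0
    for i, x in enumerate(residuals):
        # Log-likelihood ratio of N(0, sigma1^2) against N(0, sigma0^2).
        step = (math.log(sigma0 / sigma1)
                + x * x * (1.0 / (2 * sigma0**2) - 1.0 / (2 * sigma1**2)))
        llr = max(0.0, llr + step)   # one-sided test with reset at zero
        if llr > alarm_threshold:
            return i
    return None
```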

  17. A comparison of three time-domain anomaly detection methods

    International Nuclear Information System (INIS)

    Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.

    1996-01-01

    Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and, of minor importance, the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)

  18. A frequency domain linearized Navier-Stokes method including acoustic damping by eddy viscosity using RANS

    Science.gov (United States)

    Holmberg, Andreas; Kierkegaard, Axel; Weng, Chenyang

    2015-06-01

    In this paper, a method for including damping of acoustic energy in regions of strong turbulence is derived for a linearized Navier-Stokes method in the frequency domain. The proposed method is validated and analyzed in 2D only, although the formulation is fully presented in 3D. The result is applied in a study of the linear interaction between the acoustic and the hydrodynamic field in a 2D T-junction, subject to grazing flow at Mach 0.1. Part of the acoustic energy at the upstream edge of the junction is shed as harmonically oscillating disturbances, which are conveyed across the shear layer over the junction, where they interact with the acoustic field. As the acoustic waves travel in regions of strong shear, there is a need to include the interaction between the background turbulence and the acoustic field. For this purpose, the oscillation of the background turbulence Reynolds stress, due to the acoustic field, is modeled using an eddy Newtonian model assumption. The time averaged flow is first solved for using RANS along with a k-ε turbulence model. The spatially varying turbulent eddy viscosity is then added to the spatially invariant kinematic viscosity in the acoustic set of equations. The response of the 2D T-junction to an incident acoustic field is analyzed via a plane wave scattering matrix model, and the result is compared to experimental data for a T-junction of rectangular ducts. A strong improvement in the agreement between calculation and experimental data is found when the modification proposed in this paper is implemented. Discrepancies remaining are likely due to inaccuracies in the selected turbulence model, which is known to produce large errors, e.g. for flows with significant rotation, which the grazing flow across the T-junction certainly is. A natural next step is therefore to test the proposed methodology together with more sophisticated turbulence models.

  19. Lung lesion doubling times: values and variability based on method of volume determination

    International Nuclear Information System (INIS)

    Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory

    2008-01-01

    Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with ≥ two thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated, including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139 and 137 days, respectively. Lung cancer DTs ranged from 23 to 2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs.
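    The doubling times follow from the exponential-growth assumption applied to two volume measurements; a minimal sketch (the sphere formula illustrates one diameter-based volume calculation among the methods compared):

```python
import math

def doubling_time(v1, v2, days_between):
    # Exponential growth: DT = t * ln 2 / ln(V2 / V1).
    return days_between * math.log(2) / math.log(v2 / v1)

def sphere_volume(diameter):
    # Volume from a single diameter measurement, assuming a sphere --
    # one calculation-based alternative to direct volumetry.
    return math.pi * diameter**3 / 6.0
```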

  20. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal-to-noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft shrinkage real-time denoising method based on the lifting wavelet transform is proposed. The moving data window technique realizes real-time wavelet denoising, employing a wavelet transform based on the lifting scheme to reduce computational complexity. The hyperbolic threshold function and recursive threshold computation ensure the dynamic characteristics of the system and improve real-time computational efficiency. The simulation results show that the semisoft shrinkage real-time denoising method performs well in comparison to the traditional methods, namely soft-thresholding and hard-thresholding. Therefore, this method can solve more practical engineering problems.
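    Semisoft (firm) shrinkage sits between soft- and hard-thresholding; a sketch of the standard two-threshold form (the paper's hyperbolic threshold function and recursive threshold update are not reproduced here):

```python
import numpy as np

def semisoft_threshold(x, t1, t2):
    """Semisoft (firm) shrinkage: zero below t1, identity above t2, and a
    linear ramp in between -- a compromise between the bias of soft- and
    the discontinuity of hard-thresholding."""
    x = np.asarray(x, float)
    return np.where(np.abs(x) <= t1, 0.0,
           np.where(np.abs(x) > t2, x,
                    np.sign(x) * t2 * (np.abs(x) - t1) / (t2 - t1)))
```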

  1. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
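    As a concrete instance of the family discussed, Kutta's classic third-order Runge-Kutta scheme (one possible member, not necessarily among the report's five examples) can be written as:

```python
def rk3_step(f, t, y, h):
    """One step of Kutta's classic third-order Runge-Kutta method for
    y' = f(t, y): three stages, global error O(h^3)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h, y + h * (2 * k2 - k1))
    return y + h * (k1 + 4 * k2 + k3) / 6
```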

  2. Time-frequency energy density precipitation method for time-of-flight extraction of narrowband Lamb wave detection signals.

    Science.gov (United States)

    Zhang, Y; Huang, S L; Wang, S; Zhao, W

    2016-05-01

    The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert-Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight of the narrowband Lamb wave detection signals.
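    The extraction pipeline can be sketched as follows; for brevity this uses a rectangular STFT window and omits the least-squares fitting step described above, so it is an illustrative reduction of the method:

```python
import numpy as np

def energy_density_curve(signal, fs, fc, win=64, hop=8):
    """Time-domain energy density at the centre frequency fc, obtained
    from a single-frequency short-time Fourier transform."""
    n = np.arange(win)
    kernel = np.exp(-2j * np.pi * fc * n / fs)   # one DFT row at fc
    starts = range(0, len(signal) - win + 1, hop)
    times = np.array([(s + win / 2) / fs for s in starts])
    energy = np.array([abs(np.dot(signal[s:s + win], kernel))**2
                       for s in starts])
    return times, energy

def time_of_flight(signal, fs, fc, t_excitation=0.0, **kw):
    # Peak time of the energy-density curve relative to the initial pulse.
    times, energy = energy_density_curve(signal, fs, fc, **kw)
    return times[np.argmax(energy)] - t_excitation
```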

  3. An Optimization Method of Time Window Based on Travel Time and Reliability

    Directory of Open Access Journals (Sweden)

    Fengjie Fu

    2015-01-01

    Full Text Available The dynamic change of urban road travel time was analyzed using video image detector data, and it showed cyclic variation; therefore, the signal cycle length at the upstream intersection was taken as the basic unit of the time window. There was some evidence of bimodality in the actual travel time distributions; therefore, the parameters of a bimodal travel time distribution were estimated using the EM algorithm. The weighted average of the two means was taken as the travel time estimate, and the Modified Buffer Time Index (MBIT) expressed travel time variability. Based on the characteristics of travel time change and of the MBIT under different time windows, the time window was optimized dynamically for minimum MBIT, requiring that the travel time change stay below a threshold value and that traffic incidents can be detected in real time. Finally, travel times on Shandong Road in Qingdao were estimated for every 10 s, 120 s, the optimal time windows, and 480 s; the comparisons demonstrate that travel time estimation with optimal time windows can accurately and stably reflect real-time traffic. This verifies the effectiveness of the optimization method.
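    The abstract does not define the modification in the Modified Buffer Time Index, so the sketch below uses the classic Buffer Time Index and a minimum-index window selection as stand-ins:

```python
import numpy as np

def buffer_time_index(travel_times, pct=95):
    # Classic BTI: extra (buffer) time a traveller must budget on top of
    # the average trip, as a fraction of the average.
    t = np.asarray(travel_times, float)
    return (np.percentile(t, pct) - t.mean()) / t.mean()

def best_window(series, window_sizes):
    """Pick the window length whose window-averaged travel times give the
    smallest index, mirroring the minimum-MBIT criterion."""
    def index_for(w):
        n = len(series) // w * w
        means = np.asarray(series[:n]).reshape(-1, w).mean(axis=1)
        return buffer_time_index(means)
    return min(window_sizes, key=index_for)
```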

  4. Change Semantic Constrained Online Data Cleaning Method for Real-Time Observational Data Stream

    Science.gov (United States)

    Ding, Yulin; Lin, Hui; Li, Rongrong

    2016-06-01

    In order to achieve the best generalization error, it is an important challenge for the data cleaning methodology to be able to characterize the behavior of data stream distributions and adaptively update a model to include new information and remove old information. However, the complicated changing properties of the data invalidate traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and drive the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change-semantics-constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, this paper embeds a semantic description, which quantitatively depicts the change patterns in the data distribution, to self-adapt the filter parameter automatically. Real-time observational water level data streams from different precipitation scenarios are selected for testing. Experimental results prove that by means of this method, more accurate and reliable water level information can be obtained, supporting prompt and well-founded flood assessment and decision-making.

  5. CHANGE SEMANTIC CONSTRAINED ONLINE DATA CLEANING METHOD FOR REAL-TIME OBSERVATIONAL DATA STREAM

    Directory of Open Access Journals (Sweden)

    Y. Ding

    2016-06-01

    data streams, which may lead to large estimation error. In order to achieve the best generalization error, it is an important challenge for the data cleaning methodology to be able to characterize the behavior of data stream distributions and adaptively update a model to include new information and remove old information. However, the complicated changing properties of the data invalidate traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and drive the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change-semantics-constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, this paper embeds a semantic description, which quantitatively depicts the change patterns in the data distribution, to self-adapt the filter parameter automatically. Real-time observational water level data streams from different precipitation scenarios are selected for testing. Experimental results prove that by means of this method, more accurate and reliable water level information can be obtained, supporting prompt and well-founded flood assessment and decision-making.
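    A toy version of a change-adaptive filter conveys the principle that the filter parameter should follow the data's change patterns; the adaptation rule below is an assumption standing in for the paper's semantic change descriptors:

```python
def adaptive_filter(stream, base_alpha=0.1, gain=0.8):
    """Change-adaptive exponential filter: the smoothing weight grows when
    an incoming sample departs strongly from the current estimate, so
    genuine regime changes (e.g. a flood-driven water-level rise) pass
    through quickly while small noise is smoothed away."""
    est, scale, out = stream[0], 1e-9, [stream[0]]
    for x in stream[1:]:
        dev = abs(x - est)
        scale = 0.9 * scale + 0.1 * dev                 # running deviation scale
        alpha = min(1.0, base_alpha + gain * dev / (dev + scale))
        est = est + alpha * (x - est)
        out.append(est)
    return out
```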

  6. A simple method to calculate first-passage time densities with arbitrary initial conditions

    Science.gov (United States)

    Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig

    2016-06-01

    Numerous applications all the way from biology and physics to economics depend on the density of first crossings over a boundary. Motivated by the lack of general purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a new simple method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions as well as to deal with discrete time and non-smooth continuous time processes. We derive a closed form expression for the FPTD in z and Laplace-transform space to a boundary in one dimension. Two classes of problems are analysed in detail: discrete time symmetric random walks (Markovian) and continuous time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
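    Closed-form results of this kind are typically checked against simulation; a minimal Monte Carlo estimator of the first-passage time density for a discrete-time symmetric random walk with an arbitrary starting point:

```python
import random

def fptd_random_walk(start, boundary, n_steps, n_walkers=20000, seed=1):
    """Monte Carlo estimate of the first-passage time density of a
    discrete-time symmetric random walk (+1/-1 steps) to an absorbing
    boundary; index t of the returned list is the probability of first
    passage at step t."""
    rng = random.Random(seed)
    counts = [0] * (n_steps + 1)
    for _ in range(n_walkers):
        x = start
        for t in range(1, n_steps + 1):
            x += 1 if rng.random() < 0.5 else -1
            if x >= boundary:
                counts[t] += 1
                break
    return [c / n_walkers for c in counts]
```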

  7. Electron-phonon thermalization in a scalable method for real-time quantum dynamics

    Science.gov (United States)

    Rizzi, Valerio; Todorov, Tchavdar N.; Kohanoff, Jorge J.; Correa, Alfredo A.

    2016-01-01

    We present a quantum simulation method that follows the dynamics of out-of-equilibrium many-body systems of electrons and oscillators in real time. Its cost is linear in the number of oscillators and it can probe time scales from attoseconds to hundreds of picoseconds. Contrary to Ehrenfest dynamics, it can thermalize starting from a variety of initial conditions, including electronic population inversion. While an electronic temperature can be defined in terms of a nonequilibrium entropy, a Fermi-Dirac distribution in general emerges only after thermalization. These results can be used to construct a kinetic model of electron-phonon equilibration based on the explicit quantum dynamics.

  8. A prediction method based on wavelet transform and multiple models fusion for chaotic time series

    International Nuclear Information System (INIS)

    Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha

    2017-01-01

    In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized for predictive model parameter optimization, while an autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is less than that of any single model, and the prediction accuracy is improved. The simulation results are compared on two typical chaotic time series, the Lorenz and Mackey–Glass time series, and show that the proposed method achieves better prediction performance.
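    The fusion step corresponds to inverse-variance (Gauss–Markov) weighting of unbiased predictors, shown here for two models:

```python
def gauss_markov_fusion(pred_a, var_a, pred_b, var_b):
    """Fuse two unbiased predictions by inverse-variance weighting; the
    fused variance is never larger than either input variance."""
    w_a = var_b / (var_a + var_b)
    w_b = var_a / (var_a + var_b)
    fused = w_a * pred_a + w_b * pred_b
    fused_var = var_a * var_b / (var_a + var_b)
    return fused, fused_var
```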

  9. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

    Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates to both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.

  10. A new integrated dual time-point amyloid PET/MRI data analysis method

    International Nuclear Information System (INIS)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco; Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama; Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo; Frigo, Anna Chiara

    2017-01-01

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between age

  11. A new integrated dual time-point amyloid PET/MRI data analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco [University Hospital of Padua, Nuclear Medicine Unit, Department of Medicine - DIMED, Padua (Italy); Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama [Leipzig University, Department of Nuclear Medicine, Leipzig (Germany); Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo [University Hospital of Padua, Neurology, Department of Neurosciences (DNS), Padua (Italy); Frigo, Anna Chiara [University Hospital of Padua, Biostatistics, Epidemiology and Public Health Unit, Department of Cardiac, Thoracic and Vascular Sciences, Padua (Italy)

    2017-11-15

    In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (18F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between

  12. A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method

    Science.gov (United States)

    Zhan, Lei; Xiong, Juntao; Liu, Feng

    2016-05-01

    The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.

  13. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  14. A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.

    Science.gov (United States)

    Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang

    2015-11-13

    Vehicle positioning technology has drawn more and more attention in vehicular wireless networks to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on the IEEE 802.11p standard, which includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results are presented for the International Telecommunications Union (ITU) vehicular multipath channel, and show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
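    The three steps can be sketched as follows; the mapping from skewness to the dynamic threshold is an illustrative assumption, not the paper's exact rule:

```python
import numpy as np

def estimate_arrival(received, preamble, group=4):
    """(1) Cross-correlate the received samples with the known short
    preamble, (2) sum the correlation magnitudes in groups, (3) return
    the sample index of the first group exceeding a dynamic threshold
    chosen from the skewness of the group values (a strong positive skew
    signals that a correlation peak is present)."""
    corr = np.abs(np.correlate(received, preamble, mode="valid"))
    n = len(corr) // group * group
    groups = corr[:n].reshape(-1, group).sum(axis=1)
    mu, sd = groups.mean(), groups.std()
    skew = ((groups - mu) ** 3).mean() / sd**3 if sd > 0 else 0.0
    alpha = 3.0 if skew > 1.0 else 5.0   # assumed skewness-to-threshold rule
    above = np.nonzero(groups > mu + alpha * sd)[0]
    return int(above[0] * group) if len(above) else None
```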

  15. A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks

    Directory of Open Access Journals (Sweden)

    Xuerong Cui

    2015-11-01

Full Text Available Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a way to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or ranges between vehicles based on the IEEE 802.11p standard. The method includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each neighboring vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results are presented for the International Telecommunications Union (ITU) vehicular multipath channel, and they show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.

  16. Multiple time-scale optimization scheduling for islanded microgrids including PV, wind turbine, diesel generator and batteries

    DEFF Research Database (Denmark)

    Xiao, Zhao xia; Nan, Jiakai; Guerrero, Josep M.

    2017-01-01

A multiple time-scale optimization scheduling scheme, covering day-ahead and short-term horizons, for an islanded microgrid is presented. In this paper, the microgrid under study includes photovoltaics (PV), a wind turbine (WT), a diesel generator (DG), batteries, and shiftable loads. The study considers the maximum efficiency operation area for the diesel engine and the cost of battery charge/discharge cycle losses. The day-ahead generation scheduling takes the minimum operational cost and the maximum load satisfaction as the objective function. Short-term optimal dispatch is based on minimizing...
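In its simplest form, a day-ahead dispatch of this kind reduces to a linear program. The sketch below is only a toy analogue of the scheduling model summarized above: the 4-hour horizon, load/renewable profiles, diesel and battery ratings, costs, and unit battery efficiency are all assumptions.

```python
import numpy as np
from scipy.optimize import linprog

T = 4                                          # toy 4-hour horizon
load = np.array([3.0, 4.0, 5.0, 4.5])          # MW demand (assumed)
pv   = np.array([0.0, 1.5, 2.5, 1.0])          # MW available PV (assumed)
wt   = np.array([1.0, 0.5, 0.5, 1.0])          # MW available wind (assumed)
dg_cost, cyc_cost = 50.0, 2.0                  # $/MWh diesel fuel, toy battery wear cost
dg_max, bat_p, cap, soc0 = 5.0, 2.0, 4.0, 2.0  # diesel limit, battery power/energy limits

# decision variables x = [dg(0..T-1), charge(0..T-1), discharge(0..T-1)]
c = np.r_[np.full(T, dg_cost), np.full(2 * T, cyc_cost)]

# hourly power balance: dg - ch + dis = load - pv - wt
A_eq = np.hstack([np.eye(T), -np.eye(T), np.eye(T)])
b_eq = load - pv - wt

# battery energy limits: 0 <= soc0 + cumsum(ch - dis) <= cap (unit efficiency)
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([np.zeros((T, T)),  L, -L]),   # soc <= cap
                  np.hstack([np.zeros((T, T)), -L,  L])])  # -soc <= 0
b_ub = np.r_[np.full(T, cap - soc0), np.full(T, soc0)]

bounds = [(0, dg_max)] * T + [(0, bat_p)] * (2 * T)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
dispatch = res.x[:T]                           # optimal diesel schedule
```

The real model in the record is richer (diesel efficiency regions, shiftable loads, multiple time scales), but the structure of balance and storage constraints is the same.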

  17. Applications of hybrid time-frequency methods in nonlinear structural dynamics

    International Nuclear Information System (INIS)

    Politopoulos, I.; Piteau, Ph.; Borsoi, L.; Antunes, J.

    2014-01-01

This paper presents a study of methods which may be used to compute the nonlinear response of systems whose linear properties are determined in the frequency or Laplace domain. Typically, this kind of situation may arise in soil-structure and fluid-structure interaction problems. In particular, three methods are investigated: (a) the hybrid time-frequency method, (b) the computation of the convolution integral, which requires an inverse Fourier or Laplace transform of the system's transfer function, and (c) the identification of an equivalent system defined in the time domain which may be solved with classical time integration methods. These methods are illustrated by their application to some simple, single-degree-of-freedom nonlinear systems, and their advantages and drawbacks are highlighted. (authors)
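The linear core of method (b), response via multiplication with a frequency-domain transfer function followed by an inverse transform, can be sketched for a single-degree-of-freedom system. The mass, damping, and stiffness values are assumed for illustration; the paper's actual concern is coupling such a step with nonlinear terms.

```python
import numpy as np

# assumed SDOF parameters: mass, viscous damping, stiffness (omega_n = 4 rad/s)
m, c, k = 1.0, 0.4, 16.0
dt, n = 0.01, 4096
x = np.zeros(n)
x[0] = 1.0 / dt                      # discrete unit impulse forcing

# receptance transfer function H(w) on the FFT frequency grid
w = 2 * np.pi * np.fft.rfftfreq(n, dt)
H = 1.0 / (k - m * w**2 + 1j * c * w)

# frequency-domain convolution: y(t) = IFFT( H(w) * X(w) )
y = np.fft.irfft(np.fft.rfft(x) * H, n)
```

Because the FFT implements a circular convolution, the record must be long enough for the response to decay, which is the kind of practical caveat the hybrid methods above have to manage.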

  18. Earthquake analysis of structures including structure-soil interaction by a substructure method

    International Nuclear Information System (INIS)

    Chopra, A.K.; Guttierrez, J.A.

    1977-01-01

A general substructure method for analyzing the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum (for example, as a viscoelastic halfspace) or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods would lead to equivalent results, but the latter provides the better alternative. Treating the free-field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumption (regarding the type and direction of earthquake waves) required in the direct method. The substructure method is computationally efficient because the two substructures (the structure and the soil region) are analyzed separately; more importantly, it permits taking advantage of the fact that the response to earthquake ground motion is essentially contained in the lower few natural modes of vibration of the structure on a fixed base. For sites where essentially similar soils extend to large depths and there is no obvious rigid boundary such as a soil-rock interface, numerical results for the earthquake response of a nuclear reactor structure are presented to demonstrate that the commonly used finite element method may lead to unacceptable errors, while the substructure method leads to reliable results.

  19. Endurance time method for Seismic analysis and design of structures

    International Nuclear Information System (INIS)

    Estekanchi, H.E.; Vafai, A.; Sadeghazar, M.

    2004-01-01

In this paper, a new method for performance-based earthquake analysis and design is introduced. In this method, the structure is subjected to accelerograms that impose an increasing dynamic demand on the structure with time. Specified damage indexes are monitored up to the collapse level or another performance limit that defines the endurance limit point for the structure. A method for generating standard intensifying accelerograms is also described, and three accelerograms have been generated using it. Furthermore, the concept of Endurance Time is illustrated by applying these accelerograms to single- and multi-degree-of-freedom linear systems. The application of this method to the analysis of complex nonlinear systems is explained. The Endurance Time method provides a uniform approach to the seismic analysis and design of complex structures that can be applied in numerical and experimental investigations.

  20. Formal methods for dependable real-time systems

    Science.gov (United States)

    Rushby, John

    1993-01-01

The motivation for using formal methods to specify and reason about real-time properties is outlined, and approaches that have been proposed and used are sketched. The formal verifications of clock synchronization algorithms show that mechanically supported reasoning about complex real-time behavior is feasible. However, there has been a significant increase in the effectiveness of verification systems since those verifications were performed, and it is to be expected that verifications of comparable difficulty will become fairly routine. The current challenge lies in developing perspicuous and economical approaches to the formalization and specification of real-time properties.

  1. Real-time biscuit tile image segmentation method based on edge detection.

    Science.gov (United States)

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify the produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
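The signal-change idea can be illustrated with a minimal row-scan segmentation on a synthetic frame. The image and gradient threshold are assumptions; the real BTS method adds contour tracing and GPU parallelism on top of this kind of edge detection.

```python
import numpy as np

# synthetic grayscale frame: dark background with a bright rectangular "tile"
img = np.full((40, 60), 20, dtype=np.uint8)
img[10:30, 15:45] = 200

mask = np.zeros(img.shape, dtype=bool)
thresh = 50                                  # assumed gradient threshold
for r in range(img.shape[0]):
    # signal change detection along one scan line of the image
    d = np.diff(img[r].astype(np.int16))
    edges = np.flatnonzero(np.abs(d) > thresh)
    if len(edges) >= 2:
        # mark everything between the first rising and last falling edge as tile
        mask[r, edges[0] + 1 : edges[-1] + 1] = True
```

Each row is independent, which is what makes this style of segmentation a natural fit for the per-row GPU parallelization mentioned in the abstract.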

  2. Application of the multi-objective cross-entropy method to the vehicle routing problem with soft time windows

    Directory of Open Access Journals (Sweden)

    C Hauman

    2014-06-01

    Full Text Available The vehicle routing problem with time windows is a widely studied problem with many real-world applications. The problem considered here entails the construction of routes that a number of identical vehicles travel to service different nodes within a certain time window. New benchmark problems with multi-objective features were recently suggested in the literature and the multi-objective optimisation cross-entropy method is applied to these problems to investigate the feasibility of the method and to determine and propose reference solutions for the benchmark problems. The application of the cross-entropy method to the multi-objective vehicle routing problem with soft time windows is investigated. The objectives that are evaluated include the minimisation of the total distance travelled, the number of vehicles and/or routes, the total waiting time and delay time of the vehicles and the makespan of a route.
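The plain (single-objective) cross-entropy method on a routing-type problem can be sketched on a toy traveling-salesman instance: sample tours from a transition probability matrix, keep an elite fraction, and re-fit the matrix to the elite samples. The instance, sample size, elite fraction, and smoothing factor are all assumptions; the multi-objective VRP version studied in the paper is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
pts = rng.random((n, 2))                        # hypothetical node coordinates
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def tour_length(t):
    return sum(D[t[i], t[(i + 1) % n]] for i in range(n))

P = np.full((n, n), 1.0 / n)                    # transition matrix, initially uniform

def sample_tour():
    t = [0]
    for _ in range(n - 1):
        p = P[t[-1]].copy()
        p[t] = 0.0                              # forbid revisiting nodes
        p /= p.sum()
        t.append(int(rng.choice(n, p=p)))
    return t

N, rho, alpha = 200, 0.1, 0.7                   # sample size, elite fraction, smoothing
for _ in range(50):
    tours = [sample_tour() for _ in range(N)]
    lengths = np.array([tour_length(t) for t in tours])
    elite = [tours[i] for i in np.argsort(lengths)[: int(rho * N)]]
    # update transition frequencies from the elite samples
    F = np.zeros((n, n))
    for t in elite:
        for i in range(n):
            F[t[i], t[(i + 1) % n]] += 1
    F /= F.sum(axis=1, keepdims=True)
    P = alpha * F + (1 - alpha) * P             # smoothed cross-entropy update
```

In the multi-objective setting of the paper, the elite-selection step is what changes: solutions are ranked by dominance over several objectives rather than by one length.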

  3. An analytical nodal method for time-dependent one-dimensional discrete ordinates problems

    International Nuclear Information System (INIS)

    Barros, R.C. de

    1992-01-01

In recent years, relatively little work has been done in developing time-dependent discrete ordinates (S_N) computer codes. Therefore, the topic of time integration methods certainly deserves further attention. In this paper, we describe a new coarse-mesh method for time-dependent monoenergetic S_N transport problems in slab geometry. This numerical method preserves the analytic solution of the transverse-integrated S_N nodal equations by constants, so we call our method the analytical constant nodal (ACN) method. For time-independent S_N problems in finite slab geometry and for time-dependent infinite-medium S_N problems, the ACN method generates numerical solutions that are completely free of truncation errors. Based on this positive feature, we expect the ACN method to be more accurate than conventional numerical methods for S_N transport calculations on coarse space-time grids.

  4. Performing dynamic time history analyses by extension of the response spectrum method

    International Nuclear Information System (INIS)

    Hulbert, G.M.

    1983-01-01

A method is presented to calculate the dynamic time history response of finite-element models using results from response spectrum analyses. The proposed modified time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach, along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem.

  5. Some observations concerning blade-element-momentum (BEM) methods and vortex wake methods, including numerical experiments with a simple vortex model

    Energy Technology Data Exchange (ETDEWEB)

    Snel, H. [Netherlands Energy Research Foundation ECN, Renewable Energy, Wind Energy (Netherlands)

    1997-08-01

Recently the Blade Element Momentum (BEM) method has been made more versatile. Inclusion of rotational effects on time-averaged profile coefficients has improved its results for performance calculations in stalled flow. Time dependence as a result of turbulent inflow, pitching actions and yawed operation is now treated more correctly (although more improvement is needed) than before. It is of interest to note that adaptations in the modelling of unsteady or periodic induction stem from qualitative and quantitative insights obtained from free vortex models. Free vortex methods and, further into the future, Navier-Stokes (NS) calculations, together with wind tunnel and field experiments, can be very useful in enhancing the potential of BEM for aero-elastic response calculations. It must be kept in mind, however, that extreme caution must be used with free vortex methods, as will be discussed in the following chapters. A discussion of the shortcomings and the strengths of BEM and of vortex wake models is given. Some ideas are presented on how BEM might be improved without too much loss of efficiency. (EG)

  6. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    Science.gov (United States)

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines needs a bulk of reference substances to identify the chromatographic peaks accurately, but reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, due to the error between the measured retention time (t_R) and the predicted t_R in some cases. Therefore, it is useful to develop an alternative, simple method for accurate prediction of t_R. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The t_R of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
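The two-point linear calibration at the heart of LCTRS can be sketched as follows. The retention times below are hypothetical: two reference substances are measured on the working column, a line is fitted through their standard-versus-observed retention times, and other compounds are predicted from that line.

```python
# standard (published) retention times of two reference substances, in minutes
t_std_ref = (5.2, 18.6)          # hypothetical values
# the same two references as measured on the analyst's own column
t_obs_ref = (5.9, 20.1)          # hypothetical values

# fit t_obs = a * t_std + b through the two reference points
a = (t_obs_ref[1] - t_obs_ref[0]) / (t_std_ref[1] - t_std_ref[0])
b = t_obs_ref[0] - a * t_std_ref[0]

def predict(t_std):
    """Predicted retention time on the working column for a compound
    whose standard retention time is t_std."""
    return a * t_std + b
</```

The validation step described in the abstract would then regress the predictions against multiple measured compounds to confirm the linear relationship holds on that column.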

  7. An iterated Radau method for time-dependent PDE's

    NARCIS (Netherlands)

    S. Pérez-Rodríguez; S. González-Pinto; B.P. Sommeijer (Ben)

    2008-01-01

This paper is concerned with the time integration of semi-discretized, multi-dimensional PDEs of advection-diffusion-reaction type. To cope with the stiffness of these ODEs, an implicit method has been selected, viz., the two-stage, third-order Radau IIA method. The main topic of this

  8. Sharp Penalty Term and Time Step Bounds for the Interior Penalty Discontinuous Galerkin Method for Linear Hyperbolic Problems

    NARCIS (Netherlands)

    Geevers, Sjoerd; van der Vegt, J.J.W.

    2017-01-01

We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured

  9. Efficient Time-Domain Ray-Tracing Technique for the Analysis of Ultra-Wideband Indoor Environments including Lossy Materials and Multiple Effects

    Directory of Open Access Journals (Sweden)

    F. Saez de Adana

    2009-01-01

Full Text Available This paper presents an efficient application of the Time-Domain Uniform Theory of Diffraction (TD-UTD) to the analysis of Ultra-Wideband (UWB) mobile communications in indoor environments. The classical TD-UTD formulation is modified to include the contribution of lossy materials and multiple-ray interactions with the environment. The electromagnetic analysis is combined with a ray-tracing acceleration technique to treat realistic and complex environments. The validity of this method is tested with measurements performed inside the Polytechnic building of the University of Alcala, which show good performance of the model for the analysis of UWB propagation.

  10. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups

    NARCIS (Netherlands)

    Harst-Wintraecken, van der Eugenie; Potting, José; Kroeze, Carolien

    2016-01-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of

  11. A time-domain method to generate artificial time history from a given reference response spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)

    2016-06-15

    Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
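The "straightforward" direction mentioned above, computing a response spectrum from a given time history, can be sketched by time stepping a family of damped single-degree-of-freedom oscillators with the central-difference scheme. The input record, periods, and damping ratio are assumed for illustration.

```python
import numpy as np

def response_spectrum(acc, dt, periods, zeta=0.05):
    """Peak relative displacement of damped SDOF oscillators (unit mass)
    driven by ground acceleration `acc`, via central-difference stepping.
    Stability of the explicit scheme requires dt < T/pi for every period T."""
    Sd = np.empty(len(periods))
    for j, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        c, k = 2.0 * zeta * wn, wn ** 2
        a0 = 1.0 / dt**2 + c / (2.0 * dt)
        u_prev = u = umax = 0.0
        for p in -acc[:-1]:                      # effective force per unit mass
            u_next = (p - (k - 2.0 / dt**2) * u
                      - (1.0 / dt**2 - c / (2.0 * dt)) * u_prev) / a0
            u_prev, u = u, u_next
            umax = max(umax, abs(u))
        Sd[j] = umax
    return Sd

# demo: a harmonic record should excite the matching-period oscillator most
dt = 0.005
t = np.arange(0.0, 10.0, dt)
acc = np.sin(2.0 * np.pi * t / 0.5)              # 0.5 s forcing period
periods = np.array([0.2, 0.35, 0.5, 0.8, 1.2])
Sd = response_spectrum(acc, dt, periods)
```

The inverse problem treated in the paper, adjusting a time history until its spectrum envelops a target, repeatedly evaluates exactly this forward calculation.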

  12. A time-domain method to generate artificial time history from a given reference response spectrum

    International Nuclear Information System (INIS)

    Shin, Gang Sik; Song, Oh Seop

    2016-01-01

Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.

  13. The development of efficient numerical time-domain modeling methods for geophysical wave propagation

    Science.gov (United States)

    Zhu, Lieyuan

This Ph.D. dissertation focuses on the numerical simulation of geophysical wave propagation in the time domain, including elastic waves in solid media, acoustic waves in fluid media, and electromagnetic waves in dielectric media. This thesis shows that a linear system model can accurately describe the physical processes of geophysical wave propagation and can be used as a sound basis for modeling geophysical wave propagation phenomena. The generalized stability condition for numerical modeling of wave propagation is therefore discussed in the context of linear system theory. The efficiency of a series of different time-domain numerical algorithms for modeling geophysical wave propagation is discussed and compared. These algorithms include the finite-difference time-domain (FDTD) method, the pseudospectral time-domain (PSTD) method, and the alternating direction implicit (ADI) finite-difference time-domain method. The advantages and disadvantages of these numerical methods are discussed, and the specific stability condition for each modeling scheme is carefully derived in the context of linear system theory. Based on the review and discussion of these existing approaches, the split-step, ADI pseudospectral time-domain (SS-ADI-PSTD) method is developed and tested for several cases. Moreover, the state-of-the-art stretched-coordinate perfectly matched layer (SCPML) has also been implemented in the SS-ADI-PSTD algorithm as the absorbing boundary condition for truncating the computational domain and absorbing artificial reflections from the domain boundaries. After the algorithmic development, a few case studies serve as real-world examples to verify the capacities of the numerical algorithms and to understand the capabilities and limitations of geophysical methods for the detection of subsurface contamination. The first case is a study using ground penetrating radar (GPR) amplitude variation with offset (AVO) for subsurface non-aqueous phase liquid (NAPL) contamination. The
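The basic finite-difference time-domain idea, with its CFL stability condition, can be sketched for the 1D scalar wave equation. Grid sizes, wave speed, and the source are assumptions; the dissertation's schemes (ADI, pseudospectral, SCPML boundaries) are refinements of this core update loop.

```python
import numpy as np

# 1D FDTD for the wave equation u_tt = v^2 u_xx (a toy seismic/EM analogue)
nx, nt = 300, 400
dx, v = 1.0, 1.0
dt = 0.5 * dx / v                # CFL condition: v*dt/dx <= 1 for stability
r2 = (v * dt / dx) ** 2

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                 # initial displacement pulse at the center

for _ in range(nt):
    u_next = np.zeros(nx)
    # second-order central differences in both time and space
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next        # fixed (u = 0) boundaries, so energy reflects
```

Replacing the reflecting boundaries with an absorbing layer such as the SCPML mentioned above is what lets such a grid emulate an unbounded subsurface.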

  14. MO-FG-BRA-03: A Novel Method for Characterizing Gating Response Time in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Wiersma, R; McCabe, B; Belcher, A; Jenson, P [The University of Chicago, Chicago, IL (United States); Smith, B [University Illinois at Chicago, Orland Park, IL (United States); Aydogan, B [The University of Chicago, Chicago, IL (United States); University Illinois at Chicago, Orland Park, IL (United States)

    2016-06-15

Purpose: Low temporal latency between a gating ON/OFF signal and the LINAC beam ON/OFF during respiratory gating is critical for patient safety. Current film-based methods to assess gating response have poor temporal resolution and are highly qualitative. We describe a novel method to precisely measure gating lag times at high temporal resolution and use it to characterize the temporal response of several gating systems. Methods: A respiratory gating simulator with an oscillating platform was modified to include a linear potentiometer for position measurement. A photon diode was placed at the linear accelerator isocenter for beam output measurement. The output signals of the potentiometer and diode were recorded simultaneously at 2500 Hz (0.4 millisecond (ms) sampling interval) with an analogue-to-digital converter (ADC). The technique was used on three commercial respiratory gating systems. The ON and OFF of the beam signal were located and compared to the expected gating window for both phase- and position-based gating, and the temporal lag times were extracted using a polynomial fit method. Results: A Varian RPM system with a monoscopic IR camera was measured to have mean beam ON and OFF lag times of 98.2 ms and 89.6 ms, respectively. A Varian RPM system with a stereoscopic IR camera was measured to have mean beam ON and OFF lag times of 86.0 ms and 44.0 ms, respectively. A Calypso magnetic fiducial tracking system was measured to have mean beam ON and OFF lag times of 209.0 ms and 60.0 ms, respectively. Conclusions: The novel method allowed quantitative determination of gating timing accuracy for several clinically used gating systems. All gating systems met the 100 ms TG-142 criterion for mean beam OFF times. For beam ON response, the Calypso system exceeded the recommended response time.

  15. Seismic assessment of a site using the time series method

    International Nuclear Information System (INIS)

    Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.

    2001-01-01

1. To increase the safety of an NPP located on a seismic site, the seismic acceleration level to which the NPP should be qualified must be as representative as possible of that site, with a conservative degree of safety but not an exaggerated one. 2. The consideration of the seismic events affecting the site as independent events, and the use of statistical methods to define safety levels with very low annual occurrence probabilities (10^-4), may lead to some exaggeration of the seismic safety level. 3. The use of very high values for the seismic accelerations imposed by the seismic safety levels required by the hazard analysis may lead to very expensive technical solutions that can make plant operation more difficult and increase maintenance costs. 4. The consideration of seismic events as a time series with dependence among the events may lead to a more representative assessment of the seismic activity of an NPP site, and consequently to a prognosis of the seismic level values for which the NPP would be ensured throughout its life-span. That prognosis should consider the actual seismic activity (including small earthquakes in real time) of the foci that affect the plant site. The method is useful for two purposes: a) research, i.e. homogenizing the historical data basis by generating earthquakes for periods lacking information and correlating them with the existing information, the aim being to perform the hazard analysis on a homogeneous data set in order to determine the seismic design data for a site; b) operation, i.e. issuing a prognosis of the seismic activity at a certain site and considering preventive measures to minimize the possible effects of an earthquake. 5. The paper proposes the application of autoregressive time series to issue a prognosis of the seismic activity of a focus, and presents the analysis of the Vrancea focus, which affects the Cernavoda NPP site, by this method. 6.
The paper also presents the

  16. Spectral methods for time dependent partial differential equations

    Science.gov (United States)

    Gottlieb, D.; Turkel, E.

    1983-01-01

The theory of spectral methods for time-dependent partial differential equations is reviewed. When the domain is periodic, Fourier methods are presented, while for nonperiodic problems both Chebyshev and Legendre methods are discussed. The theory is presented for both hyperbolic and parabolic systems, using both Galerkin and collocation procedures. While most of the review considers problems with constant coefficients, the extension to nonlinear problems is also discussed. Some results for problems with shocks are presented.

  17. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    Science.gov (United States)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, the normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron type of neural network and the multiple imputation strategy adopted by Markov Chain Monte Carlo based on expectation-maximization (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
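The normal ratio (NR) method named among the simple approaches can be sketched as follows: a missing target-station value is estimated from neighbor stations, each scaled by the ratio of station means over the jointly observed period. The station series are hypothetical, and the ratio-of-means scaling shown is the textbook form, not necessarily the exact variant used in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical monthly precipitation at a target station and two neighbours
base = 50 + 20 * np.sin(np.linspace(0, 4 * np.pi, 48))
target = base + rng.normal(0, 3, 48)
n1 = 1.2 * base + rng.normal(0, 3, 48)
n2 = 0.8 * base + rng.normal(0, 3, 48)

missing = [10, 25, 40]                     # months treated as missing at the target
obs = np.ones(48, bool)
obs[missing] = False

# normal-ratio estimate: scale each neighbour by the ratio of station means
# (means computed over the jointly observed months), then average
ratios = [target[obs].mean() / n[obs].mean() for n in (n1, n2)]
est = np.mean([r * n[missing] for r, n in zip(ratios, (n1, n2))], axis=0)
```

The computationally intensive EM-MCMC alternative discussed above replaces this fixed scaling with a full probability model iterated to convergence.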

  18. Timely disclosure of progress in long-term cancer survival: the boomerang method substantially improved estimates in a comparative study.

    Science.gov (United States)

    Brenner, Hermann; Jansen, Lina

    2016-02-01

    Monitoring cancer survival is a key task of cancer registries, but timely disclosure of progress in long-term survival remains a challenge. We introduce and evaluate a novel method, denoted "boomerang method," for deriving more up-to-date estimates of long-term survival. We applied three established methods (cohort, complete, and period analysis) and the boomerang method to derive up-to-date 10-year relative survival of patients diagnosed with common solid cancers and hematological malignancies in the United States. Using the Surveillance, Epidemiology and End Results 9 database, we compared the most up-to-date age-specific estimates that might have been obtained with the database including patients diagnosed up to 2001 with 10-year survival later observed for patients diagnosed in 1997-2001. For cancers with little or no increase in survival over time, the various estimates of 10-year relative survival potentially available by the end of 2001 were generally rather similar. For malignancies with strongly increasing survival over time, including breast and prostate cancer and all hematological malignancies, the boomerang method provided estimates that were closest to later observed 10-year relative survival in 23 of the 34 groups assessed. The boomerang method can substantially improve up-to-dateness of long-term cancer survival estimates in times of ongoing improvement in prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. The Application of Time-Frequency Methods to HUMS

    Science.gov (United States)

    Pryor, Anna H.; Mosher, Marianne; Lewicki, David G.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper reports the study of four time-frequency transforms applied to vibration signals and presents a new metric for comparing them for fault detection. The four methods to be described and compared are the Short Time Frequency Transform (STFT), the Choi-Williams Distribution (WV-CW), the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). Vibration data of bevel gear tooth fatigue cracks, under a variety of operating load levels, are analyzed using these methods. The new metric for automatic fault detection is developed and can be produced from any systematic numerical representation of the vibration signals. This new metric reveals indications of gear damage with all of the methods on this data set. Analysis with the CWT detects mechanical problems with the test rig not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic fault detection and to develop methods of setting the threshold for the metric.
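The first of the four transforms, the STFT, can be sketched as a Hann-windowed sliding FFT. The toy vibration signal with a transient burst is an assumption for illustration, not the paper's gear-tooth data, and the window/hop sizes are arbitrary.

```python
import numpy as np

def stft_mag(x, win=64, hop=32):
    """Magnitude STFT via a Hann-windowed sliding FFT (a generic sketch,
    not the paper's implementation)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# toy vibration signal: a carrier tone plus a short transient "fault" burst
fs = 1024.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 100 * t)
x[1000:1040] += 2 * np.sin(2 * np.pi * 300 * t[1000:1040])
S = stft_mag(x)                  # rows: time frames, columns: frequency bins
```

A fault-detection metric of the kind described above would then be a scalar summary of such a time-frequency map, e.g. tracking energy appearing away from the carrier tone.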

  20. A multi-domain spectral method for time-fractional differential equations

    Science.gov (United States)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.

  1. Self-consistent DFT +U method for real-space time-dependent density functional theory calculations

    Science.gov (United States)

    Tancogne-Dejean, Nicolas; Oliveira, Micael J. T.; Rubio, Angel

    2017-12-01

    We implemented various DFT+U schemes, including the Agapito, Curtarolo, and Buongiorno Nardelli functional (ACBN0) self-consistent density-functional version of the DFT +U method [Phys. Rev. X 5, 011006 (2015), 10.1103/PhysRevX.5.011006] within the massively parallel real-space time-dependent density functional theory (TDDFT) code octopus. We further extended the method to the case of the calculation of response functions with real-time TDDFT+U and to the description of noncollinear spin systems. The implementation is tested by investigating the ground-state and optical properties of various transition-metal oxides, bulk topological insulators, and molecules. Our results are found to be in good agreement with previously published results for both the electronic band structure and structural properties. The self-consistent calculated values of U and J are also in good agreement with the values commonly used in the literature. We found that the time-dependent extension of the self-consistent DFT+U method yields improved optical properties when compared to the empirical TDDFT+U scheme. This work thus opens a different theoretical framework to address the nonequilibrium properties of correlated systems.

  2. Suitability of voltage stability study methods for real-time assessment

    DEFF Research Database (Denmark)

    Perez, Angel; Jóhannsson, Hjörtur; Vancraeyveld, Pieter

    2013-01-01

    This paper analyzes the suitability of existing methods for long-term voltage stability assessment for real-time operation. An overview of the relevant methods is followed by a comparison that takes into account their accuracy, computational efficiency and characteristics when used for security assessment. The results enable an evaluation of the run time of each method with respect to the number of inputs. Furthermore, the results assist in identifying which of the methods is most suitable for real-time operation in future power systems with production based on fluctuating energy sources.

  3. Solving the Schroedinger equation using the finite difference time domain method

    International Nuclear Information System (INIS)

    Sudiarta, I Wayan; Geldart, D J Wallace

    2007-01-01

    In this paper, we solve the Schroedinger equation using the finite difference time domain (FDTD) method to determine energies and eigenfunctions. In order to apply the FDTD method, the Schroedinger equation is first transformed into a diffusion equation by the imaginary time transformation. The resulting time-domain diffusion equation is then solved numerically by the FDTD method. The theory and an algorithm are provided for the procedure. Numerical results are given for illustrative examples in one, two and three dimensions. It is shown that the FDTD method accurately determines eigenfunctions and energies of these systems
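
    The imaginary-time relaxation described above can be sketched in a few lines. The following is a minimal 1-D illustration (not the authors' code): the grid, time step and harmonic potential are chosen for demonstration only, with hbar = m = 1, so the exact ground-state energy is 0.5.

    ```python
    import math

    # Illustrative 1-D imaginary-time finite-difference relaxation (hbar = m = 1).
    N, L = 201, 10.0                 # grid points, domain [-L/2, L/2]
    dx = L / (N - 1)
    dt = 0.2 * dx * dx               # stability-limited imaginary time step
    x = [-L / 2 + i * dx for i in range(N)]
    V = [0.5 * xi * xi for xi in x]  # harmonic oscillator potential

    psi = [math.exp(-xi * xi) for xi in x]   # initial guess

    def normalize(p):
        norm = math.sqrt(sum(v * v for v in p) * dx)
        return [v / norm for v in p]

    psi = normalize(psi)
    for _ in range(4000):
        new = psi[:]
        for i in range(1, N - 1):
            lap = (psi[i + 1] - 2 * psi[i] + psi[i - 1]) / (dx * dx)
            new[i] = psi[i] + dt * (0.5 * lap - V[i] * psi[i])
        psi = normalize(new)     # renormalize: imaginary time evolution decays the norm

    # Ground-state energy <psi|H|psi>; exact value is 0.5 for this potential.
    E = 0.0
    for i in range(1, N - 1):
        lap = (psi[i + 1] - 2 * psi[i] + psi[i - 1]) / (dx * dx)
        E += psi[i] * (-0.5 * lap + V[i] * psi[i]) * dx
    print(round(E, 3))
    ```

    Excited-state components decay as exp(-(E_n - E_0) tau), so repeated stepping plus renormalization converges to the ground state, as the abstract describes.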

  4. Real time simulation method for fast breeder reactors dynamics

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.

    1985-01-01

    Multi-purpose real-time simulator models with suitable plant dynamics were developed; these models can be used not only for training operators but also for designing control systems, operation sequences and many other items which must be studied for the development of new types of reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. Analysis is made of various factors affecting the accuracy and computational load of its dynamic simulation. A method is presented which determines the optimum number of nodes in distributed systems and the time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real-time dynamics models of fast breeder reactors. (author)

  5. Reduction Methods for Real-time Simulations in Hybrid Testing

    DEFF Research Database (Denmark)

    Andersen, Sebastian

    2016-01-01

    Hybrid testing constitutes a cost-effective experimental full-scale testing method. The method was introduced in the 1960s by Japanese researchers as an alternative to conventional full-scale testing and small-scale material testing, such as shake table tests. The principle of the method is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test. If the test is conducted in real time, it is referred to as real-time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960s. A test is performed on a glass fibre reinforced polymer composite box girder, serving as a pilot test for prospective real-time tests on a wind turbine blade. The Taylor basis is implemented in the test and used to perform the numerical simulations, despite a number of introduced errors in the real-time simulation.

  6. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

    Full Text Available The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of a linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y, for j = 1, 2, ..., k, that yields the smallest mean square error is discussed. The Box-Cox regression method is especially appropriate for adjusting existing skewness or heteroscedasticity of the error terms in a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method are discussed in the context of differentiation and differential analysis of the time scale concept.
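
    The core of the transformation can be illustrated with a small sketch. As a simplification of the full regression setting, the snippet below grid-searches a single λ that minimizes the sample skewness of the transformed data (a common proxy for the normality criterion; the data and grid are invented for illustration):

    ```python
    import math

    def boxcox(y, lam):
        """Box-Cox power transform; the log transform is the lam == 0 limit."""
        if abs(lam) < 1e-12:
            return [math.log(v) for v in y]
        return [(v ** lam - 1.0) / lam for v in y]

    def skewness(z):
        n = len(z)
        m = sum(z) / n
        s2 = sum((v - m) ** 2 for v in z) / n
        s3 = sum((v - m) ** 3 for v in z) / n
        return s3 / s2 ** 1.5

    # Hypothetical right-skewed sample: exp of a symmetric set, so the log
    # transform (lam = 0) should symmetrize it exactly.
    y = [math.exp(v) for v in (-2, -1, -0.5, 0, 0.5, 1, 2)]
    grid = [i / 10.0 for i in range(-20, 21)]   # lam in [-2, 2]
    best = min(grid, key=lambda lam: abs(skewness(boxcox(y, lam))))
    print(best)   # 0.0: the log transform removes the skew
    ```

    In practice λ is usually chosen by maximum likelihood rather than by a skewness criterion; the grid search above only illustrates the shape of the transform family.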

  7. A method for the computation of turbulent polymeric liquids including hydrodynamic interactions and chain entanglements

    Energy Technology Data Exchange (ETDEWEB)

    Kivotides, Demosthenes, E-mail: demosthenes.kivotides@strath.ac.uk

    2017-02-12

    An asymptotically exact method for the direct computation of turbulent polymeric liquids that includes (a) fully resolved, creeping microflow fields due to hydrodynamic interactions between chains, (b) exact account of (subfilter) residual stresses, (c) polymer Brownian motion, and (d) direct calculation of chain entanglements, is formulated. Although developed in the context of polymeric fluids, the method is equally applicable to turbulent colloidal dispersions and aerosols. - Highlights: • An asymptotically exact method for the computation of polymer and colloidal fluids is developed. • The method is valid for all flow inertia and all polymer volume fractions. • The method models entanglements and hydrodynamic interactions between polymer chains.

  8. Development of calculation method for one-dimensional kinetic analysis in fission reactors, including feedback effects

    International Nuclear Information System (INIS)

    Paixao, S.B.; Marzo, M.A.S.; Alvim, A.C.M.

    1986-01-01

    The calculation method used in the WIGLE code is studied. Because a detailed account of the method is not readily available, it is expounded here in detail. The method has been applied to the solution of the one-dimensional, two-group diffusion equations in slab geometry for axial analysis, including non-boiling heat transfer and accounting for feedback. A steady-state program (CITER-1D), written in FORTRAN IV, has been implemented, providing excellent results that confirm the quality of the work developed. (Author) [pt

  9. Including foreshocks and aftershocks in time-independent probabilistic seismic hazard analyses

    Science.gov (United States)

    Boyd, Oliver S.

    2012-01-01

    Time‐independent probabilistic seismic‐hazard analysis treats each source as being temporally and spatially independent; hence foreshocks and aftershocks, which are both spatially and temporally dependent on the mainshock, are removed from earthquake catalogs. Yet, intuitively, these earthquakes should be considered part of the seismic hazard, capable of producing damaging ground motions. In this study, I consider the mainshock and its dependents as a time‐independent cluster, each cluster being temporally and spatially independent from any other. The cluster has a recurrence time of the mainshock; and, by considering the earthquakes in the cluster as a union of events, dependent events have an opportunity to contribute to seismic ground motions and hazard. Based on the methods of the U.S. Geological Survey for a high‐hazard site, the inclusion of dependent events causes ground motions that are exceeded at probability levels of engineering interest to increase by about 10% but could be as high as 20% if variations in aftershock productivity can be accounted for reliably.
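
    The union-of-events idea can be sketched numerically. In the toy calculation below, all rates and exceedance probabilities are hypothetical: a cluster recurs at the mainshock rate, each member contributes an independent chance of exceeding the ground motion, and the cluster's exceedance probability is the union of those chances.

    ```python
    import math

    # Toy time-independent hazard for one source treated as a cluster.
    # All numbers below are hypothetical, for illustration only.
    rate = 0.01          # mainshock (cluster) rate, events/year
    t = 50.0             # exposure time, years
    p_main = 0.30        # P(ground-motion exceedance | mainshock)
    p_aftershocks = [0.05, 0.03, 0.02]   # per dependent event

    # Union of exceedance events within one cluster.
    p_cluster = 1.0 - (1.0 - p_main) * math.prod(1.0 - p for p in p_aftershocks)

    def p_exceed(p_given_event):
        # Poisson process of clusters, thinned by the exceedance probability.
        return 1.0 - math.exp(-rate * t * p_given_event)

    without = p_exceed(p_main)           # dependents removed (declustered)
    with_dep = p_exceed(p_cluster)       # cluster as a union of events
    print(round(100 * (with_dep / without - 1), 1), "% increase")
    ```

    The size of the increase depends entirely on the assumed aftershock exceedance probabilities, which is why the abstract reports a range (about 10%, up to 20%) rather than a single number.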

  10. 8-channel system for neutron-nuclear investigations by time-of-flight method

    International Nuclear Information System (INIS)

    Shvetsov, V.N.; Enik, T.L.; Mitsyna, L.V.; Popov, A.B.; Salamatin, I.M.; Sedyshev, P.V.; Sirotin, A.P.; Astakhova, N.V.; Salamatin, K.M.

    2011-01-01

    In connection with the commissioning of the IREN pulsed resonance neutron source, new electronics and appropriate software were developed for the registration of time-of-flight spectra with a small channel width (10 ns). The hardware-software system is intended for research on the IREN neutron beam characteristics and the properties of new detectors, and also for the performance of precision experiments under conditions of low intensity or registration of rare events. The time encoder is the key element of the system hardware. It is developed on the basis of Cypress technologies. The unit can measure time intervals for signal intensities up to 10^5 for each of the eight inputs. The use of a USB interface provides system mobility. The TOF system software includes the control program, a driver software layer, a data sorting program, data processing utilities and other units, implemented as executable applications. Interprocess communication between units is provided by the network and/or by a specially designed interface based on the mechanism of named files mapped into memory. This method provides the fastest possible communication between processes. The developed methods of integrating the executable components into a system provide a distributed system, improve the reuse of the software and provide the ability to assemble the system by the user.
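
    The basic data-sorting step — binning event arrival times into fixed-width time-of-flight channels — can be sketched as follows. The 10 ns channel width comes from the abstract; the event times and channel count are invented for illustration.

    ```python
    # Sketch of time-of-flight binning with a 10 ns channel width.
    CHANNEL_NS = 10
    N_CHANNELS = 8

    def tof_spectrum(event_times_ns, n_channels=N_CHANNELS, width_ns=CHANNEL_NS):
        """Histogram arrival times (ns after the source pulse) into channels."""
        spectrum = [0] * n_channels
        for t in event_times_ns:
            ch = int(t // width_ns)
            if 0 <= ch < n_channels:    # out-of-range events are dropped
                spectrum[ch] += 1
        return spectrum

    events = [3, 12, 15, 27, 27.5, 41, 79, 85]   # 85 ns falls outside 8 channels
    print(tof_spectrum(events))                  # [1, 2, 2, 0, 1, 0, 0, 1]
    ```

    A real encoder accumulates such spectra in hardware per input; the sketch only shows the channel arithmetic.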

  11. openPSTD: The open source pseudospectral time-domain method for acoustic propagation

    Science.gov (United States)

    Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis

    2016-06-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage, as it allows spatial sampling close to the Nyquist criterion, keeping both the required spatial and temporal resolution coarse. In the implementation, the physical geometry is modeled as a composition of rectangular two-dimensional subdomains, initially restricting the implementation to orthogonal, two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to object-oriented programming best practices and leaves room for further computational parallelization. The software is built using the open source components Blender, Numpy and Python, and has itself been published under an open source license. An option has been included to accelerate the calculations through a partial implementation of the code on the Graphics Processing Unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
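
    The pseudospectral idea that lets PSTD sample close to the Nyquist limit is that spatial derivatives are taken in Fourier space, as multiplication by i*k. A minimal 1-D illustration (not taken from openPSTD; plain O(N^2) transforms are used instead of an FFT for clarity):

    ```python
    import cmath, math

    N = 16
    x = [2 * math.pi * j / N for j in range(N)]
    f = [math.sin(xj) for xj in x]            # periodic test signal

    def dft(a):
        n = len(a)
        return [sum(a[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
                for k in range(n)]

    def idft(A):
        n = len(A)
        return [sum(A[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
                for j in range(n)]

    F = dft(f)
    # FFT-ordered wavenumbers, with the Nyquist mode zeroed for the derivative.
    k = list(range(0, N // 2)) + [0] + list(range(-N // 2 + 1, 0))
    dF = [1j * k[m] * F[m] for m in range(N)]
    df = [v.real for v in idft(dF)]           # d/dx sin(x) = cos(x)

    err = max(abs(df[j] - math.cos(x[j])) for j in range(N))
    print(err < 1e-10)   # spectral accuracy: exact for a band-limited signal
    ```

    For a band-limited signal the derivative is exact to rounding error even on this coarse grid, which is the property the abstract's efficiency claim rests on.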

  12. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

    Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing timing optimization results are mainly based on various intelligence algorithms; hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms over the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of the mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared based on MCNC benchmark circuits. The timing optimization results obtained using DPSO are compared with those obtained from DC, and DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether the intelligence algorithm has a better timing optimization ability than DC.
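
    A minimal binary (discrete) PSO of the kind applied here to MPRM polarity search can be sketched as below. This is not the paper's algorithm: the fitness is a stand-in (maximize the number of 1-bits), whereas in the paper it would be the circuit's critical-path delay under a polarity encoding.

    ```python
    import math, random

    random.seed(1)
    N_BITS, SWARM, ITERS = 12, 12, 100

    def fitness(bits):
        return sum(bits)            # stand-in objective; optimum is N_BITS

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    swarm = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(SWARM)]
    vel = [[0.0] * N_BITS for _ in range(SWARM)]
    pbest = [p[:] for p in swarm]
    gbest = max(pbest, key=fitness)[:]

    for _ in range(ITERS):
        for i, p in enumerate(swarm):
            for d in range(N_BITS):
                r1, r2 = random.random(), random.random()
                # velocity pulled toward personal and global bests, then clamped
                vel[i][d] += 2 * r1 * (pbest[i][d] - p[d]) + 2 * r2 * (gbest[d] - p[d])
                vel[i][d] = max(-4.0, min(4.0, vel[i][d]))
                # discrete update: bit sampled from the sigmoid of the velocity
                p[d] = 1 if random.random() < sigmoid(vel[i][d]) else 0
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) > fitness(gbest):
                    gbest = p[:]

    print(fitness(gbest))   # best fitness found (optimum is 12)
    ```

    The sigmoid-of-velocity sampling is what makes the particle swarm "discrete": positions are bit vectors rather than real coordinates.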

  13. Statistical time series methods for damage diagnosis in a scale aircraft skeleton structure: loosened bolts damage scenarios

    International Nuclear Information System (INIS)

    Kopsaftopoulos, Fotis P; Fassois, Spilios D

    2011-01-01

    A comparative assessment of several vibration based statistical time series methods for Structural Health Monitoring (SHM) is presented via their application to a scale aircraft skeleton laboratory structure. A brief overview of the methods, which are either scalar or vector type, non-parametric or parametric, and pertain to either the response-only or excitation-response cases, is provided. Damage diagnosis, including both the detection and identification subproblems, is tackled via scalar or vector vibration signals. The methods' effectiveness is assessed via repeated experiments under various damage scenarios, with each scenario corresponding to the loosening of one or more selected bolts. The results of the study confirm the 'global' damage detection capability and effectiveness of statistical time series methods for SHM.
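
    A parametric, response-only residual test of the kind surveyed above can be sketched as follows. The sinusoidal "vibration" signals and the AR(2) order are invented for illustration: an AR model is fitted to a signal from the healthy structure, and the one-step-ahead residual variance is then monitored on new signals.

    ```python
    import math

    def ar2_fit(x):
        """Least-squares AR(2) coefficients via the 2x2 normal equations."""
        s11 = sum(x[t - 1] * x[t - 1] for t in range(2, len(x)))
        s12 = sum(x[t - 1] * x[t - 2] for t in range(2, len(x)))
        s22 = sum(x[t - 2] * x[t - 2] for t in range(2, len(x)))
        b1 = sum(x[t] * x[t - 1] for t in range(2, len(x)))
        b2 = sum(x[t] * x[t - 2] for t in range(2, len(x)))
        det = s11 * s22 - s12 * s12
        a1 = (b1 * s22 - b2 * s12) / det
        a2 = (s11 * b2 - s12 * b1) / det
        return a1, a2

    def residual_var(x, a1, a2):
        r = [x[t] - a1 * x[t - 1] - a2 * x[t - 2] for t in range(2, len(x))]
        return sum(v * v for v in r) / len(r)

    healthy = [math.sin(0.5 * t) for t in range(400)]
    damaged = [math.sin(0.6 * t) for t in range(400)]   # shifted natural frequency

    a1, a2 = ar2_fit(healthy)                 # baseline model from healthy data
    print(residual_var(healthy, a1, a2) < 1e-6 < residual_var(damaged, a1, a2))
    ```

    The healthy-model residuals stay near zero on healthy data but grow once the structure's dynamics change, which is the statistical signature such methods threshold for damage detection.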

  14. Super-nodal methods for space-time kinetics

    Science.gov (United States)

    Mertyurek, Ugur

    The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as in the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes in the analyses by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so called homogenization parameters are obtained by employing consistent collapsing technique. During this research a new type of singularity, namely "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero buckling singularity, which is a more common and well known singularity problem in the ANM solution. In the course of addressing the treatment of these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement to the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction. 
A simple error analysis based on the relative ...

  15. Change of time methods in quantitative finance

    CERN Document Server

    Swishchuk, Anatoliy

    2016-01-01

    This book is devoted to the history of Change of Time Methods (CTM), the connections of CTM to stochastic volatilities and finance, fundamental aspects of the theory of CTM, basic concepts, and its properties. An emphasis is given on many applications of CTM in financial and energy markets, and the presented numerical examples are based on real data. The change of time method is applied to derive the well-known Black-Scholes formula for European call options, and to derive an explicit option pricing formula for a European call option for a mean-reverting model for commodity prices. Explicit formulas are also derived for variance and volatility swaps for financial markets with a stochastic volatility following a classical and delayed Heston model. The CTM is applied to price financial and energy derivatives for one-factor and multi-factor alpha-stable Levy-based models. Readers should have a basic knowledge of probability and statistics, and some familiarity with stochastic processes, such as Brownian motion, ...

  16. A cluster merging method for time series microarray with production values.

    Science.gov (United States)

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully while combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created based on the individual temporal series (representing different biological replicates measured at the same time points) and to merge them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated based on the mean value of the time series and the same shape-based algorithm.
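
    The merging-by-co-occurrence idea can be sketched as follows. The gene names and replicate-level clusterings below are invented; in the study they would come from shape-based clustering of each biological replicate's time series.

    ```python
    from itertools import combinations

    # One clustering per replicate: gene -> cluster label.
    replicate_clusterings = [
        {"gA": 0, "gB": 0, "gC": 1, "gD": 1, "gE": 2},
        {"gA": 0, "gB": 0, "gC": 0, "gD": 1, "gE": 1},
        {"gA": 1, "gB": 1, "gC": 2, "gD": 2, "gE": 0},
    ]

    genes = sorted(replicate_clusterings[0])
    n_rep = len(replicate_clusterings)

    def co_frequency(g1, g2):
        """Fraction of replicates in which two genes share a cluster."""
        return sum(c[g1] == c[g2] for c in replicate_clusterings) / n_rep

    # Merge: single-linkage over gene pairs co-clustered in >= 2/3 of replicates.
    groups = {g: {g} for g in genes}
    for g1, g2 in combinations(genes, 2):
        if co_frequency(g1, g2) >= 2 / 3 and groups[g1] is not groups[g2]:
            merged = groups[g1] | groups[g2]
            for g in merged:
                groups[g] = merged

    unique = {id(s): s for s in groups.values()}.values()
    clusters = sorted(tuple(sorted(s)) for s in unique)
    print(clusters)   # [('gA', 'gB'), ('gC', 'gD'), ('gE',)]
    ```

    The co-clustering frequency also supplies the natural ranking the abstract mentions: groups assembled in every replicate score higher than those assembled in only a majority.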

  17. Development of efficient time-evolution method based on three-term recurrence relation

    International Nuclear Information System (INIS)

    Akama, Tomoko; Kobayashi, Osamu; Nanbu, Shinkoh

    2015-01-01

    The advantage of the real-time (RT) propagation method is a direct solution of the time-dependent Schrödinger equation which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) was proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was adapted to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and the time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost
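
    The efficiency of such schemes rests on the three-term recurrence itself: each new expansion term costs one multiply-add instead of a fresh evaluation. As a generic illustration (not the authors' arcsine-transformed operator formula), the snippet below generates Chebyshev polynomials T_n, a special case of the orthogonal-polynomial families that admit a 3TRR, and checks them against the closed form T_n(x) = cos(n arccos x).

    ```python
    import math

    def chebyshev_terms(x, n_max):
        """Generate T_0(x)..T_{n_max}(x) via the 3TRR T_{k+1} = 2x T_k - T_{k-1}."""
        t_prev, t_cur = 1.0, x
        terms = [t_prev, t_cur]
        for _ in range(n_max - 1):
            t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
            terms.append(t_cur)
        return terms

    x = 0.3
    terms = chebyshev_terms(x, 20)
    err = max(abs(terms[n] - math.cos(n * math.acos(x))) for n in range(21))
    print(err < 1e-12)
    ```

    In an operator expansion of the propagator, x is replaced by the (scaled) Hamiltonian and each recurrence step costs one matrix-vector product, which is where the speedup over repeated Runge-Kutta stages comes from.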

  18. A maintenance time prediction method considering ergonomics through virtual reality simulation.

    Science.gov (United States)

    Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan

    2016-01-01

    Maintenance time is a critical quantitative index in maintainability prediction, and an efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, traditional ways of measuring maintenance time ignore the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomic factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by calculating the sum of the corrected MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time by the proposed method, and the result verifies the effectiveness and accuracy of the proposed method.
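
    The arithmetic of the corrective MOD idea can be sketched as below: each therblig carries a MOD count (1 MOD = 0.129 s in the MODAPTS convention), and ergonomic influence coefficients scale it before summing. The therbligs, MOD counts and coefficients here are hypothetical, not values from the paper.

    ```python
    MOD_SECONDS = 0.129   # duration of one MOD unit

    # (therblig, MOD units, ergonomic correction coefficients) - all hypothetical
    therbligs = [
        ("reach to fastener",  3, [1.10]),        # awkward posture
        ("turn wrench",        5, [1.10, 1.25]),  # posture + restricted space
        ("withdraw component", 4, [1.05]),        # load handling
    ]

    def maintenance_time(steps):
        """Sum of corrected MOD times over all maintenance therbligs."""
        total = 0.0
        for _name, mods, coeffs in steps:
            factor = 1.0
            for c in coeffs:
                factor *= c               # compound the influence coefficients
            total += mods * MOD_SECONDS * factor
        return total

    print(round(maintenance_time(therbligs), 3), "s")
    ```

    With the coefficients above the corrected total is about 1.85 s versus 1.55 s uncorrected, showing how the ergonomic factors lengthen the predicted time.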

  19. Creep behavior of bone cement: a method for time extrapolation using time-temperature equivalence.

    Science.gov (United States)

    Morgan, R L; Farrar, D F; Rose, J; Forster, H; Morgan, I

    2003-04-01

    The clinical lifetime of poly(methyl methacrylate) (PMMA) bone cement is considerably longer than the time over which it is convenient to perform creep testing. Consequently, it is desirable to be able to predict the long term creep behavior of bone cement from the results of short term testing. A simple method is described for prediction of long term creep using the principle of time-temperature equivalence in polymers. The use of the method is illustrated using a commercial acrylic bone cement. A creep strain of approximately 0.6% is predicted after 400 days under a constant flexural stress of 2 MPa. The temperature range and stress levels over which it is appropriate to perform testing are described. Finally, the effects of physical aging on the accuracy of the method are discussed and creep data from aged cement are reported.
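
    The extrapolation step can be sketched with an Arrhenius-type shift factor, one common form of time-temperature equivalence: creep measured for a short time at elevated temperature maps to a longer equivalent time at the reference temperature. The activation energy and temperatures below are illustrative, not values from the paper.

    ```python
    import math

    R = 8.314          # gas constant, J/(mol K)
    E_A = 120e3        # hypothetical activation energy, J/mol

    def shift_factor(T_test, T_ref):
        """a_T such that t_ref = a_T * t_test (Arrhenius form)."""
        return math.exp(E_A / R * (1.0 / T_ref - 1.0 / T_test))

    T_ref = 310.0      # ~37 C, in-service (body) temperature
    T_test = 330.0     # accelerated test temperature
    a_T = shift_factor(T_test, T_ref)

    t_test_days = 7.0
    t_equiv_days = a_T * t_test_days
    print(t_equiv_days > 100)   # a one-week hot test covers months at 37 C
    ```

    The paper's point about choosing the test temperature range carefully maps directly onto a_T: too large a shift means extrapolating through regimes (e.g. near the glass transition, or altered physical aging) where the equivalence no longer holds.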

  20. Numerical Methods for Pricing American Options with Time-Fractional PDE Models

    Directory of Open Access Journals (Sweden)

    Zhiqiang Zhou

    2016-01-01

    Full Text Available In this paper we develop a Laplace transform method and a finite difference method for solving the American option pricing problem when the change of the option price with time is considered as a fractal transmission system. In this scenario, the option price is governed by a time-fractional partial differential equation (PDE) with a free boundary. The Laplace transform method is applied to the time-fractional PDE. It then leads to a nonlinear equation for the free boundary (i.e., the optimal early exercise boundary) function in Laplace space. After numerically finding the solution of the nonlinear equation, the Laplace inversion is used to transform the approximate early exercise boundary into the time space. Finally, the approximate price of the American option is obtained. A boundary-searching finite difference method is also proposed to solve the free-boundary time-fractional PDEs for pricing the American options. Numerical examples are carried out to compare the Laplace approach with the finite difference method, and it is confirmed that the former approach is much faster than the latter.
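
    The time-fractional ingredient can be illustrated in isolation. The snippet below is not the authors' Laplace-transform scheme; it applies the standard L1 finite-difference formula for the Caputo derivative to the toy relaxation equation D_t^α u = -u, u(0) = 1. For α = 1 the scheme reduces to backward Euler, so it can be checked against exp(-t).

    ```python
    import math

    def solve_l1(alpha, dt, n_steps):
        """Implicit L1 scheme for the Caputo-fractional relaxation D^alpha u = -u."""
        sigma = dt ** (-alpha) / math.gamma(2.0 - alpha)
        # L1 weights b_j = (j+1)^{1-alpha} - j^{1-alpha}
        b = [(j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha) for j in range(n_steps)]
        u = [1.0]
        for n in range(1, n_steps + 1):
            # history (memory) term of the Caputo derivative
            hist = sum(b[j] * (u[n - j] - u[n - j - 1]) for j in range(1, n))
            u.append((sigma * u[n - 1] - sigma * hist) / (sigma + 1.0))
        return u

    u = solve_l1(alpha=1.0, dt=0.001, n_steps=1000)
    print(abs(u[-1] - math.exp(-1.0)) < 1e-3)   # matches exp(-t) at t = 1

    u_frac = solve_l1(alpha=0.7, dt=0.001, n_steps=1000)
    print(0.0 < u_frac[-1] < 1.0)               # fractional (memory-laden) decay
    ```

    The growing history sum is exactly the memory effect that makes time-stepping fractional PDEs expensive, and hence motivates the paper's faster Laplace-space approach.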

  1. METHODS FOR CLUSTERING TIME SERIES DATA ACQUIRED FROM MOBILE HEALTH APPS.

    Science.gov (United States)

    Tignor, Nicole; Wang, Pei; Genes, Nicholas; Rogers, Linda; Hershman, Steven G; Scott, Erick R; Zweig, Micol; Yvonne Chan, Yu-Feng; Schadt, Eric E

    2017-01-01

    In our recent Asthma Mobile Health Study (AMHS), thousands of asthma patients across the country contributed medical data through the iPhone Asthma Health App on a daily basis for an extended period of time. The collected data included daily self-reported asthma symptoms, symptom triggers, and real time geographic location information. The AMHS is just one of many studies occurring in the context of now many thousands of mobile health apps aimed at improving wellness and better managing chronic disease conditions, leveraging the passive and active collection of data from mobile, handheld smart devices. The ability to identify patient groups or patterns of symptoms that might predict adverse outcomes such as asthma exacerbations or hospitalizations from these types of large, prospectively collected data sets, would be of significant general interest. However, conventional clustering methods cannot be applied to these types of longitudinally collected data, especially survey data actively collected from app users, given heterogeneous patterns of missing values due to: 1) varying survey response rates among different users, 2) varying survey response rates over time of each user, and 3) non-overlapping periods of enrollment among different users. To handle such complicated missing data structure, we proposed a probability imputation model to infer missing data. We also employed a consensus clustering strategy in tandem with the multiple imputation procedure. Through simulation studies under a range of scenarios reflecting real data conditions, we identified favorable performance of the proposed method over other strategies that impute the missing value through low-rank matrix completion. When applying the proposed new method to study asthma triggers and symptoms collected as part of the AMHS, we identified several patient groups with distinct phenotype patterns. Further validation of the methods described in this paper might be used to identify clinically important

  2. Development of the pressure-time method as a relative and absolute method for low-head hydraulic machines

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Pontus [Poeyry SwedPower AB, Stockholm (Sweden); Cervantes, Michel [Luleaa Univ. of Technology, Luleaa (Sweden)

    2013-02-15

    The pressure-time method is an absolute method commonly used for flow measurements in power plants. The method determines the flow rate by measuring the pressure and estimating the losses between two sections in the penstock during a closure of the guide vanes. The method has limitations according to the IEC 41 standard, which make it difficult to use at Swedish plants, where the head is generally low. This means that there is limited experience/knowledge of this method in Sweden, where the Winter-Kennedy method is usually used. For several years, Luleaa University of Technology has worked actively on the development of the pressure-time method for low-head hydraulic machines, with encouraging results. Focus has been on decreasing the distance between the two measuring sections and on the evaluation of the viscous losses. Measurements were performed on a pipe test rig (D=0.3 m) in a laboratory under well-controlled conditions. A new formulation taking into account the time-dependent losses allowed the error to be decreased by up to 0.4%. The present work presents pressure-time measurements (with L=5 m) performed on a 10 MW Kaplan turbine, compared to transit-time flow measurements. The new formulation taking into account the unsteady losses allows a better estimation of the flow rate, up to 0.3%. As an alternative to the Winter-Kennedy method widely used in Sweden, the pressure-time method was also tested as a relative method by measuring the pressure between the free surface and a section in the penstock without knowing the exact geometry, i.e., the pipe factor. Such measurements may be simple to perform, as most inlet spiral casings have pressure taps. Furthermore, the viscous losses do not need to be accurately determined as long as they are handled similarly between measurements. The pressure-time method may thus become an alternative to the Winter-Kennedy method.
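
    The principle can be sketched numerically. Newton's second law for the water column between the two sections gives rho·L/A · dQ/dt = -(Δp(t) + ξ(t)), so the pre-closure flow is recovered by integrating the measured differential pressure plus losses over the closure. The pipe geometry and the synthetic pressure trace below are invented for illustration, and the losses are reduced to a constant term.

    ```python
    import math

    RHO = 1000.0          # water density, kg/m^3
    L_PIPE = 5.0          # m, distance between measuring sections
    AREA = 0.07           # m^2 (roughly a D = 0.3 m rig)

    def flow_from_pressure(dp_trace, xi_trace, dt):
        """Trapezoidal integration of (dp + xi) to recover the initial flow."""
        s = [dp + xi for dp, xi in zip(dp_trace, xi_trace)]
        integral = sum((s[i] + s[i + 1]) / 2.0 * dt for i in range(len(s) - 1))
        return AREA / (RHO * L_PIPE) * integral

    # Synthetic 2 s closure sampled at 100 Hz: pressure rise ramps up and back.
    dt = 0.01
    n = 201
    dp = [14000.0 * math.sin(math.pi * i / (n - 1)) for i in range(n)]  # Pa
    xi = [150.0] * n                                                    # Pa, losses
    flow = flow_from_pressure(dp, xi, dt)
    print(round(flow, 3), "m^3/s")
    ```

    The sensitivity to the loss term ξ is visible directly in the integral, which is why the time-dependent loss formulation discussed above improves the accuracy at short measuring distances.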

  3. Fast time- and frequency-domain finite-element methods for electromagnetic analysis

    Science.gov (United States)

    Lee, Woochan

    Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of existing most powerful computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes an iterative solution difficult to converge fast. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. 
The second contribution

  4. Complete Tangent Stiffness for eXtended Finite Element Method by including crack growth parameters

    DEFF Research Database (Denmark)

    Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.

    2013-01-01

    The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi-brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include the crack growth parameters in incremental form on equal terms with the degrees of freedom in the FEM equations. The complete tangent stiffness matrix is based on the virtual work together with the constitutive conditions at the crack tip. Introducing the crack growth parameters as direct unknowns, both the equilibrium equations and the crack tip criterion can be handled...

  5. Formal methods for discrete-time dynamical systems

    CERN Document Server

    Belta, Calin; Aydin Gol, Ebru

    2017-01-01

    This book bridges fundamental gaps between control theory and formal methods. Although it focuses on discrete-time linear and piecewise affine systems, it also provides general frameworks for abstraction, analysis, and control of more general models. The book is self-contained, and while some mathematical knowledge is necessary, readers are not expected to have a background in formal methods or control theory. It rigorously defines concepts from formal methods, such as transition systems, temporal logics, model checking and synthesis. It then links these to the infinite state dynamical systems through abstractions that are intuitive and only require basic convex-analysis and control-theory terminology, which is provided in the appendix. Several examples and illustrations help readers understand and visualize the concepts introduced throughout the book.

  6. Evaluation of the filtered leapfrog-trapezoidal time integration method

    International Nuclear Information System (INIS)

    Roache, P.J.; Dietrich, D.E.

    1988-01-01

    An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed

  7. A standard curve based method for relative real time PCR data processing

    Directory of Open Access Journals (Sweden)

    Krause Andreas

    2005-03-01

    Full Text Available Abstract Background Currently real time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data and processing may notably influence final results. The data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred in relative PCR whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real time PCR. Results We designed a procedure for data processing in relative real time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from coordinates of points where the threshold line crosses fluorescence plots obtained after the noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means. The CPs' variances are traced to results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found to fit routine laboratory practice. Different options are discussed for aggregation of data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated.
It illustrates that
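
    The core of the standard-curve computation (steps IV-V) can be sketched in a few lines: fit CP against log10 template amount for a dilution series, then map sample CPs back to relative quantities. The dilution series, CP values, and gene roles below are invented for illustration, not taken from the article:

```python
import numpy as np

def fit_standard_curve(log10_conc, cp):
    """Linear regression of crossing point (CP) against log10 template amount."""
    slope, intercept = np.polyfit(log10_conc, cp, 1)
    return slope, intercept

def relative_quantity(cp, slope, intercept):
    """Map a CP back to a (relative) template amount via the standard curve."""
    return 10 ** ((cp - intercept) / slope)

# Dilution series: five ten-fold dilutions; near-ideal PCR efficiency
# corresponds to a slope of about -3.32 cycles per decade.
log10_conc = np.array([0.0, -1.0, -2.0, -3.0, -4.0])
cp = np.array([15.1, 18.4, 21.8, 25.1, 28.4])

slope, intercept = fit_standard_curve(log10_conc, cp)
target = relative_quantity(22.0, slope, intercept)     # gene of interest
reference = relative_quantity(19.5, slope, intercept)  # reference gene
ratio = target / reference                             # relative expression
```

    Because both quantities are read off the same curve, only the ratio is meaningful, which is exactly what relative quantification requires.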

  8. The (G′/G)-expansion method using modified Riemann–Liouville derivative for some space-time fractional differential equations

    Directory of Open Access Journals (Sweden)

    Ahmet Bekir

    2014-09-01

    Full Text Available In this paper, fractional partial differential equations are defined by the modified Riemann–Liouville fractional derivative. With the help of the fractional derivative and a traveling wave transformation, these equations can be converted into nonlinear, non-fractional ordinary differential equations. Then the (G′/G)-expansion method is applied to obtain exact solutions of the space-time fractional Burgers equation, the space-time fractional KdV-Burgers equation and the space-time fractional coupled Burgers’ equations. As a result, many exact solutions are obtained, including hyperbolic function solutions, trigonometric function solutions and rational solutions. These results reveal that the proposed method is very effective and simple for solving fractional partial differential equations.
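
    The reduction described above commonly takes the following form (the symbols k, c and the ansatz order m are generic, not taken from this paper): the fractional traveling-wave transformation converts the space-time fractional PDE into an ODE in U(ξ), whose solution is sought as a polynomial in G′/G,

```latex
u(x,t) = U(\xi), \qquad
\xi = \frac{k\,x^{\beta}}{\Gamma(1+\beta)} - \frac{c\,t^{\alpha}}{\Gamma(1+\alpha)}, \qquad
U(\xi) = \sum_{i=0}^{m} a_i \left(\frac{G'}{G}\right)^{i},
\qquad G'' + \lambda G' + \mu G = 0.
```

    The hyperbolic, trigonometric, or rational character of the resulting solutions follows from the sign of λ² − 4μ in the auxiliary linear equation for G.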

  9. Method for assessment of stormwater treatment facilities - Synthetic road runoff addition including micro-pollutants and tracer.

    Science.gov (United States)

    Cederkvist, Karin; Jensen, Marina B; Holm, Peter E

    2017-08-01

    Stormwater treatment facilities (STFs) are becoming increasingly widespread but knowledge on their performance is limited. This is due to difficulties in obtaining representative samples during storm events and documenting removal of the broad range of contaminants found in stormwater runoff. This paper presents a method to evaluate STFs by addition of synthetic runoff with representative concentrations of contaminant species, including the use of a tracer for correction of removal rates for losses not caused by the STF. A list of organic and inorganic contaminant species, including trace elements representative of runoff from roads, is suggested, as well as relevant concentration ranges. The method was used for adding contaminants to three different STFs: a curbstone extension with filter soil, a dual porosity filter, and six different permeable pavements. Evaluation of the method showed that it is possible to add a well-defined mixture of contaminants despite different field conditions by having a flexible system, mixing different stock solutions on site, and using a bromide tracer for correction of outlet concentrations. Bromide recovery ranged from only 12% in one of the permeable pavements to 97% in the dual porosity filter, stressing the importance of including a conservative tracer for correction of contaminant retention values. The method is considered useful in future treatment performance testing of STFs. The observed performance of the STFs is presented in coming papers. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Method and apparatus for controlling a powertrain system including a multi-mode transmission

    Science.gov (United States)

    Hessell, Steven M.; Morris, Robert L.; McGrogan, Sean W.; Heap, Anthony H.; Mendoza, Gil J.

    2015-09-08

    A powertrain including an engine and torque machines is configured to transfer torque through a multi-mode transmission to an output member. A method for controlling the powertrain includes employing a closed-loop speed control system to control torque commands for the torque machines in response to a desired input speed. Upon approaching a power limit of a power storage device transferring power to the torque machines, power limited torque commands are determined for the torque machines in response to the power limit and the closed-loop speed control system is employed to determine an engine torque command in response to the desired input speed and the power limited torque commands for the torque machines.
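
    The closed-loop logic described, torque commands derived from a speed error and clamped to respect a power storage limit, can be sketched as follows. The PI structure, gains, and limits are illustrative assumptions, not the patented control law:

```python
def speed_control_step(n_desired, n_actual, state, kp=5.0, ki=2.0,
                       p_limit=50e3, dt=0.01):
    """One step of a PI speed loop whose torque command is clamped so
    that the commanded power |T * omega| stays within p_limit."""
    error = n_desired - n_actual
    state["integral"] += error * dt
    torque = kp * error + ki * state["integral"]
    # Power-limited torque command: P = T * omega <= p_limit
    omega = max(abs(n_actual), 1e-3)   # avoid division by zero at standstill
    t_max = p_limit / omega
    return max(-t_max, min(t_max, torque)), state

state = {"integral": 0.0}
cmd, state = speed_control_step(200.0, 150.0, state)  # rad/s setpoint/actual
```

    When the clamp is active, a real implementation would also redistribute the shortfall to the engine torque command, as the abstract describes.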

  11. Reliability and limitation of various diagnostic methods including nuclear medicine in myocardial disease

    International Nuclear Information System (INIS)

    Tokuyasu, Yoshiki; Kusakabe, Kiyoko; Yamazaki, Toshio

    1981-01-01

    Electrocardiography (ECG), echocardiography, nuclear methods, cardiac catheterization, left ventriculography and endomyocardial biopsy (biopsy) were performed in 40 cases of cardiomyopathy (CM), 9 of endocardial fibroelastosis and 19 of specific heart muscle disease, and the usefulness and limitations of each method were comparatively estimated. In CM, various methods including biopsy were performed. The 40 patients were classified into 3 groups, i.e., hypertrophic (17), dilated (20) and non-hypertrophic, non-dilated (3), on the basis of left ventricular ejection fraction and hypertrophy of the ventricular wall. The hypertrophic group was divided into 4 subgroups: 9 septal, 4 apical, 2 posterior and 2 anterior. The nuclear study is useful in assessing the site of abnormal ventricular thickening, perfusion defects and ventricular function. Echocardiography is most useful in detecting asymmetric septal hypertrophy. The biopsy gives the sole diagnostic clue, especially in non-hypertrophic, non-dilated cardiomyopathy. ECG is useful in all cases but correlation with the site of disproportional hypertrophy was not obtained. (J.P.N.)

  12. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

    The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  13. Time evolution of the wave equation using rapid expansion method

    KAUST Repository

    Pestana, Reynam C.; Stoffa, Paul L.

    2010-01-01

    Forward modeling of seismic data and reverse time migration are based on the time evolution of wavefields. For the case of spatially varying velocity, we have worked on two approaches to evaluate the time evolution of seismic wavefields. An exact solution for the constant-velocity acoustic wave equation can be used to simulate the pressure response at any time. For a spatially varying velocity, a one-step method can be developed where no intermediate time responses are required. Using this approach, we have solved for the pressure response at intermediate times and have developed a recursive solution. The solution has a very high degree of accuracy and can be reduced to various finite-difference time-derivative methods, depending on the approximations used. Although the two approaches are closely related, each has advantages, depending on the problem being solved. © 2010 Society of Exploration Geophysicists.

  15. The RATIO method for time-resolved Laue crystallography

    International Nuclear Information System (INIS)

    Coppens, P.; Pitak, M.; Gembicky, M.; Messerschmidt, M.; Scheins, S.; Benedict, J.; Adachi, S.-I.; Sato, T.; Nozawa, S.; Ichiyanagi, K.; Chollet, M.; Koshihara, S.-Y.

    2009-01-01

    A RATIO method for analysis of intensity changes in time-resolved pump-probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam.

  16. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical

  17. An Accurate Method to Determine the Muzzle Leaving Time of Guns

    Directory of Open Access Journals (Sweden)

    H. X. Chao

    2014-11-01

    Full Text Available This paper states the importance of determining the muzzle leaving time of guns with a high degree of accuracy. Two commonly used methods are introduced, the high-speed photography method and the photoelectric transducer method, and the advantages and disadvantages of these two methods are analyzed. Furthermore, a new method to determine the muzzle leaving time of guns, based on the combination of high-speed photography and synchronized trigger technology, is presented in this paper, and its principle and uncertainty of measurement are evaluated. Firing experiments show that the presented method has a distinct advantage in accuracy and reliability over the other methods.

  18. Arrival-time picking method based on approximate negentropy for microseismic data

    Science.gov (United States)

    Li, Yue; Ni, Zhuo; Tian, Yanan

    2018-05-01

    Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, which directly affects the analysis results of post-processing. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival time picking under conditions of low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the regions of signal and noise are distinguished to pick the first arrival time accurately. To demonstrate the effectiveness of the AN method, we run extensive experiments on a series of synthetic data with SNRs from -1 dB to -12 dB and compare it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods achieve good picking results when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, application results on real three-component microseismic data also show that the new method is superior to the other two methods in accuracy and stability.
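
    The STA/LTA baseline mentioned above is simple to sketch: slide a short and a long energy window along the trace and trigger when their ratio exceeds a threshold. Window lengths, the threshold, and the synthetic trace below are illustrative choices, not values from the paper:

```python
import numpy as np

def sta_lta_pick(x, fs, sta_win=0.05, lta_win=0.5, threshold=3.0):
    """Return the first sample where the short-term/long-term average
    ratio of signal energy exceeds `threshold`, or None if it never does."""
    sta_n = int(sta_win * fs)
    lta_n = int(lta_win * fs)
    energy = x ** 2
    for i in range(lta_n, len(x) - sta_n):
        sta = energy[i:i + sta_n].mean()       # window just after i
        lta = energy[i - lta_n:i].mean()       # window just before i
        if lta > 0 and sta / lta > threshold:
            return i
    return None

# Synthetic trace: Gaussian noise, then a decaying arrival at sample 1000
rng = np.random.default_rng(0)
fs = 1000
trace = rng.normal(0, 0.1, 2000)
t = np.arange(1000) / fs
trace[1000:] += np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)

pick = sta_lta_pick(trace, fs)   # should land near sample 1000
```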

  19. A permutation-based multiple testing method for time-course microarray experiments

    Directory of Open Access Journals (Sweden)

    George Stephen L

    2009-10-01

    Full Text Available Abstract Background Time-course microarray experiments are widely used to study the temporal profiles of gene expression. Storey et al. (2005) developed a method for analyzing time-course microarray studies that can be applied to discovering genes whose expression trajectories change over time within a single biological group, or those that follow different time trajectories among multiple groups. They estimated the expression trajectories of each gene using natural cubic splines under the null (no time-course) and alternative (time-course) hypotheses, and used a goodness of fit test statistic to quantify the discrepancy. The null distribution of the statistic was approximated through a bootstrap method. Gene expression levels in microarray data are often correlated in complicated ways. An accurate type I error control adjusting for multiple testing requires the joint null distribution of test statistics for a large number of genes. For this purpose, permutation methods have been widely used because of computational ease and their intuitive interpretation. Results In this paper, we propose a permutation-based multiple testing procedure based on the test statistic used by Storey et al. (2005). We also propose an efficient computation algorithm. Extensive simulations are conducted to investigate the performance of the permutation-based multiple testing procedure. The application of the proposed method is illustrated using the Caenorhabditis elegans dauer developmental data. Conclusion Our method is computationally efficient and applicable for identifying genes whose expression levels are time-dependent in a single biological group and for identifying the genes for which the time-profile depends on the group in a multi-group setting.
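
    The flavor of permutation-based multiple testing can be sketched with a simpler two-group mean-difference statistic standing in for the spline goodness-of-fit statistic of Storey et al. (a Westfall-Young maxT-style adjustment; the data and statistic below are illustrative, not from the paper):

```python
import numpy as np

def maxt_adjusted_pvalues(data, labels, n_perm=500, seed=0):
    """maxT-style family-wise adjusted p-values for a two-group
    |mean difference| statistic, via permutations of sample labels.
    `data` is genes x samples; `labels` is a boolean group indicator."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels, dtype=bool)

    def stats(lab):
        return np.abs(data[:, lab].mean(axis=1) - data[:, ~lab].mean(axis=1))

    obs = stats(labels)
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        # Permuting labels preserves the between-gene correlation structure
        max_null[b] = stats(rng.permutation(labels)).max()
    # Fraction of permutations whose maximum statistic over all genes
    # exceeds each gene's observed statistic (controls the FWER).
    return (max_null[None, :] >= obs[:, None]).mean(axis=1)

# Synthetic data: 100 genes x 10 samples, only gene 0 truly differential
rng = np.random.default_rng(1)
data = rng.normal(size=(100, 10))
labels = np.array([True] * 5 + [False] * 5)
data[0, :5] += 5.0
p_adj = maxt_adjusted_pvalues(data, labels)
```

    Because the maximum is taken across genes within each permutation, the joint null distribution is captured without any independence assumption, which is the point the abstract makes.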

  20. Introduction to the Finite-Difference Time-Domain (FDTD) Method for Electromagnetics

    CERN Document Server

    Gedney, Stephen

    2011-01-01

    Introduction to the Finite-Difference Time-Domain (FDTD) Method for Electromagnetics provides a comprehensive tutorial of the most widely used method for solving Maxwell's equations -- the Finite Difference Time-Domain Method. This book is an essential guide for students, researchers, and professional engineers who want to gain a fundamental knowledge of the FDTD method. It can accompany an undergraduate or entry-level graduate course or be used for self-study. The book provides all the background required to either research or apply the FDTD method for the solution of Maxwell's equations to p
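
    The core of the FDTD method the book covers can be illustrated with a minimal 1-D vacuum example on a Yee grid, where E and H are staggered in space and time and updated in leapfrog fashion (normalized units with Courant number 1; grid size and source parameters are arbitrary):

```python
import numpy as np

# 1D free-space FDTD: Ez on integer nodes, Hy on half-integer nodes.
# Normalized units with Courant number S = c*dt/dx = 1.
nx, nt = 200, 80
ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for n in range(nt):
    hy += np.diff(ez)                 # H update from the spatial curl of E
    ez[1:-1] += np.diff(hy)           # E update from the spatial curl of H
    ez[100] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

# The injected pulse splits into left- and right-going waves that
# travel one cell per time step at S = 1.
peak = np.abs(ez).max()
```

    The boundary nodes ez[0] and ez[-1] are simply held at zero here, i.e. perfect electric conductor walls; absorbing boundaries such as the PML are a separate topic the book treats in depth.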

  1. Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation

    Science.gov (United States)

    Abuasad, Salah; Hashim, Ishak

    2018-04-01

    In this paper, we present, for the first time, the homotopy decomposition method with a modified definition of the beta fractional derivative to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with that obtained using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.

  2. Optimal Design and Real Time Implementation of Autonomous Microgrid Including Active Load

    Directory of Open Access Journals (Sweden)

    Mohamed A. Hassan

    2018-05-01

    Full Text Available Controller gains and power-sharing parameters are the main parameters that affect the dynamic performance of a microgrid. When an active load is added to an autonomous microgrid, the stability problem becomes more involved. In this paper, the effect of an active load on microgrid dynamic stability is explored. An autonomous microgrid including three inverter-based distributed generations (DGs) with an active load is modeled and the associated controllers are designed. Controller gains of the inverters and active load, as well as Phase Locked Loop (PLL) parameters, are optimally tuned to guarantee overall system stability. A weighted objective function is proposed to minimize the error in both measured active power and DC voltage based on time-domain simulations. Different AC and DC disturbances are applied to verify and assess the effectiveness of the proposed control strategy. The results demonstrate the potential of the proposed controller to enhance microgrid stability and to provide efficient damping characteristics. Additionally, the proposed controller is compared with the literature to demonstrate its superiority. Finally, the considered microgrid has been established and implemented on a real-time digital simulator (RTDS). The experimental results validate the simulation results and confirm the effectiveness of the proposed controllers in enhancing the stability of the considered microgrid.

  3. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates to each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and pointwise dimension. We think that the proposed methods, eliminating the need of operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems
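
    Both methods operate on the log-log plot of the correlation integral versus correlation length. A minimal sketch of that underlying quantity, with a single least-squares slope as the dimension estimate (the automatic MPDV/MFI scaling-region search itself is not reproduced here, and the point cloud is synthetic):

```python
import numpy as np

def correlation_integral(points, radii):
    """Grassberger-Procaccia correlation sum C(r): the fraction of
    point pairs closer than r, for each radius r."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists = d[np.triu_indices(n, 1)]          # unique pairs only
    return np.array([(dists < r).mean() for r in radii])

# Points uniform on the unit square: correlation dimension should be ~2
rng = np.random.default_rng(0)
pts = rng.random((800, 2))
radii = np.logspace(-2, -0.5, 20)
c = correlation_integral(pts, radii)

# Dimension estimate: slope of log C(r) vs log r.  The derivative of
# this curve is what the MPDV and MFI procedures scan automatically.
slope = np.polyfit(np.log(radii), np.log(c), 1)[0]
```

    On real attractor data the slope is not constant over all r, which is precisely why an automatic scaling-region search such as MPDV or MFI is needed.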

  4. A pseudospectral collocation time-domain method for diffractive optics

    DEFF Research Database (Denmark)

    Dinesen, P.G.; Hesthaven, J.S.; Lynov, Jens-Peter

    2000-01-01

    We present a pseudospectral method for the analysis of diffractive optical elements. The method computes a direct time-domain solution of Maxwell's equations and is applied to solving wave propagation in 2D diffractive optical elements. (C) 2000 IMACS. Published by Elsevier Science B.V. All rights...

  5. Flow-rate measurement using radioactive tracers and transit time method

    International Nuclear Information System (INIS)

    Turtiainen, Heikki

    1986-08-01

    The transit time method is a flow measurement method based on tracer techniques. Measurement is done by injecting a pulse of tracer into the flow and measuring its transit time between two detection positions. From the transit time the mean flow velocity and, using the pipe cross-section area, the volume flow rate can be calculated. When a radioisotope tracer is used, the measurement can be done from outside the pipe and without disturbing the process (excluding the tracer injection). The use of the transit time method has been limited because of difficulties associated with the handling and availability of radioactive tracers and the lack of equipment suitable for routine use in industrial environments. The purpose of this study was to find out whether these difficulties may be overcome by using a portable isotope generator as a tracer source and automating the measurement. In the study a test rig and measuring equipment based on the use of a ¹³⁷Cs/¹³⁷ᵐBa isotope generator were constructed. They were used to study the accuracy and error sources of the method and to compare different algorithms for calculating the transit time. The usability of the method and the equipment in industrial environments was studied by carrying out over 20 flow measurements in paper and pulp mills. On the basis of the results of the study, a project for constructing a compact radiotracer flowmeter for industrial use has been started. The application range of this kind of meter is very large. The most obvious applications are in situ calibration of flowmeters, material and energy balance studies, and process equipment analyses (e.g. pump efficiency analyses). At the moment tracer techniques are the only methods applicable to these measurements on-line and with sufficient accuracy
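
    One standard algorithm for the transit time is to locate the peak of the cross-correlation between the two detector signals. A minimal sketch (the sampling rate, detector spacing, pipe area, and noise level below are invented for illustration):

```python
import numpy as np

def transit_time(up, down, fs):
    """Transit time (s) between two detector signals, from the lag that
    maximizes their cross-correlation."""
    up = up - up.mean()
    down = down - down.mean()
    corr = np.correlate(down, up, mode="full")
    lag = corr.argmax() - (len(up) - 1)   # zero lag sits at index N-1
    return lag / fs

fs = 100.0                                # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)
pulse = np.exp(-((t - 5.0) / 0.5) ** 2)   # tracer cloud passing detector 1
shift = int(2.4 * fs)                     # true transit: 2.4 s
upstream = pulse + 0.01 * np.random.default_rng(0).normal(size=t.size)
downstream = np.roll(pulse, shift) + 0.01 * np.random.default_rng(1).normal(size=t.size)

tau = transit_time(upstream, downstream, fs)
L, A = 1.2, 0.005                         # detector spacing (m), pipe area (m^2)
Q = L / tau * A                           # volume flow rate, m^3/s
```

    Peak-finding on the correlation is robust to the counting noise of scintillation detectors, which is one reason correlation-based algorithms are compared against simpler peak-to-peak timing in studies like this one.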

  6. A robust anomaly based change detection method for time-series remote sensing images

    Science.gov (United States)

    Shoujing, Yin; Qiao, Wang; Chuanqing, Wu; Xiaoling, Chen; Wandong, Ma; Huiqin, Mao

    2014-03-01

    Time-series remote sensing images record changes happening on the earth surface, which include not only abnormal changes like human activities and emergencies (e.g. fire, drought, insect pests etc.), but also changes caused by vegetation phenology and climate change. Yet challenges remain in analyzing global environmental changes and their driving forces. This paper proposes a robust Anomaly Based Change Detection method (ABCD) for time-series image analysis by detecting abnormal points in data sets, which do not need to follow a normal distribution. With ABCD we can detect when and where changes occur, which is a prerequisite of global change studies. ABCD was tested initially with 10-day SPOT VGT NDVI (Normalized Difference Vegetation Index) time series tracking land cover type changes, seasonality and noise, and then validated on real data over a large area in Jiangxi, south China. Initial results show that ABCD can rapidly and precisely detect spatial and temporal changes from long time-series images.
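
    As a simplified stand-in for ABCD (whose exact algorithm is not reproduced here), a seasonal-median plus MAD detector illustrates the same idea of distribution-free anomaly flagging on an NDVI-like composite series; all parameters and data below are synthetic:

```python
import numpy as np

def mad_anomalies(series, period, k=4.0):
    """Flag points whose deviation from the per-phase seasonal median
    exceeds k robust (MAD-based) scale units; no normality assumed."""
    x = np.asarray(series, dtype=float)
    seasonal = np.array([np.median(x[i::period]) for i in range(period)])
    residual = x - np.tile(seasonal, len(x) // period + 1)[:len(x)]
    mad = np.median(np.abs(residual - np.median(residual)))
    return np.abs(residual) > k * 1.4826 * mad   # 1.4826: MAD -> sigma

# Synthetic 10-day NDVI: 36 composites/year for 5 years, disturbance in year 4
t = np.arange(36 * 5)
ndvi = 0.5 + 0.2 * np.sin(2 * np.pi * t / 36) \
     + 0.01 * np.random.default_rng(0).normal(size=t.size)
ndvi[120:125] -= 0.3                      # abrupt drop, e.g. fire
flags = mad_anomalies(ndvi, period=36)
```

    Removing the seasonal median first is what keeps normal phenological cycles from being flagged, mirroring the paper's separation of phenology from genuine anomalies.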

  7. Optimal Strong-Stability-Preserving Runge–Kutta Time Discretizations for Discontinuous Galerkin Methods

    KAUST Repository

    Kubatko, Ethan J.; Yeager, Benjamin A.; Ketcheson, David I.

    2013-01-01

    Discontinuous Galerkin (DG) spatial discretizations are often used in a method-of-lines approach with explicit strong-stability-preserving (SSP) Runge–Kutta (RK) time steppers for the numerical solution of hyperbolic conservation laws. The time steps that are employed in this type of approach must satisfy Courant–Friedrichs–Lewy stability constraints that are dependent on both the region of absolute stability and the SSP coefficient of the RK method. While existing SSPRK methods have been optimized with respect to the latter, it is in fact the former that gives rise to stricter constraints on the time step in the case of RKDG stability. Therefore, in this work, we present the development of new “DG-optimized” SSPRK methods with stability regions that have been specifically designed to maximize the stable time step size for RKDG methods of a given order in one space dimension. These new methods represent the best available RKDG methods in terms of computational efficiency, with significant improvements over methods using existing SSPRK time steppers that have been optimized with respect to SSP coefficients. Second-, third-, and fourth-order methods with up to eight stages are presented, and their stability properties are verified through application to numerical test cases.
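
    As an illustration of the kind of time stepper being optimized, here is the classic three-stage, third-order SSP Runge-Kutta scheme of Shu and Osher (not one of the new DG-optimized methods) applied in a method-of-lines fashion to an upwind discretization of linear advection:

```python
import numpy as np

def ssprk3_step(u, dt, rhs):
    """Shu-Osher third-order SSP Runge-Kutta step: a convex combination
    of forward-Euler substeps, which is what preserves strong stability."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

# First-order upwind semi-discretization of u_t + u_x = 0, periodic grid
nx = 100
dx = 1.0 / nx
x = np.arange(nx) * dx
u = np.exp(-100 * (x - 0.5) ** 2)

def rhs(v):
    return -(v - np.roll(v, 1)) / dx

dt = 0.5 * dx                      # CFL number 0.5
u_new = u.copy()
for _ in range(int(round(1.0 / dt))):   # advect once around the period
    u_new = ssprk3_step(u_new, dt, rhs)
```

    The stable CFL number used here is set by exactly the interplay of the RK stability region and the spatial operator that the paper's DG-optimized schemes are designed to enlarge.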

  9. Neutron spectrum measurement using rise-time discrimination method

    International Nuclear Information System (INIS)

    Luo Zhiping; Suzuki, C.; Kosako, T.; Ma Jizeng

    2009-01-01

    The PSD (pulse shape discrimination) method can be used to measure the fast neutron spectrum in a mixed n/γ field. A set of assemblies for measuring the pulse height distribution of neutrons is built up, based on a large-volume NE213 liquid scintillator and standard NIM circuits, using the rise-time discrimination method. The response matrix is then calculated using a Monte Carlo method. The energy calibration of the pulse height distribution is accomplished using a ⁶⁰Co radioisotope. The neutron spectrum of the mono-energetic accelerator neutron source is obtained by an unfolding process. Finally, suggestions for further improvement of the system are presented. (authors)

  10. Approximate k-NN delta test minimization method using genetic algorithms: Application to time series

    CERN Document Server

    Mateo, F; Gadea, Rafael; Sovilj, Dusan

    2010-01-01

    In many real world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that intends to find a global optimum set of input variables that minimize the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation by its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
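The Delta Test criterion minimized in this paper can be sketched in a few lines. The example below uses an exact brute-force nearest-neighbor search (the paper substitutes an approximate k-NN for speed) on hypothetical data where only one of three inputs is relevant, illustrating how irrelevant variables inflate the criterion:

```python
import numpy as np

def delta_test(X, y):
    """Delta Test: half the mean squared difference between each output and
    the output of its nearest neighbour in input space (exact brute force;
    the paper uses an approximate k-NN search for speed)."""
    X = np.asarray(X, float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)        # exclude each point as its own neighbour
    nn = d2.argmin(axis=1)
    return 0.5 * np.mean((y - y[nn]) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))          # only the first column is relevant
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(500)

dt_all = delta_test(X, y)               # irrelevant inputs inflate the estimate
dt_sel = delta_test(X[:, :1], y)        # the "good" variable subset
print(dt_sel, dt_all)                   # dt_sel should be near the noise variance 0.01
```

A genetic algorithm, as in the paper, would search over binary masks of input variables for the subset minimizing this value.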

  11. Cost and benefit including value of life, health and environmental damage measured in time units

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Friis-Hansen, Peter

    2009-01-01

    Key elements of the authors' work on money equivalent time allocation to costs and benefits in risk analysis are put together as an entity. This includes the data supported dimensionless analysis of an equilibrium relation between total population work time and gross domestic product leading...... of this societal value over the actual costs, used by the owner for economically optimizing an activity, motivates a simple risk accept criterion suited to be imposed on the owner by the public. An illustration is given concerning allocation of economical means for mitigation of loss of life and health on a ferry...

  12. A Compact Unconditionally Stable Method for Time-Domain Maxwell's Equations

    Directory of Open Access Journals (Sweden)

    Zhuo Su

    2013-01-01

    Full Text Available Higher order unconditionally stable methods are effective ways of simulating field behaviors in electromagnetic problems since they are free of Courant–Friedrichs–Lewy conditions. The development of accurate schemes with lower computational expenditure is desirable. A compact fourth-order split-step unconditionally-stable finite-difference time-domain method (C4OSS-FDTD) is proposed in this paper. This method is based on a four-step splitting form in time, constructed from a symmetric operator and uniform splitting. The introduction of a spatial compact operator further improves its performance. Analyses of stability and numerical dispersion are carried out. Compared with its noncompact counterpart, the proposed method has reduced computational expenditure while keeping the same level of accuracy. Comparisons with other compact unconditionally-stable methods are provided. Numerical dispersion and anisotropy errors are shown to be lower than those of previous compact unconditionally-stable methods.

  13. Methods of forming aluminum oxynitride-comprising bodies, including methods of forming a sheet of transparent armor

    Science.gov (United States)

    Chu, Henry Shiu-Hung [Idaho Falls, ID; Lillo, Thomas Martin [Idaho Falls, ID

    2008-12-02

    The invention includes methods of forming an aluminum oxynitride-comprising body. For example, a mixture is formed which comprises A:B:C in a respective molar ratio in the range of 9:3.6-6.2:0.1-1.1, where "A" is Al₂O₃, "B" is AlN, and "C" is a total of one or more of B₂O₃, SiO₂, Si–Al–O–N, and TiO₂. The mixture is sintered at a temperature of at least 1,600 °C at a pressure of no greater than 500 psia, effective to form an aluminum oxynitride-comprising body which is at least internally transparent and has at least 99% of maximum theoretical density.

  14. A finite element method for SSI time history calculation

    International Nuclear Information System (INIS)

    Ni, X.; Gantenbein, F.; Petit, M.

    1989-01-01

    The proposed method is based on a finite element model of both the soil and the structure, combined with a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of the method is presented, followed by applications: first a linear calculation, whose results are compared to those obtained by standard methods; then results for nonlinear behavior are described

  15. Time-dependent density-functional theory in the projector augmented-wave method

    DEFF Research Database (Denmark)

    Walter, Michael; Häkkinen, Hannu; Lehtovaara, Lauri

    2008-01-01

    We present the implementation of the time-dependent density-functional theory both in linear-response and in time-propagation formalisms using the projector augmented-wave method in real-space grids. The two technically very different methods are compared in the linear-response regime where we...

  16. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.

  17. Engine including hydraulically actuated valvetrain and method of valve overlap control

    Science.gov (United States)

    Cowgill, Joel [White Lake, MI

    2012-05-08

    An exhaust valve control method may include displacing an exhaust valve in communication with the combustion chamber of an engine to an open position using a hydraulic exhaust valve actuation system and returning the exhaust valve to a closed position using the hydraulic exhaust valve actuation assembly. During closing, the exhaust valve may be displaced for a first duration from the open position to an intermediate closing position at a first velocity by operating the hydraulic exhaust valve actuation assembly in a first mode. The exhaust valve may be displaced for a second duration greater than the first duration from the intermediate closing position to a fully closed position at a second velocity at least eighty percent less than the first velocity by operating the hydraulic exhaust valve actuation assembly in a second mode.

  18. Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods

    Science.gov (United States)

    Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.

    2016-12-01

    Observation and understanding of the physics of the 11-year solar activity cycle and the 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuations in the near-Earth environment that have been found increasingly important to human life in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the widespread use of CCD cameras in the 1990s, 35-mm films were the major media for storing images. The research group at NJIT recently finished the digitization of film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period of 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientifically valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all of the 14 million images, which would be almost impossible to do manually. We implemented three different methods to recognize the time stamps automatically, including Optical Character Recognition (OCR), Classification Tree, and TensorFlow. The latter two are machine learning approaches that are currently very popular in the pattern recognition area. We will present some sample images and the results of clock recognition from all three methods.

  19. A simple method to assess unsaturated zone time lag in the travel time from ground surface to receptor.

    Science.gov (United States)

    Sousa, Marcelo R; Jones, Jon P; Frind, Emil O; Rudolph, David L

    2013-01-01

    In contaminant travel from ground surface to groundwater receptors, the time taken in travelling through the unsaturated zone is known as the unsaturated zone time lag. Depending on the situation, this time lag may or may not be significant within the context of the overall problem. A method is presented for assessing the importance of the unsaturated zone in the travel time from source to receptor in terms of estimates of both the absolute and the relative advective times. A choice of different techniques for both unsaturated and saturated travel time estimation is provided. This method may be useful for practitioners to decide whether to incorporate unsaturated processes in conceptual and numerical models and can also be used to roughly estimate the total travel time between points near ground surface and a groundwater receptor. This method was applied to a field site located in a glacial aquifer system in Ontario, Canada. Advective travel times were estimated using techniques with different levels of sophistication. The application of the proposed method indicates that the time lag in the unsaturated zone is significant at this field site and should be taken into account. For this case, sophisticated and simplified techniques lead to similar assessments when the same knowledge of the hydraulic conductivity field is assumed. When there is significant uncertainty regarding the hydraulic conductivity, simplified calculations did not lead to a conclusive decision. Copyright © 2012 Elsevier B.V. All rights reserved.
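The simplest version of such an assessment is a back-of-envelope comparison of advective travel times, with the unsaturated pore velocity taken as recharge flux over moisture content and the saturated pore velocity from Darcy's law. The sketch below uses entirely hypothetical parameter values, not those of the Ontario field site:

```python
# Back-of-envelope advective travel times (all parameter values hypothetical).
theta = 0.25      # volumetric moisture content of the unsaturated zone (-)
L_u   = 5.0       # unsaturated-zone thickness (m)
q     = 0.3       # recharge-limited downward Darcy flux (m/yr)

n_e = 0.3         # effective porosity of the aquifer (-)
K   = 100.0       # saturated hydraulic conductivity (m/yr)
i   = 0.005       # horizontal hydraulic gradient (-)
L_s = 200.0       # saturated path length to the receptor (m)

t_u = theta * L_u / q        # unsaturated time lag: pore velocity = q / theta
t_s = n_e * L_s / (K * i)    # saturated travel time: pore velocity = K*i / n_e
frac = t_u / (t_u + t_s)     # relative importance of the unsaturated zone
print(t_u, t_s, frac)
```

With these numbers the unsaturated lag is a few percent of the total travel time, which would argue for neglecting it; the paper's point is that this ratio, not the absolute lag, should drive the modeling decision.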

  20. A wavelet method for modeling and despiking motion artifacts from resting-state fMRI time series

    Science.gov (United States)

    Patel, Ameera X.; Kundu, Prantik; Rubinov, Mikail; Jones, P. Simon; Vértes, Petra E.; Ersche, Karen D.; Suckling, John; Bullmore, Edward T.

    2014-01-01

    The impact of in-scanner head movement on functional magnetic resonance imaging (fMRI) signals has long been established as undesirable. These effects have been traditionally corrected by methods such as linear regression of head movement parameters. However, a number of recent independent studies have demonstrated that these techniques are insufficient to remove motion confounds, and that even small movements can spuriously bias estimates of functional connectivity. Here we propose a new data-driven, spatially-adaptive, wavelet-based method for identifying, modeling, and removing non-stationary events in fMRI time series, caused by head movement, without the need for data scrubbing. This method involves the addition of just one extra step, the Wavelet Despike, in standard pre-processing pipelines. With this method, we demonstrate robust removal of a range of different motion artifacts and motion-related biases including distance-dependent connectivity artifacts, at a group and single-subject level, using a range of previously published and new diagnostic measures. The Wavelet Despike is able to accommodate the substantial spatial and temporal heterogeneity of motion artifacts and can consequently remove a range of high and low frequency artifacts from fMRI time series, that may be linearly or non-linearly related to physical movements. Our methods are demonstrated by the analysis of three cohorts of resting-state fMRI data, including two high-motion datasets: a previously published dataset on children (N = 22) and a new dataset on adults with stimulant drug dependence (N = 40). We conclude that there is a real risk of motion-related bias in connectivity analysis of fMRI data, but that this risk is generally manageable, by effective time series denoising strategies designed to attenuate synchronized signal transients induced by abrupt head movements. The Wavelet Despiking software described in this article is freely available for download at www

  1. A Real-Time Thermal Self-Elimination Method for Static Mode Operated Freestanding Piezoresistive Microcantilever-Based Biosensors.

    Science.gov (United States)

    Ku, Yu-Fu; Huang, Long-Sun; Yen, Yi-Kuang

    2018-02-28

    Here, we provide a method and apparatus for real-time compensation of the thermal effect of single free-standing piezoresistive microcantilever-based biosensors. The sensor chip contained an on-chip fixed piezoresistor that served as a temperature sensor, and a multilayer microcantilever with an embedded piezoresistor served as a biomolecular sensor. This method employed the calibrated relationship between the resistance and the temperature of piezoresistors to eliminate the thermal effect on the sensor, including the temperature coefficient of resistance (TCR) and bimorph effect. From experimental results, the method was verified to reduce the signal of thermal effect from 25.6 μV/°C to 0.3 μV/°C, which was approximately two orders of magnitude less than that before the processing of the thermal elimination method. Furthermore, the proposed approach and system successfully demonstrated its effective real-time thermal self-elimination on biomolecular detection without any thermostat device to control the environmental temperature. This method realizes the miniaturization of an overall measurement system of the sensor, which can be used to develop portable medical devices and microarray analysis platforms.
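The core idea, subtracting a calibrated linear thermal response measured by the on-chip reference resistor, can be sketched as follows. All numbers are hypothetical, chosen only to echo the ~25.6 μV/°C drift reported in the abstract; the actual calibration and readout chain are described in the paper:

```python
import numpy as np

# Hypothetical calibration run: raw sensor output (in uV) at known temperatures,
# drifting at roughly the 25.6 uV/degC reported in the abstract.
cal_T = np.array([20.0, 22.0, 24.0, 26.0, 28.0])
cal_V = np.array([0.0, 51.2, 102.5, 153.7, 204.9])

a, b = np.polyfit(cal_T, cal_V, 1)      # linear thermal response V_th(T) = a*T + b

def compensate(v_meas, temperature):
    """Remove the calibrated thermal contribution from the raw cantilever signal,
    using the temperature reported by the on-chip fixed piezoresistor."""
    return v_meas - (a * temperature + b)

# Simulated measurement: 12 uV of biomolecular signal riding on thermal drift.
T_now = 25.0
v_raw = (a * T_now + b) + 12.0
print(compensate(v_raw, T_now))
```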

  2. Real-time stability monitoring method for boiling water reactor nuclear power plants

    International Nuclear Information System (INIS)

    Fukunishi, K.; Suzuki, S.

    1987-01-01

    A method for real-time stability monitoring is developed for supervising the steady-state operation of a boiling water reactor core. The decay ratio of the reactor power fluctuation is determined by measuring only the output neutron noise. The concept of an inverse system is introduced to identify the dynamic characteristics of the reactor core. The adoption of an adaptive digital filter is useful for real-time identification. A feasibility test that used measured output noise as an indication of reactor power suggests that this method is useful in a real-time stability monitoring system. Using this method, the tedious and difficult work of modeling reactor core dynamics can be reduced. The method employs a simple algorithm that eliminates the need for stochastic computation, thus making it suitable for real-time computation with a simple microprocessor. In addition, there is no need to disturb the reactor core during operation. Real-time stability monitoring using the proposed algorithm may allow operation with smaller stability margins

  3. Study on scan timing using a test injection method in head CTA

    International Nuclear Information System (INIS)

    Sekito, Yuichi; Sanada, Hidenori

    2005-01-01

    In head computed tomographic angiography (CTA), circulation from the arterial phase to the venous phase is more rapid than in other regions. Therefore, it is necessary to determine the correct scan timing to obtain ideal CTA images. A test injection method makes it possible to set the correct scan timing from the time density curve (TDC) for each subject. The method, however, has a weak point: there is a time lag in the arrival time at the peak point of the contrast medium on the TDC between the test injection and the primary examination, because of the difference in the total volume of contrast medium used. The purpose of this study was to calculate the delay time on the TDC between the two scans. We used the test injection method and the bolus tracking method in the primary examination. The average errors in the start time (Δt1) and slope change time (Δt2) of the contrast medium on the TDC between the test injection and the primary examination were 0.15 sec and 3.05 sec, respectively. The results indicated that it is important to grasp the delay in the start time and peak arrival time of the contrast medium between the test injection and the primary examination to obtain ideal images in head CTA. (author)
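Extracting the start time and peak-arrival time from a time-density curve, and hence the delays between the test injection and the primary scan, can be sketched on synthetic data. The curves and thresholds below are purely illustrative, not patient data or the study's actual protocol:

```python
import numpy as np

def tdc_times(t, hu, threshold=10.0):
    """Contrast start time (first sample above baseline + threshold)
    and peak-arrival time extracted from a time-density curve."""
    baseline = hu[:3].mean()
    t_start = t[np.flatnonzero(hu > baseline + threshold)[0]]
    t_peak = t[np.argmax(hu)]
    return t_start, t_peak

# Synthetic gamma-variate-like curves (purely illustrative):
t = np.arange(0.0, 40.0, 0.5)
def bolus(t0, scale):
    s = np.clip(t - t0, 0.0, None)
    return scale * s**2 * np.exp(-s / 3.0)

test_curve = bolus(8.0, 4.0)     # small test-injection volume
main_curve = bolus(9.5, 12.0)    # larger primary-scan volume: later and higher
s1, p1 = tdc_times(t, test_curve)
s2, p2 = tdc_times(t, main_curve)
print(s2 - s1, p2 - p1)          # delays in start time and peak-arrival time
```

Note how the larger contrast volume of the primary scan shifts both the start and peak times, which is exactly the lag the study sets out to quantify.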

  4. A finite element method for SSI time history calculations

    International Nuclear Information System (INIS)

    Ni, X.M.; Gantenbein, F.; Petit, M.

    1989-01-01

    The proposed method is based on a finite element model of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method is presented, then applications are given: first a linear calculation, for which results are compared to those obtained by standard methods; then results for nonlinear behavior are described

  5. The Markov chain method for solving dead time problems in the space dependent model of reactor noise

    International Nuclear Information System (INIS)

    Degweker, S.B.

    1997-01-01

    The discrete time Markov chain approach for deriving the statistics of time-correlated pulses, in the presence of a non-extending dead time, is extended to include the effect of space energy distribution of the neutron field. Equations for the singlet and doublet densities of follower neutrons are derived by neglecting correlations beyond the second order. These equations are solved by the modal method. It is shown that in the unimodal approximation, the equations reduce to the point model equations with suitably defined parameters. (author)

  6. A meshless method for solving two-dimensional variable-order time fractional advection-diffusion equation

    Science.gov (United States)

    Tayebi, A.; Shekari, Y.; Heydari, M. H.

    2017-07-01

    Several physical phenomena, such as the transport of pollutants, energy, particles and many others, can be described by the well-known convection-diffusion equation, which is a combination of the diffusion and advection equations. In this paper, this equation is generalized with the concept of variable-order fractional derivatives. The generalized equation is called the variable-order time fractional advection-diffusion equation (V-OTFA-DE). An accurate and robust meshless method based on the moving least squares (MLS) approximation and a finite difference scheme is proposed for its numerical solution on two-dimensional (2-D) arbitrary domains. In the time domain, the finite difference technique with a θ-weighted scheme is employed, and in the space domain the MLS approximation, to obtain appropriate semi-discrete solutions. Since the newly developed method is a meshless approach, it does not require any background mesh structure to obtain semi-discrete solutions of the problem under consideration, and the numerical solutions are constructed entirely from a set of scattered nodes. The proposed method is validated on three different examples, including two benchmark problems and an applied problem of pollutant distribution in the atmosphere. In all cases, the obtained results show that the proposed method is very accurate and robust. Moreover, a remarkable property of the proposed method, a so-called positive scheme, is observed in solving concentration transport phenomena.
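The θ-weighted time discretization used here is a standard device and can be sketched independently of the MLS machinery. In the toy example below a second-order finite-difference stencil stands in for the paper's MLS spatial operator, and the equation is ordinary (non-fractional) diffusion, so the decay rate of the first mode is known exactly:

```python
import numpy as np

# theta-weighted time discretization for u_t = u_xx. A standard second-order
# finite-difference stencil stands in for the paper's MLS spatial operator.
n, dt, steps = 50, 1e-3, 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
A = (np.diag(-2.0 * np.ones(n)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2
A[0, :] = 0.0                      # homogeneous Dirichlet ends: u(0) = u(1) = 0
A[-1, :] = 0.0

theta = 0.5                        # 0 = explicit, 1 = implicit, 0.5 = Crank-Nicolson
I = np.eye(n)
lhs = I - theta * dt * A
rhs = I + (1.0 - theta) * dt * A

u = np.sin(np.pi * x)              # mode that decays like exp(-pi^2 t)
for _ in range(steps):
    u = np.linalg.solve(lhs, rhs @ u)
print(u.max(), np.exp(-np.pi**2 * dt * steps))
```

In the paper the same θ-weighting is applied to the variable-order fractional time derivative, with the MLS approximation supplying the spatial discretization on scattered nodes.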

  7. Efficient methods for time-absorption (α) eigenvalue calculations

    International Nuclear Information System (INIS)

    Hill, T.R.

    1983-01-01

    The time-absorption eigenvalue (α) calculation is one of the options found in most discrete-ordinates transport codes. Several methods have been developed at Los Alamos to improve the efficiency of this calculation. Two procedures, based on coarse-mesh rebalance, to accelerate the α eigenvalue search are derived. A hybrid scheme to automatically choose the more-effective rebalance method is described. The α rebalance scheme permits some simple modifications to the iteration strategy that eliminates many unnecessary calculations required in the standard search procedure. For several fast supercritical test problems, these methods resulted in convergence with one-fifth the number of iterations required for the conventional eigenvalue search procedure

  8. Time discretization of the point kinetic equations using matrix exponential method and First-Order Hold

    International Nuclear Information System (INIS)

    Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To

    2013-01-01

    Highlights: • Numerical solution of stiff differential equations using the matrix exponential method. • The approximation is based on a First-Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method proves useful and effective. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of the neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetic equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above realization, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetic system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetic equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
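The matrix exponential step at the heart of this approach can be sketched for one-delayed-group point kinetics. The example below holds reactivity constant over each step (a zero-order hold, for brevity; the paper's contribution is the First-Order Hold refinement), and all parameter values are illustrative:

```python
import numpy as np

def expm(M, terms=24, squarings=10):
    """Dense matrix exponential via scaling-and-squaring with a Taylor series
    (kept self-contained here; scipy.linalg.expm would do the same job)."""
    A = M / 2.0**squarings
    E = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

# One-delayed-group point kinetics, state x = [n, C] (illustrative parameters):
#   n' = ((rho - beta)/Lam) * n + lam * C
#   C' = (beta/Lam) * n - lam * C
beta, lam, Lam = 0.0065, 0.08, 1.0e-4

def step(x, rho, dt):
    # Matrix-exponential step: exact when rho is held constant over the step
    # (zero-order hold; the paper refines this with a First-Order Hold).
    A = np.array([[(rho - beta) / Lam, lam],
                  [beta / Lam, -lam]])
    return expm(A * dt) @ x

x = np.array([1.0, beta / (Lam * lam)])   # equilibrium state for rho = 0
for _ in range(100):
    x = step(x, 0.0, 0.1)                 # large 0.1 s steps despite stiffness
print(x[0])                               # neutron density stays at equilibrium
```

Despite the stiffness (the prompt eigenvalue here is on the order of -10^2 per second), the exponential step remains stable and accurate at sampling periods far beyond what an explicit scheme would tolerate.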

  9. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling

  10. Limitations of the time slide method of background estimation

    International Nuclear Information System (INIS)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis

    2010-01-01

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.
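The basic time-slide construction can be sketched with synthetic Poisson triggers in two detectors. The rates, window, and circular-shift scheme below are hypothetical, chosen only so the shifts are much larger than the coincidence window, the regime the abstract identifies as safe for moderate non-stationarity:

```python
import numpy as np

rng = np.random.default_rng(1)
T, rate, window = 10_000.0, 0.01, 0.1     # live time (s), event rate (Hz), window (s)

# Independent Poisson "trigger" times in two detectors: pure accidental background.
t1 = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
t2 = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))

def coincidences(a, b, win):
    """Number of events in a with at least one partner in sorted b within +/- win."""
    idx = np.searchsorted(b, a)
    left = np.abs(a - b[np.clip(idx - 1, 0, len(b) - 1)]) <= win
    right = np.abs(a - b[np.clip(idx, 0, len(b) - 1)]) <= win
    return int(np.count_nonzero(left | right))

# Background estimate: mean coincidence count over many circular time slides,
# each shift far larger than the coincidence window (and than any signal).
shifts = np.arange(1, 101) * 5.0
bg = np.mean([coincidences(t1, np.sort((t2 + s) % T), window) for s in shifts])
print(bg, coincidences(t1, t2, window))   # slide estimate vs zero-lag count
```

The paper's point is about the limits of this estimator: slow fluctuations in the trigger rate correlate the slides, biasing the estimate and saturating its variance as more shifts are added.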

  11. Limitations of the time slide method of background estimation

    Energy Technology Data Exchange (ETDEWEB)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis, E-mail: mwas@lal.in2p3.f [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France)

    2010-10-07

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.

  12. Laser induced breakdown spectroscopy of the uranium including calcium. Time resolved measurement spectroscopic analysis (Contract research)

    International Nuclear Information System (INIS)

    Akaoka, Katsuaki; Maruyama, Youichiro; Oba, Masaki; Miyabe, Masabumi; Otobe, Haruyoshi; Wakaida, Ikuo

    2010-05-01

    For the remote analysis of low-DF (decontamination factor) TRU (transuranic) fuel, laser-induced breakdown spectroscopy (LIBS) was applied to uranium oxide containing a small amount of calcium oxide. Characteristics such as spectrum intensity and plasma excitation temperature were measured using time-resolved spectroscopy. As a result, it was found that, in order to obtain a stable intensity of the calcium spectrum relative to the uranium spectrum, the optimum observation delay time is 4 microseconds or more after laser irradiation. (author)

  13. Teaching Methods in Biology Education and Sustainability Education Including Outdoor Education for Promoting Sustainability—A Literature Review

    Directory of Open Access Journals (Sweden)

    Eila Jeronen

    2016-12-01

    Full Text Available There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education in several scientific databases. The article provides an overview of 24 selected articles published in peer-reviewed scientific journals from 2006–2016. The data was analyzed using qualitative content analysis. Altogether, 16 journals were selected and 24 articles were analyzed in detail. The foci of the analyses were teaching methods, learning environments, knowledge and thinking skills, psychomotor skills, emotions and attitudes, and evaluation methods. Additionally, features of good methods were investigated and their implications for teaching were emphasized. In total, 22 different teaching methods were found to improve sustainability education in different ways. The most emphasized teaching methods were those in which students worked in groups and participated actively in learning processes. Research points toward the value of teaching methods that provide a good introduction and supportive guidelines and include active participation and interactivity.

  14. A New Time Calibration Method for Switched-capacitor-array-based Waveform Samplers.

    Science.gov (United States)

    Kim, H; Chen, C-T; Eclov, N; Ronzhin, A; Murat, P; Ramberg, E; Los, S; Moses, W; Choong, W-S; Kao, C-M

    2014-12-11

    We have developed a new time calibration method for the DRS4 waveform sampler that enables us to precisely measure the non-uniform sampling interval inherent in the switched-capacitor cells of the DRS4. The method uses the proportionality between the differential amplitude and sampling interval of adjacent switched-capacitor cells responding to a sawtooth-shape pulse. In the experiment, a sawtooth-shape pulse with a 40 ns period generated by a Tektronix AWG7102 is fed to a DRS4 evaluation board for calibrating the sampling intervals of all 1024 cells individually. The electronic time resolution of the DRS4 evaluation board with the new time calibration is measured to be ~2.4 ps RMS by using two simultaneous Gaussian pulses with 2.35 ns full-width at half-maximum and applying a Gaussian fit. The time resolution dependencies on the time difference with the new time calibration are measured and compared to results obtained by another method. The new method could be applicable for other switched-capacitor-array technology-based waveform samplers for precise time calibration.
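The calibration principle, that on a linear ramp the voltage step between adjacent capacitor cells is proportional to that cell's sampling interval, can be sketched with a simulated array. All numbers (cell count aside), including the ramp slope, noise level, and number of averaged ramps, are hypothetical, not the DRS4 evaluation-board settings:

```python
import numpy as np

rng = np.random.default_rng(7)
n_cells = 1024
true_dt = (1.0 + 0.1 * rng.standard_normal(n_cells - 1)) * 1e-9  # ~1 ns intervals

# One flank of the sawtooth is a linear ramp, so the voltage step recorded
# between adjacent capacitor cells is proportional to that cell's interval.
slope = 0.05e9              # ramp slope in V/s (hypothetical)
noise = 1e-3                # per-sample readout noise in V (hypothetical)

n_ramps = 200
dv_sum = np.zeros(n_cells - 1)
for _ in range(n_ramps):
    v = np.concatenate(([0.0], np.cumsum(slope * true_dt)))
    v += noise * rng.standard_normal(n_cells)
    dv_sum += np.diff(v)
est_dt = dv_sum / n_ramps / slope          # averaged differential amplitudes

print(np.max(np.abs(est_dt - true_dt)))    # worst-case interval error (s)
```

Averaging over many ramps beats the readout noise down to the few-picosecond level per cell, the same scale as the ~2.4 ps RMS electronic resolution reported in the abstract.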

  15. Animal DNA identification in food products and animal feed by real time polymerase chain reaction method

    Directory of Open Access Journals (Sweden)

    Людмила Мар’янівна Іщенко

    2016-11-01

    Full Text Available Diagnostic tests for the species identification of beef, pork and chicken by the real-time polymerase chain reaction method were evaluated. Meat foods, including heat-treated products and animal feed, were used for the research. Inconsistencies were revealed between the actual composition of some meat products and the composition declared by the manufacturer.

  16. Real-Time Detection Methods to Monitor TRU Compositions in UREX+ Process Streams

    Energy Technology Data Exchange (ETDEWEB)

    McDeavitt, Sean; Charlton, William; Indacochea, J Ernesto; taleyarkhan, Rusi; Pereira, Candido

    2013-03-01

    The U.S. Department of Energy has developed advanced methods for reprocessing spent nuclear fuel. The majority of this development was accomplished under the Advanced Fuel Cycle Initiative (AFCI), building on a strong legacy of process development R&D over the past 50 years. The most prominent processing method under development is named UREX+. The name refers to a family of processing methods that begin with the Uranium Extraction (UREX) process and incorporate a variety of other methods to separate uranium, selected fission products, and the transuranic (TRU) isotopes from dissolved spent nuclear fuel. It is important to consider issues such as safeguards strategies and materials control and accountability methods. The importance of the nuclear fuel cycle continues to rise on national and international agendas. The U.S. Department of Energy is evaluating and developing advanced methods for safeguarding nuclear materials, along with instrumentation for various stages of the fuel cycle, especially in material balance areas (MBAs) and during reprocessing of used nuclear fuel. One of the challenges related to the implementation of any type of MBA and/or reprocessing technology (e.g., PUREX or UREX) is the real-time quantification and control of the transuranic (TRU) isotopes as they move through the process. Monitoring of higher actinides from their neutron emission (including multiplicity) and alpha signatures during transit in MBAs and in aqueous separations is a critical research area. By providing on-line real-time materials accountability, diversion of the materials becomes much more difficult.
The objective of this consortium was to develop real-time detection methods to monitor the efficacy of the UREX+ process and to safeguard the separated

  17. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  18. Long-memory time series theory and methods

    CERN Document Server

    Palma, Wilfredo

    2007-01-01

    Wilfredo Palma, PhD, is Chairman and Professor of Statistics in the Department of Statistics at Pontificia Universidad Católica de Chile. Dr. Palma has published several refereed articles and has received over a dozen academic honors and awards. His research interests include time series analysis, prediction theory, state space systems, linear models, and econometrics.

  19. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  20. A method for the determination of detector channel dead time for a neutron time-of-flight spectrometer

    International Nuclear Information System (INIS)

    Adib, M.; Salama, M.; Abd-Kawi, A.; Sadek, S.; Hamouda, I.

    1975-01-01

    A new method is developed to measure the dead time of a detector channel for a neutron time-of-flight spectrometer. The method is based on the simultaneous use of two identical BF3 detectors but with two different efficiencies, due to their different enrichment in 10B. The measurements were performed using the T.O.F. spectrometer installed at channel No. 6 of the ET-RR-1 reactor. The main contribution to the dead time was found to be due to the time analyser and the neutron detector used. The analyser dead time was determined using a square wave pulse generator with a frequency of 1 MC/S. For channel widths of 24.4 us, 48.8 us and 97.6 us, the weighted dead times for a statistical pulse distribution were found to be 3.25 us and 1.87 us respectively. The dead time of the detector contributes most of the counting losses and its value was found to be (33+-3) us
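
The counting-loss correction implied by a channel dead time can be sketched with the standard non-paralyzable model (an assumed model; the abstract does not state which correction the authors applied, and the rates below are illustrative):

```python
# Hedged sketch: non-paralyzable dead-time model.
def measured_rate(true_rate, dead_time):
    """Observed count rate m = n / (1 + n*tau) for true rate n."""
    return true_rate / (1.0 + true_rate * dead_time)

def corrected_rate(observed, dead_time):
    """Invert the model: n = m / (1 - m*tau)."""
    return observed / (1.0 - observed * dead_time)
```

With the quoted analyser dead time of 3.25 us, a hypothetical true rate of 10^4 counts/s would read about 3% low, and the inversion recovers it exactly.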

  1. Real-time earthquake monitoring using a search engine method.

    Science.gov (United States)

    Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong

    2014-12-04

    When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a fast computer search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
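
The core matching step of such an engine, before any indexing speed-up, can be sketched as a normalized-correlation scan over a waveform database (a simplification; the paper's fast search structure is not described in the abstract):

```python
import math

def best_match(query, database):
    """Return the index of the stored waveform with the highest
    normalized cross-correlation to the query (exact linear scan;
    the paper's engine accelerates this with an index)."""
    def norm(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v] if s else list(v)
    q = norm(query)
    scores = [sum(a * b for a, b in zip(q, norm(w))) for w in database]
    return max(range(len(scores)), key=scores.__getitem__)
```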

  2. Seasonal adjustment methods and real time trend-cycle estimation

    CERN Document Server

    Bee Dagum, Estela

    2016-01-01

    This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematic treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...
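
The simplest member of the moving-average filter family used for trend-cycle estimation is a centered moving average, sketched below (illustrative only; the linear filters and reproducing-kernel methods the book covers generalize this):

```python
def centered_ma(series, window):
    """Centered moving average as a basic trend-cycle filter.
    The window must be odd so the average is centered on a point."""
    assert window % 2 == 1
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]
```

A linear trend passes through the filter unchanged, which is exactly the property a trend-cycle estimator needs.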

  3. TRANSAT-- method for detecting the conserved helices of functional RNA structures, including transient, pseudo-knotted and alternative structures.

    Science.gov (United States)

    Wiebe, Nicholas J P; Meyer, Irmtraud M

    2010-06-24

    The prediction of functional RNA structures has attracted increased interest, as it allows us to study the potential functional roles of many genes. RNA structure prediction methods, however, assume that there is a unique functional RNA structure and also do not predict functional features required for in vivo folding. In order to understand how functional RNA structures form in vivo, we require sophisticated experiments or reliable prediction methods. So far, there exist only a few, experimentally validated transient RNA structures. On the computational side, there exist several computer programs which aim to predict the co-transcriptional folding pathway in vivo, but these make a range of simplifying assumptions and do not capture all features known to influence RNA folding in vivo. We want to investigate if evolutionarily related RNA genes fold in a similar way in vivo. To this end, we have developed a new computational method, Transat, which detects conserved helices of high statistical significance. We introduce the method, present a comprehensive performance evaluation and show that Transat is able to predict the structural features of known reference structures including pseudo-knotted ones as well as those of known alternative structural configurations. Transat can also identify unstructured sub-sequences bound by other molecules and provides evidence for new helices which may define folding pathways, supporting the notion that homologous RNA sequences not only assume a similar reference RNA structure, but also fold similarly. Finally, we show that the structural features predicted by Transat differ from those assuming thermodynamic equilibrium. Unlike the existing methods for predicting folding pathways, our method works in a comparative way. This has the disadvantage of not being able to predict features as function of time, but has the considerable advantage of highlighting conserved features and of not requiring a detailed knowledge of the cellular

  4. Time interval approach to the pulsed neutron logging method

    International Nuclear Information System (INIS)

    Zhao Jingwu; Su Weining

    1994-01-01

    The time interval between neighbouring neutrons emitted from a steady-state neutron source can be treated as that from a time-dependent neutron source. In the rock space, the neutron flux is given by the neutron diffusion equation and is composed of an infinite number of terms, each composed of two die-away curves. The delay action is discussed and used to measure the time interval with only one detector in the experiment. Nuclear reactions with the time distribution due to different types of radiations observed in neutron well-logging methods are presented with a view to obtaining the rock nuclear parameters from the time interval technique

  5. Improved methods for nightside time domain Lunar Electromagnetic Sounding

    Science.gov (United States)

    Fuqua-Haviland, H.; Poppe, A. R.; Fatemi, S.; Delory, G. T.; De Pater, I.

    2017-12-01

    Time Domain Electromagnetic (TDEM) Sounding isolates induced magnetic fields to remotely deduce material properties at depth. The first step of performing TDEM Sounding at the Moon is to fully characterize the dynamic plasma environment, and isolate geophysically induced currents from concurrently present plasma currents. The transfer function method requires a two-point measurement: an upstream reference measuring the pristine solar wind, and one downstream near the Moon. This method was last performed during Apollo assuming the induced fields on the nightside of the Moon expand as in an undisturbed vacuum within the wake cavity [1]. Here we present an approach to isolating induction and performing TDEM with any two-point magnetometer measurement at or near the surface of the Moon. Our models include a plasma induction model capturing the kinetic plasma environment within the wake cavity around a conducting Moon, and a geophysical forward model capturing induction in a vacuum. The combination of these two models enables the analysis of magnetometer data within the wake cavity. Plasma hybrid models use the upstream plasma conditions and interplanetary magnetic field (IMF) to capture the wake current systems formed around the Moon. The plasma kinetic equations are solved for ion particles with electrons as a charge-neutralizing fluid. These models accurately capture the large scale lunar wake dynamics for a variety of solar wind conditions: ion density, temperature, solar wind velocity, and IMF orientation [2]. Given the 3D orientation variability coupled with the large range of conditions seen within the lunar plasma environment, we characterize the environment one case at a time. The global electromagnetic induction response of the Moon in a vacuum has been solved numerically for a variety of electrical conductivity models using the finite-element method implemented within the COMSOL software. This model solves for the geophysically induced response in vacuum to

  6. Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.

    Science.gov (United States)

    Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam

    2015-01-01

    Hepatitis B (HB) is a major cause of global mortality. Accurately predicting the trend of the disease can provide an appropriate basis for health policy on disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The Weighted Markov Chain (WMC) method, based on Markov chain theory, and two time series models, Holt Exponential Smoothing (HES) and SARIMA, were applied to the data. The results of the different methods were compared by the percentage of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters serving as the states of the Markov chain. The correctly predicted percentages of the first and second clusters for the WMC, HES and SARIMA methods were (100, 0), (84, 67) and (79, 47) respectively. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the results of the three models indicated that, given the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.
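
Of the three methods compared, Holt Exponential Smoothing is the simplest to sketch. A minimal implementation, with illustrative smoothing parameters rather than values fitted to the Hamadan data:

```python
def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Holt's linear (double) exponential smoothing: maintain a smoothed
    level and trend, then extrapolate `horizon` steps ahead."""
    level = series[0]
    trend = series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

On a perfectly linear series the level and trend lock onto the slope, so the one-step forecast extends the line exactly.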

  7. 41 CFR 302-2.10 - Does the 2-year time period in § 302-2.8 include time that I cannot travel and/or transport my...

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 4 2010-07-01 2010-07-01 false Does the 2-year time period in § 302-2.8 include time that I cannot travel and/or transport my household effects due to... time that I cannot travel and/or transport my household effects due to shipping restrictions to or from...

  8. Numerical simulation of electromagnetic wave propagation using time domain meshless method

    International Nuclear Information System (INIS)

    Ikuno, Soichiro; Fujita, Yoshihisa; Itoh, Taku; Nakata, Susumu; Nakamura, Hiroaki; Kamitani, Atsushi

    2012-01-01

    The electromagnetic wave propagation in variously shaped waveguides is simulated using the meshless time domain method (MTDM). Generally, the Finite Difference Time Domain (FDTD) method is applied for electromagnetic wave propagation simulation. However, the numerical domain must be divided into rectangular meshes if the FDTD method is applied. On the other hand, the node disposition of MTDM can easily describe the structure of an arbitrarily shaped waveguide, which is the large advantage of the meshless time domain method. The results of the computations show that the damping rate is stably calculated for R < 0.03, where R denotes the support radius of the weight function for the shape function. The results also indicate that the support radius R of the weight functions should be selected small, and that monomials must be used for calculating the shape functions. (author)
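
For contrast with the meshless approach, the rectangular-mesh FDTD scheme the abstract refers to can be sketched in one dimension with normalized units (grid size, step count and source are illustrative, not taken from the paper):

```python
import math

def fdtd_1d(steps=200, n=200):
    """Leapfrog E/H update in normalized units (Courant number 1) with a
    soft Gaussian source at the grid centre and PEC-like fixed ends."""
    ez = [0.0] * n
    hy = [0.0] * n
    for t in range(steps):
        for k in range(n - 1):            # H update from the curl of E
            hy[k] += ez[k + 1] - ez[k]
        ez[n // 2] += math.exp(-((t - 30.0) ** 2) / 100.0)  # soft source
        for k in range(1, n):             # E update from the curl of H
            ez[k] += hy[k] - hy[k - 1]
    return ez
```

The point of the contrast: every update above assumes a uniform rectangular grid, whereas MTDM replaces the index arithmetic with node-based shape functions.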

  9. Comparative analysis of clustering methods for gene expression time course data

    Directory of Open Access Journals (Sweden)

    Ivan G. Costa

    2004-01-01

    Full Text Available This work performs a data-driven comparative study of clustering methods used in the analysis of gene expression time courses (or time series). Five clustering methods found in the literature of gene expression analysis are compared: agglomerative hierarchical clustering, CLICK, dynamical clustering, k-means and self-organizing maps. In order to evaluate the methods, a k-fold cross-validation procedure adapted to unsupervised methods is applied. The accuracy of the results is assessed by comparing the partitions obtained in these experiments with gene annotation, such as protein function and series classification.

  10. Impact of Rainfall, Sales Method, and Time on Land Prices

    OpenAIRE

    Stephens, Steve; Schurle, Bryan

    2013-01-01

    Land prices in Western Kansas are analyzed using regression to estimate the influence of rainfall, sales method, and time of sale. The regression estimates indicate that land prices decreased about $27 for each range farther west, which can be converted to about $75 per inch of average rainfall. In addition, the influence of the method of sale (private sale or auction) is estimated along with the impact of time of sale. Auction sales prices are approximately $100 higher per acre than...

  11. Forecasts for the Canadian Lynx time series using a method that combines neural networks, wavelet shrinkage and decomposition

    Directory of Open Access Journals (Sweden)

    Levi Lopes Teixeira

    2015-12-01

    Full Text Available Time series forecasting is widely used in various areas of human knowledge, especially in the planning and strategic direction of companies. The success of this task depends on the forecasting techniques applied. In this paper, a hybrid approach to time series forecasting is suggested. To validate the methodology, a time series already modeled by other authors was chosen, allowing the comparison of results. The proposed methodology includes the following techniques: wavelet shrinkage, wavelet decomposition at level r, and artificial neural networks (ANN). Firstly, the time series to be forecasted is submitted to the proposed wavelet filtering method, which decomposes it into components of trend and linear residue. Then both are decomposed via level-r wavelet decomposition, generating r + 1 Wavelet Components (WCs) for each one; each WC is then individually modeled by an ANN. Finally, the predictions for all WCs are linearly combined, producing forecasts for the underlying time series. For evaluation purposes, the Canadian Lynx time series has been used, and all results achieved by the proposed method were better than others in the existing literature.
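
The wavelet decomposition step at the heart of this pipeline can be sketched with the simplest wavelet, the Haar transform (an illustration; the abstract does not specify which wavelet family the authors used):

```python
def haar_step(x):
    """One level of the (unnormalized) Haar transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Exact reconstruction from one level of approximation and detail."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out
```

Applying `haar_step` r times to successive approximations yields the r + 1 components that, in the paper's scheme, would each feed a separate ANN.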

  12. Introduction to numerical methods for time dependent differential equations

    CERN Document Server

    Kreiss, Heinz-Otto

    2014-01-01

    Introduces both the fundamentals of time dependent differential equations and their numerical solutions Introduction to Numerical Methods for Time Dependent Differential Equations delves into the underlying mathematical theory needed to solve time dependent differential equations numerically. Written as a self-contained introduction, the book is divided into two parts to emphasize both ordinary differential equations (ODEs) and partial differential equations (PDEs). Beginning with ODEs and their approximations, the authors provide a crucial presentation of fundamental notions, such as the t

  13. A systematic method for characterizing the time-range performance of ground penetrating radar

    International Nuclear Information System (INIS)

    Strange, A D

    2013-01-01

    The fundamental performance of ground penetrating radar (GPR) is linked to the ability to measure the signal time-of-flight in order to provide an accurate radar-to-target range estimate. Having knowledge of the actual time range and timing nonlinearities of a trace is therefore important when seeking to make quantitative range estimates. However, very few practical methods have been formally reported in the literature to characterize GPR time-range performance. This paper describes a method to accurately measure the true time range of a GPR to provide a quantitative assessment of the timing system performance and to detect and quantify the effects of timing nonlinearity due to timing jitter. The effect of varying the number of samples per trace on the true time range has also been investigated and recommendations on how to minimize the effects of timing errors are described. The approach has been applied in practice to characterize the timing performance of two commercial GPR systems. The importance of the method is that it provides the GPR community with a practical way to readily characterize the underlying accuracy of GPR systems. This in turn leads to enhanced target depth estimation as well as improving the accuracy of more sophisticated GPR signal processing methods. (paper)

  14. Methods for Detecting Early Warnings of Critical Transitions in Time Series Illustrated Using Simulated Ecological Data

    Science.gov (United States)

    Dakos, Vasilis; Carpenter, Stephen R.; Brock, William A.; Ellison, Aaron M.; Guttal, Vishwesha; Ives, Anthony R.; Kéfi, Sonia; Livina, Valerie; Seekell, David A.; van Nes, Egbert H.; Scheffer, Marten

    2012-01-01

    Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called ‘early warning signals’, and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data. PMID:22815897
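
Two of the generic indicators surveyed here, rising variance and rising lag-1 autocorrelation in a rolling window, can be sketched directly (window length and data are illustrative):

```python
def rolling_indicators(series, window):
    """Rolling variance and lag-1 autocorrelation; sustained increases in
    either are generic early-warning signals of an approaching transition."""
    out = []
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        num = sum((w[j] - mean) * (w[j + 1] - mean) for j in range(window - 1))
        den = sum((x - mean) ** 2 for x in w)
        ac1 = num / den if den else 0.0
        out.append((var, ac1))
    return out
```

In practice these indicators are computed on detrended data and tested against surrogate series; the sketch shows only the windowed statistics themselves.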

  15. Gauge-Invariant Formulation of Time-Dependent Configuration Interaction Singles Method

    Directory of Open Access Journals (Sweden)

    Takeshi Sato

    2018-03-01

    Full Text Available We propose a gauge-invariant formulation of the channel orbital-based time-dependent configuration interaction singles (TDCIS) method [Phys. Rev. A 74, 043420 (2006)], one of the powerful ab initio methods to investigate electron dynamics in atoms and molecules subject to an external laser field. In the present formulation, we derive the equations of motion (EOMs) in the velocity gauge using gauge-transformed time-dependent, not fixed, orbitals that are equivalent to the conventional EOMs in the length gauge using fixed orbitals. The new velocity-gauge EOMs avoid the use of the length-gauge dipole operator, which diverges at large distance, and allow us to exploit the computational advantages of the velocity-gauge treatment over the length-gauge one, e.g., faster convergence in simulations with intense and long-wavelength lasers, and the feasibility of exterior complex scaling as an absorbing boundary. The reformulated TDCIS method is applied to an exactly solvable model of a one-dimensional helium atom in an intense laser field to numerically demonstrate the gauge invariance. We also discuss the consistent method for evaluating the time derivative of an observable, which is relevant, e.g., in simulating high-harmonic generation.

  16. Full Waveform Inversion Using Oriented Time Migration Method

    KAUST Repository

    Zhang, Zhendong

    2016-04-12

    Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge into what we refer to as local minima of the objective function. In this thesis, I first look into the subject of the full model wavenumber to analyze the root of local minima and suggest possible ways to avoid this problem. I then analyze the possibility of recovering the corresponding wavenumber components through existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high-wavenumber part of the model. The conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and the model parameters (full wavenumber), and thus provides a possible cascaded inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, considering a mild lateral variation in the model, I find an analytical Fréchet derivative corresponding to the new objective function, with the gradient given by the oriented time-domain imaging method, which is independent of the background velocity. Specifically, I apply oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) to the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but is also capable of retrieving anisotropic parameters relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I

  17. Quality control methods in accelerometer data processing: defining minimum wear time.

    Directory of Open Access Journals (Sweden)

    Carly Rich

    Full Text Available BACKGROUND: When using accelerometers to measure physical activity, researchers need to determine whether subjects have worn their device for a sufficient period to be included in analyses. We propose a minimum wear criterion using population-based accelerometer data, and explore the influence of gender and the purposeful inclusion of children with weekend data on reliability. METHODS: Accelerometer data obtained during the age seven sweep of the UK Millennium Cohort Study were analysed. Children were asked to wear an ActiGraph GT1M accelerometer for seven days. Reliability coefficients (r) of mean daily counts/minute were calculated using the Spearman-Brown formula based on the intraclass correlation coefficient. An r of 1.0 indicates that all the variation is between- rather than within-children and that measurement is 100% reliable. An r of 0.8 is often regarded as acceptable reliability. Analyses were repeated on data from children who met different minimum daily wear times (one to 10 hours) and wear days (one to seven days). Analyses were conducted for all children, separately for boys and girls, and separately for children with and without weekend data. RESULTS: At least one hour of wear time data was obtained from 7,704 singletons. Reliability increased as the minimum number of days and the daily wear time increased. A high reliability (r = 0.86) and sample size (n = 6,528) was achieved when children with ≥ two days lasting ≥10 hours/day were included in analyses. Reliability coefficients were similar for both genders. Purposeful sampling of children with weekend data resulted in comparable reliabilities to those calculated independent of weekend wear. CONCLUSION: Quality control procedures should be undertaken before analysing accelerometer data in large-scale studies. Using data from children with ≥ two days lasting ≥10 hours/day should provide reliable estimates of physical activity. It's unnecessary to include only children
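
The Spearman-Brown step used above is a one-line formula; a sketch (the single-day reliability below is illustrative, not a result from the study):

```python
def spearman_brown(r1, k):
    """Spearman-Brown prophecy formula: reliability of a measurement
    lengthened by factor k, given the single-unit reliability r1."""
    return k * r1 / (1 + (k - 1) * r1)
```

For example, a hypothetical single-day reliability of 0.75 prophesies about 0.857 for two days of wear, which is how adding days raises reliability in this kind of analysis.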

  18. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity-score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and to three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
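
The shapelet-style similarity score that WTC is compared against can be sketched as the minimum Euclidean distance between a candidate window and every same-length slice of a series (a generic formulation; WTC's own score is not detailed in the abstract):

```python
def min_window_distance(series, window):
    """Smallest Euclidean distance between `window` and any contiguous
    slice of `series` of the same length (shapelet-style matching)."""
    m = len(window)
    best = float("inf")
    for i in range(len(series) - m + 1):
        d = sum((series[i + j] - window[j]) ** 2 for j in range(m)) ** 0.5
        best = min(best, d)
    return best
```

A window actually embedded in the series scores exactly zero; discriminative windows score low on one class and high on the other.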

  19. Improving method of real-time offset tuning for arterial signal coordination using probe trajectory data

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2016-12-01

    Full Text Available In the environment of intelligent transportation systems, traffic condition data would have higher resolution in time and space, which is especially valuable for managing the interrupted traffic at signalized intersections. There exist many algorithms for offset tuning, but few of them take advantage of modern traffic detection methods such as probe vehicle data. This study proposes a method using probe trajectory data to optimize and adjust offsets in real time. The critical point, representing the changing vehicle dynamics, is first defined as the basis of this approach. Using the critical points related to different states of traffic conditions, such as free flow, queue formation, and dissipation, various traffic status parameters can be estimated, including actual travel speed, queue dissipation rate, and standing queue length. The offset can then be adjusted on a cycle-by-cycle basis. The performance of this approach is evaluated using a simulation network. The results show that the trajectory-based approach can reduce travel time of the coordinated traffic flow when compared with using well-defined offline offsets.

  20. Methods of using structures including catalytic materials disposed within porous zeolite materials to synthesize hydrocarbons

    Science.gov (United States)

    Rollins, Harry W [Idaho Falls, ID; Petkovic, Lucia M [Idaho Falls, ID; Ginosar, Daniel M [Idaho Falls, ID

    2011-02-01

    Catalytic structures include a catalytic material disposed within a zeolite material. The catalytic material may be capable of catalyzing a formation of methanol from carbon monoxide and/or carbon dioxide, and the zeolite material may be capable of catalyzing a formation of hydrocarbon molecules from methanol. The catalytic material may include copper and zinc oxide. The zeolite material may include a first plurality of pores substantially defined by a crystal structure of the zeolite material and a second plurality of pores dispersed throughout the zeolite material. Systems for synthesizing hydrocarbon molecules also include catalytic structures. Methods for synthesizing hydrocarbon molecules include contacting hydrogen and at least one of carbon monoxide and carbon dioxide with such catalytic structures. Catalytic structures are fabricated by forming a zeolite material at least partially around a template structure, removing the template structure, and introducing a catalytic material into the zeolite material.

  1. Evaluation of time integration methods for transient response analysis of nonlinear structures

    International Nuclear Information System (INIS)

    Park, K.C.

    1975-01-01

    Recent developments in the evaluation of direct time integration methods for the transient response analysis of nonlinear structures are presented. These developments, which are based on local stability considerations of an integrator, show that the interaction between temporal step size and nonlinearities of structural systems has a pronounced effect on both the accuracy and the stability of a given time integration method. The resulting evaluation technique is applied to a model nonlinear problem, in order to: 1) demonstrate that it eliminates the present costly process of evaluating time integrators for nonlinear structural systems via extensive numerical experiments; 2) identify the desirable characteristics of time integration methods for nonlinear structural problems; 3) develop improved stiffly-stable methods for application to nonlinear structures. Extension of the methodology for examination of the interaction between a time integrator and the approximate treatment of nonlinearities (such as due to pseudo-force or incremental solution procedures) is also discussed. (Auth.)
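
The stiff-stability issue behind this evaluation is easiest to see on the linear test equation y' = -λy: an explicit integrator blows up once the step exceeds its stability limit, while an implicit (stiffly stable) one stays bounded at any step size. A sketch with illustrative values, not the paper's model problem:

```python
def explicit_euler(lam, h, steps, y0=1.0):
    """Forward Euler on y' = -lam*y: y_{n+1} = (1 - h*lam) * y_n.
    Unstable whenever |1 - h*lam| > 1, i.e. h*lam > 2."""
    y = y0
    for _ in range(steps):
        y = (1.0 - h * lam) * y
    return y

def implicit_euler(lam, h, steps, y0=1.0):
    """Backward Euler: y_{n+1} = y_n / (1 + h*lam). A-stable: the
    iterate decays for every positive h and lam."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + h * lam)
    return y
```

With lam = 100 and h = 0.05 the explicit factor is -4 per step, so the explicit solution grows geometrically while the implicit one decays, illustrating why stiffly stable methods matter for nonlinear structural dynamics.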

  2. Determination of the response time of pressure transducers using the direct method

    International Nuclear Information System (INIS)

    Perillo, S.R.P.

    1994-01-01

    The available methods for determining the response time of nuclear-safety-related pressure transducers are discussed, with emphasis on the direct method. In order to perform the experiments, a Hydraulic Ramp Generator was built. The equipment applies ramp pressure transients simultaneously to a reference transducer and to the transducer under test. The time lag between the outputs of the two transducers, when they reach a predetermined setpoint, is measured as the time delay of the transducer under test. Some results of using the direct method to determine the time delay of conventional Class 1E pressure transducers are presented. (author). 18 refs, 35 figs, 12 tabs
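    The direct method's delay measurement can be sketched in a few lines: generate a ramp, model the transducer under test as a first-order lag (an assumed dynamic, not Perillo's hardware), and take the time between the two signals crossing a common setpoint. For a first-order lag the asymptotic delay to a ramp input equals the time constant.

    ```python
    import math

    def crossing_time(times, values, setpoint):
        """Interpolated time at which a rising signal first crosses setpoint."""
        for i in range(1, len(times)):
            v0, v1 = values[i - 1], values[i]
            if v0 < setpoint <= v1:
                t0, t1 = times[i - 1], times[i]
                return t0 + (setpoint - v0) * (t1 - t0) / (v1 - v0)
        raise ValueError("setpoint never crossed")

    # Ramp of rate r applied to an ideal reference and to a first-order lag (tau)
    r, tau = 1.0, 0.1
    times = [i * 1e-3 for i in range(2001)]
    ref = [r * t for t in times]
    test = [r * (t - tau * (1.0 - math.exp(-t / tau))) for t in times]

    delay = crossing_time(times, test, 1.5) - crossing_time(times, ref, 1.5)
    ```

    Here `delay` recovers tau ≈ 0.1 s, the response time reported by the direct method.
    
    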

  3. Optimisation of chromatographic resolution using objective functions including both time and spectral information.

    Science.gov (United States)

    Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C

    2015-01-16

    The optimisation of resolution in high-performance liquid chromatography is traditionally performed attending only to the time information. However, even under the optimal conditions, some peak pairs may remain unresolved. Such residual overlap can still be resolved by deconvolution, which can be carried out with more guarantee of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the concepts of peak purity (the analyte peak fraction free of overlap) and multivariate selectivity (a figure of merit derived from the net analyte signal). These COFs are sensitive to situations where the components that co-elute in a mixture show some spectral differences. Therefore, they are useful for finding experimental conditions where the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation, using diode-array detection, of a mixture of 25 phenolic compounds that remained chromatographically unresolved using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of the orthogonal projection approach and alternating least squares. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Spine surgeon's kinematics during discectomy, part II: operating table height and visualization methods, including microscope.

    Science.gov (United States)

    Park, Jeong Yoon; Kim, Kyung Hyun; Kuh, Sung Uk; Chin, Dong Kyu; Kim, Keun Su; Cho, Yong Eun

    2014-05-01

    The surgeon's spine angle during surgery has been studied ergonomically, and the kinematics of the surgeon's spine have been related to musculoskeletal fatigue and pain. Spine angles vary depending on operating table height and visualization method, and in a previous paper we showed that the use of a loupe and a table height at the midpoint between the umbilicus and the sternum are optimal for reducing musculoskeletal loading. However, no previous studies have included a microscope as a possible visualization method. The objective of this study is to assess differences in surgeon spine angles depending on operating table height and visualization method, including the microscope. We enrolled 18 experienced spine surgeons for this study, each of whom performed a discectomy using a spine surgery simulator. Three different methods were used to visualize the surgical field (naked eye, loupe, microscope) and three different operating table heights (anterior superior iliac spine, umbilicus, the midpoint between the umbilicus and the sternum) were studied. Whole spine angles were compared for three different views during the discectomy simulation: midline, ipsilateral, and contralateral. A 16-camera optoelectronic motion analysis system was used, and 16 markers were placed from the head to the pelvis. Lumbar lordosis, thoracic kyphosis, cervical lordosis, and occipital angle were compared between the different operating table heights and visualization methods as well as a natural standing position. Whole spine angles differed significantly depending on visualization method. All parameters were closer to natural standing values when discectomy was performed with a microscope, and there were no differences between the naked eye and the loupe. Whole spine angles were also found to differ from the natural standing position depending on operating table height, and became closer to natural standing position values as the operating table height increased, independent of the visualization method.

  5. A Real-Time Thermal Self-Elimination Method for Static Mode Operated Freestanding Piezoresistive Microcantilever-Based Biosensors

    Directory of Open Access Journals (Sweden)

    Yu-Fu Ku

    2018-02-01

    Here, we provide a method and apparatus for real-time compensation of the thermal effect in single free-standing piezoresistive microcantilever-based biosensors. The sensor chip contains an on-chip fixed piezoresistor that serves as a temperature sensor, and a multilayer microcantilever with an embedded piezoresistor that serves as a biomolecular sensor. The method employs the calibrated relationship between the resistance and the temperature of the piezoresistors to eliminate the thermal effects on the sensor, including the temperature coefficient of resistance (TCR) and the bimorph effect. The method was experimentally verified to reduce the thermal-effect signal from 25.6 μV/°C to 0.3 μV/°C, approximately two orders of magnitude less than before the thermal elimination was applied. Furthermore, the proposed approach and system successfully demonstrated effective real-time thermal self-elimination in biomolecular detection without any thermostat device to control the environmental temperature. This method enables miniaturization of the sensor's overall measurement system, which can be used to develop portable medical devices and microarray analysis platforms.
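    The calibrated-subtraction idea can be sketched as follows: fit the signal-vs-temperature slope from a blank (no-analyte) run, then subtract the predicted thermal contribution from the live signal. The numbers and variable names below are illustrative, not the paper's apparatus; only the 25.6 μV/°C figure is taken from the abstract.

    ```python
    def fit_thermal_slope(signal_uV, temp_C):
        """Least-squares slope of sensor output vs. temperature (blank run)."""
        n = len(temp_C)
        mt = sum(temp_C) / n
        ms = sum(signal_uV) / n
        num = sum((T - mt) * (s - ms) for T, s in zip(temp_C, signal_uV))
        den = sum((T - mt) ** 2 for T in temp_C)
        return num / den

    def compensate(signal_uV, temp_C, slope_uV_per_C, t_ref_C):
        """Subtract the calibrated thermal response from the raw signal."""
        return [s - slope_uV_per_C * (T - t_ref_C)
                for s, T in zip(signal_uV, temp_C)]

    # Blank calibration run: pure thermal drift at 25.6 uV/degC
    temps = [25.0 + 0.1 * i for i in range(50)]
    blank = [25.6 * (T - 25.0) for T in temps]
    slope = fit_thermal_slope(blank, temps)

    # Measurement run: the same drift plus a 10 uV biomolecular binding step
    raw = [25.6 * (T - 25.0) + (10.0 if i >= 25 else 0.0)
           for i, T in enumerate(temps)]
    clean = compensate(raw, temps, slope, 25.0)
    ```

    After compensation, `clean` contains only the 10 μV binding step; the thermal drift is removed without any thermostat.
    
    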

  6. Robust scaling laws for energy confinement time, including radiated fraction, in Tokamaks

    Science.gov (United States)

    Murari, A.; Peluso, E.; Gaudio, P.; Gelfusa, M.

    2017-12-01

    In recent years, the limitations of scalings in power-law form that are obtained from traditional log regression have become increasingly evident in many fields of research. Given the wide gap in operational space between present-day and next-generation devices, robustness of the obtained models in guaranteeing reasonable extrapolability is a major issue. In this paper, a new technique, called symbolic regression, is reviewed, refined, and applied to the ITPA database for extracting scaling laws of the energy-confinement time at different radiated fraction levels. The main advantage of this new methodology is its ability to determine the most appropriate mathematical form of the scaling laws to model the available databases without the restriction of their having to be power laws. In a completely new development, this technique is combined with the concept of geodesic distance on Gaussian manifolds so as to take into account the error bars in the measurements and provide more reliable models. Robust scaling laws, including radiated fractions as regressor, have been found; they are not in power-law form, and are significantly better than the traditional scalings. These scaling laws, including radiated fractions, extrapolate quite differently to ITER, and therefore they require serious consideration. On the other hand, given the limitations of the existing databases, dedicated experimental investigations will have to be carried out to fully understand the impact of radiated fractions on the confinement in metallic machines and in the next generation of devices.

  7. Quantification of Artifact Reduction With Real-Time Cine Four-Dimensional Computed Tomography Acquisition Methods

    International Nuclear Information System (INIS)

    Langner, Ulrich W.; Keall, Paul J.

    2010-01-01

    Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.

  8. Charmonium-nucleon interactions from the time-dependent HAL QCD method

    Science.gov (United States)

    Sugiura, Takuya; Ikeda, Yoichi; Ishii, Noriyoshi

    2018-03-01

    The charmonium-nucleon effective central interactions have been computed by the time-dependent HAL QCD method. This updates a previous study based on the time-independent method, which is now known to be problematic because of the difficulty of achieving ground-state saturation. We argue that the result is consistent with heavy quark symmetry. No bound state is observed from the analysis of the scattering phase shift; however, this motivates a future search for hidden-charm pentaquarks that takes channel-coupling effects into account.

  9. A method for detecting crack wave arrival time and crack localization in a tunnel by using moving window technique

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Chul; Park, Tae Jin [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    Source localization in a dispersive medium has been carried out based on the time-of-arrival-differences (TOADs) method: a triangulation method and a circle intersection technique. Recent signal processing advances have enabled calculation of the TOAD using a joint time-frequency analysis of the signal, with the short-time Fourier transform (STFT) and the wavelet transform among the popular algorithms. Compared with previous methods, time-frequency analysis provides more varied information and more reliable results, such as seismic-attenuation estimation, dispersive characteristics, wave mode analysis, and the temporal energy distribution of signals. These algorithms, however, have their own limitations for signal processing. In this paper, the effective use of the proposed algorithm for detecting the crack wave arrival time and localizing the source in rock masses suggests that evaluation and real-time monitoring of the intensity of damage to tunnels or other underground facilities is possible. Calculating the variance of moving windows as a function of their size differentiates noise from the crack signal, which allows us to determine the crack wave arrival time. The source location is then determined as the point where the variance of the crack wave velocities computed from the real and candidate crack locations becomes a minimum. To validate our algorithm, we performed experiments at the tunnel, which resulted in successful determination of the wave arrival time and crack localization.
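    The moving-window variance idea can be sketched as follows (a simplified stand-in for the authors' algorithm): estimate the noise variance from the leading, assumed signal-free window, then report the first window whose variance exceeds a multiple of it; the crack-wave onset lies inside that window.

    ```python
    import math

    def arrival_window(signal, win=50, k=5.0):
        """Start index of the first length-`win` window whose variance exceeds
        k times the variance of the leading, assumed noise-only, window."""
        def var(seg):
            m = sum(seg) / len(seg)
            return sum((x - m) ** 2 for x in seg) / len(seg)
        noise = var(signal[:win])
        for i in range(win, len(signal) - win + 1):
            if var(signal[i:i + win]) > k * noise:
                return i
        return None

    # Synthetic trace: low-amplitude noise, then a crack wave arriving at sample 500
    trace = [0.01 * math.sin(0.7 * i) for i in range(500)] + \
            [math.sin(0.7 * i) for i in range(500, 1000)]
    i0 = arrival_window(trace)
    ```

    The detected window `[i0, i0 + win)` brackets the true onset; a finer arrival time can then be picked inside it.
    
    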

  10. Exact methods for time constrained routing and related scheduling problems

    DEFF Research Database (Denmark)

    Kohl, Niklas

    1995-01-01

    This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well-known capacity-constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set of customers. In the VRPTW, customers must be serviced within a given time period, a so-called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization… of Jörnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time- and capacity-constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed…
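    The time-window constraint at the heart of the VRPTW can be checked per route in a few lines. The sketch below uses the standard convention that a vehicle arriving before a window opens waits; the data layout is illustrative, not taken from the dissertation.

    ```python
    def route_feasible(route, depart=0.0):
        """Feasibility of one vehicle route under time windows.
        route: list of (travel_time_from_previous_stop, earliest, latest,
        service_time) tuples, visited in order."""
        t = depart
        for travel, earliest, latest, service in route:
            t += travel
            if t > latest:                  # window already closed: infeasible
                return False
            t = max(t, earliest) + service  # wait if early, then serve
        return True
    ```

    For example, `[(2, 0, 5, 1), (3, 4, 8, 1)]` is feasible (serve at t=2, then arrive at t=6 inside [4, 8]), while tightening the second window to [4, 5] makes the route infeasible. Such checks are the building block of the constrained shortest-path subproblems mentioned above.
    
    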

  11. A method for real-time implementation of HOG feature extraction

    Science.gov (United States)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation, since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principles are as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be completed in one pixel period by these computing units.
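    For reference, a minimal software version of the per-cell histogram computation is sketched below (NumPy, not the paper's FPGA pipeline; the arctangent and square root are computed exactly here rather than by the simplified hardware units, and votes use hard nearest-bin assignment).

    ```python
    import numpy as np

    def hog_cell_histograms(img, cell=8, bins=9):
        """Unnormalized per-cell HOG histograms (unsigned gradients, 0-180 deg)."""
        img = img.astype(float)
        gx = np.zeros_like(img)
        gy = np.zeros_like(img)
        gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences
        gy[1:-1, :] = img[2:, :] - img[:-2, :]
        mag = np.hypot(gx, gy)                        # gradient magnitude
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
        H, W = img.shape
        hist = np.zeros((H // cell, W // cell, bins))
        bin_width = 180.0 / bins
        for i in range(H // cell):
            for j in range(W // cell):
                block = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
                idx = np.minimum((ang[block] / bin_width).astype(int), bins - 1)
                for b, m in zip(idx.ravel(), mag[block].ravel()):
                    hist[i, j, b] += m                # hard (nearest-bin) voting
        return hist

    # A vertical step edge: all gradient energy lands in the 0-degree bin
    edge = np.zeros((8, 16))
    edge[:, 8:] = 1.0
    hist = hog_cell_histograms(edge, cell=8)
    ```

    Full HOG additionally interpolates votes between neighbouring bins and normalizes over blocks of cells; those steps are omitted here for brevity.
    
    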

  12. A new method for real-time monitoring of grout spread through fractured rocks

    International Nuclear Information System (INIS)

    Henderson, A. E.; Robertson, I. A.; Whitfield, J. M.; Garrard, G. F. G.; Swannell, N. G.; Fisch, H.

    2008-01-01

    Reducing water ingress into the Shaft at Dounreay is essential for the success of future intermediate level waste (ILW) recovery using the dry retrieval method. The reduction is being realised by forming an engineered barrier of ultrafine cementitious grout injected into the fractured rock surrounding the Shaft. Grout penetration of 6 m in <50μm fractures is being reliably achieved, with a pattern of repeated injections ultimately reducing rock mass permeability by up to three orders of magnitude. An extensive field trials period, involving over 200 grout mix designs and the construction of a full scale demonstration barrier, has yielded several new field techniques that improve the quality and reliability of cementitious grout injection for engineered barriers. In particular, a new method has been developed for tracking in real-time the spread of ultrafine cementitious grout through fractured rock and relating the injection characteristics to barrier design. Fieldwork by the multi-disciplinary international team included developing the injection and real-time monitoring techniques, pre- and post injection hydro-geological testing to quantify the magnitude and extent of changes in rock mass permeability, and correlation of grout spread with injection parameters to inform the main works grouting programme. (authors)

  13. A method to evaluate process performance by integrating time and resources

    Science.gov (United States)

    Wang, Yu; Wei, Qingjie; Jin, Shuang

    2017-06-01

    The purpose of process mining is to improve the existing processes of an enterprise, so how to measure process performance is particularly important. However, current research on performance evaluation methods is still insufficient: the main evaluation methods use either time or resources alone, and such basic statistics cannot evaluate process performance well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed. The method can be used to measure the utilization and redundancy of resources in a process. This paper introduces the design principles and formulas of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, we use the evaluation method to analyse the event log from a telephone maintenance process and propose an optimization plan.
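    As a minimal illustration of combining the time and resource dimensions (the paper's actual formulas are not reproduced here), one can compute per-resource busy-time fractions from an event log:

    ```python
    from collections import defaultdict

    def utilization(events, horizon):
        """Fraction of [0, horizon] that each resource spends busy.
        events: iterable of (resource, start_time, end_time); intervals are
        assumed non-overlapping per resource."""
        busy = defaultdict(float)
        for resource, start, end in events:
            busy[resource] += end - start
        return {resource: t / horizon for resource, t in busy.items()}

    # Hypothetical event log: two technicians over a 10-hour horizon
    log = [("tech_a", 0, 4), ("tech_a", 6, 8), ("tech_b", 0, 2)]
    util = utilization(log, horizon=10)
    ```

    Low utilization flags potentially redundant resources; read together with case durations, this gives the two-dimensional (time plus resource) view the paper argues for.
    
    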

  14. A comparison of moving object detection methods for real-time moving object detection

    Science.gov (United States)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time detection, and most of those are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work evaluates these four methods using two different sets of cameras and two different scenes. The methods were implemented using MATLAB, and the results are compared based on completeness of detected objects, noise, sensitivity to light changes, processing time, etc. The comparison shows that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
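    Of the four families compared, background subtraction is the simplest to sketch. The running-average variant below (illustrative parameters, not the paper's MATLAB implementation) maintains a slowly updated background model and flags pixels that deviate from it:

    ```python
    import numpy as np

    def moving_masks(frames, alpha=0.05, thresh=25.0):
        """Running-average background subtraction: yields a boolean
        foreground mask for every frame after the first."""
        bg = frames[0].astype(float)
        for frame in frames[1:]:
            f = frame.astype(float)
            yield np.abs(f - bg) > thresh
            bg = (1.0 - alpha) * bg + alpha * f   # slowly absorb the scene

    still = np.zeros((10, 10))
    moved = still.copy()
    moved[3:6, 3:6] = 200.0                       # a 3x3 object appears
    mask = next(moving_masks([still, moved]))
    ```

    The slow update rate `alpha` trades adaptation to lighting changes against ghosting of stopped objects, which is exactly the light-change sensitivity the paper measures.
    
    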

  15. Development of discrete-time H∞ filtering method for time-delay compensation of rhodium incore detectors

    International Nuclear Information System (INIS)

    Park, Moon Kyu; Kim, Yong Hee; Cha, Kune Ho; Kim, Myung Ki

    1998-01-01

    A method is described for developing an H∞ filter for the dynamic compensation of the self-powered neutron detectors normally used as fixed incore instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion, in the worst-case estimation error sense. Filter modeling is performed for a discrete-time model. The filter gains are optimized with respect to the noise attenuation level of the H∞ setting. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into Linear Matrix Inequalities (LMIs), and the filter design problem is finally solved within a convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in the filter response time and the filter design efficiency

  16. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    International Nuclear Information System (INIS)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for estimation of annual interruption costs for delivery points, emphasizing the handling of time dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model, with time dependent variables. And a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs from delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices such as annual interruption time, power- and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs
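    The Monte Carlo idea, with a time-dependent cost pattern, can be sketched as follows. The daytime/night-time cost figures and the failure rate are invented for illustration; they stand in for the thesis' survey-based, time-varying customer cost functions.

    ```python
    import math
    import random

    def poisson(rng, lam):
        """Knuth's algorithm for a Poisson-distributed sample."""
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    def outage_cost(hour):
        """Time-dependent interruption cost (illustrative daytime pattern)."""
        return 100.0 if 8 <= hour < 18 else 20.0

    def annual_cost_mc(failures_per_year, n_years=40000, seed=1):
        """Expected annual interruption cost, estimated by simulating many years."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_years):
            for _ in range(poisson(rng, failures_per_year)):
                total += outage_cost(rng.uniform(0.0, 24.0))
        return total / n_years

    estimate = annual_cost_mc(2.0)
    # Analytical expectation: 2 * (10*100 + 14*20) / 24 = 106.67
    ```

    Beyond the expectation, the per-year totals also give the full probability distribution of annual costs, which is the main advantage of the simulation approach over the analytical one.
    
    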

  17. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for estimation of annual interruption costs for delivery points, emphasizing the handling of time dependent patterns and uncertainties in the variables determining the annual costs. It presents an analytical method for calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model, with time dependent variables. And a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs from delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations and customer surveys. Traditional reliability indices such as annual interruption time, power- and energy not supplied, are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs.

  18. 30 CFR 48.3 - Training plans; time of submission; where filed; information required; time for approval; method...

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Training plans; time of submission; where filed....3 Training plans; time of submission; where filed; information required; time for approval; method... training plan shall be filed with the District Manager for the area in which the mine is located. (c) Each...

  19. OpenPSTD : The open source implementation of the pseudospectral time-domain method

    NARCIS (Netherlands)

    Krijnen, T.; Hornikx, M.C.J.; Borkowski, B.

    2014-01-01

    An open source implementation of the pseudospectral time-domain (PSTD) method for the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory.

  20. 5 CFR 610.404 - Requirement for time-accounting method.

    Science.gov (United States)

    2010-01-01

    ... REGULATIONS HOURS OF DUTY Flexible and Compressed Work Schedules § 610.404 Requirement for time-accounting method. An agency that authorizes a flexible work schedule or a compressed work schedule under this...

  1. Time-Frequency Methods for Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Alexander L. Pyayt

    2014-03-01

    Detection of early warning signals for the imminent failure of large and complex engineered structures is a daunting challenge with many open research questions. In this paper we report on novel ways to perform Structural Health Monitoring (SHM) of flood protection systems (levees, earthen dikes and concrete dams) using sensor data. We present a robust data-driven anomaly detection method that combines time-frequency feature extraction, using wavelet analysis and phase shift, with one-sided classification techniques to identify the onset of failure anomalies in real-time sensor measurements. The methodology has been successfully tested at three operational levees. We detected a leakage in the retaining dam (Germany) and “strange” behaviour of sensors installed in a Boston levee (UK) and a Rhine levee (Germany).

  2. A wavelet method for modeling and despiking motion artifacts from resting-state fMRI time series.

    Science.gov (United States)

    Patel, Ameera X; Kundu, Prantik; Rubinov, Mikail; Jones, P Simon; Vértes, Petra E; Ersche, Karen D; Suckling, John; Bullmore, Edward T

    2014-07-15

    The impact of in-scanner head movement on functional magnetic resonance imaging (fMRI) signals has long been established as undesirable. These effects have been traditionally corrected by methods such as linear regression of head movement parameters. However, a number of recent independent studies have demonstrated that these techniques are insufficient to remove motion confounds, and that even small movements can spuriously bias estimates of functional connectivity. Here we propose a new data-driven, spatially-adaptive, wavelet-based method for identifying, modeling, and removing non-stationary events in fMRI time series, caused by head movement, without the need for data scrubbing. This method involves the addition of just one extra step, the Wavelet Despike, in standard pre-processing pipelines. With this method, we demonstrate robust removal of a range of different motion artifacts and motion-related biases including distance-dependent connectivity artifacts, at a group and single-subject level, using a range of previously published and new diagnostic measures. The Wavelet Despike is able to accommodate the substantial spatial and temporal heterogeneity of motion artifacts and can consequently remove a range of high and low frequency artifacts from fMRI time series, that may be linearly or non-linearly related to physical movements. Our methods are demonstrated by the analysis of three cohorts of resting-state fMRI data, including two high-motion datasets: a previously published dataset on children (N=22) and a new dataset on adults with stimulant drug dependence (N=40). We conclude that there is a real risk of motion-related bias in connectivity analysis of fMRI data, but that this risk is generally manageable, by effective time series denoising strategies designed to attenuate synchronized signal transients induced by abrupt head movements. The Wavelet Despiking software described in this article is freely available for download at www

  3. A TWO-MOMENT RADIATION HYDRODYNAMICS MODULE IN ATHENA USING A TIME-EXPLICIT GODUNOV METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Skinner, M. Aaron; Ostriker, Eve C., E-mail: askinner@astro.umd.edu, E-mail: eco@astro.princeton.edu [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States)

    2013-06-01

    We describe a module for the Athena code that solves the gray equations of radiation hydrodynamics (RHD), based on the first two moments of the radiative transfer equation. We use a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We adopt the M_1 closure relation and include all leading source terms to O(βτ). We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. Our code is dimensionally unsplit in one, two, and three space dimensions and is parallelized using MPI. The streaming and diffusion limits are well described by the M_1 closure model, and our implementation shows excellent behavior for a problem with a concentrated radiation source containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal. We present an analysis of the dispersion relation of RHD linear waves highlighting the conditions of applicability for the RSLA. To demonstrate the accuracy of our method, we utilize a suite of radiation and RHD tests covering a broad range of regimes, including RHD waves, shocks, and equilibria, which show second-order convergence in most cases. As an application, we investigate radiation-driven ejection of a dusty, optically thick shell in the ISM. Finally, we compare the timing of our method with other well-known iterative schemes for the RHD equations. Our code implementation, Hyperion, is suitable for a wide variety of astrophysical applications and will be made freely available on the Web.

  4. One Improvement Method of Reducing Duration Directly to Solve Time-Cost Tradeoff Problem

    Science.gov (United States)

    Jian-xun, Qi; Dedong, Sun

    Time and cost are two of the most important factors in project planning and schedule management; in particular, the time-cost tradeoff problem is a classical, and difficult, problem in project scheduling. Methods for solving the problem mainly comprise network flow methods and the method of mending the minimal cost. The latter is intuitive, convenient, and computationally cheap, advantages that have made it widely used in practice. Its disadvantage is that although the result of each step is optimal, the final result may not be. In this paper, a method for determining the maximal effective amount by which a duration can be reduced is first designed; then, on the basis of this method and the method of mending the minimal cost, a method of reducing duration directly is designed to solve the time-cost tradeoff problem. An analysis of the method's validity shows that it can obtain a better result for the problem.
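    For context, the classic "mending the minimal cost" step that the authors build on can be sketched as a greedy loop: find the critical activities, crash the one with the smallest cost slope by one time unit, and repeat until the target duration is met. As noted above, each step is locally optimal but the final schedule need not be globally optimal. The data structures are illustrative, not the paper's notation.

    ```python
    def crash_schedule(acts, edges, target):
        """Greedy crashing: repeatedly shorten, by one time unit, the cheapest
        critical activity still above its minimum duration.
        acts: {name: [duration, min_duration, cost_per_unit]}; the dict is
        assumed to be listed in topological order."""
        order = list(acts)
        total_cost = 0.0
        while True:
            ef = {}                                   # earliest finish times
            for n in order:
                ef[n] = acts[n][0] + max((ef[u] for u, v in edges if v == n),
                                         default=0)
            T = max(ef.values())
            if T <= target:
                return T, total_cost
            lf = {n: T for n in order}                # latest finish times
            for n in reversed(order):
                succ = [v for u, v in edges if u == n]
                if succ:
                    lf[n] = min(lf[v] - acts[v][0] for v in succ)
            critical = [n for n in order
                        if ef[n] == lf[n] and acts[n][0] > acts[n][1]]
            if not critical:
                return T, total_cost                  # cannot be shortened further
            cheapest = min(critical, key=lambda n: acts[n][2])
            acts[cheapest][0] -= 1
            total_cost += acts[cheapest][2]

    acts = {"A": [5, 3, 2.0], "B": [4, 3, 1.0]}       # [duration, min, cost/unit]
    result = crash_schedule(acts, edges=[("A", "B")], target=7)
    ```

    On this serial two-activity example the loop crashes B first (cheaper slope), then A once B hits its minimum, reaching the target duration 7 at cost 3.
    
    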

  5. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    Science.gov (United States)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Calculations of wavefield propagation can become unstable when forward modeling uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used for seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.
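    The time-discretization half of the idea can be illustrated with a 1D toy (a plain second-order finite-difference Laplacian stands in for the paper's Fourier finite-difference space operator): a symplectic kick-drift update keeps the wavefield bounded within the CFL limit and diverges outside it.

    ```python
    import numpy as np

    def wave_1d(nx=200, nt=500, c=1.0, dx=1.0, dt=0.5):
        """1D acoustic wave u_tt = c^2 u_xx on a periodic grid, advanced with a
        symplectic kick-drift (symplectic Euler) update in time and a
        second-order finite-difference Laplacian in space."""
        x = np.arange(nx)
        u = np.exp(-0.01 * (x - nx // 2) ** 2)        # initial pressure pulse
        v = np.zeros(nx)                              # du/dt
        lap = lambda f: np.roll(f, 1) - 2.0 * f + np.roll(f, -1)
        for _ in range(nt):
            v += dt * (c / dx) ** 2 * lap(u)          # kick: update the momentum
            u += dt * v                               # drift: update the field
        return u
    ```

    With c·dt/dx = 0.5 the field stays bounded for arbitrarily many steps (the symplectic update conserves a shadow energy), whereas dt = 1.5 exceeds the stability limit dx/c and the highest grid mode grows without bound.
    
    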

  6. A new method of detection for a positron emission tomograph using a time of flight method

    International Nuclear Information System (INIS)

    Gresset, Christian.

    1981-05-01

    The first chapter presents the advantages of short-lived positron emitters (β + ) and the essential characteristics of the positron tomographs built to date. The second chapter presents the interest of an original image-reconstruction technique: the time-of-flight technique. The third chapter describes the characterization methods set up to verify the suitability of cesium fluoride for tomography. Chapter four presents the results obtained with these methods. It appears that cesium fluoride is at present the best positron-emission detector material for the time-of-flight technique. The hypotheses made about the eventual performance of such machines are validated by phantom experiments. A detector based on bismuth germanate nevertheless retains all its interest for skull tomography [fr

  7. Numerical method for time-dependent localized corrosion analysis with moving boundaries by combining the finite volume method and voxel method

    International Nuclear Information System (INIS)

    Onishi, Yuki; Takiyasu, Jumpei; Amaya, Kenji; Yakuwa, Hiroshi; Hayabusa, Keisuke

    2012-01-01

    Highlights: ► A novel numerical method to analyze time-dependent localized corrosion is developed. ► It takes electromigration, mass diffusion, chemical reactions, and moving boundaries into account. ► Our method perfectly satisfies the conservation of mass and electroneutrality. ► The behavior of typical crevice corrosion is successfully simulated. ► Both verification and validation of our method are carried out. - Abstract: A novel numerical method for time-dependent localized corrosion analysis is presented. Electromigration, mass diffusion, chemical reactions, and moving boundaries are considered in the numerical simulation of localized corrosion of engineering alloys in an underwater environment. Our method combines the finite volume method (FVM) and the voxel method. The FVM is adopted in the corrosion rate calculation so that the conservation of mass is satisfied. A newly developed decoupled algorithm with a projection method is introduced in the FVM to decouple the multiphysics problem into the electrostatic, mass transport, and chemical reaction analyses with electroneutrality maintained. The polarization curves for the corroding metal are used as boundary conditions for the metal surfaces to calculate the corrosion rates. The voxel method is adopted in updating the moving boundaries of cavities without remeshing and mesh-to-mesh solution mapping. Some modifications of the standard voxel method, which represents the boundaries as zigzag-shaped surfaces, are introduced to generate smooth surfaces. Our method successfully reproduces the numerical and experimental results of a capillary electrophoresis problem. Furthermore, the numerical results are qualitatively consistent with the experimental results for several examples of crevice corrosion.

  8. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    Science.gov (United States)

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
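The live-time-stamp idea can be illustrated with a toy counter. This is a hedged sketch, not the authors' formulae: a non-paralyzable counter with a fixed dead time records an event only when it is live, and the stored live-time total replaces the clock time in the rate estimate. The dead time, rate, and measurement length are hypothetical.

```python
# Toy dead-time correction from stored live-time intervals: simulate a
# Poisson source seen by a non-paralyzable counter, then compare the naive
# rate (counts / clock time) with the live-time corrected rate.
import random

random.seed(1)
tau = 1e-4            # dead time per recorded event (s), hypothetical
true_rate = 2000.0    # Poisson event rate (s^-1)
t, t_end = 0.0, 100.0
dead_until, recorded = 0.0, 0

while True:
    t += random.expovariate(true_rate)   # next Poisson arrival
    if t > t_end:
        break
    if t >= dead_until:                  # counter is live: record the event
        recorded += 1
        dead_until = t + tau             # start a new dead interval

live_time = t_end - recorded * tau       # total of the live-time stamps
naive = recorded / t_end                 # uncorrected rate, biased low
corrected = recorded / live_time         # live-time corrected rate

print(naive < corrected)                 # correction recovers the true rate
```

The corrected estimate lands close to the true 2000 s⁻¹, while the naive one is biased low by the fraction of time the counter was dead.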

  9. A New Method for Calibrating the Time Delay of a Piezoelectric Probe

    DEFF Research Database (Denmark)

    Hansen, Bengt Hurup

    1974-01-01

    A simple method for calibrating the time delay of a piezoelectric probe of the type often used in plasma physics is described.

  10. Efficient exact-exchange time-dependent density-functional theory methods and their relation to time-dependent Hartree-Fock.

    Science.gov (United States)

    Hesselmann, Andreas; Görling, Andreas

    2011-01-21

    A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.

  11. ADVANCEMENTS IN TIME-SPECTRA ANALYSIS METHODS FOR LEAD SLOWING-DOWN SPECTROSCOPY

    International Nuclear Information System (INIS)

    Smith, Leon E.; Anderson, Kevin K.; Gesh, Christopher J.; Shaver, Mark W.

    2010-01-01

    Direct measurement of Pu in spent nuclear fuel remains a key challenge for safeguarding nuclear fuel cycles of today and tomorrow. Lead slowing-down spectroscopy (LSDS) is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic mass with an uncertainty lower than the approximately 10 percent typical of today's confirmatory assay methods. Pacific Northwest National Laboratory's (PNNL) previous work to assess the viability of LSDS for the assay of pressurized water reactor (PWR) assemblies indicated that the method could provide direct assay of Pu-239 and U-235 (and possibly Pu-240 and Pu-241) with uncertainties less than a few percent, assuming suitably efficient instrumentation, an intense pulsed neutron source, and improvements in the time-spectra analysis methods used to extract isotopic information from a complex LSDS signal. This previous simulation-based evaluation used relatively simple PWR fuel assembly definitions (e.g. constant burnup across the assembly) and a constant initial enrichment and cooling time. The time-spectra analysis method was founded on a preliminary analytical model of self-shielding intended to correct for assay-signal nonlinearities introduced by attenuation of the interrogating neutron flux within the assembly.

  12. Minimum entropy density method for the time series analysis

    Science.gov (United States)

    Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae

    2009-01-01

    The entropy density is an intuitive and powerful concept for studying the complicated nonlinear processes of physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, defined as the scale at which the uncertainty is minimized and the underlying pattern is therefore most clearly revealed. The MEDM is applied to the financial time series of the Standard & Poor's 500 index from February 1983 to April 2006. The temporal behavior of the structure scale is then obtained and analyzed in relation to the information delivery time and the efficient market hypothesis.
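One concrete way to read "structure scale" is sketched below: coarse-grain the series at several scales, symbolize the increments, and compare the Shannon entropy of the symbol patterns. This is a hypothetical simplification for illustration, not the authors' MEDM estimator; the toy series and scales are invented.

```python
# Entropy of order-2 sign patterns of a coarse-grained series, at two scales.
# A scale matched to the series' periodic structure yields a more predictable
# symbol sequence and hence lower entropy.
import math

def shannon_entropy(symbols):
    n = len(symbols)
    counts = {}
    for s in symbols:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c / n * math.log(c / n) for c in counts.values())

def entropy_at_scale(series, scale):
    # aggregate over non-overlapping windows, symbolize increments by sign,
    # then take the entropy of consecutive sign pairs
    coarse = [sum(series[i:i + scale])
              for i in range(0, len(series) - scale + 1, scale)]
    signs = [1 if b > a else 0 for a, b in zip(coarse, coarse[1:])]
    return shannon_entropy(list(zip(signs, signs[1:])))

# toy series: a period-8 oscillation plus smaller deterministic jitter
series = [math.sin(2 * math.pi * i / 8) + 0.1 * math.sin(float(i))
          for i in range(4096)]
e2 = entropy_at_scale(series, 2)
e4 = entropy_at_scale(series, 4)
print(e4 < e2)   # the half-period scale minimizes the entropy
```

At scale 4 (the half-period of the oscillation) the coarse-grained increments alternate deterministically, so the pattern entropy drops to its minimum.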

  13. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)

    2015-11-01

    Recently a Monte Carlo method based on signed particles was proposed for time-dependent simulations of the Wigner equation. While it has been thoroughly validated against physical benchmarks, no technical study of its numerical accuracy has been performed. To this end, this paper presents a first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, a highly accurate deterministic method that is utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave packet are performed and discussed in detail. In particular, this allows us to identify a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve satisfactory accuracy.

  14. Statistical methods of parameter estimation for deterministically chaotic time series

    Science.gov (United States)

    Pisarenko, V. F.; Sornette, D.

    2004-03-01

    We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamical system (the logistic map) contaminated by observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to, but simpler than and of smaller bias than, the previously proposed “multiple shooting” method. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for estimating the parameter of the logistic map is discussed. This method appears to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically).
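The estimation problem itself is easy to set up: fit the map parameter r and the initial value x1 jointly by least squares on a short noisy orbit. The sketch below uses a plain grid search on toy data to illustrate why x1 must be treated as an unknown; it is not the authors' segmentation-fitting ML scheme.

```python
# Joint least-squares fit of (r, x1) for a noisy logistic-map orbit.
# A short series keeps the chaos-induced sensitivity to x1 manageable.
import random

def orbit(r, x1, n):
    xs, x = [], x1
    for _ in range(n):
        xs.append(x)
        x = r * x * (1.0 - x)          # logistic map iteration
    return xs

random.seed(7)
true_r, true_x1, n = 3.8, 0.3, 30
data = [x + random.gauss(0.0, 0.01) for x in orbit(true_r, true_x1, n)]

def sse(r, x1):
    return sum((d - x) ** 2 for d, x in zip(data, orbit(r, x1, n)))

grid_r = [3.5 + 0.002 * i for i in range(251)]    # 3.5 .. 4.0
grid_x = [0.25 + 0.002 * i for i in range(51)]    # 0.25 .. 0.35
best_r, best_x1 = min(((r, x) for r in grid_r for x in grid_x),
                      key=lambda p: sse(*p))
print(abs(best_r - true_r) < 0.05)
```

The exponentially fast divergence of nearby orbits is visible here too: lengthening the series beyond a few dozen points makes the squared-error surface increasingly jagged, which is the trade-off the abstract describes.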

  15. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION.

    Science.gov (United States)

    Liu, F; Meerschaert, M M; McGough, R J; Zhuang, P; Liu, Q

    2013-03-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian.

  16. R/S method for evaluation of pollutant time series in environmental quality assessment

    Directory of Open Access Journals (Sweden)

    Bu Quanmin

    2008-12-01

    Full Text Available The significance of the fluctuation and randomness of the time series of each pollutant in environmental quality assessment is described for the first time in this paper. A comparative study was made of three different computing methods: the same starting point method, the striding averaging method, and the stagger phase averaging method. All of them can be used to calculate the Hurst index, which quantifies fluctuation and randomness. This study used real water quality data from Shazhu monitoring station on Taihu Lake in Wuxi, Jiangsu Province. The results show that, of the three methods, the stagger phase averaging method is best for calculating the Hurst index of a pollutant time series from the perspective of statistical regularity.
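The Hurst index at the center of the abstract can be computed with the textbook rescaled-range (R/S) procedure, sketched below. This is the generic R/S estimator, not any of the three averaging variants compared in the paper, and the test series is synthetic.

```python
# Rescaled-range (R/S) estimate of the Hurst exponent: compute R/S over
# windows of several sizes and fit the slope of log(R/S) against log(n).
import math
import random

def rescaled_range(series):
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    cum, s = [], 0.0
    for d in dev:
        s += d
        cum.append(s)
    r = max(cum) - min(cum)                        # range of cumulative deviations
    sd = math.sqrt(sum(d * d for d in dev) / n)    # standard deviation
    return r / sd if sd > 0 else 0.0

def hurst(series, window_sizes):
    xs, ys = [], []
    for w in window_sizes:
        rs = [rescaled_range(series[i:i + w])
              for i in range(0, len(series) - w + 1, w)]
        xs.append(math.log(w))
        ys.append(math.log(sum(rs) / len(rs)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(0)
white = [random.gauss(0, 1) for _ in range(4096)]
h = hurst(white, [16, 32, 64, 128, 256])
print(0.4 < h < 0.7)   # white noise: H near 0.5 (small-sample bias is upward)
```

Values of H well above 0.5 indicate persistence in the pollutant series; values near 0.5 indicate randomness, which is what distinguishes the three averaging schemes the paper compares.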

  17. A Time-Walk Correction Method for PET Detectors Based on Leading Edge Discriminators.

    Science.gov (United States)

    Du, Junwei; Schmall, Jeffrey P; Judenhofer, Martin S; Di, Kun; Yang, Yongfeng; Cherry, Simon R

    2017-09-01

    The leading-edge timing pick-off technique is the simplest timing extraction method for PET detectors. Due to the inherent time walk of the leading-edge technique, corrections should be made to improve timing resolution, especially for time-of-flight PET. Time-walk correction can be done by utilizing the relationship between the threshold crossing time and the event energy on an event-by-event basis. In this paper, a time-walk correction method is proposed and evaluated using timing information from two identical detectors, both using leading-edge discriminators. This differs from other techniques that use an external dedicated reference detector, such as a fast PMT-based detector using constant fraction techniques to pick off timing information. In our proposed method, one detector was used as the reference detector to correct the time walk of the other detector. Time walk in the reference detector was minimized by using events within a small energy window (508.5-513.5 keV). To validate this method, a coincidence detector pair was assembled using two SensL MicroFB SiPMs and two 2.5 mm × 2.5 mm × 20 mm polished LYSO crystals. Coincidence timing resolutions using different time pick-off techniques were obtained at a bias voltage of 27.5 V and a fixed temperature of 20 °C. The coincidence timing resolutions without time-walk correction were 389.0 ± 12.0 ps (425-650 keV energy window) and 670.2 ± 16.2 ps (250-750 keV energy window). The timing resolutions with time-walk correction improved to 367.3 ± 0.5 ps (425-650 keV) and 413.7 ± 0.9 ps (250-750 keV). For comparison, timing resolutions were 442.8 ± 12.8 ps (425-650 keV) and 476.0 ± 13.0 ps (250-750 keV) using constant fraction techniques, and 367.3 ± 0.4 ps (425-650 keV) and 413.4 ± 0.9 ps (250-750 keV) using a reference detector based on the constant fraction technique. These results show that the proposed leading-edge based time-walk correction method works well. Timing resolution obtained
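The energy dependence exploited by such corrections can be shown with a toy model: for a linear pulse edge of amplitude proportional to energy, a fixed leading-edge threshold is crossed at a time offset proportional to 1/energy, so low-energy events trigger late. The sketch fits that dependence and subtracts it, mimicking an event-by-event correction. All numbers are hypothetical, not the detector parameters of the paper.

```python
# Toy leading-edge time walk: t_cross = t0 + thr * rise / energy + jitter.
# Fit the 1/energy term by least squares, then subtract it per event.
import random

random.seed(3)
thr, rise, t0 = 20.0, 5.0, 100.0           # threshold, rise constant, offset (toy)

events = []
for _ in range(2000):
    energy = random.uniform(250.0, 750.0)   # keV, proportional to pulse amplitude
    jitter = random.gauss(0.0, 0.05)        # intrinsic timing jitter (ns)
    events.append((energy, t0 + thr * rise / energy + jitter))

# least-squares fit of the walk model t_cross = a + b / energy
n = len(events)
u = [1.0 / e for e, _ in events]
tc = [t for _, t in events]
mu, mt = sum(u) / n, sum(tc) / n
b = (sum((ui - mu) * (ti - mt) for ui, ti in zip(u, tc))
     / sum((ui - mu) ** 2 for ui in u))

corrected = [t - b / e for e, t in events]

def spread(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(spread(corrected) < spread(tc))   # residual spread shrinks to the jitter
```

After the fitted walk term is removed, the spread of crossing times approaches the intrinsic jitter, which is the mechanism behind the improvement from 670 ps to 414 ps in the wide energy window.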

  18. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION

    OpenAIRE

    Liu, F.; Meerschaert, M.M.; McGough, R.J.; Zhuang, P.; Liu, Q.

    2013-01-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and technique...

  19. Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness

    Energy Technology Data Exchange (ETDEWEB)

    Chelouche, Doron; Pozo-Nuñez, Francisco [Department of Physics, Faculty of Natural Sciences, University of Haifa, Haifa 3498838 (Israel); Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il [Department of Geosciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 6997801 (Israel)

    2017-08-01

    A class of methods for measuring time delays between astronomical time series, based on measures of randomness or complexity of the data, is introduced in the context of quasar reverberation mapping. Several distinct statistical estimators are considered that rely neither on polynomial interpolations of the light curves nor on their stochastic modeling, and that do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and for cases where the process underlying the variability cannot be adequately modeled.
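The von Neumann idea can be sketched in a few lines: for each trial delay, shift the second light curve, merge the two into one time-ordered series, and compute the mean-square successive difference; the true delay minimizes the combined series' "randomness". This is a hedged illustration on synthetic, irregularly sampled light curves, not the paper's optimized scheme.

```python
# Von Neumann-style time-lag estimator for two irregularly sampled light
# curves, where curve B echoes curve A after an unknown delay.
import math
import random

random.seed(5)
true_lag = 12.0

def driver(t):      # smooth underlying variability (toy)
    return math.sin(0.13 * t) + 0.5 * math.sin(0.049 * t + 1.0)

ta = sorted(random.uniform(0.0, 300.0) for _ in range(200))
tb = sorted(random.uniform(0.0, 300.0) for _ in range(200))
fa = [driver(t) + random.gauss(0.0, 0.02) for t in ta]
fb = [driver(t - true_lag) + random.gauss(0.0, 0.02) for t in tb]

def von_neumann(lag):
    # merge A with time-shifted B, then mean-square successive difference
    merged = sorted(list(zip(ta, fa)) + [(t - lag, f) for t, f in zip(tb, fb)])
    vals = [f for _, f in merged]
    return sum((y - x) ** 2 for x, y in zip(vals, vals[1:])) / (len(vals) - 1)

trial = [0.5 * i for i in range(61)]            # trial delays 0, 0.5, ..., 30
best = min(trial, key=von_neumann)
print(abs(best - true_lag) <= 2.0)
```

Note that no interpolation, stochastic model, or correlation-space binning is needed, which is exactly the appeal stated in the abstract.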

  20. Validation of a same-day real-time PCR method for screening of meat and carcass swabs for Salmonella

    Science.gov (United States)

    2009-01-01

    Background One of the major sources of human Salmonella infections is meat. Therefore, efficient and rapid monitoring of Salmonella in the meat production chain is necessary. Validation of alternative methods is needed to prove that the performance is equal to established methods. Very few of the published PCR methods for Salmonella have been validated in collaborative studies. This study describes a validation including comparative and collaborative trials, based on the recommendations from the Nordic organization for validation of alternative microbiological methods (NordVal) of a same-day, non-commercial real-time PCR method for detection of Salmonella in meat and carcass swabs. Results The comparative trial was performed against a reference method (NMKL-71:5, 1999) using artificially and naturally contaminated samples (60 minced veal and pork meat samples, 60 poultry neck-skins, and 120 pig carcass swabs). The relative accuracy was 99%, relative detection level 100%, relative sensitivity 103% and relative specificity 100%. The collaborative trial included six laboratories testing minced meat, poultry neck-skins, and carcass swabs as un-inoculated samples and samples artificially contaminated with 1–10 CFU/25 g, and 10–100 CFU/25 g. Valid results were obtained from five of the laboratories and used for the statistical analysis. Apart from one of the non-inoculated samples being false positive with PCR for one of the laboratories, no false positive or false negative results were reported. Partly based on results obtained in this study, the method has obtained NordVal approval for analysis of Salmonella in meat and carcass swabs. The PCR method was transferred to a production laboratory and the performance was compared with the BAX Salmonella test on 39 pork samples artificially contaminated with Salmonella. There was no significant difference in the results obtained by the two methods. Conclusion The real-time PCR method for detection of Salmonella in meat

  1. Multiplier method may be unreliable to predict the timing of temporary hemiepiphysiodesis for coronal angular deformity.

    Science.gov (United States)

    Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin

    2017-07-10

    The multiplier method was introduced by Paley to calculate the timing for temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability against data from published studies and (2) screen out risk factors for deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data fell outside the range of the values predicted by the multiplier method, it was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed with the use of univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of the included studies, the rate of DOP was 55.44%. The multiplier method is not reliable in predicting the timing for temporary hemiepiphysiodesis, although it tends to be more reliable for younger patients with idiopathic genu coronal deformity.

  2. Trend analysis using non-stationary time series clustering based on the finite element method

    Science.gov (United States)

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-05-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods that can analyze multidimensional time series. One important attribute of this method is that it is not dependent on any statistical assumption and does not need local stationarity in the time series. In this paper, it is shown how the FEM-clustering method can be used to locate change points in the trend of temperature time series from in situ observations. This method is applied to the temperature time series of North Carolina (NC) and the results represent region-specific climate variability despite higher frequency harmonics in climatic time series. Next, we investigated the relationship between the climatic indices with the clusters/trends detected based on this clustering method. It appears that the natural variability of climate change in NC during 1950-2009 can be explained mostly by AMO and solar activity.

  3. Computational electrodynamics the finite-difference time-domain method

    CERN Document Server

    Taflove, Allen

    2005-01-01

    This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.

  4. The Effect of Temperature and Drying Method on Drying Time and Color Quality of Mint

    Directory of Open Access Journals (Sweden)

    H Bahmanpour

    2017-10-01

    Full Text Available Introduction Mint (Mentha spicata L.), belonging to the Lamiaceae family, is an herbaceous, perennial, aromatic and medicinal plant cultivated for its essential oils and spices. Since the essential oil is extracted from the dried plant, choosing an appropriate drying method is essential for obtaining a high-quality essential oil. Vacuum drying technology is an alternative to conventional drying methods and is reported by many authors as an efficient method for improving drying quality, especially color characteristics. On the other hand, solar dryers are useful for saving time and energy. In this study the effect of two drying methods (vacuum-infrared versus solar) at three temperatures (30, 40 and 50°C) on mint was evaluated in a factorial experiment with a randomized complete block design. Drying time as well as color characteristics were considered in the evaluation of each drying method. Materials and Methods A factorial experiment with a randomized complete block design was applied in order to evaluate the effect of drying method (vacuum-infrared versus solar) and temperature (30, 40 and 50°C) on the drying time and color characteristics of mint. The initial moisture content of the mint leaves was measured according to standard ASABE S358.2, over 24 hours inside an oven at 104 °C. Drying of the samples continued until the moisture content (measured in real time) reached 10% wet basis. The vacuum dryer consisted of a cylindrical vacuum chamber (0.335 m3) and a piston-type vacuum pump. The temperature of the chamber was controlled by three infrared bulbs under on-off control. The temperature and weight of the products were recorded in real time using a data acquisition system. The solar dryer consisted of a solar collector and a temperature control system that turned the exhaust fan on and off in order to maintain the specified temperature. A data acquisition system was

  5. Methods of using real-time social media technologies for detection and remote monitoring of HIV outcomes.

    Science.gov (United States)

    Young, Sean D; Rivers, Caitlin; Lewis, Bryan

    2014-06-01

    Recent availability of "big data" might be used to study whether and how sexual risk behaviors are communicated on real-time social networking sites and how these data might inform HIV prevention and detection. This study seeks to establish methods of using real-time social networking data for HIV prevention by assessing 1) whether geolocated conversations about HIV risk behaviors can be extracted from social networking data, 2) the prevalence and content of these conversations, and 3) the feasibility of using HIV risk-related real-time social media conversations as a method to detect HIV outcomes. In 2012, tweets (N=553,186,061) were collected online and filtered to include those with HIV risk-related keywords (e.g., sexual behaviors and drug use). Data were merged with AIDSVU data on HIV cases. Negative binomial regressions assessed the relationship between HIV risk tweeting and prevalence by county, controlling for socioeconomic status measures. Over 9800 geolocated tweets were extracted and used to create a map displaying the geographical location of HIV-related tweets. A significant positive relationship was found between HIV risk tweeting and county-level HIV prevalence, suggesting the feasibility of using real-time social networking data as a method for evaluating and detecting HIV risk behaviors and outcomes. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Integral transform method for solving time fractional systems and fractional heat equation

    Directory of Open Access Journals (Sweden)

    Arman Aghili

    2014-01-01

    Full Text Available In the present paper, a time-fractional partial differential equation is considered, where the fractional derivative is defined in the Caputo sense. The Laplace transform method has been applied to obtain an exact solution. The authors solved certain homogeneous and nonhomogeneous time-fractional heat equations using the integral transform. The transform method is a powerful tool for solving fractional singular integro-differential equations and PDEs. The results reveal that the transform method is very convenient and effective.
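The key identity behind this approach is the standard Laplace transform of the Caputo fractional derivative, which turns the fractional equation in t into an algebraic relation in s (stated here for order 0 < α ≤ 1):

```latex
% Laplace transform of the Caputo derivative of order alpha, 0 < alpha <= 1:
\mathcal{L}\bigl\{\,{}^{C}\!D_t^{\alpha} f(t)\bigr\}(s)
  = s^{\alpha} F(s) - s^{\alpha-1} f(0),
\qquad F(s) = \mathcal{L}\{f(t)\}(s).
```

For α = 1 this reduces to the familiar rule sF(s) − f(0), which is why the method carries over so directly from classical heat equations to their time-fractional counterparts.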

  7. Rapid detection of Salmonella in pet food: design and evaluation of integrated methods based on real-time PCR detection.

    Science.gov (United States)

    Balachandran, Priya; Friberg, Maria; Vanlandingham, V; Kozak, K; Manolis, Amanda; Brevnov, Maxim; Crowley, Erin; Bird, Patrick; Goins, David; Furtado, Manohar R; Petrauskene, Olga V; Tebbs, Robert S; Charbonneau, Duane

    2012-02-01

    Reducing the risk of Salmonella contamination in pet food is critical for both companion animals and humans, and its importance is reflected by the substantial increase in the demand for pathogen testing. Accurate and rapid detection of foodborne pathogens improves food safety, protects the public health, and benefits food producers by assuring product quality while facilitating product release in a timely manner. Traditional culture-based methods for Salmonella screening are laborious and can take 5 to 7 days to obtain definitive results. In this study, we developed two methods for the detection of low levels of Salmonella in pet food using real-time PCR: (i) detection of Salmonella in 25 g of dried pet food in less than 14 h with an automated magnetic bead-based nucleic acid extraction method and (ii) detection of Salmonella in 375 g of composite dry pet food matrix in less than 24 h with a manual centrifugation-based nucleic acid preparation method. Both methods included a preclarification step using a novel protocol that removes food matrix-associated debris and PCR inhibitors and improves the sensitivity of detection. Validation studies revealed no significant differences between the two real-time PCR methods and the standard U.S. Food and Drug Administration Bacteriological Analytical Manual (chapter 5) culture confirmation method.

  8. Dislocation concepts applied to fatigue properties of austenitic stainless steels including time-dependent modes

    Energy Technology Data Exchange (ETDEWEB)

    Tavassoli, A.A.

    1986-10-01

    Dislocation substructures formed in austenitic stainless steels 304L and 316L, fatigued at 673 K, 823 K and 873 K under total imposed strain ranges of 0.7 to 2.25%, and their correlation with mechanical properties have been investigated. In addition, substructures formed at lower strain ranges have been examined using foils prepared from parts of the specimens with larger cross-sections. The investigation has also been extended to include the effect of intermittent hold times up to 1.8 × 10⁴ s and of sequential creep-fatigue and fatigue-creep loading. The experimental results obtained are analysed and their implications for current dislocation concepts and mechanical properties are discussed.

  9. A time-series method for automated measurement of changes in mitotic and interphase duration from time-lapse movies.

    Directory of Open Access Journals (Sweden)

    Frederic D Sigoillot

    Full Text Available Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only 2 features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence timing of cell division.
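
    The two-feature idea above can be sketched in a few lines: a frame is called mitotic when the nucleus is small (condensed chromatin) and bright, and mitotic duration is the longest run of such frames times the frame interval. The thresholds, the toy single-cell trace, and the 10-minute frame interval below are illustrative, not values from the paper.

```python
# Sketch: classify each frame as mitotic or interphase from two features
# (nuclear area, mean H2B intensity) and measure mitotic duration.
# Thresholds and the toy trace are illustrative, not from the paper.

def mitotic_frames(areas, intensities, area_max=150.0, intensity_min=2.0):
    """A frame is called mitotic when the nucleus is small (condensed
    chromatin) and bright relative to illustrative cutoffs."""
    return [a < area_max and i > intensity_min
            for a, i in zip(areas, intensities)]

def longest_run(flags):
    """Length of the longest consecutive run of True values."""
    best = run = 0
    for f in flags:
        run = run + 1 if f else 0
        best = max(best, run)
    return best

# Toy single-cell trace sampled every 10 minutes: interphase, mitosis, interphase.
areas       = [300, 310, 305, 120, 110, 115, 290, 295]
intensities = [1.0, 1.1, 1.0, 3.0, 3.2, 3.1, 1.1, 1.0]
flags = mitotic_frames(areas, intensities)
frame_interval_min = 10
print(longest_run(flags) * frame_interval_min)  # 30 (minutes of mitosis)
```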

  10. System and method for constructing filters for detecting signals whose frequency content varies with time

    Science.gov (United States)

    Qian, S.; Dunham, M.E.

    1996-11-12

    A system and method are disclosed for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos(2πφ(t)), and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ′(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series. The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ′(t) which best fits the multivalued function f(t). Integrating φ′(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template. 7 figs.
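
    The template-construction steps can be sketched as follows: ridge points from a (pretended) time-frequency map are fitted with a polynomial instantaneous-frequency law φ′(t), integrated to a phase φ(t), and the resulting w(t) = A(t)cos(2πφ(t)) is correlated with the signal. The linear chirp, the unit amplitude A(t) = 1, and the first-order fit are illustrative assumptions, not the patent's choices.

```python
import numpy as np

# Sketch of the template-construction idea: fit an instantaneous-frequency
# law to (time, frequency) ridge points, integrate it to a phase, and use
# w(t) = A(t) cos(2*pi*phi(t)) as a matched-filter template.  The linear
# chirp and unit amplitude here are illustrative choices.

t = np.linspace(0.0, 1.0, 2000)
f0, k = 50.0, 40.0                      # assumed chirp: f(t) = f0 + k*t
signal = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Pretend these ridge points came from thresholding a time-frequency map.
ridge_t = t[::100]
ridge_f = f0 + k * ridge_t
coeffs = np.polyfit(ridge_t, ridge_f, 1)    # fit phi'(t) as a polynomial
phase = np.polyval(np.polyint(coeffs), t)   # integrate to get phi(t)
template = np.cos(2 * np.pi * phase)        # A(t) taken as 1 here

# Normalized correlation: near 1 when the template matches the signal.
score = abs(np.dot(signal, template)) / (np.linalg.norm(signal) * np.linalg.norm(template))
print(round(score, 3))  # 1.0
```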

  11. Method of fabricating electrodes including high-capacity, binder-free anodes for lithium-ion batteries

    Science.gov (United States)

    Ban, Chunmei; Wu, Zhuangchun; Dillon, Anne C.

    2017-01-10

    An electrode (110) is provided that may be used in an electrochemical device (100) such as an energy storage/discharge device, e.g., a lithium-ion battery, or an electrochromic device, e.g., a smart window. Hydrothermal techniques and vacuum filtration methods were applied to fabricate the electrode (110). The electrode (110) includes an active portion (140) that is made up of electrochemically active nanoparticles, with one embodiment utilizing 3d-transition metal oxides to provide the electrochemical capacity of the electrode (110). The active material (140) may include other electrochemical materials, such as silicon, tin, lithium manganese oxide, and lithium iron phosphate. The electrode (110) also includes a matrix or net (170) of electrically conductive nanomaterial that acts to connect and/or bind the active nanoparticles (140) such that no binder material is required in the electrode (110), which allows more active materials (140) to be included to improve energy density and other desirable characteristics of the electrode. The matrix material (170) may take the form of carbon nanotubes, such as single-wall, double-wall, and/or multi-wall nanotubes, and be provided as about 2 to 30 percent weight of the electrode (110) with the rest being the active material (140).

  12. Uncertainty in real-time voltage stability assessment methods based on Thevenin equivalent due to PMU’s accuracy

    DEFF Research Database (Denmark)

    Perez, Angel; Møller, Jakob Glarbo; Jóhannsson, Hjörtur

    2014-01-01

    This article studies the influence of PMU accuracy on voltage stability assessment, considering the specific case of Thévenin equivalent based methods that include wide-area information in their calculations. The objective was achieved by producing a set of synthesized PMU measurements from... a time domain simulation and using the Monte Carlo method to reflect the accuracy of the PMUs. This is given by the maximum value of the Total Vector Error defined in the IEEE standard C37.118. Those measurements made it possible to estimate the distribution parameters (mean and standard deviation...

  13. Numerical method for solving the three-dimensional time-dependent neutron diffusion equation

    International Nuclear Information System (INIS)

    Khaled, S.M.; Szatmary, Z.

    2005-01-01

    A numerical time-implicit method has been developed for solving the coupled three-dimensional time-dependent multi-group neutron diffusion and delayed neutron precursor equations. The numerical stability of the implicit computation scheme and the convergence of the iterative associated processes have been evaluated. The computational scheme requires the solution of large linear systems at each time step. For this purpose, the point over-relaxation Gauss-Seidel method was chosen. A new scheme was introduced instead of the usual source iteration scheme. (author)
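
    The inner linear solve can be illustrated with a minimal point successive over-relaxation (SOR) sweep of the kind chosen for the large systems arising at each implicit time step. The tridiagonal diffusion-like test matrix, right-hand side, and relaxation factor are illustrative.

```python
import numpy as np

# Minimal point successive over-relaxation (SOR) sweep, the solver family
# chosen in the paper for the large linear system at each time step.

def sor_solve(A, b, omega=1.2, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel update using already-updated entries, then relax.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Diagonally dominant tridiagonal system: -u'' discretized on 5 points.
A = np.diag([2.0] * 5) + np.diag([-1.0] * 4, 1) + np.diag([-1.0] * 4, -1)
b = np.ones(5)
x = sor_solve(A, b)
print(np.allclose(A @ x, b, atol=1e-8))  # True
```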

  14. Time-resolved measurements of luminescence

    Energy Technology Data Exchange (ETDEWEB)

    Collier, Bradley B. [Department of Biomedical Engineering, 408 Mechanical Engineering Office Building, Spence Street, Texas A and M University, College Station, TX 77843 (United States); McShane, Michael J., E-mail: mcshane@tamu.edu [Department of Biomedical Engineering, 408 Mechanical Engineering Office Building, Spence Street, Texas A and M University, College Station, TX 77843 (United States); Materials Science and Engineering Program, 408 Mechanical Engineering Office Building, Spence Street, Texas A and M University, College Station, TX 77843 (United States)

    2013-12-15

    Luminescence sensing and imaging has become more widespread in recent years in a variety of industries including the biomedical and environmental fields. Measurements of luminescence lifetime hold inherent advantages over intensity-based response measurements, and advances in both technology and methods have enabled their use in a broader spectrum of applications including real-time medical diagnostics. This review will focus on recent advances in analytical methods, particularly calculation techniques, including time- and frequency-domain lifetime approaches as well as other time-resolved measurements of luminescence. -- Highlights: • Developments in technology have led to widespread use of luminescence lifetime. • Growing interest for sensing and imaging applications. • Recent advances in approaches to lifetime calculations are reviewed. • Advantages and disadvantages of various methods are weighed. • Other methods for measurement of luminescence lifetime also described.
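
    The simplest of the time-domain calculation techniques the review covers, fitting a single-exponential decay I(t) = I0·exp(−t/τ) by linear regression on log I, can be sketched as follows; the synthetic noiseless decay with τ = 2.5 µs is illustrative.

```python
import numpy as np

# Time-domain lifetime estimation sketch: fit a single-exponential decay
# I(t) = I0 * exp(-t / tau) by linear regression on log(I).  The
# synthetic decay (tau = 2.5 microseconds) is illustrative.

t = np.linspace(0.0, 10.0, 200)          # microseconds
tau_true = 2.5
intensity = 100.0 * np.exp(-t / tau_true)

slope, _ = np.polyfit(t, np.log(intensity), 1)  # log I = log I0 - t/tau
tau_est = -1.0 / slope
print(round(tau_est, 3))  # 2.5
```

    Real decays are noisy and often multi-exponential, which is why the review surveys more robust calculation techniques (rapid lifetime determination, frequency-domain phase/modulation fits) beyond this log-linear baseline.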

  15. Time-resolved measurements of luminescence

    International Nuclear Information System (INIS)

    Collier, Bradley B.; McShane, Michael J.

    2013-01-01

    Luminescence sensing and imaging has become more widespread in recent years in a variety of industries including the biomedical and environmental fields. Measurements of luminescence lifetime hold inherent advantages over intensity-based response measurements, and advances in both technology and methods have enabled their use in a broader spectrum of applications including real-time medical diagnostics. This review will focus on recent advances in analytical methods, particularly calculation techniques, including time- and frequency-domain lifetime approaches as well as other time-resolved measurements of luminescence. -- Highlights: • Developments in technology have led to widespread use of luminescence lifetime. • Growing interest for sensing and imaging applications. • Recent advances in approaches to lifetime calculations are reviewed. • Advantages and disadvantages of various methods are weighed. • Other methods for measurement of luminescence lifetime also described

  16. Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET

    Directory of Open Access Journals (Sweden)

    B. Ghahraman

    2016-02-01

    Full Text Available Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management issues. Actual ET depends on an estimation of a water stress index and average soil water at the crop root zone, and so depends on the chosen numerical method and adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, there are many factors that influence actual evapotranspiration: crop potential evapotranspiration, available root zone water content, time step, crop sensitivity, and soil. In this paper different numerical methods are compared for different soil textures and different crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals evapotranspiration, ET. In this approach, however, deep percolation is generally ignored due to a deep water table and negligible unsaturated hydraulic conductivity below the rooting depth. This differential equation may be solved analytically or numerically considering different algorithms. We adopted four numerical methods, the explicit Euler, implicit Euler, modified Euler (midpoint), and third-order Heun methods, to approximate the differential equation. Three general soil types of sand, silt, and clay, and three different crop types of sensitive, moderate, and resistant crops under the Nishaboor plain were used. The standard soil fraction depletion (corresponding to ETc = 5 mm d-1), pstd, below which the crop faces water stress, is adopted for crop sensitivity. Three values for pstd were considered in this study to cover the common crops in the area, including winter wheat and barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistant crops with pstd=0
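
    The time-stepping comparison at the heart of the paper can be sketched on the stressed-crop case, where root-zone water obeys dW/dt = −kW between rains (actual ET proportional to available water). The depletion rate and the one-day step below are illustrative values, not the paper's.

```python
import math

# Sketch of the time-step sensitivity the paper examines: once the crop is
# stressed, actual ET is proportional to available water, so W obeys
# dW/dt = -k*W between rains.  Explicit and implicit Euler are compared
# with the exact exponential; k and the step size are illustrative.

def explicit_euler(w0, k, dt, steps):
    w = w0
    for _ in range(steps):
        w = w - dt * k * w          # W_{n+1} = W_n - dt*k*W_n
    return w

def implicit_euler(w0, k, dt, steps):
    w = w0
    for _ in range(steps):
        w = w / (1 + dt * k)        # solve W_{n+1} = W_n - dt*k*W_{n+1}
    return w

w0, k, dt, steps = 100.0, 0.2, 1.0, 10     # mm, 1/day, days
exact = w0 * math.exp(-k * dt * steps)
print(round(explicit_euler(w0, k, dt, steps), 2),
      round(implicit_euler(w0, k, dt, steps), 2),
      round(exact, 2))  # 10.74 16.15 13.53
```

    With this coarse step the explicit scheme undershoots and the implicit scheme overshoots the exact solution, which is exactly the kind of method-by-time-step interaction the paper quantifies for actual ET.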

  17. Statistical methods for elimination of guarantee-time bias in cohort studies: a simulation study

    Directory of Open Access Journals (Sweden)

    In Sung Cho

    2017-08-01

    Full Text Available Abstract Background Aspirin has been considered to be beneficial in preventing cardiovascular diseases and cancer. Several pharmaco-epidemiology cohort studies have shown protective effects of aspirin on diseases using various statistical methods, with the Cox regression model being the most commonly used approach. However, there are some inherent limitations to the conventional Cox regression approach, such as guarantee-time bias, resulting in an overestimation of the drug effect. To overcome such limitations, alternative approaches, such as the time-dependent Cox model and landmark methods, have been proposed. This study aimed to compare the performance of three methods: Cox regression, the time-dependent Cox model and the landmark method with different landmark times, in order to address the problem of guarantee-time bias. Methods Through statistical modeling and simulation studies, the performance of the above three methods was assessed in terms of type I error, bias, power, and mean squared error (MSE). In addition, the three statistical approaches were applied to a real data example from the Korean National Health Insurance Database. The effect of cumulative rosiglitazone dose on the risk of hepatocellular carcinoma was used as an example for illustration. Results In the simulated data, time-dependent Cox regression outperformed the landmark method in terms of bias and mean squared error, but the type I error rates were similar. The results from the real-data example showed the same patterns as the simulation findings. Conclusions While both the time-dependent Cox regression model and landmark analysis are useful in resolving the problem of guarantee-time bias, time-dependent Cox regression is the most appropriate method for analyzing cumulative dose effects in pharmaco-epidemiological studies.
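
    The landmark method's data-preparation step, which removes guarantee-time bias, can be sketched directly: subjects whose event precedes the landmark are excluded, and exposure is classified using only drug use before the landmark. The records and the 12-month landmark below are invented for illustration.

```python
# Sketch of the landmark method's data preparation, which removes
# guarantee-time bias: subjects with an event before the landmark are
# excluded, and exposure is classified from drug use *before* the
# landmark only.  Records and the 12-month landmark are illustrative.

LANDMARK = 12  # months

subjects = [
    # (id, first_drug_month or None, event_month or None, follow_up_month)
    (1, 3,    None, 60),   # exposed before landmark, censored at 60
    (2, 20,   40,   40),   # drug starts after landmark -> "unexposed" here
    (3, None, 8,    8),    # event before landmark -> excluded
    (4, None, 30,   30),   # never exposed, event after landmark
]

cohort = []
for sid, drug, event, fup in subjects:
    if event is not None and event < LANDMARK:
        continue  # guarantee-time bias removed by dropping early events
    exposed = drug is not None and drug < LANDMARK
    had_event = event is not None
    cohort.append((sid, exposed, had_event, fup - LANDMARK))  # time from landmark

print(cohort)  # [(1, True, False, 48), (2, False, True, 28), (4, False, True, 18)]
```

    A survival model (e.g. Cox regression) would then be fitted to `cohort` with time measured from the landmark; the time-dependent Cox alternative instead updates exposure status as a covariate over follow-up.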

  18. Generalized Runge-Kutta method for two- and three-dimensional space-time diffusion equations with a variable time step

    International Nuclear Information System (INIS)

    Aboanber, A.E.; Hamada, Y.M.

    2008-01-01

    An extensive knowledge of the spatial power distribution is required for the design and analysis of different types of current-generation reactors, and that requires the development of more sophisticated theoretical methods. Therefore, the need to develop new methods for multidimensional transient reactor analysis still exists. The objective of this paper is to develop a computationally efficient numerical method for solving the multigroup, multidimensional, static and transient neutron diffusion kinetics equations. A generalized Runge-Kutta method has been developed for the numerical integration of the stiff space-time diffusion equations. The method is fourth-order accurate, using an embedded third-order solution to arrive at an estimate of the truncation error for automatic time step control. In addition, the A(α)-stability properties of the method are investigated. The analyses of two- and three-dimensional benchmark problems as well as static and transient problems, demonstrate that very accurate solutions can be obtained with assembly-sized spatial meshes. Preliminary numerical evaluations using two- and three-dimensional finite difference codes showed that the presented generalized Runge-Kutta method is highly accurate and efficient when compared with other optimized iterative numerical and conventional finite difference methods
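
    The embedded-pair idea, a higher-order step whose lower-order companion yields a free truncation-error estimate for automatic step control, can be sketched with the Bogacki-Shampine 3(2) pair (a lower-order analogue of the paper's 4(3) scheme, used here for brevity) on the test problem y′ = −5y:

```python
import math

# Embedded Runge-Kutta step-size control in miniature: the third-order
# solution advances the state, the embedded second-order solution supplies
# the local error estimate, and the step h adapts to a tolerance.

def bs23_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
    y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9.0          # 3rd-order solution
    k4 = f(t + h, y3)
    y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)  # 2nd-order solution
    return y3, abs(y3 - y2)                                # error estimate

def integrate(f, t, y, t_end, h=0.1, tol=1e-6):
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = bs23_step(f, t, y, h)
        if err <= tol:                       # accept the step
            t, y = t + h, y_new
        # Standard controller: grow/shrink h from the error estimate.
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1 / 3)))
    return y

y_end = integrate(lambda t, y: -5.0 * y, 0.0, 1.0, 1.0)
print(y_end, math.exp(-5.0))
```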

  19. Iterative Refinement Methods for Time-Domain Equalizer Design

    Directory of Open Access Journals (Sweden)

    Evans Brian L

    2006-01-01

    Full Text Available Commonly used time domain equalizer (TEQ) design methods have recently been unified as an optimization problem involving an objective function in the form of a Rayleigh quotient. The direct generalized eigenvalue solution relies on matrix decompositions. To reduce implementation complexity, we propose an iterative refinement approach in which the TEQ length starts at two taps and increases by one tap at each iteration. Each iteration involves matrix-vector multiplications and vector additions with matrices and two-element vectors. At each iteration, the objective function either improves or the approach terminates. The iterative refinement approach provides a range of communication performance versus implementation complexity tradeoffs for any TEQ method that fits the Rayleigh quotient framework. We apply the proposed approach to three such TEQ design methods: maximum shortening signal-to-noise ratio, minimum intersymbol interference, and minimum delay spread.
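
    The common objective all these TEQ designs share is a generalized Rayleigh quotient J(w) = (wᵀAw)/(wᵀBw). The sketch below shows the direct generalized-eigenvalue solution that the iterative refinement is designed to avoid, on small random symmetric positive definite matrices (illustrative stand-ins for the channel-derived matrices):

```python
import numpy as np

# The TEQ objective as a generalized Rayleigh quotient: its maximizer is
# the eigenvector of inv(B) @ A with the largest eigenvalue.  A and B here
# are illustrative SPD matrices, not channel-derived ones.

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # symmetric positive definite
N = rng.standard_normal((4, 4))
B = N @ N.T + 4 * np.eye(4)

evals, evecs = np.linalg.eig(np.linalg.inv(B) @ A)
w = np.real(evecs[:, np.argmax(np.real(evals))])   # optimal tap vector

def rayleigh(w):
    return (w @ A @ w) / (w @ B @ w)

# No random direction beats the eigenvector's quotient.
best = max(rayleigh(rng.standard_normal(4)) for _ in range(1000))
print(rayleigh(w) >= best - 1e-9)  # True
```

    The paper's contribution is to reach (or approach) this maximizer with only matrix-vector products, growing the tap vector one element per iteration instead of performing the full eigendecomposition above.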

  20. Imaging Method Based on Time Reversal Channel Compensation

    Directory of Open Access Journals (Sweden)

    Bing Li

    2015-01-01

    Full Text Available The conventional time reversal imaging (TRI) method builds its imaging function from the maximal value of the signal amplitude. In this circumstance, some remote targets are missed (the near-far problem) or low resolution is obtained in lossy and/or dispersive media, and too many transceivers are employed to locate the targets, which increases the complexity and cost of the system. To solve these problems, a novel TRI algorithm is presented in this paper. In order to achieve high resolution, the signal amplitude corresponding to the focal time observed at the target position is used to reconstruct the target image. To address the near-far problem and suppress spurious images, a channel compensation function (CCF) combining cross-correlation and amplitude compensation is introduced. Moreover, the complexity and cost of the system are reduced by employing only five transceivers to detect four targets, a number of targets close to the number of transceivers. To demonstrate the practicability of the proposed analytical framework, numerical experiments are carried out in both nondispersive-lossless (NDL) media and dispersive-conductive (DPC) media. Results show that the performance of the proposed method is superior to that of the conventional TRI algorithm even with few echo signals.

  1. Detection of Yersinia Enterocolitica Species in Pig Tonsils and Raw Pork Meat by the Real-Time Pcr and Culture Methods.

    Science.gov (United States)

    Stachelska, M A

    2017-09-26

    The aim of the present study was to establish a rapid and accurate real-time PCR method to detect pathogenic Yersinia enterocolitica in pork. Yersinia enterocolitica is considered a crucial zoonotic agent, which can provoke disease both in humans and animals. The classical culture methods designated to detect Y. enterocolitica species in food matrices are often very time-consuming. The chromosomal locus_tag CH49_3099 gene, which appears in pathogenic Y. enterocolitica strains, was applied as the DNA target for the 5' nuclease PCR protocol. The probe was labelled at the 5' end with the fluorescent reporter dye (FAM) and at the 3' end with the quencher dye (TAMRA). The real-time PCR cycling parameters included 41 cycles; a Ct value higher than 40 constituted a negative result. The qualitative real-time PCR method developed for this study gave very specific and reliable results. The detection rate of locus_tag CH49_3099-positive Y. enterocolitica in 150 pig tonsils was 85% and 32% with the PCR and culture methods, respectively. Both the real-time PCR and culture method results were obtained from material that was enriched during overnight incubation. Raw pork meat samples were also examined: among the 80 samples tested, 7 were positive by real-time PCR and 6 were positive by the classical culture method. The application of molecular techniques based on the analysis of DNA sequences, such as real-time PCR, enables detection of this pathogen very rapidly and with higher specificity, sensitivity and reliability in comparison to classical culture methods.

  2. Application of the multigrid amplitude function method for time-dependent transport equation using MOC

    International Nuclear Information System (INIS)

    Tsujita, K.; Endo, T.; Yamamoto, A.

    2013-01-01

    An efficient numerical method for the time-dependent transport equation, the multigrid amplitude function (MAF) method, is proposed. The method of characteristics (MOC) is being widely used for reactor analysis thanks to advances in numerical algorithms and computer hardware. However, an efficient kinetic calculation method for MOC is still desirable since it requires significant computation time. Various efficient numerical methods for solving the space-dependent kinetic equation, e.g., the improved quasi-static (IQS) and the frequency transform methods, have been developed so far, mainly for diffusion calculations. These methods are known to be effective and offer faster computation. However, to the authors' knowledge, they have not been applied to kinetic calculations using MOC. Thus, the MAF method is applied to the kinetic calculation using MOC, aiming to reduce computation time. The MAF method is a unified numerical framework of conventional kinetic calculation methods, e.g., the IQS, frequency transform, and theta methods. Although the MAF method was originally developed for space-dependent kinetic calculations based on diffusion theory, it is extended to transport theory in the present study. The accuracy and computation time are evaluated through the TWIGL benchmark problem. The calculation results show the effectiveness of the MAF method. (authors)

  3. New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods

    Science.gov (United States)

    S Saha, Ray

    2016-04-01

    In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.

  4. A revised method to calculate the concentration time integral of atmospheric pollutants

    International Nuclear Information System (INIS)

    Voelz, E.; Schultz, H.

    1980-01-01

    It is possible to calculate the spreading of a plume in the atmosphere under nonstationary and nonhomogeneous conditions by introducing the "particle-in-cell" (PIC) method. This is a numerical method by which the transport of and the diffusion in the plume are reproduced in such a way that particles representing the concentration are moved time step-wise in restricted regions (cells), separately with the advection velocity and the diffusion velocity. This is a systematic advantage over the steady-state Gaussian plume model usually used. The fixed-point concentration time integral is calculated directly instead of being substituted by the locally integrated concentration at a constant time, as is done in the Gaussian model. In this way inaccuracies due to the above-mentioned computational technique may be avoided for short-time emissions, as may be seen from the fact that the two integrals do not lead to the same results. Also, the PIC method enables one to consider the height-dependent wind speed and its variations, while the Gaussian model can be used only with averaged wind data. The concentration time integral calculated by the PIC method results in higher maximum values at shorter distances to the source, an effect often observed in measurements. (author)
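
    The directly computed fixed-point concentration time integral can be mimicked with a one-dimensional particle model: particles are advected and diffused step-wise, and the integral at a receptor cell is accumulated as (fraction of particles in the cell) × Δt each step. The wind speed, diffusion scale, and receptor geometry are illustrative, and the 1-D random walk is a simplification of the PIC scheme.

```python
import random

# Particle sketch of the fixed-point concentration time integral: a short
# release is advected and diffused step-wise, and the integral at a
# receptor cell accumulates as (particles in cell / total) * dt each step.
# Wind, diffusivity, and geometry are illustrative; the walk is 1-D.

random.seed(1)
n_particles, u, dt, steps = 2000, 1.0, 0.1, 300
sigma = 0.3                      # diffusion step scale per sqrt(dt)
cell_lo, cell_hi = 9.5, 10.5     # receptor cell 10 m downwind

x = [0.0] * n_particles          # short-time release at the origin
integral = 0.0
for _ in range(steps):
    x = [xi + u * dt + random.gauss(0.0, sigma * dt ** 0.5) for xi in x]
    in_cell = sum(cell_lo <= xi < cell_hi for xi in x)
    integral += (in_cell / n_particles) * dt   # concentration fraction * dt

print(round(integral, 3))  # close to cell width / u = 1.0 s
```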

  5. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Koivistoinen Teemu

    2007-01-01

    Full Text Available As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition" (TFM-SVD). In this new method, we use statistical features of the time series as well as the frequency series (Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
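
    The TFM-SVD construction can be sketched as: statistical moments of the time series and of its Fourier magnitude are packed into a small fixed-structure matrix, whose singular values form the feature vector. The 2-by-4 layout below (mean, standard deviation, skewness, kurtosis per row) is an illustrative guess at such a structure, not the paper's exact matrix.

```python
import numpy as np

# Sketch of the TFM-SVD idea: moments of the signal and of its Fourier
# magnitude fill a small fixed-structure matrix; the matrix's singular
# values serve as the feature vector.  The 2x4 layout is illustrative.

def moments(v):
    m, s = np.mean(v), np.std(v)
    z = (v - m) / (s + 1e-12)
    return [m, s, np.mean(z ** 3), np.mean(z ** 4)]  # mean, std, skew, kurtosis

def tfm_svd_features(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    M = np.array([moments(signal), moments(spectrum)])   # 2 x 4 feature matrix
    return np.linalg.svd(M, compute_uv=False)            # 2 singular values

t = np.linspace(0.0, 1.0, 256, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
features = tfm_svd_features(clean)
print(features.shape)  # (2,)
```

    The resulting fixed-length feature vector is what would be fed to a clustering stage or an ANN classifier, regardless of the original signal length.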

  6. Real Time Grouting Control Method. Development and application using Aespoe HRL data

    International Nuclear Information System (INIS)

    Kobayashi, Shinji; Stille, Haakan; Gustafson, Gunnar; Stille, Bjoern

    2008-10-01

    The spread of grout is governed by a number of complex relations. The desired results, such as grout penetration and sealing of fractures, cannot be directly measured during the grouting process. This means that the issue of how or when the injection of grout should be stopped cannot be answered by simple rules of thumb. This is also the background to the great variety of empirical rules used in the grouting sector worldwide. Research during recent years has given a better understanding of the water-bearing structures of the rock mass as well as analytical solutions. In this report the methodology has been further studied and a method for the design and control of rock grouting has been proposed. The concept of what we call the 'Real Time Grouting Control Method' is to calculate the grout penetration and control grouting in real time by applying the development of the theories for grout spread. Our intention is to combine our method with a computerized logging tool to obtain an active tool for governing the grout spread in real time during the grouting operation. The objectives of this report are: to further develop the theory concerning the relationship between grout penetration and grouting time so as to describe the real course of grouting; to establish the concept of the 'Real Time Grouting Control Method' for the design and control of rock grouting based on the developed theory; and to verify the concept by using field data from the grouting experiment at the 450 m level in the Aespoe HRL. In this report, the approximations and the analysis of dimensionality have been checked, and the theory has been further developed with respect to varying grouting pressure, time-dependent grout properties, changing grout mixes, and changing the flow dimension of the fracture. The concept of the 'Real Time Grouting Control Method' has been described in order to calculate the grout penetration and to control grouting in real time by applying the developed theory.

  7. A new method for measuring the response time of the high pressure ionization chamber

    International Nuclear Information System (INIS)

    Wang, Zhentao; Shen, Yixiong; An, Jigang

    2012-01-01

    Time response is an important performance characteristic for gas-pressurized ionization chambers. To study the time response, it is especially crucial to measure the ion drift time in high pressure ionization chambers. In this paper, a new approach is proposed to study the ion drift time in high pressure ionization chambers. It is carried out with a short-pulsed X-ray source and a high-speed digitizer. The ion drift time in the chamber is then determined from the digitized data. By measuring the ion drift time of a 15 atm xenon testing chamber, the method has been proven to be effective in the time response studies of ionization chambers. - Highlights: ► A method for measuring response time of high pressure ionization chamber is proposed. ► A pulsed X-ray producer and a digital oscilloscope are used in the method. ► The response time of a 15 atm Xenon testing ionization chamber has been measured. ► The method has been proved to be simple, feasible and effective.
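
    The readout idea can be sketched as follows: after a short X-ray pulse, the digitized chamber signal ramps up until ion collection is complete, and the drift time is read off as the time to reach a fixed fraction of the plateau. The linear-ramp waveform, the 95% criterion, and the sample interval are all illustrative assumptions.

```python
# Sketch of the drift-time readout: a short X-ray pulse produces a signal
# that ramps up until ion collection is complete; the drift time is read
# off as the time to reach a fraction of the plateau level.  The ramp
# waveform, 95% criterion, and sample interval are illustrative.

def drift_time(samples, dt, fraction=0.95):
    """First time at which the digitized signal reaches fraction*plateau."""
    plateau = samples[-1]
    for i, v in enumerate(samples):
        if v >= fraction * plateau:
            return i * dt
    return None

dt = 1e-4                      # 10 kS/s digitizer interval, illustrative
ramp_steps = 5000              # ions collected over 0.5 s
waveform = [min(i / ramp_steps, 1.0) for i in range(8000)]
print(round(drift_time(waveform, dt), 4))  # 0.475, i.e. 95% of the 0.5 s ramp
```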

  8. Optimizing some 3-stage W-methods for the time integration of PDEs

    Science.gov (United States)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parameter families of three-stage W-methods for the integration of IVPs in ODEs were studied. Besides, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)) was carried out. Also, some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.

  9. Direct determination of scattering time delays using the R-matrix propagation method

    International Nuclear Information System (INIS)

    Walker, R.B.; Hayes, E.F.

    1989-01-01

    A direct method for determining time delays for scattering processes is developed using the R-matrix propagation method. The procedure involves the simultaneous generation of the global R matrix and its energy derivative. The necessary expressions to obtain the energy derivative of the S matrix are relatively simple and involve many of the same matrix elements required for the R-matrix propagation method. This method is applied to a simple model for a chemical reaction that displays sharp resonance features. The test results of the direct method are shown to be in excellent agreement with the traditional numerical differentiation method for scattering energies near the resonance energy. However, for sharp resonances the numerical differentiation method requires calculation of the S-matrix elements at many closely spaced energies. Since the direct method presented here involves calculations at only a single energy, one is able to generate accurate energy derivatives and time delays much more efficiently and reliably
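
    The quantity at stake can be illustrated with a one-channel Breit-Wigner resonance, for which the phase shift is δ(E) = arctan((Γ/2)/(E_R − E)) and the Wigner time delay τ = 2ℏ dδ/dE peaks at 4ℏ/Γ on resonance. Units with ℏ = 1 and the resonance parameters are illustrative.

```python
import math

# Wigner time delay for a one-channel Breit-Wigner resonance:
# delta(E) = atan2(Gamma/2, E_R - E), tau(E) = 2*hbar*d(delta)/dE,
# which peaks at 4*hbar/Gamma exactly on resonance.  hbar = 1 and the
# resonance parameters are illustrative.

HBAR = 1.0
E_R, GAMMA = 5.0, 0.01          # a sharp resonance

def delta(E):
    return math.atan2(GAMMA / 2.0, E_R - E)

def tau_numeric(E, h=1e-6):
    """Central-difference estimate of 2*hbar*d(delta)/dE."""
    return 2.0 * HBAR * (delta(E + h) - delta(E - h)) / (2.0 * h)

tau_analytic = 4.0 * HBAR / GAMMA       # peak delay at E = E_R
print(round(tau_numeric(E_R), 3), tau_analytic)  # 400.0 400.0
```

    Note that the finite-difference step h must be much smaller than Γ for the estimate to hold near a sharp resonance, which mirrors the paper's point that numerical differentiation of the S matrix requires many closely spaced energies, whereas the direct R-matrix propagation of dS/dE needs only one.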

  10. Predicting Charging Time of Battery Electric Vehicles Based on Regression and Time-Series Methods: A Case Study of Beijing

    Directory of Open Access Journals (Sweden)

    Jun Bi

    2018-04-01

    Full Text Available Battery electric vehicles (BEVs) reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potentially long charging time of BEVs create new problems. Accurate charging time prediction of BEVs helps drivers determine travel plans and alleviate their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods according to actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established by considering the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined by using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
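
    The combined model can be sketched in miniature: an ordinary least-squares regression predicts charging time from the charged amount, and an AR(1) fit to the time-ordered residuals calibrates the next prediction. The data, the AR order, and the one-step calibration below are illustrative, not the Beijing fleet's values.

```python
# Sketch of the combined model: OLS regression of charging time on charged
# amount, then an AR(1) fit to the time-ordered residuals calibrates the
# next prediction.  Data and coefficients are illustrative.

charged = [10.0, 20.0, 30.0, 40.0, 50.0]            # kWh, in time order
observed = [1.10, 2.05, 3.20, 4.10, 5.25]           # hours

# Ordinary least squares by hand: time = a * charged + b.
n = len(charged)
mx, my = sum(charged) / n, sum(observed) / n
a = sum((x - mx) * (y - my) for x, y in zip(charged, observed)) / \
    sum((x - mx) ** 2 for x in charged)
b = my - a * mx
resid = [y - (a * x + b) for x, y in zip(charged, observed)]

# AR(1) on residuals: r_t ~ phi * r_{t-1}; calibrate the next forecast.
phi = sum(r1 * r0 for r0, r1 in zip(resid, resid[1:])) / \
      sum(r0 * r0 for r0 in resid[:-1])
next_charged = 60.0
prediction = a * next_charged + b + phi * resid[-1]
print(round(prediction, 2))  # calibrated next-trip charging time in hours
```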

  11. Method to implement the CCD timing generator based on FPGA

    Science.gov (United States)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module; the latter is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs, whose architecture is shown in frame charts in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which is the controller of this generator. Some test results are presented in the end.

  12. Perfectly matched layer for the time domain finite element method

    International Nuclear Information System (INIS)

    Rylander, Thomas; Jin Jianming

    2004-01-01

    A new perfectly matched layer (PML) formulation for the time domain finite element method is described and tested for Maxwell's equations. In particular, we focus on the time integration scheme which is based on Galerkin's method with a temporally piecewise linear expansion of the electric field. The time stepping scheme is constructed by forming a linear combination of exact and trapezoidal integration applied to the temporal weak form, which reduces to the well-known Newmark scheme in the case without PML. Extensive numerical tests on scattering from infinitely long metal cylinders in two dimensions show good accuracy and no signs of instabilities. For a circular cylinder, the proposed scheme indicates the expected second order convergence toward the analytic solution and gives less than 2% root-mean-square error in the bistatic radar cross section (RCS) for resolutions with more than 10 points per wavelength. An ogival cylinder, which has sharp corners supporting field singularities, shows similar accuracy in the monostatic RCS
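
    For reference, the classical Newmark update that the scheme reduces to in the absence of PML is, in standard structural-dynamics notation (displacement u, velocity v, acceleration a, parameters beta and gamma; a textbook form, not quoted from this record):

```latex
u_{n+1} = u_n + \Delta t\, v_n
        + \Delta t^{2}\!\left[\left(\tfrac{1}{2}-\beta\right)a_n + \beta\, a_{n+1}\right],
\qquad
v_{n+1} = v_n + \Delta t\left[(1-\gamma)\,a_n + \gamma\, a_{n+1}\right].
```

    The linear combination of exact and trapezoidal integration described above plays the role of choosing these weights in the temporal weak form.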

  13. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through local Hölder estimates and their probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR), with the model parameters of KPCR selected by both the proposed and the conventional method. Results obtained demonstrate that the proposed method takes into account sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of unseen test data. (letter to the editor)

  14. Multiple-time-stepping generalized hybrid Monte Carlo methods

    Energy Technology Data Exchange (ETDEWEB)

    Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)

    2015-01-01

    Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also outperform the best existing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods; its use improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on a water system and a protein system. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.

  15. A robust two-node, 13 moment quadrature method of moments for dilute particle flows including wall bouncing

    Science.gov (United States)

    Sun, Dan; Garmory, Andrew; Page, Gary J.

    2017-02-01

    For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.

  16. Integration of image exposure time into a modified laser speckle imaging method

    Energy Technology Data Exchange (ETDEWEB)

    RamIrez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J [Optics Department, INAOE, Puebla (Mexico); Huang, Y C [Department of Electrical Engineering and Computer Science, University of California, Irvine, CA (United States); Choi, B, E-mail: jcram@inaoep.m [Beckman Laser Institute and Medical Clinic, University of California, Irvine, CA (United States)

    2010-11-21

    Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.
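
    The exposure-time dependence being integrated here is usually expressed through the standard speckle contrast model relating contrast K, exposure time T and correlation time tau_c; the form below is a widely used approximation for fully developed speckle, not a formula quoted from this record:

```latex
K(T) \;=\; \frac{\sigma_s}{\langle I \rangle}
\;=\; \left\{ \frac{\tau_c}{2T}
\left[\,1 - \exp\!\left(-\frac{2T}{\tau_c}\right)\right] \right\}^{1/2}.
```

    Faster flow shortens tau_c and lowers the contrast for a given exposure time, which is why an exposure-time-aware mLSI can map flow speed.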

  17. Integration of image exposure time into a modified laser speckle imaging method

    International Nuclear Information System (INIS)

    RamIrez-San-Juan, J C; Salazar-Hermenegildo, N; Ramos-Garcia, R; Munoz-Lopez, J; Huang, Y C; Choi, B

    2010-01-01

    Speckle-based methods have been developed to characterize tissue blood flow and perfusion. One such method, called modified laser speckle imaging (mLSI), enables computation of blood flow maps with relatively high spatial resolution. Although it is known that the sensitivity and noise in LSI measurements depend on image exposure time, a fundamental disadvantage of mLSI is that it does not take into account this parameter. In this work, we integrate the exposure time into the mLSI method and provide experimental support of our approach with measurements from an in vitro flow phantom.

  18. Performance Evaluation of Machine Learning Methods for Leaf Area Index Retrieval from Time-Series MODIS Reflectance Data

    Science.gov (United States)

    Wang, Tongtong; Xiao, Zhiqiang; Liu, Zhigang

    2017-01-01

    Leaf area index (LAI) is an important biophysical parameter and the retrieval of LAI from remote sensing data is the only feasible method for generating LAI products at regional and global scales. However, most LAI retrieval methods use satellite observations at a specific time to retrieve LAI. Because of the impacts of clouds and aerosols, the LAI products generated by these methods are spatially incomplete and temporally discontinuous, and thus they cannot meet the needs of practical applications. To generate high-quality LAI products, four machine learning algorithms, including back-propagation neural networks (BPNNs), radial basis function networks (RBFNs), general regression neural networks (GRNNs), and multi-output support vector regression (MSVR), are proposed in this study to retrieve LAI from time-series Moderate Resolution Imaging Spectroradiometer (MODIS) reflectance data, and the performance of these algorithms is evaluated. The results demonstrated that GRNNs, RBFNs, and MSVR exhibited low sensitivity to training sample size, whereas BPNN had high sensitivity. The four algorithms performed slightly better with red, near infrared (NIR), and short wave infrared (SWIR) bands than with red and NIR bands, and the results were significantly better than those obtained using single band reflectance data (red or NIR). Regardless of band composition, GRNNs performed better than the other three methods. Among the four algorithms, BPNN required the least training time, whereas MSVR needed the most for any sample size. PMID:28045443
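
    A GRNN is equivalent to Nadaraya-Watson kernel regression, which is compact enough to sketch. The data below are synthetic stand-ins for the MODIS reflectance bands and LAI values, and the bandwidth `sigma` is an assumed illustrative value:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.03):
    """General regression neural network: the prediction is a
    Gaussian-kernel-weighted average of the training targets
    (Nadaraya-Watson kernel regression)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))          # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)        # summation/output layer

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 3))          # stand-in red/NIR/SWIR reflectance
lai = 4 * X[:, 1] - 2 * X[:, 0] + 1      # synthetic "LAI" response
pred = grnn_predict(X, lai, X[:5])       # query at known points
```

    Training a GRNN is just storing the samples, which is consistent with the low training-time sensitivity reported above; the cost is paid at prediction time instead.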

  19. Use of the finite-difference time-domain method in electromagnetic dosimetry

    International Nuclear Information System (INIS)

    Sullivan, D.M.

    1987-01-01

    Although there are acceptable methods for calculating whole body electromagnetic absorption, no completely acceptable method for calculating the local specific absorption rate (SAR) at points within the body has been developed. Frequency domain methods, such as the method of moments (MoM), have achieved some success; however, the MoM requires computer storage on the order of (3N)^2, and computation time on the order of (3N)^3, where N is the number of cells. The finite-difference time-domain (FDTD) method has been employed extensively in calculating the scattering from metallic objects, and recently is seeing some use in calculating the interaction of EM fields with complex, lossy dielectric bodies. Since the FDTD method has storage and time requirements proportional to N, it presents an attractive alternative for calculating SAR distributions in large bodies. This dissertation describes the FDTD method and evaluates it by comparing its results with analytic solutions in 2 and 3 dimensions. The results obtained demonstrate that the FDTD method is capable of calculating internal SAR distribution with acceptable accuracy. The construction of a data base to provide detailed, inhomogeneous man models for use with the FDTD method is described. Using this construction method, a model of 40,000 1.31 cm cells is developed for use at 350 MHz, and another model consisting of 5000 2.62 cm cells is developed for use at 100 MHz. To add more realism to the problem, a ground plane is added to the FDTD software. The needed changes to the software are described, along with a test which confirms its accuracy. Using the CRAY II supercomputer, SAR distributions in human models are calculated using incident frequencies of 100 MHz and 350 MHz for three different cases: (1) a homogeneous man model in free space, (2) an inhomogeneous man model in free space, and (3) an inhomogeneous man model standing on a ground plane
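
    The O(N) storage and time scaling comes from the local leapfrog update FDTD applies to each cell. A minimal 1D sketch (vacuum, perfectly conducting ends, soft Gaussian source; an illustration, not the 3D lossy body model of the dissertation):

```python
import numpy as np

nz, nt = 200, 400
ez = np.zeros(nz)        # E field at integer grid points
hy = np.zeros(nz - 1)    # H field at staggered half points
S = 0.5                  # Courant number S = c*dt/dz <= 1 for stability

for n in range(nt):
    hy += S * (ez[1:] - ez[:-1])               # update H from curl of E
    ez[1:-1] += S * (hy[1:] - hy[:-1])         # update E from curl of H
    ez[100] += np.exp(-((n - 30) ** 2) / 100)  # soft Gaussian source

# Each step touches every cell exactly once: O(N) work, O(N) storage.
```

    The same two-line curl exchange generalizes to the 3D Yee lattice, with material coefficients per cell supplying the inhomogeneous, lossy body model.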

  20. Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2017-02-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.

  1. Tensor-Product Preconditioners for Higher-Order Space-Time Discontinuous Galerkin Methods

    Science.gov (United States)

    Diosady, Laslo T.; Murman, Scott M.

    2016-01-01

    A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.

  2. New analytical exact solutions of time fractional KdV–KZK equation by Kudryashov methods

    International Nuclear Information System (INIS)

    Saha Ray, S

    2016-01-01

    In this paper, new exact solutions of the time fractional KdV–Khokhlov–Zabolotskaya–Kuznetsov (KdV–KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann–Liouville derivative is used to convert the nonlinear time fractional KdV–KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV–KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV–KZK equation. (paper)

  3. Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation

    OpenAIRE

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan; Østergaard, Jacob

    2012-01-01

    In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation and their real-time performance is carried out. Two approaches, based on Lyapunov’s method and the equal area criterion, are analyzed. The results make it possible to determine the runtime of each method with respect to the number of inputs, and to identify which method is preferable in case of changes in the power system such as the integration of distributed ...

  4. Nonadiabatic dynamics of electron transfer in solution: Explicit and implicit solvent treatments that include multiple relaxation time scales

    International Nuclear Information System (INIS)

    Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon

    2014-01-01

    The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible

  5. Time Discretization Techniques

    KAUST Repository

    Gottlieb, S.; Ketcheson, David I.

    2016-01-01

    The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include

  6. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Science.gov (United States)

    Raknes, Guttorm; Hunskaar, Steinar

    2014-01-01

    We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.

  7. Method paper--distance and travel time to casualty clinics in Norway based on crowdsourced postcode coordinates: a comparison with other methods.

    Directory of Open Access Journals (Sweden)

    Guttorm Raknes

    Full Text Available We describe a method that uses crowdsourced postcode coordinates and Google maps to estimate average distance and travel time for inhabitants of a municipality to a casualty clinic in Norway. The new method was compared with methods based on population centroids, median distance and town hall location, and we used it to examine how distance affects the utilisation of out-of-hours primary care services. At short distances our method showed good correlation with mean travel time and distance. The utilisation of out-of-hours services correlated with postcode based distances similar to previous research. The results show that our method is a reliable and useful tool for estimating average travel distances and travel times.

  8. Preconditioned iterative methods for space-time fractional advection-diffusion equations

    Science.gov (United States)

    Zhao, Zhi; Jin, Xiao-Qing; Lin, Matthew M.

    2016-08-01

    In this paper, we propose practical numerical methods for solving a class of initial-boundary value problems of space-time fractional advection-diffusion equations. First, we propose an implicit method based on two-sided Grünwald formulae and discuss its stability and consistency. Then, we develop the preconditioned generalized minimal residual (preconditioned GMRES) method and preconditioned conjugate gradient normal residual (preconditioned CGNR) method with easily constructed preconditioners. Importantly, because resulting systems are Toeplitz-like, fast Fourier transform can be applied to significantly reduce the computational cost. We perform numerical experiments to demonstrate the efficiency of our preconditioners, even in cases with variable coefficients.
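
    The shifted Grünwald discretization mentioned above can be sketched concretely. In this illustration the operator is frozen in time (a steady-state stand-in for the advection-diffusion problem), an ILU factorization stands in for the paper's circulant/FFT preconditioners, and `alpha`, `n` and `d` are assumed values:

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import LinearOperator, gmres, spilu

alpha, n, d = 1.5, 64, 10.0   # fractional order, grid size, diffusion weight

# Grünwald-Letnikov weights: w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k
w = np.ones(n + 1)
for k in range(1, n + 1):
    w[k] = w[k - 1] * (k - 1 - alpha) / k

# Shifted Grünwald operator is Toeplitz: first column w_1, w_2, ...,
# first row w_1, w_0, 0, 0, ...
first_row = np.zeros(n)
first_row[:2] = w[1], w[0]
T = toeplitz(w[1:], first_row)

A = np.eye(n) - d * T         # implicit-step-like system (I - d*T) u = b
b = np.ones(n)

ilu = spilu(csc_matrix(A))    # stand-in preconditioner for this sketch
M = LinearOperator((n, n), ilu.solve)
u, info = gmres(A, b, M=M)    # preconditioned GMRES solve
```

    The matrix never needs to be formed for matrix-vector products: because it is Toeplitz-like, `A @ v` can be applied in O(n log n) with the FFT, which is the cost saving the abstract refers to.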

  9. Research on Monte Carlo improved quasi-static method for reactor space-time dynamics

    International Nuclear Information System (INIS)

    Xu Qi; Wang Kan; Li Shirui; Yu Ganglin

    2013-01-01

    With large time steps, improved quasi-static (IQS) method can improve the calculation speed for reactor dynamic simulations. The Monte Carlo IQS method was proposed in this paper, combining the advantages of both the IQS method and MC method. Thus, the Monte Carlo IQS method is beneficial for solving space-time dynamics problems of new concept reactors. Based on the theory of IQS, Monte Carlo algorithms for calculating adjoint neutron flux, reactor kinetic parameters and shape function were designed and realized. A simple Monte Carlo IQS code and a corresponding diffusion IQS code were developed, which were used for verification of the Monte Carlo IQS method. (authors)

  10. Perturbed Strong Stability Preserving Time-Stepping Methods For Hyperbolic PDEs

    KAUST Repository

    Hadjimichael, Yiannis

    2017-09-30

    A plethora of physical phenomena are modelled by hyperbolic partial differential equations, for which the exact solution is usually not known. Numerical methods are employed to approximate the solution to hyperbolic problems; however, in many cases it is difficult to satisfy certain physical properties while maintaining high order of accuracy. In this thesis, we develop high-order time-stepping methods that are capable of maintaining stability constraints of the solution, when coupled with suitable spatial discretizations. Such methods are called strong stability preserving (SSP) time integrators, and we mainly focus on perturbed methods that use both upwind- and downwind-biased spatial discretizations. Firstly, we introduce a new family of third-order implicit Runge–Kutta methods with arbitrarily large SSP coefficient. We investigate the stability and accuracy of these methods and we show that they perform well on hyperbolic problems with large CFL numbers. Moreover, we extend the analysis of SSP linear multistep methods to semi-discretized problems for which different terms on the right-hand side of the initial value problem satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain augmented monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding non-additive SSP linear multistep methods. Furthermore, we develop the first SSP linear multistep methods of order two and three with variable step size, and study their optimality. We describe an optimal step-size strategy and demonstrate the effectiveness of these methods on various one- and multi-dimensional problems. Finally, we establish necessary conditions
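
    The "convex combinations of forward Euler steps" structure on which SSP theory rests is easiest to see in the explicit SSPRK(3,3) method of Shu and Osher, sketched below. This is an illustrative example only, not the implicit or downwind-biased methods developed in the thesis:

```python
import numpy as np

def ssprk3(f, u0, t0, t1, nsteps):
    """SSPRK(3,3) in Shu-Osher form: each stage is a convex
    combination of forward Euler steps, so any monotonicity
    property of forward Euler carries over (with a reduced
    step-size restriction)."""
    u, t = np.asarray(u0, dtype=float), t0
    dt = (t1 - t0) / nsteps
    for _ in range(nsteps):
        u1 = u + dt * f(t, u)                                # Euler step
        u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))     # convex comb.
        u = u / 3 + 2 / 3 * (u2 + dt * f(t + dt / 2, u2))    # convex comb.
        t += dt
    return u

# sanity check on u' = -u, u(0) = 1, whose exact solution is exp(-t)
approx = ssprk3(lambda t, u: -u, 1.0, 0.0, 1.0, 100)
```

    For this method the SSP coefficient is 1, i.e. it is monotone under the same CFL restriction as forward Euler; the thesis concerns methods that enlarge that coefficient.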

  11. Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan

    2012-01-01

    In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation and their real-time performance is carried out. Two approaches based on Lyapunov’s method and the equal area criterion are analyzed. The results allow to determine...

  12. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real-time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention on some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...

  13. Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics

    Science.gov (United States)

    Guo, Qiang

    Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations in time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometers to several micrometers, while the spatial dimension is usually described in kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations on time and particle size, because the aerosol distribution changes strongly along the size direction and the wavelet technique can resolve this very efficiently. Daubechies' wavelets are considered in the study because they possess useful properties such as orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics. On the theoretical side, the global existence and uniqueness of

  14. The method of covariant symbols in curved space-time

    International Nuclear Information System (INIS)

    Salcedo, L.L.

    2007-01-01

    Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)

  15. Validation of a same-day real-time PCR method for screening of meat and carcass swabs for Salmonella

    DEFF Research Database (Denmark)

    Löfström, Charlotta; Krause, Michael; Josefsen, Mathilde Hartmann

    2009-01-01

    of the published PCR methods for Salmonella have been validated in collaborative studies. This study describes a validation including comparative and collaborative trials, based on the recommendations from the Nordic organization for validation of alternative microbiological methods (NordVal) of a same-day, non.... Partly based on results obtained in this study, the method has obtained NordVal approval for analysis of Salmonella in meat and carcass swabs. The PCR method was transferred to a production laboratory and the performance was compared with the BAX Salmonella test on 39 pork samples artificially contaminated with Salmonella. There was no significant difference in the results obtained by the two methods. Conclusion: The real-time PCR method for detection of Salmonella in meat and carcass swabs was validated in comparative and collaborative trials according to NordVal recommendations. The PCR method...

  16. A Novel Real-Time Reference Key Frame Scan Matching Method

    Directory of Open Access Journals (Sweden)

    Haytham Mohamed

    2017-05-01

Full Text Available Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by the simultaneous localization and mapping approach, using either local or global scan matching. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the broadcast process in video streaming. The algorithm falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, the mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results with very short computational time, indicating its potential for use in real-time systems.
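The point-to-point alignment step at the core of the record above can be sketched as follows. This is a hypothetical minimal example, not the authors' RKF implementation: a Kabsch (SVD) rigid-transform solve, wrapped in a brute-force iterative-closest-point loop; all data and names are invented.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch): maps paired src points onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP with brute-force nearest-neighbour pairing."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = best_rigid_transform(cur, dst[d2.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Synthetic "scan": an L-shaped wall and a rotated/shifted copy of it.
wall = np.array([[x, 0.0] for x in np.linspace(0.0, 2.0, 40)]
                + [[2.0, y] for y in np.linspace(0.05, 1.0, 20)])
th = 0.05
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
scan = wall @ R_true.T + np.array([0.10, -0.05])

# With known correspondences, one Kabsch step recovers the transform exactly.
R_est, t_est = best_rigid_transform(wall, scan)

# With unknown correspondences, ICP iterates pairing + Kabsch.
aligned = icp(wall, scan)
err_before = np.linalg.norm(wall - scan, axis=1).mean()
err_after = np.linalg.norm(aligned - scan, axis=1).mean()
```

The iterative pairing step is exactly what makes plain ICP costly and outlier-prone, which is the motivation the record gives for falling back to feature-based matching when linear features are available.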

  17. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Directory of Open Access Journals (Sweden)

    Alpo Värri

    2007-01-01

Full Text Available As we know, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an m-by-1 or 1-by-m array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition" (TFM-SVD). In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure, and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  18. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    Science.gov (United States)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition" (TFM-SVD). In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
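The TFM-SVD idea in the two records above can be sketched as follows. The exact layout of the "certain matrix with a fixed structure" is not specified in the abstracts, so this is a hypothetical arrangement: the first four statistical moments of the time series and of its spectrum stacked into a fixed 2x4 matrix, whose singular values form the feature vector.

```python
import numpy as np

def moments(x):
    """First four statistical moments of a 1-D array."""
    m, s = x.mean(), x.std()
    z = (x - m) / (s + 1e-12)
    return np.array([m, s, (z ** 3).mean(), (z ** 4).mean()])

def tfm_svd(signal):
    """Hypothetical TFM-SVD features: stack time-domain and frequency-domain
    moments into a fixed 2x4 matrix and return its singular values."""
    spec = np.abs(np.fft.rfft(signal))       # the "frequency series"
    M = np.vstack([moments(signal), moments(spec)])
    return np.linalg.svd(M, compute_uv=False)

t = np.linspace(0, 1, 500)
bcg_like = np.sin(2 * np.pi * 7 * t) + 0.3 * np.sin(2 * np.pi * 1.2 * t)
feats = tfm_svd(bcg_like)
```

Feeding the raw 1-by-n signal to SVD directly would yield a single value; the moment matrix yields min(2, 4) = 2 values, a fixed-length feature vector usable as classifier input, which is the point the abstracts make.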

  19. Methods for assessment of climate variability and climate changes in different time-space scales

    International Nuclear Information System (INIS)

    Lobanov, V.; Lobanova, H.

    2004-01-01

Indexes for classifying climate changes have been developed, which include: the statistical significance or non-significance of the change, the direction of the tendency when it is statistically significant, an assessment of its contribution, and the form of the tendency if it is sufficiently complex over time. In the detected homogeneous regions, spatial generalization is carried out using different approaches depending on the regularities of the spatial features: averaging, development of spatial distribution functions, or spatial simulation. A new spatial linear model has been developed and suggested, which includes two coefficients connected with the gradient and the level of the spatial field, and one parameter characterizing the internal inhomogeneity of the field. The last step of the suggested methodology is the use of the detected point and field climate changes for the determination of design hydrological values. Both traditional design characteristics (one random event per year) and new ones (POT, rare extremes, characteristics of cycles of climate variability), which can occur less or more often than once per year, have been chosen. Approaches and methods for using the detected climate changes in hydrological computations have been developed. The application of the developed methods is shown on examples of different hydrometeorological characteristics (floods, low flow, annual runoff, monthly and annual temperature and precipitation) in regions with different climatic conditions. (Author)

  20. THE EFFECT OF DECOMPOSITION METHOD AS DATA PREPROCESSING ON NEURAL NETWORKS MODEL FOR FORECASTING TREND AND SEASONAL TIME SERIES

    Directory of Open Access Journals (Sweden)

    Subanar Subanar

    2006-01-01

Full Text Available Recently, one of the central topics for the neural networks (NN) community is the issue of data preprocessing for the use of NN. In this paper, we investigate this topic, particularly the effect of the decomposition method as data preprocessing and the use of NN for effectively modeling time series with both trend and seasonal patterns. Limited empirical studies on seasonal time series forecasting with neural networks show that some find neural networks able to model seasonality directly, so that prior deseasonalization is not necessary, while others conclude just the opposite. In this research, we study in particular the effectiveness of data preprocessing, including detrending and deseasonalization by applying the decomposition method, on NN modeling and forecasting performance. We use two kinds of data: simulated and real. Simulated data are examined for multiplicative trend and seasonality patterns. The results are compared to those obtained from a classical time series model. Our results show that a combination of detrending and deseasonalization by applying the decomposition method is an effective data preprocessing step for the use of NN in forecasting trend and seasonal time series.
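The detrending-plus-deseasonalization preprocessing the record above studies can be sketched with a classical multiplicative decomposition. This is a generic textbook sketch, not the authors' code; the synthetic series and window handling are invented.

```python
import numpy as np

def decompose_multiplicative(y, period):
    """Classical decomposition: y = trend * seasonal * remainder."""
    n = len(y)
    # Centred moving average as the trend estimate.
    kernel = np.ones(period) / period
    if period % 2 == 0:                      # centre an even-length window
        kernel = np.convolve(kernel, [0.5, 0.5])
    trend = np.convolve(y, kernel, mode="same")
    half = len(kernel) // 2
    trend[:half] = trend[half]               # crude edge padding
    trend[-half:] = trend[-half - 1]
    detrended = y / trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal /= seasonal.mean()              # normalise seasonal indices to mean 1
    seasonal_full = np.tile(seasonal, n // period + 1)[:n]
    remainder = y / (trend * seasonal_full)
    return trend, seasonal_full, remainder

period = 12
t = np.arange(120)
y = (10 + 0.1 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / period))
trend, seas, rem = decompose_multiplicative(y, period)
preprocessed = y / (trend * seas)   # detrended + deseasonalised input for the NN
```

The NN is then trained on `preprocessed` (ideally close to 1 everywhere for this noise-free toy series), and forecasts are re-multiplied by the extrapolated trend and seasonal indices.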

  1. Rapid Quadrupole-Time-of-Flight Mass Spectrometry Method Quantifies Oxygen-Rich Lignin Compound in Complex Mixtures

    Science.gov (United States)

    Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.

    2018-03-01

Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
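The internal-standard calibration underlying quantitation like that in the record above can be sketched as follows. All numbers are invented for illustration; the real method's dopant and MS/MS steps are not modelled, only the final calibration arithmetic.

```python
import numpy as np

# Hypothetical calibration data: analyte/internal-standard response ratios
# measured at known eugenol concentrations (ng/mL). Values are invented.
conc = np.array([300., 600., 1000., 1500., 2000., 2500.])
ratio = np.array([0.31, 0.60, 1.02, 1.49, 2.05, 2.48])  # area(analyte)/area(IS)

slope, intercept = np.polyfit(conc, ratio, 1)   # ordinary least-squares line
pred = slope * conc + intercept
ss_res = ((ratio - pred) ** 2).sum()
ss_tot = ((ratio - ratio.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot                        # linearity of the calibration

def quantify(sample_ratio):
    """Invert the calibration line to get concentration in ng/mL."""
    return (sample_ratio - intercept) / slope

unknown = quantify(1.25)
```

Ratioing the analyte response to a co-analyzed internal standard is what cancels matrix effects and run-to-run drift, which is why the record's chromatography-free approach still quantifies reliably.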

  2. The Fourier decomposition method for nonlinear and non-stationary time series analysis.

    Science.gov (United States)

    Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik

    2017-03-01

For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of a zero-phase filter bank-based multivariate FDM (MFDM) for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for the MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparisons made with empirical mode decomposition algorithms.
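The zero-phase, band-limited decomposition idea in the record above can be sketched with ideal FFT masks. This is a simplified stand-in, not the authors' FDM (which chooses the bands adaptively); here the cut-off frequencies are fixed by hand, and the bands sum back to the original signal exactly.

```python
import numpy as np

def fourier_band_decompose(x, cutoffs, fs):
    """Split x into band-limited components ("FIBF-like" bands) using ideal
    zero-phase FFT masks; the masks partition the spectrum, so the bands
    sum back to the original signal."""
    n = len(x)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    edges = [0.0] + list(cutoffs) + [fs / 2 + 1]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)      # half-open: each bin once
        bands.append(np.fft.irfft(X * mask, n=n))
    return np.array(bands)

fs = 200.0
t = np.arange(1000) / fs
# Non-stationary toy signal: low-frequency trend plus a chirp.
x = 0.5 * t + np.sin(2 * np.pi * (5 * t + 8 * t ** 2))
bands = fourier_band_decompose(x, cutoffs=[2.0, 20.0], fs=fs)
recon = bands.sum(axis=0)
```

Because the masks are applied to the FFT and inverted, each band is exactly zero-phase: no group delay is introduced, which is the property the MFDM filter bank relies on for scale alignment.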

  3. Structured Feedback Training for Time-Out: Efficacy and Efficiency in Comparison to a Didactic Method.

    Science.gov (United States)

    Jensen, Scott A; Blumberg, Sean; Browning, Megan

    2017-09-01

Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though the findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. The findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.

  4. Modelled hydraulic redistribution by sunflower (Helianthus annuus L.) matches observed data only after including night-time transpiration.

    Science.gov (United States)

    Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele

    2014-04-01

    The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR. © 2013 John Wiley & Sons Ltd.

  5. Two-relaxation-time lattice Boltzmann method and its application to advective-diffusive-reactive transport

    Science.gov (United States)

    Yan, Zhifeng; Yang, Xiaofan; Li, Siliang; Hilpert, Markus

    2017-11-01

    The lattice Boltzmann method (LBM) based on single-relaxation-time (SRT) or multiple-relaxation-time (MRT) collision operators is widely used in simulating flow and transport phenomena. The LBM based on two-relaxation-time (TRT) collision operators possesses strengths from the SRT and MRT LBMs, such as its simple implementation and good numerical stability, although tedious mathematical derivations and presentations of the TRT LBM hinder its application to a broad range of flow and transport phenomena. This paper describes the TRT LBM clearly and provides a pseudocode for easy implementation. Various transport phenomena were simulated using the TRT LBM to illustrate its applications in subsurface environments. These phenomena include advection-diffusion in uniform flow, Taylor dispersion in a pipe, solute transport in a packed column, reactive transport in uniform flow, and bacterial chemotaxis in porous media. The TRT LBM demonstrated good numerical performance in terms of accuracy and stability in predicting these transport phenomena. Therefore, the TRT LBM is a powerful tool to simulate various geophysical and biogeochemical processes in subsurface environments.
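The TRT collision described in the record above can be sketched for its simplest case, pure diffusion of a scalar on a D1Q3 lattice. This is a hedged sketch, not the paper's pseudocode: the magic parameter Λ = 1/4 and the relation D = c_s²(τ⁻ − 1/2) follow common conventions in the TRT advection-diffusion literature.

```python
import numpy as np

# D1Q3 TRT lattice Boltzmann for pure diffusion of a scalar C(x, t).
# Velocities c = {0, +1, -1}; weights for cs^2 = 1/3.
w = np.array([2 / 3, 1 / 6, 1 / 6])
nx, steps = 200, 400
tau_minus = 0.8                  # sets D = cs^2 * (tau_minus - 1/2) = 0.1
Lam = 1 / 4                      # "magic" TRT parameter
tau_plus = Lam / (tau_minus - 0.5) + 0.5
om_p, om_m = 1 / tau_plus, 1 / tau_minus

C = np.exp(-0.5 * ((np.arange(nx) - nx / 2) / 5.0) ** 2)   # Gaussian pulse
mass0 = C.sum()
f = w[:, None] * C               # start at (zero-velocity) equilibrium

for _ in range(steps):
    C = f.sum(axis=0)
    eq = w[:, None] * C          # equilibrium at rest (no advection)
    # Symmetric/antisymmetric parts of the moving pair f[1] <-> f[2];
    # the antisymmetric equilibrium is zero for zero advection velocity.
    fs, fa = 0.5 * (f[1] + f[2]), 0.5 * (f[1] - f[2])
    es = 0.5 * (eq[1] + eq[2])
    f[0] -= om_p * (f[0] - eq[0])
    f[1] -= om_p * (fs - es) + om_m * fa
    f[2] -= om_p * (fs - es) - om_m * fa
    f[1] = np.roll(f[1], 1)      # stream +1 (periodic)
    f[2] = np.roll(f[2], -1)     # stream -1

C_final = f.sum(axis=0)
```

The collision conserves the scalar exactly (the equilibria sum to C), and the Gaussian spreads with variance growing as 2Dt: after 400 steps the peak should drop from 1.0 to roughly 5/sqrt(25 + 80) ≈ 0.49. Only the pair-wise symmetric/antisymmetric split distinguishes this from an SRT code, which is the implementation simplicity the record emphasizes.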

  6. Relation of exact Gaussian basis methods to the dephasing representation: Theory and application to time-resolved electronic spectra

    Science.gov (United States)

    Sulc, Miroslav; Hernandez, Henar; Martinez, Todd J.; Vanicek, Jiri

    2014-03-01

    We recently showed that the Dephasing Representation (DR) provides an efficient tool for computing ultrafast electronic spectra and that cellularization yields further acceleration [M. Šulc and J. Vaníček, Mol. Phys. 110, 945 (2012)]. Here we focus on increasing its accuracy by first implementing an exact Gaussian basis method (GBM) combining the accuracy of quantum dynamics and efficiency of classical dynamics. The DR is then derived together with ten other methods for computing time-resolved spectra with intermediate accuracy and efficiency. These include the Gaussian DR (GDR), an exact generalization of the DR, in which trajectories are replaced by communicating frozen Gaussians evolving classically with an average Hamiltonian. The methods are tested numerically on time correlation functions and time-resolved stimulated emission spectra in the harmonic potential, pyrazine S0 /S1 model, and quartic oscillator. Both the GBM and the GDR are shown to increase the accuracy of the DR. Surprisingly, in chaotic systems the GDR can outperform the presumably more accurate GBM, in which the two bases evolve separately. This research was supported by the Swiss NSF Grant No. 200021_124936/1 and NCCR Molecular Ultrafast Science & Technology (MUST), and by the EPFL.

  7. New Multigrid Method Including Elimination Algorithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    Science.gov (United States)

    Hano, Mitsuo; Hotta, Masashi

A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that the multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term, using a null space of the coefficient matrix, is also described. For a three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.
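The multigrid cycle structure the record describes (smooth, restrict to a lower level, solve there, correct, smooth again) can be sketched on a scalar stand-in. This is a generic two-grid example for 1D Poisson, not the paper's vector finite-element scheme; the exact coarse solve stands in for the ICCG bottom solver.

```python
import numpy as np

n = 64                                    # intervals; grid points 0..n, h = 1/n
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)        # -u'' = f  =>  u = sin(pi x)

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h ** 2
    return r

def gauss_seidel(u, f, h, sweeps=3):
    """Smoother: damps the high-frequency error components."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

def coarse_solve(rc, hc):
    """Exact solve on the coarse level (stand-in for the ICCG bottom solve)."""
    m = len(rc) - 2
    A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / hc ** 2
    e = np.zeros_like(rc)
    e[1:-1] = np.linalg.solve(A, rc[1:-1])
    return e

def two_grid(u, f, h, cycles=8):
    for _ in range(cycles):
        gauss_seidel(u, f, h)                               # pre-smooth
        r = residual(u, f, h)
        rc = np.zeros(n // 2 + 1)                           # restrict (full weighting)
        rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
        e = coarse_solve(rc, 2 * h)                         # coarse-grid correction
        u[::2] += e                                         # prolong: inject ...
        u[1:-1:2] += 0.5 * (e[:-1] + e[1:])                 # ... + linear interpolation
        gauss_seidel(u, f, h)                               # post-smooth
    return u

u = np.zeros(n + 1)
r0 = np.linalg.norm(residual(u, f, h))
u = two_grid(u, f, h)
r_final = np.linalg.norm(residual(u, f, h))
```

In the paper's setting, the "coarse" level is the same mesh discretized with low-order vector elements rather than a coarser mesh, but the cycle logic is the same.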

  8. Normalization methods in time series of platelet function assays

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham

    2016-01-01

Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from the high-dimensional data spaces of temporal multivariate clinical data, represented as multivariate time series, is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, and the most suitable approach for platelet function data series is discussed. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlation, as calculated by the Spearman correlation test, when normalized per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be abstracted from the charts, as was the case when using all data as one dataset for normalization. PMID:27428217
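The four normalizations the record compares, and the per-assay versus per-time-point distinction, can be sketched as follows. The toy data matrix is invented; the assay labels are only illustrative.

```python
import numpy as np

def z_transform(v):
    return (v - v.mean()) / v.std(ddof=1)

def range_transform(v):
    return (v - v.min()) / (v.max() - v.min())

def proportion(v):
    return v / v.sum()

def iqr_transform(v):
    q1, q2, q3 = np.percentile(v, [25, 50, 75])
    return (v - q2) / (q3 - q1)

# Toy platelet-function panel: rows = assays (tests), columns = time points.
data = np.array([[55., 60., 48., 30., 35., 52.],     # e.g. ADP aggregometry
                 [70., 72., 65., 40., 44., 68.],     # e.g. TRAP aggregometry
                 [12., 14., 10.,  6.,  7., 11.]])    # e.g. a ROTEM parameter

per_assay = np.apply_along_axis(z_transform, 1, data)  # per test, all time points
per_time = np.apply_along_axis(z_transform, 0, data)   # per time point, all tests
rt = range_transform(data[0])
pr = proportion(data[0])
iq = iqr_transform(data[0])
```

Per-assay normalization rescales each test against its own history, so the temporal pattern (and hence rank correlation between tests) survives; per-time-point normalization rescales across tests at each instant, which is why the record finds the correlation unreadable in that view.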

  9. Relative quantification of mRNA: comparison of methods currently used for real-time PCR data analysis

    Directory of Open Access Journals (Sweden)

    Koppel Juraj

    2007-12-01

Full Text Available Abstract Background Fluorescent data obtained from real-time PCR must be processed by some method of data analysis to obtain the relative quantity of target mRNA. The method chosen for data analysis can strongly influence the results of the quantification. Results To compare the performance of six techniques currently used for analysing fluorescent data in real-time PCR relative quantification, we quantified four cytokine transcripts (IL-1β, IL-6, TNF-α, and GM-CSF) in an in vivo model of colonic inflammation. Accuracy of the methods was tested by quantification on samples with known relative amounts of target mRNAs. Reproducibility of the methods was estimated by determination of the intra-assay and inter-assay variability. Cytokine expression normalized to the expression of three reference genes (ACTB, HPRT, SDHA) was then determined using the six methods for data analysis. The best results were obtained with the relative standard curve method, the comparative Ct method, and with the DART-PCR, LinRegPCR and Liu & Saint exponential methods when average amplification efficiency was used. The use of individual amplification efficiencies in the DART-PCR, LinRegPCR and Liu & Saint exponential methods significantly impaired the results. The sigmoid curve-fitting (SCF) method produced medium performance; the results indicate that the use of an appropriate type of fluorescence data and, in some instances, manual selection of the number of amplification cycles included in the analysis is necessary when the SCF method is applied. We also compared amplification efficiencies (E) and found that although the E values determined by the different methods of analysis were not identical, all the methods were capable of identifying two genes whose E values significantly differed from those of the other genes. Conclusion Our results show that all the tested methods can provide quantitative values reflecting the amounts of measured mRNA in samples, but they differ in their accuracy and
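The comparative Ct method mentioned among the six techniques above reduces to a few lines of arithmetic. The sketch below uses the standard ΔΔCt formula with invented Ct values; the idealised efficiency of 2.0 (perfect doubling per cycle) is the usual textbook assumption.

```python
def ddct_ratio(ct_target_sample, ct_ref_sample, ct_target_cal, ct_ref_cal,
               efficiency=2.0):
    """Comparative Ct: relative expression of the target in a sample versus a
    calibrator, normalised to a reference gene; assumes equal amplification
    efficiency for target and reference."""
    dct_sample = ct_target_sample - ct_ref_sample
    dct_cal = ct_target_cal - ct_ref_cal
    return efficiency ** -(dct_sample - dct_cal)

# Toy data: IL-6 vs ACTB, inflamed tissue vs control (Ct values invented).
ratio = ddct_ratio(ct_target_sample=22.0, ct_ref_sample=18.0,
                   ct_target_cal=26.0, ct_ref_cal=19.0)
# ddCt = (22-18) - (26-19) = -3, so the ratio is 2^3 = 8-fold up-regulation
```

The record's finding that average efficiencies outperform per-reaction efficiencies corresponds to fixing `efficiency` per gene rather than re-estimating it from each amplification curve.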

  10. Nonlinear Time Reversal Acoustic Method of Friction Stir Weld Assessment, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the project is demonstration of the feasibility of Friction Stir Weld (FSW) assessment by novel Nonlinear Time Reversal Acoustic (TRA) method. Time...

  11. An integration time adaptive control method for atmospheric composition detection of occultation

    Science.gov (United States)

    Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin

    2018-01-01

When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the roughly 180 seconds of an occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric attenuation, and the change can be as fast as a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. Using the distribution of gray values in the image as the reference variable, together with the concept of speed-integral PID control, the method solves the adaptive control problem of integration time for high-frequency imaging. Automatic control of integration time over the large dynamic range encountered during occultation can thus be achieved.
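The closed loop the record describes can be sketched with a toy plant and a PI controller. This is a hypothetical illustration, not the paper's controller: the gains, the set point, and the linear sensor model (mean gray value proportional to illumination times integration time, saturating at 255) are all invented.

```python
# Toy closed-loop sketch: a PI (speed/integral) controller adjusts the camera
# integration time so the mean gray value tracks a set point while the scene
# illumination changes abruptly, as during an occultation.
def mean_gray(illum, t_int):
    return min(255.0, illum * t_int)     # 8-bit sensor saturates at 255

target = 128.0                           # desired mean gray value
kp, ki = 0.002, 0.001                    # hypothetical controller gains
t_int, integ = 0.5, 0.0                  # integration time and integral state
history = []

for step in range(300):
    illum = 100.0 if step < 150 else 200.0   # sudden brightening mid-run
    g = mean_gray(illum, t_int)
    err = target - g
    integ += err
    t_int = max(1e-4, t_int + kp * err + ki * integ)   # PI update, t_int > 0
    history.append(g)
```

After the illumination doubles at step 150, the frame briefly saturates and the controller halves the integration time to restore the target gray level, which is the behaviour required as attenuation drops during an occultation.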

  12. Rapid expansion method (REM) for time‐stepping in reverse time migration (RTM)

    KAUST Repository

    Pestana, Reynam C.

    2009-01-01

We show that the wave equation solution using a conventional finite‐difference scheme, derived commonly by the Taylor series approach, can be derived directly from the rapid expansion method (REM). After some mathematical manipulation we consider an analytical approximation for the Bessel function where we assume that the time step is sufficiently small. From this derivation we find that if we consider only the first two Chebyshev polynomial terms in the rapid expansion method we can obtain the second-order time finite‐difference scheme that is frequently used in more conventional finite‐difference implementations. We then show that if we use more terms from the REM we can obtain a more accurate time integration of the wave field. Consequently, we have demonstrated that the REM is more accurate than the usual finite‐difference schemes and it provides a wave equation solution which allows us to march in large time steps without numerical dispersion and is numerically stable. We illustrate the method with post- and pre-stack migration results.
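The truncation argument in the record above can be sketched as follows. This is a hedged reconstruction following the REM literature; conventions for the modified Chebyshev polynomials Q_{2k} and the scaling R (an upper bound on the spectral radius of √L, with L = −c²∇²) vary between papers.

```latex
% Exact two-step identity for the wave equation u_tt = -L u:
u(t+\Delta t) + u(t-\Delta t) = 2\cos\!\left(\Delta t\,\sqrt{L}\right) u(t)
% REM: Chebyshev expansion of the cosine propagator, with c_0 = 1,
% c_{2k} = 2 for k >= 1, and J_{2k} Bessel functions of the first kind:
\cos\!\left(\Delta t\,\sqrt{L}\right)
  \approx \sum_{k=0}^{M} c_{2k}\, J_{2k}(R\,\Delta t)\, Q_{2k}\!\left(\sqrt{L}/R\right)
% Keeping only the first two terms, with small-argument Bessel approximations:
\cos\!\left(\Delta t\,\sqrt{L}\right) \approx 1 - \tfrac{1}{2}\,\Delta t^{2} L
\quad\Longrightarrow\quad
u(t+\Delta t) \approx 2\,u(t) - u(t-\Delta t) - \Delta t^{2}\, L\, u(t)
```

The last line is the familiar second-order finite-difference time step; retaining more expansion terms tightens the approximation of the cosine propagator, which is what permits the large, dispersion-free time steps the record claims.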

  13. Summary of Time Period-Based and Other Approximation Methods for Determining the Capacity Value of Wind and Solar in the United States: September 2010 - February 2012

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, J.; Porter, K.

    2012-03-01

    This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak - sometimes over a period of months or the average of multiple years.
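The time-period approximation the record summarizes, measuring a plant's contribution during the highest-load hours, can be sketched as follows. The load shape, the generator output, and the choice of 100 peak hours are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
# Synthetic hourly system load (MW), peaking near hour 17 each day.
load = (800 + 200 * np.sin(2 * np.pi * (np.arange(hours) % 24 - 17) / 24)
        + rng.normal(0, 30, hours))
# Synthetic hourly output (MW) of a 100 MW wind plant.
wind = rng.uniform(0, 100, hours)

def top_load_hour_capacity_value(load, gen, nameplate, n_hours=100):
    """Time-period approximation: average output during the n highest-load
    hours, expressed as a fraction of nameplate capacity."""
    peak_idx = np.argsort(load)[-n_hours:]
    return gen[peak_idx].mean() / nameplate

cv = top_load_hour_capacity_value(load, wind, nameplate=100.0)
```

Because the synthetic wind output here is uncorrelated with load, the estimate lands near the plant's average capacity factor; for real plants the correlation (or lack of it) between output and peak-load hours is exactly what these approximation methods try to capture.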

  14. Odd time formulation of the Batalin-Vilkovisky method of quantization

    International Nuclear Information System (INIS)

    Dayi, O.F.

    1988-08-01

    By using a Grassmann odd parameter which behaves like time, it is shown that the main features of the Batalin-Fradkin method of quantization of reducible gauge theories can be formulated systematically. (author). 6 refs

  15. Composite material including nanocrystals and methods of making

    Science.gov (United States)

    Bawendi, Moungi G.; Sundar, Vikram C.

    2010-04-06

    Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.

  16. Time motion study using mixed methods to assess service delivery by frontline health workers from South India: methods.

    Science.gov (United States)

    Singh, Samiksha; Upadhyaya, Sanjeev; Deshmukh, Pradeep; Dongre, Amol; Dwivedi, Neha; Dey, Deepak; Kumar, Vijay

    2018-04-02

In India, amidst the increasing number of health programmes, there are concerns about the performance of frontline health workers (FLHWs). We assessed the time utilisation and factors affecting the work of frontline health workers from South India. This is a mixed-methods study using time-and-motion (TAM) direct observations and qualitative enquiry among frontline/community health workers. These included 43 female and 6 male multipurpose health workers (namely, auxiliary nurse midwives (ANMs) and male-MPHWs), 12 nutrition and health workers (Anganwadi workers, AWWs) and 53 incentive-based community health workers (accredited social health activists, ASHAs). We conducted the study in two phases. In the formative phase, we conducted an in-depth inductive investigation to develop observation checklists and qualitative tools. The main study used a deductive approach for the TAM observations. This enabled us to observe a larger sample to capture variations across non-tribal and tribal regions and different health cadres. For the main study, we developed a GPRS-enabled Android-based application to precisely record time, multi-tasking and field movement. We conducted non-participatory direct observations (home to home) for 6 consecutive days for each participant. We conducted in-depth interviews with all the participants and 33 of their supervisors and relevant officials. We conducted six focus group discussions (FGDs) with ASHAs and one FGD with ANMs to validate preliminary findings. We established a mechanism for quality assurance of data collection and analysis. We analysed the data separately for each cadre and stratified for non-tribal and tribal regions. On any working day, the ANMs spent a median of 7:04 h, the male-MPHWs a median of 5:44 h and the AWWs a median of 6:50 h on the job. The time spent on the job was lower among the FLHWs from tribal areas than among those from non-tribal areas. ANMs and AWWs prioritised maternal and child health, while male-MPHWs were

  17. Study of time resolution by digital methods with a DRS4 module

    Science.gov (United States)

Du, Cheng-Ming; Chen, Jin-Da; Zhang, Xiu-Ling; Yang, Hai-Bo; Cheng, Ke; Kong, Jie; Hu, Zheng-Guo; Sun, Zhi-Yu; Su, Hong; Xu, Hu-Shan

    2016-04-01

A new Digital Pulse Processing (DPP) module has been developed, based on the domino ring sampler version 4 (DRS4) chip, with good time resolution for LaBr3 detectors, and different digital timing analysis methods for processing the raw detector signals are reported. The module, built around an eight-channel DRS4 chip, was used as the readout electronics and acquisition system to process the output signals from XP20D0 photomultiplier tubes (PMTs). Two PMTs were coupled with LaBr3 scintillators and placed on opposite sides of a 22Na positron source for 511 keV γ-ray tests. By analyzing the raw data acquired by the module, the best coincidence timing resolution is about 194.7 ps (FWHM), obtained by the digital constant fraction discrimination (dCFD) method, which is better than the other digital methods and the conventional analog analysis methods tested. The results indicate that this is a promising approach for better localizing positron annihilation in time-of-flight (TOF) positron emission tomography (PET), as well as for scintillation timing measurement with picosecond accuracy, such as in TOF-ΔE and TOF-E systems for particle identification. Furthermore, this module is simpler and more convenient than other systems. Supported by the Science Foundation of the Chinese Academy of Sciences (210340XBO), National Natural Science Foundation of China (11305233, 11205222), General Program of National Natural Science Foundation of China (11475234), Specific Fund of National Key Scientific Instrument and Equipment Development Project (2011YQ12009604) and Joint Fund for Research Based on Large-Scale Scientific Facilities (U1532131).
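The dCFD technique that gave the record's best timing resolution can be sketched on synthetic samples. This is a generic illustration, not the authors' code: the pulse shape, sampling rate, fraction, and delay are invented; the key property demonstrated is that the zero-crossing time is independent of pulse amplitude.

```python
import numpy as np

def dcfd_time(samples, dt, fraction=0.3, delay_samples=4):
    """Digital constant-fraction discrimination: form the attenuated pulse
    minus a delayed copy and locate its zero crossing, refined by linear
    interpolation between the two straddling samples."""
    delayed = np.concatenate([np.zeros(delay_samples),
                              samples[:-delay_samples]])
    bipolar = fraction * samples - delayed
    for i in range(1, len(bipolar)):
        if bipolar[i - 1] > 0 >= bipolar[i]:        # + to - crossing
            frac = bipolar[i - 1] / (bipolar[i - 1] - bipolar[i])
            return (i - 1 + frac) * dt
    return None

dt = 0.2                                   # ns per sample (5 GS/s digitizer)
t = np.arange(200) * dt
pulse = np.exp(-0.5 * ((t - 12.0) / 2.0) ** 2)        # toy scintillator pulse

t_a = dcfd_time(pulse, dt)
t_b = dcfd_time(4.0 * pulse, dt)                      # same shape, 4x amplitude
shifted = np.exp(-0.5 * ((t - 14.0) / 2.0) ** 2)      # same pulse, 2 ns later
t_c = dcfd_time(shifted, dt)
```

Because the crossing point of `fraction * pulse - delayed_pulse` depends only on the pulse shape, amplitude walk cancels, which is why dCFD outperforms a fixed threshold for coincidence timing.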

  18. System and method for time synchronization in a wireless network

    Science.gov (United States)

    Gonia, Patrick S.; Kolavennu, Soumitri N.; Mahasenan, Arun V.; Budampati, Ramakrishna S.

    2010-03-30

    A system includes multiple wireless nodes forming a cluster in a wireless network, where each wireless node is configured to communicate and exchange data wirelessly based on a clock. One of the wireless nodes is configured to operate as a cluster master. Each of the other wireless nodes is configured to (i) receive time synchronization information from a parent node, (ii) adjust its clock based on the received time synchronization information, and (iii) broadcast time synchronization information based on the time synchronization information received by that wireless node. The time synchronization information received by each of the other wireless nodes is based on time synchronization information provided by the cluster master so that the other wireless nodes substantially synchronize their clocks with the clock of the cluster master.
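
    The parent-to-child propagation described above can be sketched as a breadth-first walk over a spanning tree of the cluster. This toy model (hypothetical topology and clock values; propagation delay and clock drift are ignored) shows only how the cluster master's clock value reaches every node:

```python
def propagate_sync(tree, clocks, master):
    """Propagate the master's clock down a spanning tree of the cluster.

    tree   : dict mapping parent -> list of child node ids
    clocks : dict mapping node id -> current local clock value
    Each node sets its clock to the sync value received from its parent and
    rebroadcasts it, so all nodes converge on the cluster master's clock.
    """
    synced = dict(clocks)
    queue = [master]
    while queue:
        parent = queue.pop(0)
        for child in tree.get(parent, []):
            synced[child] = synced[parent]  # adjust clock from parent's broadcast
            queue.append(child)
    return synced

clocks = {"master": 1000.0, "a": 990.0, "b": 1012.5, "c": 997.0}
tree = {"master": ["a", "b"], "a": ["c"]}
synced = propagate_sync(tree, clocks, "master")
```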

  19. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    Science.gov (United States)

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 were validated on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
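
    The repeatability/reproducibility split can be illustrated with a simple balanced one-way layout. The sketch below uses plain ANOVA-style estimators as a stand-in for the REML analysis in the paper; the measurement values are invented:

```python
from statistics import mean, variance

def rsd_r_and_rsd_R(groups):
    """Repeatability (within-day) and reproducibility (within- plus
    between-day) relative standard deviations from a balanced layout.

    One-way ANOVA estimator as a simple stand-in for REML, using
    E[var(day means)] = var_between + var_within / n_replicates.
    """
    n = len(groups[0])                                   # replicates per day
    grand = mean(x for g in groups for x in g)
    var_within = mean(variance(g) for g in groups)       # repeatability variance
    var_between = max(0.0, variance([mean(g) for g in groups]) - var_within / n)
    return (var_within ** 0.5) / grand, ((var_within + var_between) ** 0.5) / grand

# invented per-day replicate measurements (e.g. % GM content on 3 PCR days)
groups = [[0.92, 0.95, 0.90], [1.01, 0.98, 1.02], [0.88, 0.91, 0.90]]
rsd_r, rsd_R = rsd_r_and_rsd_R(groups)
```

    By construction RSD(R) is never smaller than RSD(r); the gap between them reflects the day-to-day (and, in the paper, DNA-isolation) components.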

  20. SU-F-J-86: Method to Include Tissue Dose Response Effect in Deformable Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, J; Liang, J; Chen, S; Qin, A; Yan, D [Beaumont Health System, Royal Oak, MI (United States)

    2016-06-15

    Purpose: Organs change shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, dose-response-induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed that includes both tissue elasticity and radiation-dose-induced organ deformation. Methods: Assuming that organ sub-volume shrinkage is proportional to radiation-dose-induced cell killing/absorption, the dose-induced organ volume change was simulated by applying a virtual temperature to each sub-volume. Hence, both mechanical stress and the heterogeneous temperature field induce organ deformation. A thermal-stress finite element method with an organ-surface boundary condition was used to solve for the deformation. The initial boundary correspondence on the organ surface was created from conventional DIR. The boundary condition was updated by an iterative optimization scheme to minimize the elastic deformation energy. The registration was validated on a numerical phantom. Treatment dose was constructed applying both the conventional DIR and the proposed method using daily CBCT images obtained from a head-and-neck (HN) treatment. Results: The phantom study showed 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, the sub-volume displacement difference in a right parotid had mean±SD (min, max) of 1.1±0.9 (−0.4∼4.8), −0.1±0.9 (−2.9∼2.4) and −0.1±0.9 (−3.4∼1.9) mm in the RL/PA/SI directions, respectively. Mean parotid dose and V30 constructed including the dose-response-induced shrinkage were 6.3% and 12.0% higher than those from the conventional DIR. Conclusion: Heterogeneous dose distribution in a normal organ causes non-uniform sub-volume shrinkage. A sub-volume in the high-dose region shrinks more than one in the low-dose region, causing more sub-volumes to move into the high-dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ.

  1. Time change

    DEFF Research Database (Denmark)

    Veraart, Almut; Winkel, Matthias

    2010-01-01

    The mathematical operation of time-changing continuous-time stochastic processes can be regarded as a standard method for building financial models. We briefly review the theory on time-changed stochastic processes and relate them to stochastic volatility models in finance. Popular models, including time-changed Lévy processes, where the time-change process is given by a subordinator or an absolutely continuous time change, are presented. Finally, we discuss the potential and the limitations of using such processes for constructing multivariate financial models.
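
    A standard example of such a time change is Brownian motion subordinated by a gamma process, the variance-gamma construction. A minimal simulation sketch with illustrative, uncalibrated parameters:

```python
import random

def variance_gamma_path(T=1.0, n=500, theta=0.0, sigma=0.2, nu=0.1, seed=42):
    """Simulate X_t = B_{G_t}: Brownian motion with drift theta and
    volatility sigma evaluated at a gamma subordinator G_t with unit mean
    rate and variance rate nu -- the classic time-changed Levy process.
    Parameters here are illustrative, not calibrated to any market.
    """
    rng = random.Random(seed)
    dt = T / n
    x, path = 0.0, [0.0]
    for _ in range(n):
        dg = rng.gammavariate(dt / nu, nu)        # gamma time increment, mean dt
        x += theta * dg + sigma * (dg ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = variance_gamma_path()
```

    Replacing the gamma subordinator with an integrated activity-rate process gives the absolutely continuous time changes mentioned in the abstract.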

  2. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method that adapts to the speed of the signal change. The method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. It can be used in nuclear electronics for research into the characteristics of detectors and the shape of the pulse signal, the pulse and transient characteristics of inertial signal-processing systems, etc.
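
    One way to realize sampling that adapts to the speed of signal change is to emit a sample only when the signal has moved by more than a threshold since the last kept sample, so the effective sampling rate follows the derivative. This is an illustrative digital sketch of that idea, not the paper's integrate-and-differentiate circuit:

```python
def adaptive_sample(signal, dt, delta=0.25):
    """Keep a sample whenever the signal has changed by at least `delta`
    since the last kept sample; fast-changing segments are sampled densely,
    flat segments hardly at all."""
    kept = [(0.0, signal[0])]
    last = signal[0]
    for i in range(1, len(signal)):
        if abs(signal[i] - last) >= delta:
            kept.append((i * dt, signal[i]))
            last = signal[i]
    return kept

# flat segment followed by a ramp: samples concentrate on the ramp
signal = [0.0] * 10 + [0.1 * i for i in range(1, 11)]
kept = adaptive_sample(signal, dt=1e-3)
```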

  3. Time coder for slow neutron time-of-flight spectrometer

    International Nuclear Information System (INIS)

    Grashilin, V.A.; Ofengenden, R.G.

    1988-01-01

    A time coder for a slow neutron time-of-flight spectrometer is described. The time coder has a modular structure, is implemented in the CAMAC standard and operates on-line with a DVK-2 computer. The main coder units include a supporting generator, timers, a time-to-digital converter, a memory unit and a crate controller. A method of measuring the background symmetrically to the effect is proposed for more accurate background accounting. 4 refs.; 1 fig

  4. Development and evaluation of a real-time method for testing human enteroviruses and coxsackievirus A16.

    Science.gov (United States)

    Chen, Qian; Hu, Zheng; Zhang, Qihua; Yu, Minghui

    2016-05-01

    Hand, foot, and mouth disease (HFMD) is a common infectious disease caused by a group of the human enteroviruses (HEV), including coxsackievirus A16 (CA16) and enterovirus 71 (EV71). In recent years, other HEV-A serotypes, CA6 and CA10, have emerged as major etiologic agents of HFMD worldwide. The objective of this study was to develop specific, sensitive, and rapid methods for diagnosing HEV generally and CA16 specifically, using simultaneous amplification testing (SAT) based on isothermal amplification of RNA with real-time fluorescence detection; the assays were named SAT-HEV and SAT-CA16, respectively (SAT-HEV/SAT-CA16). The specificity and sensitivity of SAT were tested here. SAT-HEV/SAT-CA16 could measure viral titers at least 10-fold lower than those measured by real-time PCR. The absence of false cross-reactive amplification indicated that SAT-HEV/SAT-CA16 were highly specific with the addition of internal control (IC) RNA (5000 copies/reaction). A total of 198 clinical specimens were assayed by SAT in comparison with real-time PCR. Agreement between SAT-HEV and HEV-specific real-time PCR plus sequencing reached 99.0% (196/198), with a kappa value of 0.97; for CA16 it reached 99.5% (197/198), with a kappa value of 0.99. Additionally, the IC prevented false-negative readings and assured the accuracy of the SAT-HEV/SAT-CA16 method. Overall, the SAT-HEV/SAT-CA16 method may serve as a platform for the simple and rapid detection of HEV/CA16 in times of HFMD outbreaks. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Trotting Gait of a Quadruped Robot Based on the Time-Pose Control Method

    Directory of Open Access Journals (Sweden)

    Cai RunBin

    2013-02-01

    We present the Time-Pose control method for the trotting gait of a quadruped robot on flat ground and up a slope. The method, with a simple control structure, real-time operation ability and high adaptability, divides quadruped robot control into gait control and pose control. A virtual leg and intuitive controllers are introduced to simplify the model and generate the trajectory of the mass centre and the locations of the supporting legs in gait control, while redundancy optimization is used for solving the inverse kinematics in pose control. The models both on flat ground and up a slope are fully analysed, and different kinds of optimization methods are compared using the manipulability measure in order to select the best option. Simulations are performed, which show that the Time-Pose control method is realizable for these two kinds of environment.

  6. Quantitative analysis of biological responses to low dose-rate γ-radiation, including dose, irradiation time, and dose-rate

    International Nuclear Information System (INIS)

    Magae, J.; Furukawa, C.; Kawakami, Y.; Hoshi, Y.; Ogata, H.

    2003-01-01

    Because biological responses to radiation are complex processes dependent on irradiation time as well as total dose, it is necessary to consider dose, dose-rate and irradiation time simultaneously to predict the risk of low dose-rate irradiation. In this study, we analyzed the quantitative relationship among dose, irradiation time and dose-rate, using chromosomal breakage and proliferation inhibition of human cells. To evaluate chromosome breakage we assessed micronuclei induced by radiation. U2OS cells, a human osteosarcoma cell line, were exposed to gamma rays in an irradiation room bearing a 50,000 Ci 60Co source. After the irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, the cytoplasm and nucleus were stained with DAPI and propidium iodide, and the number of binuclear cells bearing micronuclei was determined by fluorescence microscopy. For proliferation inhibition, cells were cultured for 48 h after the irradiation and [3H]thymidine was pulsed for 4 h before harvesting. Dose-rate in the irradiation room was measured with a photoluminescence dosimeter. While irradiation times of less than 24 h did not affect the dose-response curves for either biological response, the curves were remarkably attenuated as exposure time increased to more than 7 days. These biological responses were dependent on dose-rate rather than dose when cells were irradiated for 30 days. Moreover, the percentage of micronucleus-forming cells cultured continuously for more than 60 days at a constant dose-rate gradually decreased in spite of the total dose accumulation. These results suggest that biological responses at low dose-rate are remarkably affected by exposure time, that they are dependent on dose-rate rather than total dose in the case of long-term irradiation, and that cells become resistant to radiation after continuous irradiation for 2 months. It is necessary to include the effects of irradiation time and dose-rate sufficiently to evaluate risk.

  7. Standard Test Method for Gel Time of Carbon Fiber-Epoxy Prepreg

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This test method covers the determination of gel time of carbon fiber-epoxy tape and sheet. The test method is suitable for the measurement of gel time of resin systems having either high or low viscosity. 1.2 The values stated in SI units are to be regarded as standard. The values in parentheses are for reference only. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  8. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    Science.gov (United States)

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for an autonomous underwater vehicle (AUV) is a difficult and challenging task. The bioinspired neural network (BINN) has been used for this problem because of its distinct advantages: no learning process is needed and implementation is easy. However, BINN has shortcomings when applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational burden when the environment is very large and a repeated-path problem when obstacles are bigger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In the proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors. The BINN then moves with the AUV, which reduces the computation. A virtual target is introduced in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computational efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for an AUV efficiently.
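
    The shunting-equation model that BINN planners build on can be sketched on a small 2-D grid: the target injects excitation, obstacles inject inhibition, activity spreads through lateral connections, and the path simply climbs the activity gradient. This is a toy illustration of the model family with invented parameters, not the authors' improved dynamic BINN:

```python
def binn_plan(grid, start, target, A=10.0, B=1.0, D=1.0, E=100.0,
              dt=0.005, iters=400):
    """Minimal grid-world sketch of a bioinspired neural network planner.

    Each cell is a neuron obeying a discretized shunting equation
        dx/dt = -A*x + (B - x)*excitation - (D + x)*inhibition,
    with excitation E injected at the target and inhibition E at obstacle
    cells (grid value 1). After the activity landscape settles, the path
    climbs the activity gradient from start to target.
    """
    rows, cols = len(grid), len(grid[0])

    def nbrs(r, c):
        return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols]

    x = [[0.0] * cols for _ in range(rows)]
    for _ in range(iters):
        nx = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                exc = sum(max(x[rr][cc], 0.0) for rr, cc in nbrs(r, c))
                exc += E if (r, c) == target else 0.0
                inh = E if grid[r][c] else 0.0
                nx[r][c] = x[r][c] + dt * (-A * x[r][c]
                                           + (B - x[r][c]) * exc
                                           - (D + x[r][c]) * inh)
        x = nx
    path, cur = [start], start
    while cur != target and len(path) < rows * cols:
        cur = max(nbrs(*cur), key=lambda p: x[p[0]][p[1]])
        path.append(cur)
    return path

free = [[0] * 5 for _ in range(5)]          # 5x5 world with no obstacles
route = binn_plan(free, start=(0, 0), target=(4, 4))
```

    The paper's contribution is to keep such a network small by centering it on the AUV and sizing it to the sensor range, rather than covering the whole environment as above.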

  9. Real time alpha value measurement with Feynman-α method utilizing time series data acquisition on low enriched uranium system

    International Nuclear Information System (INIS)

    Tonoike, Kotaro; Yamamoto, Toshihiro; Watanabe, Shoichi; Miyoshi, Yoshinori

    2003-01-01

    As a part of the development of a subcriticality monitoring system, a system which has a time series data acquisition function of detector signals and a real time evaluation function of alpha value with the Feynman-alpha method was established, with which the kinetic parameter (alpha value) was measured at the STACY heterogeneous core. The Hashimoto's difference filter was implemented in the system, which enables the measurement at a critical condition. The measurement result of the new system agreed with the pulsed neutron method. (author)
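
    The Feynman-α relation behind such a measurement links the variance-to-mean excess Y of gate counts to the gate width T via Y(T) = Y∞(1 − (1 − e^(−αT))/(αT)). A sketch of the fitting step on synthetic, noiseless data, with a grid search standing in for a real least-squares fitter:

```python
import math

def feynman_y(T, alpha, y_inf):
    """Feynman-alpha variance-to-mean excess for a counting gate of width T."""
    return y_inf * (1.0 - (1.0 - math.exp(-alpha * T)) / (alpha * T))

def fit_alpha(gates, ys, y_inf, candidates):
    """Least-squares grid search over candidate alpha values (a simple
    stand-in for a proper nonlinear fitter)."""
    def sse(a):
        return sum((feynman_y(T, a, y_inf) - y) ** 2 for T, y in zip(gates, ys))
    return min(candidates, key=sse)

gates = [0.001 * k for k in range(1, 21)]            # gate widths, s
alpha_true = 250.0                                   # decay constant, 1/s
ys = [feynman_y(T, alpha_true, 1.0) for T in gates]  # ideal noiseless data
alpha_hat = fit_alpha(gates, ys, 1.0, [float(a) for a in range(50, 501)])
```

    In the real system the Y values come from bunching the time-series count data at many gate widths, and the difference filter cited in the abstract corrects the statistics near criticality.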

  10. A real-time spike sorting method based on the embedded GPU.

    Science.gov (United States)

    Zelan Yang; Kedi Xu; Xiang Tian; Shaomin Zhang; Xiaoxiang Zheng

    2017-07-01

    Microelectrode arrays with hundreds of channels have been widely used to acquire neuron population signals in neuroscience studies. Online spike sorting is becoming one of the most important challenges for high-throughput neural signal acquisition systems. A graphics processing unit (GPU), with its high parallel computing capability, might provide an alternative solution for the increasing real-time computational demands of spike sorting. This study reports a method of real-time spike sorting through the compute unified device architecture (CUDA), implemented on an embedded GPU (NVIDIA JETSON Tegra K1, TK1). The sorting approach is based on principal component analysis (PCA) and K-means. By analyzing the parallelism of each process, the method was further optimized for the thread and memory model of the GPU. Our results showed that the GPU-based classifier on the TK1 is 37.92 times faster than a MATLAB-based classifier on a PC with identical accuracy. The high-performance computing features of the embedded GPU demonstrated in our studies suggest that embedded GPUs provide a promising platform for real-time neural signal processing.
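
    The PCA-plus-K-means pipeline can be sketched without any GPU machinery. The sketch below uses power iteration for the principal components and Lloyd's algorithm for clustering; the spike shapes are invented, and this illustrates only the algorithmic core, not the CUDA implementation:

```python
import random

def pca_scores(X, n_comp=2, iters=200, seed=0):
    """Project waveforms onto their leading principal components using
    power iteration with deflation (a tiny stand-in for a library PCA)."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    mu = [sum(col) / n for col in zip(*X)]
    Xc = [[v - m for v, m in zip(row, mu)] for row in X]
    comps = []
    for _ in range(n_comp):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        for _ in range(iters):
            s = [sum(r[j] * v[j] for j in range(d)) for r in Xc]          # X v
            w = [sum(s[i] * Xc[i][j] for i in range(n)) for j in range(d)]  # X^T X v
            for c in comps:                        # deflate earlier components
                p = sum(a * b for a, b in zip(w, c))
                w = [a - p * b for a, b in zip(w, c)]
            norm = sum(a * a for a in w) ** 0.5 or 1.0
            v = [a / norm for a in w]
        comps.append(v)
    return [[sum(a * b for a, b in zip(row, c)) for c in comps] for row in Xc]

def kmeans_labels(points, k=2, iters=50):
    """Plain Lloyd's algorithm; first and last points seed the centers."""
    centers = [list(points[0]), list(points[-1])]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda ci: sum((a - b) ** 2
                                         for a, b in zip(pt, centers[ci])))
                  for pt in points]
        for ci in range(k):
            mem = [pt for pt, lab in zip(points, labels) if lab == ci]
            if mem:
                centers[ci] = [sum(col) / len(mem) for col in zip(*mem)]
    return labels

# two hypothetical spike templates with small deterministic perturbations
a, b = [0, 2, 5, 2, 0, -1], [0, -1, -4, -2, 0, 1]
waves = ([[v + 0.1 * i for v in a] for i in range(3)] +
         [[v - 0.1 * i for v in b] for i in range(3)])
labels = kmeans_labels(pca_scores(waves))
```

    The GPU version parallelizes exactly these dense matrix products and distance computations across channels and waveforms.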

  11. Nanosecond time-resolved EPR in pulse radiolysis via the spin echo method

    International Nuclear Information System (INIS)

    Trifunac, A.D.; Norris, J.R.; Lawler, R.G.

    1979-01-01

    The design and operation of a time-resolved electron spin echo spectrometer suitable for detecting transient radicals produced by 3 MeV electron radiolysis is described. Two modes of operation are available: field swept mode, which generates a normal EPR spectrum, and kinetic mode, in which the time dependence of a single EPR line is monitored. Techniques which may be used to minimize the effects of nonideal microwave pulses and overlapping sample tube signals are described. The principal advantages of the spin echo method over other time-resolved EPR methods are: (1) improved time resolution (presently approx. 30--50 nsec) allows monitoring of fast changes in EPR signals of transient radicals, (2) lower susceptibility to interference between the EPR signal and the electron beam pulse at short times, and (3) lack of dependence of transient signals on microwave field amplitude or static field inhomogeneity at short times. The performance of the instrument is illustrated using CIDEP from the acetate radical formed in pulsed radiolysis of aqueous solutions of potassium acetate. The relaxation time and CIDEP enhancement factor obtained for this radical using the spin echo method compare favorably with previous determinations using direct detection EPR. Radical decay rates yield estimates of initial radical concentrations of 10^-4 to 10^-3 M per electron pulse. The Bloch equations are solved to give an expression for the echo signal for samples exhibiting CIDEP using arbitrary microwave pulse widths and distributions of Larmor frequencies. Conditions are discussed under which the time-dependent signal would be distorted by deviations from an ideal nonselective 90°--τ--180° pulse sequence.

  12. Determining when a fracture occurred: Does the method matter? Analysis of the similarity of three different methods for estimating time since fracture of juvenile long bones.

    Science.gov (United States)

    Drury, Anne; Cunningham, Craig

    2018-01-01

    Radiographic fracture date estimation is a critical component of skeletal trauma analysis in the living. Several timetables have been proposed for how the appearance of radiographic features can be interpreted to provide a likely time frame for fracture occurrence. This study compares three such timetables for pediatric fractures, by Islam et al. (2000), Malone et al. (2011), and Prosser et al. (2012), in order to determine whether the fracture date ranges produced by these methods agree with one another. Fracture date ranges were estimated for 112 long bone fractures in 96 children aged 1-17 years, using the three different timetables. The extent of similarity of the intervals was tested by statistically comparing the overlap between the ranges. Results showed that none of the methods were in perfect agreement with one another. Differences included the size of the estimated date range for when a fracture occurred, and the specific dates given for both the upper and lower ends of the fracture date range. There was greater similarity between the ranges produced by Malone et al. (2011) and each of the other two studies than there was between Islam et al. (2000) and Prosser et al. (2012); the greatest similarity existed between Malone et al. (2011) and Islam et al. (2000). The extent of the differences between methods can vary widely, depending on the fracture analysed. On average, one timetable gives an earliest possible fracture date less than 2 days before another, but the range was extreme, with one method estimating the minimum time since fracture as 25 days before another method for a given fracture. In most cases, one method gave a maximum time since fracture a week less than the other two methods, but again the range was extreme and some estimates differed by nearly two months. The variability in fracture date estimates given by these timetables indicates that caution should be exercised when estimating the timing of a juvenile fracture if relying on a single method.
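
    The overlap comparison at the heart of the study can be illustrated directly: each timetable turns a radiograph date into a possible fracture-date window, and windows from different timetables can then be intersected. The day bounds below are hypothetical, not values taken from the cited timetables:

```python
from datetime import date, timedelta

def estimated_range(xray_date, min_days, max_days):
    """Fracture-date window implied by one timetable: the fracture occurred
    between max_days and min_days before the radiograph."""
    return (xray_date - timedelta(days=max_days),
            xray_date - timedelta(days=min_days))

def overlap_days(r1, r2):
    """Length in days of the overlap between two date ranges (0 if disjoint)."""
    start, end = max(r1[0], r2[0]), min(r1[1], r2[1])
    return max(0, (end - start).days)

xray = date(2018, 3, 1)
# hypothetical bounds from two different timetables for the same features
range_a = estimated_range(xray, min_days=7, max_days=21)
range_b = estimated_range(xray, min_days=10, max_days=35)
```

    A small overlap relative to the window sizes is exactly the kind of disagreement the study quantifies across its 112 fractures.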

  13. Two methods of space--time energy densification

    International Nuclear Information System (INIS)

    Sahlin, R.L.

    1976-01-01

    With a view to the goal of net energy production from a DT microexplosion, we study two ideas (methods) through which (separately or in combination) energy may be ''concentrated'' into a small volume and short period of time--the so-called space-time energy densification or compression. We first discuss the advantages and disadvantages of lasers and relativistic electron-beam (E-beam) machines as the sources of such energy and identify the amplification of laser pulses as a key factor in energy compression. The pulse length of present relativistic E-beam machines is the most serious limitation of this pulsed-power source. The first energy-compression idea we discuss is the reasonably efficient production of short-duration, high-current relativistic electron pulses by the self interruption and restrike of a current in a plasma pinch due to the rapid onset of strong turbulence. A 1-MJ plasma focus based on this method is nearing completion at this Laboratory. The second energy-compression idea is based on laser-pulse production through the parametric amplification of a self-similar or solitary wave pulse, for which analogs can be found in other wave processes. Specifically, the second energy-compression idea is a proposal for parametric amplification of a solitary, transverse magnetic pulse in a coaxial cavity with a Bennett dielectric rod as an inner coax. Amplifiers of this type can be driven by the pulsed power from a relativistic E-beam machine. If the end of the inner dielectric coax is made of LiDT or another fusionable material, the amplified pulse can directly drive a fusion reaction--there would be no need to switch the pulse out of the system toward a remote target

  14. Two methods of space-time energy densification

    International Nuclear Information System (INIS)

    Sahlin, H.L.

    1975-01-01

    With a view to the goal of net energy production from a DT microexplosion, two ideas (methods) are studied through which (separately or in combination) energy may be ''concentrated'' into a small volume and short period of time--the so-called space-time energy densification or compression. The advantages and disadvantages of lasers and relativistic electron-beam (E-beam) machines as the sources of such energy are studied and the amplification of laser pulses as a key factor in energy compression is discussed. The pulse length of present relativistic E-beam machines is the most serious limitation of this pulsed-power source. The first energy-compression idea discussed is the reasonably efficient production of short-duration, high-current relativistic electron pulses by the self interruption and restrike of a current in a plasma pinch due to the rapid onset of strong turbulence. A 1-MJ plasma focus based on this method is nearing completion at this Laboratory. The second energy-compression idea is based on laser-pulse production through the parametric amplification of a self-similar or solitary wave pulse, for which analogs can be found in other wave processes. Specifically, the second energy-compression idea is a proposal for parametric amplification of a solitary, transverse magnetic pulse in a coaxial cavity with a Bennett dielectric rod as an inner coax. Amplifiers of this type can be driven by the pulsed power from a relativistic E-beam machine. If the end of the inner dielectric coax is made of LiDT or another fusionable material, the amplified pulse can directly drive a fusion reaction--there would be no need to switch the pulse out of the system toward a remote target. (auth)

  15. A Novel Time-Varying Friction Compensation Method for Servomechanism

    Directory of Open Access Journals (Sweden)

    Bin Feng

    2015-01-01

    Friction is an inevitable nonlinear phenomenon in servomechanisms. Friction errors often affect their motion and contour accuracies during reverse motion. To reduce friction errors, a novel time-varying friction compensation method is proposed to address a problem that traditional friction compensation methods hardly deal with, one that leads to unsatisfactory compensation performance in which motion and contour accuracies cannot be maintained effectively. In this method, a trapezoidal compensation pulse is adopted to compensate for the friction errors. A generalized regression neural network algorithm is used to generate the optimal pulse amplitude function. The optimal pulse duration function and the pulse amplitude function can be established by learning the pulse characteristic parameters, and the optimal friction compensation pulse can then be generated. The feasibility of the friction compensation method was verified on a high-precision X-Y worktable. The experimental results indicated that the motion and contour accuracies were greatly improved, with a reduction of the friction errors, under different working conditions. Moreover, the overall friction compensation performance indicators were decreased by more than 54%, and this friction compensation method can be implemented easily on most servomechanisms in industry.
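
    The trapezoidal compensation pulse itself is easy to sketch as a sampled waveform; the amplitude and durations below are placeholders for the values the paper derives with its generalized regression neural network:

```python
def trapezoid_pulse(amplitude, ramp_n, hold_n, n_total):
    """Sample a trapezoidal compensation pulse: linear rise over ramp_n
    samples, flat top for hold_n samples, linear fall, then zero."""
    pulse = []
    for i in range(n_total):
        if i < ramp_n:                                   # rising edge
            pulse.append(amplitude * i / ramp_n)
        elif i < ramp_n + hold_n:                        # flat top
            pulse.append(amplitude)
        elif i < 2 * ramp_n + hold_n:                    # falling edge
            pulse.append(amplitude * (2 * ramp_n + hold_n - i) / ramp_n)
        else:                                            # back to zero
            pulse.append(0.0)
    return pulse

pulse = trapezoid_pulse(2.0, 4, 6, 20)   # illustrative parameters only
```

    In the paper's scheme this pulse would be injected into the velocity command at each motion reversal, with amplitude and duration chosen by the learned functions.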

  16. Method for determining thermal neutron decay times of earth formations

    International Nuclear Information System (INIS)

    Arnold, D.M.

    1976-01-01

    A method is disclosed for measuring the thermal neutron decay time of earth formations in the vicinity of a well borehole. A harmonically intensity modulated source of fast neutrons is used to irradiate the earth formations with fast neutrons at three different intensity modulation frequencies. The tangents of the relative phase angles of the fast neutrons and the resulting thermal neutrons at each of the three frequencies of modulation are measured. First and second approximations to the earth formation thermal neutron decay time are derived from the three tangent measurements. These approximations are then combined to derive a value for the true earth formation thermal neutron decay time
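
    For a single-exponential thermal-neutron die-away, the phase lag φ of the response to a source modulated at angular frequency ω satisfies tan φ = ωτ, so each modulation frequency yields a decay-time estimate τ = tan φ / ω. A sketch of that per-frequency step on synthetic phases (the patent's specific combination of first and second approximations is not reproduced here):

```python
import math

def tau_estimates(freqs_hz, phases_rad):
    """Per-frequency decay-time estimates from tan(phi) = omega * tau,
    the single-exponential relation between modulation frequency and the
    phase lag of the thermal-neutron response."""
    return [math.tan(p) / (2.0 * math.pi * f) for f, p in zip(freqs_hz, phases_rad)]

tau_true = 200e-6                        # assumed 200 microsecond decay time
freqs = [100.0, 300.0, 1000.0]           # illustrative modulation frequencies, Hz
phases = [math.atan(2.0 * math.pi * f * tau_true) for f in freqs]
taus = tau_estimates(freqs, phases)
```

    In the disclosed method, measurements at three frequencies allow successive approximations that correct for departures from the ideal single-exponential model.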

  17. Own-wage labor supply elasticities: variation across time and estimation methods

    Directory of Open Access Journals (Sweden)

    Olivier Bargain

    2016-10-01

    There is a huge variation in the size of labor supply elasticities in the literature, which hampers policy analysis. While recent studies show that preference heterogeneity across countries explains little of this variation, we focus on two other important features: observation period and estimation method. We start with a thorough survey of existing evidence for both Western Europe and the USA, over a long period and from different empirical approaches. Then, our meta-analysis attempts to disentangle the roles of time changes and estimation methods. We highlight the key role of time changes, documenting the dramatic fall in labor supply elasticities since the 1980s, not only in the USA but also in the EU. In contrast, we find no compelling evidence that the choice of estimation method explains variation in elasticity estimates. From our analysis, we derive important guidelines for policy simulations.

  18. Design of time interval generator based on hybrid counting method

    International Nuclear Information System (INIS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-01-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test system or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the Tapped Delay Line (TDL) architecture, whose delay cells are down to a few tens of picoseconds. In this context, FPGA-based TIGs with a high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods to realize an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.
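
    The hybrid counting idea, a coarse synchronous counter for range plus fine delay-line taps for resolution, can be sketched arithmetically; the clock frequency and tap size below are illustrative stand-ins, though the tap size matches the ~11 ps resolution the paper reports:

```python
def hybrid_interval(target_s, clock_hz=250e6, tap_s=11e-12):
    """Split a requested interval into coarse clock cycles plus fine
    delay-line taps, the essence of the hybrid counting method.
    Returns (cycles, taps, realized_interval_s)."""
    period = 1.0 / clock_hz
    cycles = int(target_s / period)         # coarse counter provides the range
    residue = target_s - cycles * period    # fine delay line covers the rest
    taps = round(residue / tap_s)
    return cycles, taps, cycles * period + taps * tap_s

period = 1.0 / 250e6
target = 10 * period + 5 * 11e-12           # about 40.055 ns
cycles, taps, realized = hybrid_interval(target)
```

    The FPGA design's difficulty, addressed by the special multiplexer, is making every tap's routing delay to the output equal so that this arithmetic actually holds in silicon.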

  19. Design of time interval generator based on hybrid counting method

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test system or control system makes a TIG implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on the Tapped Delay Line (TDL) architecture, whose delay cells are down to a few tens of picoseconds. In this context, FPGA-based TIGs with a high delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods to realize an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.

  20. CP Methods for Scheduling and Routing with Time-Dependent Task Costs

    DEFF Research Database (Denmark)

    Tierney, Kevin; Kelareva, Elena; Kilby, Philip

    2013-01-01

    a cost function, and Mixed Integer Programming (MIP) are often used for solving such problems. However, Constraint Programming (CP), particularly with Lazy Clause Generation (LCG), has been found to be faster than MIP for some scheduling problems with time-varying action costs. In this paper, we...... compare CP and LCG against a solve-and-improve approach for two recently introduced problems in maritime logistics with time-varying action costs: the Liner Shipping Fleet Repositioning Problem (LSFRP) and the Bulk Port Cargo Throughput Optimisation Problem (BPCTOP). We present a novel CP model...... for the LSFRP, which is faster than all previous methods and outperforms a simplified automated planning model without time-varying costs. We show that a LCG solver is faster for solving the BPCTOP than a standard finite domain CP solver with a simplified model. We find that CP and LCG are effective methods...

  1. Unconventional Consumption Methods and Enjoying Things Consumed: Recapturing the "First-Time" Experience.

    Science.gov (United States)

    O'Brien, Ed; Smith, Robert W

    2018-06-01

    People commonly lament the inability to re-experience familiar things as they were first experienced. Four experiments suggest that consuming familiar things in new ways can disrupt adaptation and revitalize enjoyment. Participants better enjoyed the same familiar food (Experiment 1), drink (Experiment 2), and video (Experiments 3a-3b) simply when re-experiencing the entity via unusual means (e.g., eating popcorn using chopsticks vs. hands). This occurs because unconventional methods invite an immersive "first-time" perspective on the consumption object: boosts in enjoyment were mediated by revitalized immersion into the consumption experience and were moderated by time such that they were strongest when using unconventional methods for the first time (Experiments 1-2); likewise, unconventional methods that actively disrupted immersion did not elicit the boost, despite being novel (Experiments 3a-3b). Before abandoning once-enjoyable entities, knowing to consume old things in new ways (vs. attaining new things altogether) might temporarily restore enjoyment and postpone wasteful replacement.

  2. Methodological comparison of marginal structural model, time-varying Cox regression, and propensity score methods : the example of antidepressant use and the risk of hip fracture

    NARCIS (Netherlands)

    Ali, M Sanni; Groenwold, Rolf H H; Belitser, Svetlana V; Souverein, Patrick C; Martín, Elisa; Gatto, Nicolle M; Huerta, Consuelo; Gardarsdottir, Helga; Roes, Kit C B; Hoes, Arno W; de Boer, Antonius; Klungel, Olaf H

    2016-01-01

    BACKGROUND: Observational studies including time-varying treatments are prone to confounding. We compared time-varying Cox regression analysis, propensity score (PS) methods, and marginal structural models (MSMs) in a study of antidepressant [selective serotonin reuptake inhibitors (SSRIs)] use and
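The weighting idea shared by propensity score methods and marginal structural models can be shown on a toy dataset. The sketch below is purely illustrative and not the study's analysis: the confounder, the propensities (0.2 and 0.8), and the outcome model `Y = 2*T + 3*X` are invented numbers, used only to show how inverse probability of treatment weighting (IPTW) removes the bias in a naive comparison.

```python
# Illustrative IPTW sketch on a deterministic confounded dataset.
# Confounder X raises both treatment probability and outcome:
#   P(T=1|X=1) = 0.8, P(T=1|X=0) = 0.2, and Y = 2*T + 3*X,
# so the true treatment effect is 2. All numbers are invented.

rows = []   # (x, t, y, count) cells matching the probabilities exactly
for x, p_treat in ((0, 0.2), (1, 0.8)):
    for t in (0, 1):
        n = 1000 * (p_treat if t == 1 else 1 - p_treat)
        rows.append((x, t, 2 * t + 3 * x, n))

def weighted_effect(rows, weight_fn):
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}   # t -> [sum(w*y), sum(w)]
    for x, t, y, n in rows:
        w = weight_fn(x, t) * n
        sums[t][0] += w * y
        sums[t][1] += w
    return sums[1][0] / sums[1][1] - sums[0][0] / sums[0][1]

naive = weighted_effect(rows, lambda x, t: 1.0)          # confounded: 3.8
p = {0: 0.2, 1: 0.8}                                     # known propensities
iptw = weighted_effect(rows, lambda x, t: 1.0 / (p[x] if t else 1 - p[x]))
# iptw recovers the true effect of 2.0; the naive contrast does not
```

In a real analysis the propensities would themselves be estimated (e.g. by logistic regression), which is where the methodological differences compared in the paper arise.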

  3. Prediction of landslide activation at locations in Beskidy Mountains using standard and real-time monitoring methods

    Science.gov (United States)

    Bednarczyk, Z.

    2012-04-01

    The paper presents landslide monitoring methods used for predicting landslide activity at locations in the Carpathian Mountains (SE Poland). Different types of monitoring were used, including standard measurements and real-time early-warning measurements with hourly data transfer to the Internet. The project, financed from EU funds, was carried out for the purpose of public road reconstruction. The landslides, with low displacement rates (varying from a few mm to over 5 cm/year), had sizes of 0.4-2.2 million m3. The flysch layers involved in the mass movements were a mixture of clayey soils and sandstones of high moisture content and plasticity. Core sampling and GPR scanning were used to determine landslide size and depth. Laboratory research included index, IL oedometer, triaxial and direct shear tests. GPS-RTK mapping was employed to update the landslide morphology. Instrumentation consisted of standard inclinometers, piezometers and pore pressure transducers. Measurements were carried out monthly from 2006 to 2011. In May 2010 the first real-time monitoring system in Poland was installed at the landslide complex over the Szymark-Bystra public road. It included in-place uniaxial sensors and continuous 3D inclinometers installed to depths of 12-16 m, with tilt sensors every 0.5 m. Vibrating-wire pore pressure and groundwater level transducers, together with an automatic meteorological station, monitored groundwater and weather conditions. The monitoring and field investigation data provided parameters for LEM and FEM slope stability analyses. They enabled prediction and control of landslide behaviour before, during and after full or partial stabilization works. In May 2010, after the maximum precipitation (100 mm/3 hours), the observed displacement rates accelerated to over 11 cm in a few days and damaged several standard inclinometer installations. However, permanent control of the road area remained possible through the continuous inclinometer installations. Comprehensive

  4. Discrete-fracture-model of multi-scale time-splitting two-phase flow including nanoparticles transport in fractured porous media

    KAUST Repository

    El-Amin, Mohamed

    2017-11-23

    In this article, we consider two-phase immiscible incompressible flow, including nanoparticle transport, in fractured heterogeneous porous media. The system of governing equations consists of water saturation, Darcy’s law, nanoparticle concentration in water, deposited nanoparticle concentration on the pore walls, and entrapped nanoparticle concentration in the pore throats, as well as porosity and permeability variation due to nanoparticle deposition/entrapment on/in the pores. The discrete-fracture model (DFM) is used to describe the flow and transport in fractured porous media. Moreover, a multiscale time-splitting strategy is employed to manage different time-step sizes for different physics, such as saturation and concentration. Numerical examples are provided to demonstrate the efficiency of the proposed multi-scale time-splitting approach.

  5. Discrete-fracture-model of multi-scale time-splitting two-phase flow including nanoparticles transport in fractured porous media

    KAUST Repository

    El-Amin, Mohamed; Kou, Jisheng; Sun, Shuyu

    2017-01-01

    In this article, we consider two-phase immiscible incompressible flow, including nanoparticle transport, in fractured heterogeneous porous media. The system of governing equations consists of water saturation, Darcy’s law, nanoparticle concentration in water, deposited nanoparticle concentration on the pore walls, and entrapped nanoparticle concentration in the pore throats, as well as porosity and permeability variation due to nanoparticle deposition/entrapment on/in the pores. The discrete-fracture model (DFM) is used to describe the flow and transport in fractured porous media. Moreover, a multiscale time-splitting strategy is employed to manage different time-step sizes for different physics, such as saturation and concentration. Numerical examples are provided to demonstrate the efficiency of the proposed multi-scale time-splitting approach.
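The multiscale time-splitting (subcycling) idea can be sketched on a toy pair of ODEs. This is a hedged illustration, not the paper's PDE system: the "slow" variable stands in for saturation, the "fast" variable for nanoparticle concentration, and the decay rates are invented.

```python
# Toy subcycling sketch: the slow process takes one large step dt, while
# the fast process takes n_sub substeps of dt/n_sub inside each large step.
# The ODEs ds/dt = -0.1*s and dc/dt = -5*c + s are illustrative only.

def advance(s, c, dt, n_sub, n_steps):
    """Advance slow variable s (step dt) and fast variable c (substeps dt/n_sub)."""
    for _ in range(n_steps):
        s += dt * (-0.1 * s)                  # slow physics: one Euler step of dt
        for _ in range(n_sub):                # fast physics: subcycled Euler steps
            c += (dt / n_sub) * (-5.0 * c + s)
    return s, c

# With subcycling the fast equation stays stable, even though a single
# Euler step of dt = 0.5 (|1 - 5*dt| > 1) would be unstable for it.
s_fine, c_fine = advance(1.0, 1.0, dt=0.5, n_sub=10, n_steps=20)   # stable
s_one, c_one = advance(1.0, 1.0, dt=0.5, n_sub=1, n_steps=20)      # fast eq. blows up
```

The payoff is the one stated in the abstract: the expensive slow physics is advanced with a large step while only the stiff, fast physics pays for small steps.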

  6. From Discrete Space-Time to Minkowski Space: Basic Mechanisms, Methods and Perspectives

    Science.gov (United States)

    Finster, Felix

    This survey article reviews recent results on fermion systems in discrete space-time and corresponding systems in Minkowski space. After a basic introduction to the discrete setting, we explain a mechanism of spontaneous symmetry breaking which leads to the emergence of a discrete causal structure. As methods to study the transition between discrete space-time and Minkowski space, we describe a lattice model for a static and isotropic space-time, outline the analysis of regularization tails of vacuum Dirac sea configurations, and introduce a Lorentz invariant action for the masses of the Dirac seas. We mention the method of the continuum limit, which allows one to analyze interacting systems. Open problems are discussed.

  7. Comparing the mannitol-egg yolk-polymyxin agar plating method with the three-tube most-probable-number method for enumeration of Bacillus cereus spores in raw and high-temperature, short-time pasteurized milk.

    Science.gov (United States)

    Harper, Nigel M; Getty, Kelly J K; Schmidt, Karen A; Nutsch, Abbey L; Linton, Richard H

    2011-03-01

    The U.S. Food and Drug Administration's Bacteriological Analytical Manual recommends two enumeration methods for Bacillus cereus: (i) a standard plate count method with mannitol-egg yolk-polymyxin (MYP) agar and (ii) a most-probable-number (MPN) method with tryptic soy broth (TSB) supplemented with 0.1% polymyxin sulfate. This study compared the effectiveness of the MYP and MPN methods for detecting and enumerating B. cereus in raw and high-temperature, short-time pasteurized skim (0.5%), 2%, and whole (3.5%) bovine milk stored at 4°C for 96 h. Each milk sample was inoculated with B. cereus EZ-Spores and sampled at 0, 48, and 96 h after inoculation. There were no differences (P > 0.05) in B. cereus populations among sampling times for all milk types, so data were pooled to obtain overall mean values for each treatment. The overall B. cereus population mean of pooled sampling times for the MPN method (2.59 log CFU/ml) was greater (P < 0.05) than that for the MYP plate count method. B. cereus populations in milk samples ranged from 2.36 to 3.46 and 2.66 to 3.58 log CFU/ml for inoculated milk treatments for the MYP plate count and MPN methods, respectively, which is below the level necessary for toxin production. The MPN method recovered more B. cereus, which makes it useful for validation research. However, the MYP plate count method for enumeration of B. cereus also had advantages, including its ease of use and faster time to results (2 versus 5 days for MPN).
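For readers unfamiliar with how an MPN value is actually computed, here is a hedged sketch of the underlying maximum-likelihood calculation (assuming Poisson-distributed organisms, so a tube at volume v is positive with probability 1 − e^(−λv)). The three-tube design and inoculation volumes below are illustrative, not the study's protocol.

```python
import math

# Sketch of the MPN maximum-likelihood estimate: solve dlogL/dlam = 0,
# where logL = sum_i [ p_i*ln(1-exp(-lam*v_i)) - (n_i-p_i)*lam*v_i ],
# by bisection on lam. Volumes and tube counts below are illustrative.

def mpn(volumes, tubes, positives, lo=1e-6, hi=1e6):
    """MPN (organisms per unit volume) for n_i tubes, p_i positive, volume v_i."""
    def score(lam):                 # derivative of the log-likelihood; decreasing
        s = 0.0
        for v, n, p in zip(volumes, tubes, positives):
            s += p * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - p) * v
        return s
    for _ in range(200):            # geometric bisection: lam spans many decades
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Example: three-tube series with 0.1, 0.01 and 0.001 ml of sample and
# 3, 1, 0 positive tubes; the classical 3-1-0 pattern gives an MPN near 43.
estimate = mpn([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0])
```

Published MPN tables are precomputed solutions of exactly this kind of likelihood equation, which is why the table lookup and the calculation agree.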

  8. Trend analysis using non-stationary time series clustering based on the finite element method

    OpenAIRE

    Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.

    2014-01-01

    In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods ...

  9. Power System Real-Time Monitoring by Using PMU-Based Robust State Estimation Method

    DEFF Research Database (Denmark)

    Zhao, Junbo; Zhang, Gexiang; Das, Kaushik

    2016-01-01

    Accurate real-time states provided by the state estimator are critical for reliable power system operation and control. This paper proposes a novel phasor measurement unit (PMU)-based robust state estimation method (PRSEM) to monitor power system real-time states under different operation conditions. A …-based bad data (BD) detection method, which can handle the smearing effect and critical measurement errors, is presented. We evaluate PRSEM by using IEEE benchmark test systems and a realistic utility system. The numerical results indicate that, in short computation time, PRSEM can effectively track the system real-time states with good robustness and can address several kinds of BD.

  10. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
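Since the prediction stage named in the abstract is an additive Holt-Winters method, a minimal sketch of that smoother may help. This is a generic textbook formulation with illustrative smoothing parameters and a crude two-season initialization, not the paper's calibrated model.

```python
# Minimal additive Holt-Winters smoother: level, trend and additive
# seasonal components updated by exponential smoothing. Parameters and
# initialization are illustrative choices, not the paper's.

def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=1):
    """Fit on series y with season length m; forecast `horizon` steps ahead."""
    level = sum(y[:m]) / m                                  # crude init from
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)        # the first two seasons
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        last_level = level
        s = season[t % m]
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    return [level + h * trend + season[(len(y) + h - 1) % m]
            for h in range(1, horizon + 1)]

# deterministic trend + seasonal test series; the forecast should be close
# to the true continuation after the smoother has converged
pattern = [2.0, -1.0, -3.0, 2.0]
y = [10 + 0.5 * t + pattern[t % 4] for t in range(80)]
forecast = holt_winters_additive(y, m=4, horizon=4)
```

In the hybrid propagator, the residuals between the analytical theory and truth play the role of `y`, and the smoother's forecast is added back as the correction.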

  11. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    Directory of Open Access Journals (Sweden)

    Shukui Liu

    2011-03-01

    Full Text Available Typical results obtained by a newly developed, nonlinear time domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in the way of combining a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of application of the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency domain 3D panel method (NEWDRIFT of NTUA-SDL and available experimental data and good agreement has been observed for all studied cases between the results of the present method and comparable other data.

  12. Correction to the count-rate detection limit and sample/blank time-allocation methods

    International Nuclear Information System (INIS)

    Alvarez, Joseph L.

    2013-01-01

    A common form of count-rate detection limits contains a propagation-of-uncertainty error. This error originated in methods that minimize uncertainty in the subtraction of the blank counts from the gross sample counts by allocating blank and sample counting times. Correct uncertainty propagation shows that the time-allocation equations have no solution. This publication presents the correct form of count-rate detection limits. -- Highlights: •The paper demonstrates a proper method of propagating the uncertainty of count-rate differences. •The standard count-rate detection limits were in error. •Count-time allocation methods for minimum uncertainty were in error. •The paper presents the correct form of the count-rate detection limit. •The paper discusses the confusion between count-rate uncertainty and count uncertainty
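The propagation at issue can be written out in a few lines. This sketch states only the standard result for independent Poisson counts (var(N) = N); the specific counts and times are made-up example values.

```python
import math

# Net count rate and its propagated uncertainty for independent Poisson
# counts: r = Ng/ts - Nb/tb, var(r) = Ng/ts^2 + Nb/tb^2. Example numbers
# are illustrative.

def net_rate_and_sigma(gross_counts, t_sample, blank_counts, t_blank):
    """Net rate and its standard deviation from gross and blank counts."""
    rate = gross_counts / t_sample - blank_counts / t_blank
    sigma = math.sqrt(gross_counts / t_sample**2 + blank_counts / t_blank**2)
    return rate, sigma

r, s = net_rate_and_sigma(900, 60.0, 400, 100.0)
# r = 15 - 4 = 11 counts/s; sigma = sqrt(0.25 + 0.04) = sqrt(0.29)
```

Note that the variance terms scale as N/t², which is why juggling the two counting times cannot make the subtraction uncertainty vanish, the point the paper develops.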

  13. A Method of Time-Varying Rayleigh Channel Tracking in MIMO Radio System

    Institute of Scientific and Technical Information of China (English)

    GONG Yan-fei; HE Zi-shu; HAN Chun-lin

    2005-01-01

    A method of MIMO channel tracking based on Kalman filter and MMSE-DFE is proposed. The Kalman filter tracks the time-varying channel by using the MMSE-DFE decision and the MMSE-DFE conducts the next decision by using the channel estimates produced by the Kalman filter. Polynomial fitting is used to bridge the gap between the channel estimates produced by the Kalman filter and those needed for the DFE decision. Computer simulation demonstrates that this method can track the MIMO time-varying channel effectively.
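The tracking mechanics can be shown with a stripped-down sketch: a scalar Kalman filter following one time-varying channel tap modeled as an AR(1) process. This is a hedged simplification of the abstract's setting (the paper is MIMO with MMSE-DFE decision feedback; here the transmitted symbol is assumed known), and all noise figures are invented.

```python
import math
import random

# Scalar Kalman filter tracking an AR(1) time-varying channel tap:
#   h_k = a*h_{k-1} + w_k,  y_k = h_k + v_k  (known unit symbol assumed).
# a, q, r are illustrative model/noise parameters, not from the paper.

random.seed(1)
a, q, r = 0.99, 0.01, 0.5       # AR coefficient, process and measurement noise

h = 0.0                         # true channel tap
x, p = 0.0, 1.0                 # filter estimate and its variance
err_filter = err_raw = 0.0
N = 5000
for _ in range(N):
    h = a * h + random.gauss(0.0, math.sqrt(q))    # channel evolves
    y = h + random.gauss(0.0, math.sqrt(r))        # noisy observation
    x, p = a * x, a * a * p + q                    # predict
    k = p / (p + r)                                # Kalman gain
    x, p = x + k * (y - x), (1 - k) * p            # update
    err_filter += (x - h) ** 2
    err_raw += (y - h) ** 2
mse_filter, mse_raw = err_filter / N, err_raw / N
# the filtered estimate tracks the tap far better than the raw observation
```

In the paper's scheme the DFE decisions stand in for the known symbols, and the polynomial fitting bridges the lag between filtered estimates and the estimates the DFE needs.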

  14. Improved Riccati Transfer Matrix Method for Free Vibration of Non-Cylindrical Helical Springs Including Warping

    Directory of Open Access Journals (Sweden)

    A.M. Yu

    2012-01-01

    Full Text Available Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the warping effect upon natural frequencies and vibration mode shapes is studied for the first time, in addition to the influences of rotary inertia and of shear and axial deformation. The natural frequencies of the springs are determined by use of the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the Scaling and Squaring method with Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under clamped-clamped boundary conditions. The accuracy of the proposed method has been compared with FEM results using three-dimensional solid elements (Solid 45) in the ANSYS code. Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical helical springs, and should be taken into consideration in the free vibration analysis of such springs.
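The Scaling and Squaring idea used for the element transfer matrix can be sketched compactly. This is a hedged toy: it works on 2×2 matrices, uses a plain truncated Taylor series where the paper uses Padé approximants, and all thresholds are illustrative.

```python
import math

# Scaling and Squaring sketch for the matrix exponential: scale A by 2^-s
# until its norm is small, approximate exp with a short series (Padé in
# the paper, Taylor here for brevity), then square the result s times.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_2x2(A, order=12):
    norm = max(abs(A[i][j]) for i in range(2) for j in range(2))
    s = 0
    while norm > 0.5:                    # scale until ||A / 2^s|| is small
        norm /= 2.0
        s += 1
    B = [[A[i][j] / 2**s for j in range(2)] for i in range(2)]
    result = [[1.0, 0.0], [0.0, 1.0]]    # truncated series for exp(B)
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, order + 1):
        term = mat_mul(term, [[B[i][j] / k for j in range(2)] for i in range(2)])
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    for _ in range(s):                   # undo the scaling by repeated squaring
        result = mat_mul(result, result)
    return result

# sanity check: exp of a rotation generator is a rotation matrix
theta = 1.2
R = expm_2x2([[0.0, -theta], [theta, 0.0]])
```

The scaling step is what keeps the series (or Padé) approximation in its accurate range, which is exactly why the combination is used for transfer matrices with large arguments.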

  15. Measure Guideline: Summary of Interior Ducts in New Construction, Including an Efficient, Affordable Method to Install Fur-Down Interior Ducts

    Energy Technology Data Exchange (ETDEWEB)

    Beal, D. [BA-PIRC, Cocoa, FL (United States); McIlvaine, J. [BA-PIRC, Cocoa, FL (United States); Fonorow, K. [BA-PIRC, Cocoa, FL (United States); Martin, E. [BA-PIRC, Cocoa, FL (United States)

    2011-11-01

    This document illustrates guidelines for the efficient installation of interior duct systems in new housing, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces.

  16. Time Discretization Techniques

    KAUST Repository

    Gottlieb, S.

    2016-10-12

    The time discretization of hyperbolic partial differential equations is typically the evolution of a system of ordinary differential equations obtained by spatial discretization of the original problem. Methods for this time evolution include multistep, multistage, or multiderivative methods, as well as a combination of these approaches. The time step constraint is mainly a result of the absolute stability requirement, as well as additional conditions that mimic physical properties of the solution, such as positivity or total variation stability. These conditions may be required for stability when the solution develops shocks or sharp gradients. This chapter contains a review of some of the methods historically used for the evolution of hyperbolic PDEs, as well as cutting edge methods that are now commonly used.
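As one concrete instance of the multistage methods mentioned here, the classic third-order strong-stability-preserving Runge-Kutta scheme (SSPRK3) can be written as a convex combination of forward-Euler stages; this structure is what lets it preserve properties like positivity and total variation stability. The test problem below is an illustrative choice.

```python
import math

# SSPRK3: three forward-Euler stages combined convexly, giving third-order
# accuracy while inheriting the stability properties of forward Euler.

def ssprk3_step(f, u, t, dt):
    """One SSPRK3 step for u' = f(t, u)."""
    u1 = u + dt * f(t, u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

# integrate u' = -u from u(0) = 1 over 100 steps of dt = 0.01
u, dt = 1.0, 0.01
for n in range(100):
    u = ssprk3_step(lambda t, v: -v, u, n * dt, dt)
# u approximates exp(-1) with third-order accuracy in dt
```

For hyperbolic PDEs, `f` would be the spatial discretization of the flux divergence, and the time step would be limited by the CFL-type constraints the chapter discusses.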

  17. Supply and demand: application of Lean Six Sigma methods to improve drug round efficiency and release nursing time.

    Science.gov (United States)

    Kieran, Maríosa; Cleary, Mary; De Brún, Aoife; Igoe, Aileen

    2017-10-01

    To improve efficiency, reduce interruptions and reduce the time taken to complete oral drug rounds. Lean Six Sigma methods were applied to improve drug round efficiency using a pre- and post-intervention design. A 20-bed orthopaedic ward in a large teaching hospital in Ireland. Pharmacy, nursing and quality improvement staff. A multifaceted intervention was designed which included changes in processes related to drug trolley organization and drug supply planning. A communications campaign aimed at reducing interruptions during nurse-led drug rounds was also developed and implemented. Average number of interruptions, average drug round time and variation in time taken to complete the drug round. At baseline, the oral drug round took an average of 125 min. Following application of Lean Six Sigma methods, the average drug round time decreased by 51 min. The average number of interruptions per drug round fell from 12 at baseline to 11 following the intervention, with a 75% reduction in drug supply interruptions. Lean Six Sigma methodology was successfully employed to reduce interruptions and the time taken to complete the oral drug round. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  18. A higher order numerical method for time fractional partial differential equations with nonsmooth data

    Science.gov (United States)

    Xing, Yanyuan; Yan, Yubin

    2018-03-01

    Gao et al. [11] (2014) introduced a numerical scheme, based on piecewise quadratic interpolation polynomials, to approximate the Caputo fractional derivative with the convergence rate O(k^(3-α)), 0 < α < 1. Under the assumption that the solution of the equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^(3-α)), 0 < α < 1. When the solution of the equation has low regularity, however, the numerical method fails to attain the convergence rate O(k^(3-α)), 0 < α < 1. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show, by using Laplace transform methods, that the time discretization scheme has the convergence rate O(k^(3-α)), 0 < α < 1, uniformly for t > 0, for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.

  19. A simple time-delayed method to control chaotic systems

    International Nuclear Information System (INIS)

    Chen Maoyin; Zhou Donghua; Shang Yun

    2004-01-01

    Based on the adaptive iterative learning strategy, a simple time-delayed controller is proposed to stabilize unstable periodic orbits (UPOs) embedded in chaotic attractors. This controller includes two parts: one is a linear feedback part; the other is an adaptive iterative learning estimation part. Theoretical analysis and numerical simulation show the effectiveness of this controller.
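The linear feedback part of such a controller is easiest to see in its simplest discrete form. The sketch below is a hedged illustration, not the paper's adaptive controller: it applies plain Pyragas-style delayed feedback, u(n) = K*(x(n-1) − x(n)), to the chaotic logistic map, with the map parameter and gain chosen for illustration.

```python
# Time-delayed feedback on the logistic map x -> r*x*(1-x): the control
# term K*(x_prev - x) vanishes on the target period-1 orbit, so the fixed
# point is stabilized without needing to know its location. r and K are
# illustrative choices (at r = 3.8 the fixed point is unstable).

r, K = 3.8, -0.9
x_prev, x = 0.70, 0.70
for _ in range(400):
    x_next = r * x * (1.0 - x) + K * (x_prev - x)
    x_prev, x = x, x_next

x_star = 1.0 - 1.0 / r    # the map's fixed point, recovered by the orbit
# with the feedback on, x settles onto x_star and the control signal
# K*(x_prev - x) decays to zero, the hallmark of delayed-feedback control
```

The adaptive iterative learning part described in the abstract would, in effect, tune such a gain online rather than fixing it in advance.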

  20. Teaching Methods in Biology Education and Sustainability Education Including Outdoor Education for Promoting Sustainability--A Literature Review

    Science.gov (United States)

    Jeronen, Eila; Palmberg, Irmeli; Yli-Panula, Eija

    2017-01-01

    There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education…

  1. A Fast Multi-layer Subnetwork Connection Method for Time Series InSAR Technique

    Directory of Open Access Journals (Sweden)

    WU Hong'an

    2016-10-01

    Full Text Available Nowadays, the time series interferometric synthetic aperture radar (InSAR) technique is widely used in ground deformation monitoring, especially in urban areas where many stable point targets can be detected. However, in the standard time series InSAR technique, affected by the atmospheric correlation distance and the threshold on linear-model coherence, the Delaunay triangulation connecting point targets can easily separate into many discontinuous subnetworks, making it difficult to retrieve ground deformation in non-urban areas. In order to monitor ground deformation over large areas efficiently, a novel multi-layer subnetwork connection (MLSC) method is proposed for connecting all subnetworks. The advantage of the method is that it quickly reduces the number of subnetworks with valid edges layer by layer. The method is compared with an existing complex network connection method. The experimental results demonstrate that the data processing time of the proposed method is only 32.56% of that of the latter.
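The underlying task, merging disconnected subnetworks layer by layer until one network remains, can be sketched with a union-find structure. This is an illustrative model of the problem, not the paper's MLSC algorithm; the point graph and the candidate-edge layers are invented.

```python
# Illustrative layer-by-layer subnetwork merging with union-find: each
# layer supplies longer-range candidate edges, applied only while more
# than one connected component remains.

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]       # path halving
        i = parent[i]
    return i

def connect_layers(n_points, initial_edges, candidate_layers):
    parent = list(range(n_points))
    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    for a, b in initial_edges:              # e.g. the Delaunay edges that survived
        union(a, b)
    for layer in candidate_layers:          # each layer: longer-range candidates
        if len({find(parent, i) for i in range(n_points)}) == 1:
            break                           # fully connected; skip later layers
        for a, b in layer:
            union(a, b)
    return len({find(parent, i) for i in range(n_points)})

# three subnetworks {0,1}, {2,3}, {4,5}; the first layer joins two of
# them and the second layer joins the rest into a single network
n_left = connect_layers(6, [(0, 1), (2, 3), (4, 5)], [[(1, 2)], [(3, 4)]])
```

Stopping as soon as connectivity is reached is what makes a layered scheme cheap: most of the expensive long-range edges are never evaluated.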

  2. Shining a light on LAMP assays--a comparison of LAMP visualization methods including the novel use of berberine.

    Science.gov (United States)

    Fischbach, Jens; Xander, Nina Carolin; Frohme, Marcus; Glökler, Jörn Felix

    2015-04-01

    The need for simple and effective assays for detecting nucleic acids by isothermal amplification reactions has led to a great variety of end point and real-time monitoring methods. Here we tested direct and indirect methods to visualize the amplification of potato spindle tuber viroid (PSTVd) by loop-mediated isothermal amplification (LAMP) and compared features important for one-pot in-field applications. We compared the performance of magnesium pyrophosphate, hydroxynaphthol blue (HNB), calcein, SYBR Green I, EvaGreen, and berberine. All assays could be used to distinguish between positive and negative samples in visible or UV light. Precipitation of magnesium-pyrophosphate resulted in a turbid reaction solution. The use of HNB resulted in a color change from violet to blue, whereas calcein induced a change from orange to yellow-green. We also investigated berberine as a nucleic acid-specific dye that emits a fluorescence signal under UV light after a positive LAMP reaction. It has a comparable sensitivity to SYBR Green I and EvaGreen. Based on our results, an optimal detection method can be chosen easily for isothermal real-time or end point screening applications.

  3. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Directory of Open Access Journals (Sweden)

    Yingjie Yu

    2017-01-01

    Full Text Available This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that the precipitation and clutter signal spectra follow a Gaussian functional form. The multiple interleaved pulse repetition frequencies (PRFs) used in this work are set to four PRFs (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method can provide accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the whole Nyquist velocity interval. The performance of the method is also illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.

  4. OpenPSTD : The open source pseudospectral time-domain method for acoustic propagation

    NARCIS (Netherlands)

    Hornikx, M.C.J.; Krijnen, T.F.; van Harten, L.

    2016-01-01

    An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in
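The core building block of a Fourier pseudospectral method is the spectral spatial derivative, which can be sketched in a few lines. This is a generic illustration, not OpenPSTD code: a naive O(N²) DFT is used for clarity where an FFT would be used in practice, and the grid is an illustrative periodic domain.

```python
import cmath
import math

# Spectral derivative of periodic samples: transform, multiply each mode
# by i*k (signed wavenumber), transform back. Exact for band-limited data,
# which is why pseudospectral methods capture wave propagation so well.

def dft(x, sign):
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def spectral_derivative(f, L):
    """Derivative of periodic samples f on a domain of length L."""
    N = len(f)
    F = dft(f, -1)                          # forward transform
    for k in range(N):
        kk = k if k < N // 2 else k - N     # signed wavenumber index
        if k == N // 2:
            kk = 0                          # drop the unpaired Nyquist mode
        F[k] *= 1j * 2 * math.pi * kk / L
    return [v.real / N for v in dft(F, +1)] # inverse transform (scaled by 1/N)

N, L = 32, 2 * math.pi
xs = [L * j / N for j in range(N)]
df = spectral_derivative([math.sin(x) for x in xs], L)
# df matches cos(x) to near machine precision for this band-limited input
```

In a PSTD solver this derivative replaces the finite-difference stencil of FDTD, which is what allows only two points per wavelength in principle.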

  5. An evaluation of dynamic mutuality measurements and methods in cyclic time series

    Science.gov (United States)

    Xia, Xiaohua; Huang, Guitian; Duan, Na

    2010-12-01

    Several measurements and techniques have been developed to detect dynamic mutuality and synchronicity of time series in econometrics. This study aims to compare the performances of five methods, i.e., linear regression, dynamic correlation, Markov switching models, concordance index and recurrence quantification analysis, through numerical simulations. We evaluate the abilities of these methods to capture structure changing and cyclicity in time series and the findings of this paper would offer guidance to both academic and empirical researchers. Illustration examples are also provided to demonstrate the subtle differences of these techniques.

  6. Analytical solutions for prediction of the ignition time of wood particles based on a time and space integral method

    NARCIS (Netherlands)

    Haseli, Y.; Oijen, van J.A.; Goey, de L.P.H.

    2012-01-01

    The main idea of this paper is to establish a simple approach for prediction of the ignition time of a wood particle assuming that the thermo-physical properties remain constant and ignition takes place at a characteristic ignition temperature. Using a time and space integral method, explicit

  7. Time-optimal path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-01-06

    An ensemble-based approach is developed to conduct time-optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where a set of deterministic predictions is used to model and quantify uncertainty in the predictions. In the operational setting, much about the dynamics, topography and forcing of the ocean environment is uncertain, and as a result a single path produced by a model simulation has limited utility. To overcome this limitation, we rely on a finite-size ensemble of deterministic forecasts to quantify the impact of variability in the dynamics. The uncertainty of the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the optimal path by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy, and to develop insight into extensions dealing with regional or general circulation models. In particular, the ensemble method enables us to perform a statistical analysis of travel times, and consequently to develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  8. A fully automated and scalable timing probe-based method for time alignment of the LabPET II scanners

    Science.gov (United States)

    Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean

    2018-05-01

    A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real-time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6 144 channels was performed in less than 15 min and showed a 47% improvement on the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).
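The calibration idea, averaging each channel's probe-coincidence time differences to get a per-channel offset and then subtracting it, can be sketched with simulated data. This is a hedged illustration: the channel count, offset spread, and jitter figures below are invented, not the scanner's.

```python
import random
import statistics

# Per-channel time alignment sketch: the probe supplies an absolute time
# reference, so the mean probe-vs-channel difference is that channel's
# offset; subtracting the offsets aligns the coincidence time spectra.
# All numbers (64 channels, +/-2 ns offsets, 0.2 ns jitter) are invented.

random.seed(7)
n_channels = 64
true_offset = [random.uniform(-2.0, 2.0) for _ in range(n_channels)]   # ns

# simulated probe-vs-channel differences: fixed offset + Gaussian jitter
samples = [[true_offset[c] + random.gauss(0.0, 0.2) for _ in range(200)]
           for c in range(n_channels)]

# calibrate on the first half of the events, evaluate on the second half
correction = [statistics.mean(s[:100]) for s in samples]
before = [statistics.mean(s[100:]) for s in samples]
after = [m - correction[c] for c, m in enumerate(before)]

spread_before = statistics.pstdev(before)   # channel-to-channel dispersion
spread_after = statistics.pstdev(after)     # nearly zero once corrected
```

Removing this fixed channel-to-channel dispersion is what shrinks the coincidence peak width, the same mechanism behind the 7 ns to 3.7 ns FWHM improvement reported in the abstract.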

  9. Method of parallel processing in SANPO real time system

    International Nuclear Information System (INIS)

    Ostrovnoj, A.I.; Salamatin, I.M.

    1981-01-01

    A method of parallel processing in the SANPO real-time system is described. Algorithms for data accumulation and preliminary processing in this system, implemented as parallel processes using a specialized high-level programming language, are described. A hierarchy of elementary processes is also described. It provides the synchronization of concurrent processes without semaphores. The developed means are applied to experiment automation systems using SM-3 minicomputers [ru

  10. Perfectly Matched Layer for the Wave Equation Finite Difference Time Domain Method

    Science.gov (United States)

    Miyazaki, Yutaka; Tsuchiya, Takao

    2012-07-01

    The perfectly matched layer (PML) is introduced into the wave equation finite difference time domain (WE-FDTD) method. The WE-FDTD method is a finite difference method in which the wave equation is directly discretized on the basis of the central differences. The required memory of the WE-FDTD method is less than that of the standard FDTD method because no particle velocity is stored in the memory. In this study, the WE-FDTD method is first combined with the standard FDTD method. Then, Berenger's PML is combined with the WE-FDTD method. Some numerical demonstrations are given for the two- and three-dimensional sound fields.
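    The core of the WE-FDTD update — the wave equation discretized directly with central differences in both space and time, so that only the pressure field is stored — can be sketched in 1D as follows (rigid boundaries instead of a PML, and all parameters are hypothetical):

    ```python
    import numpy as np

    c, dx = 340.0, 0.01
    dt = 0.5 * dx / c                 # Courant number 0.5 for stability
    n = 200
    r2 = (c * dt / dx) ** 2

    p_prev = np.zeros(n)              # pressure at time step n-1
    p = np.zeros(n)                   # pressure at time step n
    p[n // 2] = 1.0                   # initial pressure pulse

    for _ in range(50):
        # central difference in space (discrete Laplacian) ...
        lap = np.zeros(n)
        lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]
        # ... and in time: p(n+1) = 2 p(n) - p(n-1) + (c dt/dx)^2 lap
        p_next = 2 * p - p_prev + r2 * lap
        p_next[0] = p_next[-1] = 0.0  # rigid boundaries (no PML here)
        p_prev, p = p, p_next
    ```

    Note that no particle-velocity array appears anywhere, which is exactly the memory saving the abstract attributes to the WE-FDTD method; a Berenger-style PML would replace the rigid-boundary line.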

  11. Long-time integration methods for mesoscopic models of pattern-forming systems

    International Nuclear Information System (INIS)

    Abukhdeir, Nasser Mohieddin; Vlachos, Dionisios G.; Katsoulakis, Markos; Plexousakis, Michael

    2011-01-01

    Spectral methods for simulation of a mesoscopic diffusion model of surface pattern formation are evaluated for long simulation times. Backwards-differencing time-integration, coupled with an underlying Newton-Krylov nonlinear solver (SUNDIALS-CVODE), is found to substantially accelerate simulations, without the typical requirement of preconditioning. Quasi-equilibrium simulations of patterned phases predicted by the model are shown to agree well with linear stability analysis. Simulation results of the effect of repulsive particle-particle interactions on pattern relaxation time and short/long-range order are discussed.
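    The benefit of implicit, stiff-stable time integration can be illustrated with a toy problem. The paper uses higher-order backward differentiation via SUNDIALS-CVODE with a Newton-Krylov solver; the hedged sketch below uses first-order backward Euler with a direct Newton solve on a stiff linear relaxation toward a "pattern", which shows the same ability to take steps far beyond the explicit stability limit.

    ```python
    import numpy as np

    n = 64
    x = np.linspace(0.0, 2.0 * np.pi, n)
    target = np.sin(x)                 # the "pattern" the solution relaxes to
    lam = 1000.0                       # stiffness parameter
    A = -lam * np.eye(n)               # Jacobian of f(y) = -lam * (y - target)

    def f(y):
        return -lam * (y - target)

    y = np.zeros(n)
    dt = 0.1   # vastly larger than the explicit limit 2/lam = 0.002
    for _ in range(20):
        # backward Euler: solve y_new = y + dt*f(y_new); one Newton step
        # from y is exact here because f is linear: (I - dt*A) dy = dt*f(y)
        dy = np.linalg.solve(np.eye(n) - dt * A, dt * f(y))
        y = y + dy
    ```

    An explicit Euler scheme with the same `dt` would blow up immediately; the implicit solve pays one linear system per step in exchange for the large timestep, mirroring the acceleration the abstract reports.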

  12. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    Energy Technology Data Exchange (ETDEWEB)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp [Department of Chemistry, Graduate School of Science, Nagoya University, Chikusa, Nagoya 464-8602 (Japan); Institute of Transformative Bio-Molecules (WPI-ITbM), Nagoya University, Chikusa, Nagoya 464-8602 (Japan)

    2016-09-07

    Theoretical approaches to designing bright bio-imaging molecules are among the most actively progressing ones. However, because of the system size and the required computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution and time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states, in the gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.

  13. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    International Nuclear Information System (INIS)

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Löffler, Frank; Schnetter, Erik

    2012-01-01

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  14. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    Energy Technology Data Exchange (ETDEWEB)

    Abdikamalov, Ernazar; Ott, Christian D.; O' Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  15. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.

  16. Order Patterns Networks (ORPAN) – a method to estimate time-evolving functional connectivity from multivariate time series

    Directory of Open Access Journals (Sweden)

    Stefan eSchinkel

    2012-11-01

    Full Text Available Complex networks provide an excellent framework for studying the function of human brain activity. Yet estimating functional networks from measured signals is not trivial, especially if the data are non-stationary and noisy, as is often the case with physiological recordings. In this article we propose a method that uses the local rank structure of the data to define functional links in terms of identical rank structures. The method yields temporal sequences of networks, which makes it possible to trace the evolution of the functional connectivity during the time course of the observation. We demonstrate the potential of this approach with model data as well as with experimental data from an electrophysiological study on language processing.
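    The rank-structure idea can be sketched directly: within each sliding window, reduce every channel to the rank order of its samples, and link two channels whenever their order patterns coincide. Repeating this for every window yields a temporal sequence of networks. A hedged toy sketch (not the authors' implementation; the data are hypothetical):

    ```python
    import numpy as np

    def order_pattern(w):
        # rank structure ("order pattern") of one window of samples
        return tuple(np.argsort(w))

    def orpan(data, width):
        # one adjacency matrix per window position: channels are linked
        # (entry 1) when their order patterns are identical
        n_ch, n_t = data.shape
        nets = []
        for t0 in range(n_t - width + 1):
            pats = [order_pattern(data[ch, t0:t0 + width])
                    for ch in range(n_ch)]
            adj = np.array([[int(pats[i] == pats[j]) for j in range(n_ch)]
                            for i in range(n_ch)])
            nets.append(adj)
        return nets

    # toy data: channels 0 and 1 share the same rank structure (both rise
    # monotonically), channel 2 falls and therefore differs
    data = np.array([[1.0, 2.0, 3.0, 4.0],
                     [0.1, 0.4, 0.6, 0.9],
                     [4.0, 3.0, 2.0, 1.0]])
    nets = orpan(data, width=3)
    ```

    Because the link criterion depends only on ranks, the measure is invariant to channel-wise monotonic rescaling, which is one reason rank-based methods cope well with noisy physiological amplitudes.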

  17. An anti-disturbing real time pose estimation method and system

    Science.gov (United States)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigated pose estimation when some of the known features, or even all of them, were invisible. First, known features were tracked to calculate the pose in the current and the next image. Second, some unknown but good features to track were automatically detected in the current and the next image. Third, those unknown features that lay on the rigid object and could be matched between the two images were retained. Because of the motion characteristics of the rigid object, the 3D information of those unknown features could be solved from the object's poses at the two moments and the features' 2D information in the two images, except in two cases: first, when both the camera and the object had no relative motion and camera parameters such as focal length and principal point did not change between the two moments; second, when there was no shared scene or no matched feature between the two images. Finally, because the features that were unknown at the first moment were now known, pose estimation could continue in the following images despite the initial loss of known features, by repeating the process described above. The robustness of pose estimation with different feature detection algorithms, such as Kanade-Lucas-Tomasi (KLT) features, the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), was compared, and the impact of different relative motions between the camera and the rigid object was discussed in this paper. Graphics Processing Unit (GPU) parallel computing was also used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU). 
Compared with other pose estimation methods, this new

  18. An integrative time-varying frequency detection and channel sounding method for dynamic plasma sheath

    Science.gov (United States)

    Shi, Lei; Yao, Bo; Zhao, Lei; Liu, Xiaotong; Yang, Min; Liu, Yanming

    2018-01-01

    The plasma sheath-surrounded hypersonic vehicle is a dynamic and time-varying medium, and it is almost impossible to calculate its time-varying physical parameters directly. In-flight detection of the degree of time variation is important for understanding the dynamic nature of the physical parameters and their effect on re-entry communication. In this paper, a time-varying frequency detection and channel sounding method based on a constant envelope zero autocorrelation (CAZAC) sequence is proposed to detect the time-varying property of the plasma sheath electron density and the wireless channel characteristics. The proposed method utilizes the CAZAC sequence, which has excellent autocorrelation and spreading gain characteristics, to realize dynamic time-varying detection and channel sounding under low signal-to-noise ratio in the plasma sheath environment. Theoretical simulation under a typical time-varying radio channel shows that the proposed method is capable of detecting time-variation frequencies up to 200 kHz and can trace the channel amplitude and phase in the time domain well at -10 dB. Experimental results obtained in an RF modulation discharge plasma device verified the time-variation detection ability in a practical dynamic plasma sheath. Meanwhile, nonlinear effects of the dynamic plasma sheath on the communication signal are observed through the channel sounding results.

  19. A time-minimizing hybrid method for fitting complex Moessbauer spectra

    International Nuclear Information System (INIS)

    Steiner, K.J.

    2000-07-01

    The process of fitting complex Moessbauer spectra is known to be time-consuming. The fitting process involves a mathematical model for the combined hyperfine interaction which can be solved only by an iteration method. The iteration method is very sensitive to its input parameters; in other words, with arbitrary input parameters it is most unlikely that the iteration method will converge. Up to now, a scientist has had to spend her/his time guessing appropriate input parameters for the iteration process. The idea is to replace this guessing phase by a genetic algorithm. The genetic algorithm starts with an initial population of arbitrary input parameters. Each parameter set is called an individual. The first step is to evaluate the fitness of all individuals. Afterwards the current population is recombined to form a new population. The process of recombination involves the successive application of the genetic operators selection, crossover, and mutation. These operators mimic the process of natural evolution, i.e. the concept of the survival of the fittest. Even though there is no formal proof that the genetic algorithm will eventually converge, there is an excellent chance that there will be a population with very good individuals after some generations. The hybrid method presented in the following combines a very modern version of a genetic algorithm with a conventional least-squares routine solving the combined interaction Hamiltonian, i.e. providing a physical solution with the original Moessbauer parameters from a minimum of input. (author)
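    The hybrid idea — a genetic algorithm supplies initial parameters, which a conventional least-squares routine then refines — can be sketched on a miniature stand-in problem. Here a single Lorentzian absorption line replaces the full combined-interaction Hamiltonian; the model, operators and all numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def model(p, x):
        # single Lorentzian line: height h, centre x0, half-width w
        h, x0, w = p
        return h / (1.0 + ((x - x0) / w) ** 2)

    x = np.linspace(-4.0, 4.0, 81)
    true_p = np.array([1.0, 0.7, 0.5])
    y = model(true_p, x) + rng.normal(0.0, 0.01, x.size)

    def sse(p):
        r = y - model(p, x)
        return float(r @ r)

    # --- genetic algorithm: selection, crossover, mutation ---
    pop = rng.uniform([0.1, -3.0, 0.1], [2.0, 3.0, 2.0], (40, 3))
    for _ in range(60):
        fit = np.array([sse(p) for p in pop])
        parents = pop[np.argsort(fit)[:20]]             # selection
        kids = []
        for _ in range(20):
            a, b = parents[rng.integers(20, size=2)]
            child = np.where(rng.random(3) < 0.5, a, b)  # uniform crossover
            child = child + rng.normal(0.0, 0.05, 3)     # mutation
            kids.append(child)
        pop = np.vstack([parents, kids])

    best = pop[np.argmin([sse(p) for p in pop])]

    # --- conventional stage: a few Gauss-Newton least-squares steps ---
    p = best.copy()
    for _ in range(10):
        h, x0, w = p
        u = (x - x0) / w
        d = 1.0 + u ** 2
        J = np.column_stack([1.0 / d,                    # df/dh
                             2 * h * u / (w * d ** 2),   # df/dx0
                             2 * h * u ** 2 / (w * d ** 2)])  # df/dw
        r = y - model(p, x)
        p = p + np.linalg.lstsq(J, r, rcond=None)[0]
    ```

    The genetic stage only needs to land inside the basin of attraction; the iterative least-squares stage then does the precise work, which is exactly the division of labour the abstract describes.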

  20. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups.

    Science.gov (United States)

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2016-02-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. The results for the PS cup are additionally influenced by the correction factor or credits for the alternative material accounting for the drop in PS quality, the waste treatment management (recycling rate, incineration rate, landfilling rate), and the

  1. Study on APD real time compensation methods of laser Detection system

    International Nuclear Information System (INIS)

    Feng Ying; Zhang He; Zhang Xiangjin; Liu Kun

    2011-01-01

    their operating principles. The constant false alarm rate compensation cannot detect a pulse signal that arrives randomly, so real-time performance cannot be achieved. The noise compensation meets the real-time requirement; it works better in environments where the background light is intense or changes sharply. The temperature compensation also meets the real-time requirement; it works better in environments where the temperature changes sharply. To address these problems, this paper proposes that different APD real-time compensation methods be adopted for different environments. The existing temperature compensation adjusts the output voltage by using a variable resistance to regulate the input voltage; its structure is complex and its real-time performance is poor. To remedy these defects, a real-time temperature compensation based on the switch on-off time of a switching power supply is designed. Its feasibility and operating stability were confirmed by board fabrication and experiment. Finally, comparison experiments between the real-time noise compensation and the real-time temperature compensation were carried out in an environment where the temperature is almost constant and the background light changes sharply from 5 lux to 150 lux. The results show that the real-time noise compensation works better there: the noise is reduced to a sixth of its original value. The same comparison was carried out in a darkroom where the background light is 5 lux and the temperature changes rapidly from -20 deg. C to 80 deg. C. The results show that the real-time temperature compensation works better there: the noise is reduced to a seventh of its original value. Moreover, these methods can be applied to other detection systems for weak photoelectric signals; they have high practical application value.

  2. Study on APD real time compensation methods of laser Detection system

    Energy Technology Data Exchange (ETDEWEB)

    Feng Ying; Zhang He; Zhang Xiangjin; Liu Kun, E-mail: fy_caimi@163.com [ZNDY of Ministerial Key Laboratory, Nanjing University of Science and Technology, Nanjing 210094 (China)

    2011-02-01

    by analyzing their operating principles. The constant false alarm rate compensation cannot detect a pulse signal that arrives randomly, so real-time performance cannot be achieved. The noise compensation meets the real-time requirement; it works better in environments where the background light is intense or changes sharply. The temperature compensation also meets the real-time requirement; it works better in environments where the temperature changes sharply. To address these problems, this paper proposes that different APD real-time compensation methods be adopted for different environments. The existing temperature compensation adjusts the output voltage by using a variable resistance to regulate the input voltage; its structure is complex and its real-time performance is poor. To remedy these defects, a real-time temperature compensation based on the switch on-off time of a switching power supply is designed. Its feasibility and operating stability were confirmed by board fabrication and experiment. Finally, comparison experiments between the real-time noise compensation and the real-time temperature compensation were carried out in an environment where the temperature is almost constant and the background light changes sharply from 5 lux to 150 lux. The results show that the real-time noise compensation works better there: the noise is reduced to a sixth of its original value. The same comparison was carried out in a darkroom where the background light is 5 lux and the temperature changes rapidly from -20 deg. C to 80 deg. C. The results show that the real-time temperature compensation works better there: the noise is reduced to a seventh of its original value. Moreover, these methods can be applied to other detection systems for weak photoelectric signals; they have high practical application value.

  3. Study on APD real time compensation methods of laser Detection system

    Science.gov (United States)

    Ying, Feng; He, Zhang; Xiangjin, Zhang; Kun, Liu

    2011-02-01

    their operating principles. The constant false alarm rate compensation cannot detect a pulse signal that arrives randomly, so real-time performance cannot be achieved. The noise compensation meets the real-time requirement; it works better in environments where the background light is intense or changes sharply. The temperature compensation also meets the real-time requirement; it works better in environments where the temperature changes sharply. To address these problems, this paper proposes that different APD real-time compensation methods be adopted for different environments. The existing temperature compensation adjusts the output voltage by using a variable resistance to regulate the input voltage; its structure is complex and its real-time performance is poor. To remedy these defects, a real-time temperature compensation based on the switch on-off time of a switching power supply is designed. Its feasibility and operating stability were confirmed by board fabrication and experiment. Finally, comparison experiments between the real-time noise compensation and the real-time temperature compensation were carried out in an environment where the temperature is almost constant and the background light changes sharply from 5 lux to 150 lux. The results show that the real-time noise compensation works better there: the noise is reduced to a sixth of its original value. The same comparison was carried out in a darkroom where the background light is 5 lux and the temperature changes rapidly from -20°C to 80°C. The results show that the real-time temperature compensation works better there: the noise is reduced to a seventh of its original value. Moreover, these methods can be applied to other detection systems for weak photoelectric signals; they have high practical application value.

  4. Three-Dimensional Passive-Source Reverse-Time Migration of Converted Waves: The Method

    Science.gov (United States)

    Li, Jiahang; Shen, Yang; Zhang, Wei

    2018-02-01

    At seismic discontinuities in the crust and mantle, part of the compressional wave energy converts to shear waves, and vice versa. These converted waves have been widely used in receiver function (RF) studies to image discontinuity structures in the Earth. While generally successful, the conventional RF method has its limitations and is suited mostly to flat or gently dipping structures. Among the efforts to overcome the limitations of the conventional RF method is the development of the wave-theory-based, passive-source reverse-time migration (PS-RTM) for imaging complex seismic discontinuities and scatterers. To date, PS-RTM has been implemented only in 2D in Cartesian coordinates for local problems and thus has limited applicability. In this paper, we introduce a 3D PS-RTM approach in spherical coordinates, which is better suited for regional and global problems. New computational procedures are developed to reduce artifacts and enhance migrated images, including back-propagating the main arrival and the coda containing the converted waves separately, using a modified Helmholtz decomposition operator to separate the P and S modes in the back-propagated wavefields, and applying an imaging condition that maintains a consistent polarity for a given velocity contrast. Our new approach allows us to use migration velocity models with realistic velocity discontinuities, improving the accuracy of the migrated images. We present several synthetic experiments to demonstrate the method, using regional and teleseismic sources. The results show that both regional and teleseismic sources can illuminate complex structures and that this method is well suited for imaging dipping interfaces and sharp lateral changes in discontinuity structures.
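    The standard Helmholtz decomposition behind P/S mode separation (the paper uses a modified operator) can be sketched on a synthetic 2D field: the divergence of the displacement isolates the compressional (P) part, and the curl isolates the shear (S) part. The field below is a hypothetical sum of a curl-free and a divergence-free component with known answers.

    ```python
    import numpy as np

    n = 64
    x = np.linspace(0.0, 2.0 * np.pi, n)
    X, Y = np.meshgrid(x, x, indexing="ij")

    # curl-free part grad(sin X) = (cos X, 0) plus
    # divergence-free part (cos Y, 0)
    ux = np.cos(X) + np.cos(Y)
    uy = np.zeros_like(ux)

    # central-difference derivatives of the displacement components
    dux_dx = np.gradient(ux, x, axis=0)
    dux_dy = np.gradient(ux, x, axis=1)
    duy_dx = np.gradient(uy, x, axis=0)
    duy_dy = np.gradient(uy, x, axis=1)

    P = dux_dx + duy_dy      # divergence: should recover -sin(X)
    S = duy_dx - dux_dy      # z-curl:     should recover  sin(Y)
    ```

    Applying the same two operators to the back-propagated wavefields is what lets a migration scheme image P-to-S conversions without cross-talk between the modes.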

  5. Transformation Matrix for Time Discretization Based on Tustin’s Method

    Directory of Open Access Journals (Sweden)

    Yiming Jiang

    2014-01-01

    Full Text Available This paper studies rules for the transformation of transfer functions through time discretization. A method of using a transformation matrix to realize the bilinear transform (also known as Tustin's method) is presented. The method can be described as a conversion between the coefficients of transfer functions, expressed as a transform by a certain matrix. For a polynomial of degree n, the corresponding transformation matrix of order n exists and is unique. Furthermore, the transformation matrix can be decomposed into an upper triangular matrix multiplied by a lower triangular matrix, and both have an obvious regular structure. The proposed method achieves a rapid bilinear transform that can be used in the automatic design of digital filters. The results of numerical simulation verify the correctness of the theoretical results. Moreover, the method can be extended to other similar problems. An example at the end illustrates this point.
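    The coefficient-matrix view of Tustin's method can be sketched directly. Substituting s = (2/T)(z-1)/(z+1) into a polynomial P(s) = a_0 + a_1 s + ... + a_n s^n and clearing the common factor (z+1)^n gives a z-polynomial whose coefficients are a matrix-vector product b = M a, where column k of M holds the coefficients of (2/T)^k (z-1)^k (z+1)^(n-k). This is a hedged reconstruction of the idea, not the paper's exact matrix.

    ```python
    import numpy as np

    def tustin_matrix(n, T):
        # column k: z-coefficients (ascending powers) of
        # (2/T)^k * (z - 1)^k * (z + 1)^(n - k)
        c = 2.0 / T
        M = np.zeros((n + 1, n + 1))
        for k in range(n + 1):
            poly = np.array([1.0])                      # descending powers of z
            for _ in range(k):
                poly = np.convolve(poly, [1.0, -1.0])   # multiply by (z - 1)
            for _ in range(n - k):
                poly = np.convolve(poly, [1.0, 1.0])    # multiply by (z + 1)
            M[:, k] = (c ** k) * poly[::-1]             # store ascending
        return M

    # example: H(s) = 1 / (s + 1) with sampling period T = 2
    n, T = 1, 2.0
    M = tustin_matrix(n, T)
    den_z = M @ np.array([1.0, 1.0])    # a_0 = 1, a_1 = 1
    num_z = M @ np.array([1.0, 0.0])    # numerator is the constant 1
    ```

    For this example the hand calculation gives H(z) = (1 + z) / (2 z), i.e. denominator coefficients [0, 2] and numerator coefficients [1, 1] in ascending powers of z, which the matrix product reproduces.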

  6. A Multivariate Time Series Method for Monte Carlo Reactor Analysis

    International Nuclear Information System (INIS)

    Taro Ueki

    2008-01-01

    A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for the acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit from the signal processing discipline with the neutron multiplication eigenvalue problem of the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous-energy Monte Carlo calculations. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional fission matrix method is demonstrated for the three-dimensional modeling of the initial core of a pressurized water reactor

  7. Assessment of the methods for determining net radiation at different time-scales of meteorological variables

    Directory of Open Access Journals (Sweden)

    Ni An

    2017-04-01

    Full Text Available When modeling the soil/atmosphere interaction, it is of paramount importance to determine the net radiation flux. There are two common calculation methods for this purpose. Method 1 relies on the use of air temperature, while Method 2 relies on the use of both air and soil temperatures. To date, there is no consensus on the application of these two methods. In this study, the half-hourly solar radiation data recorded at an experimental embankment are used to calculate the net radiation and long-wave radiation at different time-scales (half-hourly, hourly, and daily) using the two methods. The results show that, compared with Method 2, which has been widely adopted in agronomic, geotechnical and geo-environmental applications, Method 1 is more practical for its simplicity and accuracy at shorter time-scales. Moreover, at longer time-scales, daily for instance, smaller variations of net radiation and long-wave radiation are obtained, suggesting that detailed soil temperature variations cannot be captured. In other words, shorter time-scales are preferred in determining the net radiation flux.
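    The structural difference between the two methods can be illustrated with a hedged sketch: both balance absorbed short-wave radiation against net long-wave exchange, but Method 1 estimates the long-wave term from air temperature alone, while Method 2 uses Stefan-Boltzmann terms for both air and soil surface. The emissivities, the 10% net-loss factor and all input values below are hypothetical illustrations, not the paper's formulas.

    ```python
    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def net_radiation_m1(Rs, albedo, Ta, emissivity=0.95):
        # Method 1 (sketch): net long-wave loss from air temperature only,
        # here crudely taken as 10% of the black-body emission (assumed)
        Ln = 0.1 * emissivity * SIGMA * Ta ** 4
        return (1.0 - albedo) * Rs - Ln

    def net_radiation_m2(Rs, albedo, Ta, Ts, eps_a=0.85, eps_s=0.95):
        # Method 2 (sketch): explicit downward (air) and upward (soil
        # surface) long-wave terms
        L_down = eps_a * SIGMA * Ta ** 4
        L_up = eps_s * SIGMA * Ts ** 4
        return (1.0 - albedo) * Rs + L_down - L_up

    # example: Rs = 500 W/m^2, albedo 0.23, air and soil both at 20 deg C
    Rn1 = net_radiation_m1(500.0, 0.23, 293.15)
    Rn2 = net_radiation_m2(500.0, 0.23, 293.15, 293.15)
    ```

    The practical point the abstract makes follows from the signatures alone: Method 2 needs a measured soil surface temperature at every time step, which is exactly the extra input that shorter time-scales make burdensome.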

  8. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data based on the method of change-point detection. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the long sequence of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. A linear weight function is then used to fit the travel time sequence and to forecast travel times. The results show that the model has high accuracy in travel time forecasting.
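    The pipeline can be sketched end to end on toy data: difference the series, flag change points where the differenced series jumps, and fit a simple autoregressive model to the most recent homogeneous segment. This hedged sketch substitutes a threshold detector and a least-squares AR(1) fit for the paper's adaptive search and full ARIMA model; all data and thresholds are hypothetical.

    ```python
    import numpy as np

    def detect_change_points(y, thresh):
        # first-order differencing, then flag indices where the jump in
        # travel time exceeds the threshold
        d = np.diff(y)
        return [i + 1 for i, v in enumerate(d) if abs(v) > thresh]

    def ar1_forecast(seg, steps):
        # least-squares AR(1) coefficient on one homogeneous segment
        x, z = seg[:-1], seg[1:]
        phi = float(x @ z) / float(x @ x)
        out, last = [], seg[-1]
        for _ in range(steps):
            last = phi * last
            out.append(last)
        return np.array(out)

    # toy travel-time series: the mean jumps from 10 to 20 at index 50
    y = np.concatenate([np.full(50, 10.0), np.full(50, 20.0)])
    cps = detect_change_points(y, thresh=5.0)
    segment = y[cps[-1]:] if cps else y
    fc = ar1_forecast(segment, steps=3)
    ```

    Restricting the fit to the segment after the last detected change point is what keeps the forecast from being dragged toward the pre-jump regime.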

  9. Fuji apple storage time rapid determination method using Vis/NIR spectroscopy

    Science.gov (United States)

    Liu, Fuqi; Tang, Xuxiang

    2015-01-01

    A rapid method for determining the storage time of Fuji apples using visible/near-infrared (Vis/NIR) spectroscopy was studied in this paper. Vis/NIR diffuse reflection spectroscopy responses of samples were measured for 6 days. The spectroscopy data were processed by stochastic resonance (SR). Principal component analysis (PCA) was used to analyze the original spectroscopy data and the signal-to-noise ratio (SNR) eigenvalues. The results demonstrate that PCA could not fully discriminate Fuji apples using the original spectroscopy data, whereas the SNR spectrum clearly classified all apple samples, and PCA using the SNR spectrum successfully discriminated them. Therefore, Vis/NIR spectroscopy is effective for rapid discrimination of Fuji apple storage time. The proposed method is also promising for condition safety control and management in food and environmental laboratories. PMID:25874818

  10. How to measure time preferences: An experimental comparison of three methods

    Directory of Open Access Journals (Sweden)

    David J. Hardisty

    2013-05-01

    Full Text Available In two studies, time preferences for financial gains and losses at delays of up to 50 years were elicited using three different methods: matching, fixed-sequence choice titration, and a dynamic ``staircase'' choice method. Matching was found to create fewer demand characteristics and to produce better fits with the hyperbolic model of discounting. The choice-based measures better predicted real-world outcomes such as smoking and payment of credit card debt. No consistent advantages were found for the dynamic staircase method over fixed-sequence titration.
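The dynamic staircase elicitation compared in this abstract can be sketched as a bisection over the immediate amount, here played against a simulated respondent who discounts hyperbolically. The discount model, the `k` value, and the trial count are illustrative assumptions, not the study's protocol.

```python
def hyperbolic_value(amount, delay_years, k=0.1):
    """Present value under hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_years)

def staircase_indifference(delayed_amount, delay_years, k=0.1, trials=12):
    """Bisect the immediate amount toward the indifference point.

    Each trial offers a choice between an immediate amount and the
    delayed reward; the adjustment halves after every choice, as in a
    dynamic staircase procedure.
    """
    lo, hi = 0.0, delayed_amount
    for _ in range(trials):
        immediate = (lo + hi) / 2
        if immediate > hyperbolic_value(delayed_amount, delay_years, k):
            hi = immediate          # too generous: offer less next trial
        else:
            lo = immediate          # too stingy: offer more next trial
    return (lo + hi) / 2

# $100 in 10 years with k = 0.1 is worth 100 / (1 + 1) = $50 now.
point = staircase_indifference(100.0, 10.0)
```

Twelve halvings resolve the indifference point to within about $0.03 on a $100 scale, which is why staircase and titration procedures converge in few trials.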

  11. On the solution of high order stable time integration methods

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.

    2013-01-01

    Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108

  12. Neutron Scattering in Hydrogenous Moderators, Studied by Time Dependent Reaction Rate Method

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L G; Moeller, E; Purohit, S N

    1966-03-15

    The moderation and absorption of a neutron burst in water, poisoned with the non-1/v absorbers cadmium and gadolinium, has been followed in time by multigroup calculations, using scattering kernels for the proton gas and the Nelkin model. The time dependent reaction rate curves for each absorber display clear differences between the two models, and the separation between the curves does not depend much on the absorber concentration. An experimental method for the measurement of infinite-medium reaction rate curves in a limited geometry has been investigated. This method makes the measurement of the time dependent reaction rate generally useful for thermalization studies in a small geometry of a liquid hydrogenous moderator, provided that the experiment is coupled to programs for the calculation of scattering kernels and time dependent neutron spectra. Good agreement has been found between the reaction rate curve measured with cadmium in water and a calculated curve using the Haywood kernel.

  13. Decay-time measurements on 'pure' CsI scintillators prepared by different methods

    International Nuclear Information System (INIS)

    Keszthelyi-Landori, S.; Foeldvari, I.; Voszka, R.; Fodor, Z.; Seres, Z.

    1990-05-01

    The discovery of the fast decay component of pure CsI, and the varying results among measured samples, led to an investigation of the decay times of CsI crystals prepared by different methods. Carefully grown or prepared pure CsI behaves as a fast scintillator with a well or totally suppressed slow decay component. The estimated fast/slow or fast/total ratio is related to the preparation method and to the residual built-in contamination of the samples. The fast decay of pure CsI consists of two components with decay times of ≅1 ns and ≅10 ns, with an intensity ratio of 0.3 and 0.65 for gamma and alpha radiation, respectively. This new ≅1 ns component and the ≅0.8 fast/total ratio may play an important role in many applications where fast timing properties are needed, substituting for BaF2. (author) 18 refs.; 8 figs.; 3 tabs
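The reported fast/total ratio of ≅0.8 for two components with ≅1 ns and ≅10 ns decay times can be reproduced with a toy two-exponential model; the amplitudes below are illustrative values chosen to match that ratio, not fitted data from the paper.

```python
import numpy as np

def two_component_decay(t, a_fast, tau_fast, a_slow, tau_slow):
    """Scintillation intensity as a sum of two exponential components."""
    return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

t = np.linspace(0.0, 100.0, 10_001)                       # time in ns
i_t = two_component_decay(t, a_fast=1.0, tau_fast=1.0,    # ~1 ns component
                          a_slow=0.025, tau_slow=10.0)    # ~10 ns component

# The integrated light of an exponential component is amplitude * tau,
# so these amplitudes give a fast/total light ratio of about 0.8.
fast_light = 1.0 * 1.0
slow_light = 0.025 * 10.0
fast_total = fast_light / (fast_light + slow_light)
```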

  14. Innovative methods for calculation of freeway travel time using limited data : executive summary report.

    Science.gov (United States)

    2008-08-01

    ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...

  15. Detection and enumeration of Salmonella enteritidis in homemade ice cream associated with an outbreak: comparison of conventional and real-time PCR methods.

    Science.gov (United States)

    Seo, K H; Valentin-Bon, I E; Brackett, R E

    2006-03-01

    Salmonellosis caused by Salmonella Enteritidis (SE) is a significant cause of foodborne illnesses in the United States. Consumption of undercooked eggs and egg-containing products has been the primary risk factor for the disease. The importance of the bacterial enumeration technique has been enormously stressed because of the quantitative risk analysis of SE in shell eggs. Traditional enumeration methods mainly depend on slow and tedious most-probable-number (MPN) methods. Therefore, specific, sensitive, and rapid methods for SE quantitation are needed to collect sufficient data for risk assessment and food safety policy development. We previously developed a real-time quantitative PCR assay for the direct detection and enumeration of SE and, in this study, applied it to naturally contaminated ice cream samples with and without enrichment. The detection limit of the real-time PCR assay was determined with artificially inoculated ice cream. When applied to the direct detection and quantification of SE in ice cream, the real-time PCR assay was as sensitive as the conventional plate count method in frequency of detection. However, populations of SE derived from real-time quantitative PCR were approximately 1 log higher than provided by MPN and CFU values obtained by conventional culture methods. The detection and enumeration of SE in naturally contaminated ice cream can be completed in 3 h by this real-time PCR method, whereas the cultural enrichment method requires 5 to 7 days. A commercial immunoassay for the specific detection of SE was also included in the study. The real-time PCR assay proved to be a valuable tool that may be useful to the food industry in monitoring its processes to improve product quality and safety.

  16. Seismic response of three-dimensional topographies using a time-domain boundary element method

    Science.gov (United States)

    Janod, François; Coutant, Olivier

    2000-08-01

    We present a time-domain implementation of a boundary element method (BEM) to compute the diffraction of seismic waves by 3-D topographies overlying a homogeneous half-space. This implementation is chosen to overcome the memory limitations that arise when solving the boundary conditions with a frequency-domain approach. The formulation is flexible because it allows adaptive use of the time-translation properties of the Green's function: the boundary-condition solving scheme can be chosen as a trade-off between memory and CPU requirements. We explore here an explicit method of solution that requires little memory but a high CPU cost, in order to run on a workstation computer. We obtain good results with a discretization of four points per minimum wavelength for various topographies and plane wave excitations. This implementation can be used for two different aims: the time-domain approach allows an easier implementation of the BEM in hybrid methods (e.g. coupling with finite differences), and it also allows one to run simple BEM models with reasonable computer requirements. In order to keep computation times reasonable, we do not introduce any interface and we only consider homogeneous models. Results are shown for different configurations: an explosion near a flat free surface, a plane wave vertically incident on a Gaussian hill and on a hemispherical cavity, and an explosion point below the surface of a Gaussian hill. Comparison is made with other numerical methods, such as finite difference methods (FDMs) and spectral elements.

  17. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  18. Time Delay Systems Methods, Applications and New Trends

    CERN Document Server

    Vyhlídal, Tomáš; Niculescu, Silviu-Iulian; Pepe, Pierdomenico

    2012-01-01

    This volume is concerned with the control and dynamics of time delay systems, a research field with an at least six-decade-long history that has been especially active in the past two decades. In parallel to the new challenges emerging from engineering, physics, mathematics, and economics, the volume covers several new directions including topology-induced stability, large-scale interconnected systems, roles of networks in stability, and new trends in predictor-based control and consensus dynamics. The associated applications/problems are described by highly complex models, and require solving inverse problems as well as the development of new theories, mathematical tools, and numerically tractable algorithms for real-time control. The volume, which is targeted to present these developments in this rapidly evolving field, captures a careful selection of the most recent papers contributed by experts and collected under five parts: (i) Methodology: From Retarded to Neutral Continuous Delay Models, (ii) Systems, S...

  19. Forecasting with quantitative methods the impact of special events in time series

    OpenAIRE

    Nikolopoulos, Konstantinos

    2010-01-01

    Abstract Quantitative methods are very successful for producing baseline forecasts of time series; however, these models fail to forecast either the timing or the impact of special events such as promotions or strikes. In most cases the timing of such events is not known, so they are usually referred to as shocks (economics) or special events (forecasting). Sometimes the timing of such events is known a priori (i.e. a future promotion); but even then the impact of the forthcom...

  20. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of either one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods using gradient correction steps. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
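The benefit of adding a prediction step on top of gradient correction can be demonstrated on a toy time-varying quadratic. This sketch is not the authors' GTT algorithm: the objective, the oracle drift estimate used for prediction, and all step sizes are illustrative assumptions.

```python
import numpy as np

def track(h=0.1, steps=200, alpha=0.5, predict=True):
    """Track x*(t) = sin(t) for f(x, t) = 0.5 * (x - sin(t))**2.

    Correction: one gradient step on the current objective.
    Prediction: advance x along the drift of the optimizer, known
    exactly here (an oracle); the paper instead derives it from the
    dynamics of the optimality conditions.
    """
    x = 1.0                               # start away from the optimizer
    errors = []
    for k in range(steps):
        t = k * h
        x = x - alpha * (x - np.sin(t))           # correction step
        if predict:
            x = x + (np.sin(t + h) - np.sin(t))   # prediction step
        errors.append(abs(x - np.sin(t + h)))     # tracking error
    return float(np.mean(errors[steps // 2:]))    # steady-state error

err_pred = track(predict=True)
err_corr = track(predict=False)
```

With correction only, the tracking error settles at a floor proportional to the optimizer's drift per interval; adding the prediction step drives it far below that floor, mirroring the paper's $O(h)$ versus $O(h^2)$ distinction.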

  1. Comparison of time-series registration methods in breast dynamic infrared imaging

    Science.gov (United States)

    Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.

    2015-03-01

    Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, originating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for breast dynamic infrared imaging and to make it user-independent. We implemented and evaluated three different 3D time-series registration methods, (1) linear affine, (2) nonlinear B-spline, and (3) Demons, applied to 12 datasets of healthy breast thermal images. The results are evaluated through normalized mutual information, with average values of 0.70 ±0.03, 0.74 ±0.03 and 0.81 ±0.09 (out of 1) for affine, B-spline and Demons registration, respectively, as well as breast boundary overlap and the Jacobian determinant of the deformation field. Statistical analysis showed that the symmetric diffeomorphic Demons registration method outperforms the others, yielding the best breast alignment and non-negative Jacobian values, which guarantee image similarity and anatomical consistency of the transformation, owing to homologous forces that shorten the pixel geometric disparities across all frames. We propose Demons registration as an effective technique for time-series dynamic infrared registration, to stabilize the local temperature oscillation.

  2. Zirconium-based alloys, nuclear fuel rods and nuclear reactors including such alloys, and related methods

    Science.gov (United States)

    Mariani, Robert Dominick

    2014-09-09

    Zirconium-based metal alloy compositions comprise zirconium, a first additive in which the permeability of hydrogen decreases with increasing temperatures at least over a temperature range extending from 350°C to 750°C, and a second additive having a solubility in zirconium over the temperature range extending from 350°C to 750°C. At least one of a solubility of the first additive in the second additive over the temperature range extending from 350°C to 750°C and a solubility of the second additive in the first additive over the temperature range extending from 350°C to 750°C is higher than the solubility of the second additive in zirconium over the temperature range extending from 350°C to 750°C. Nuclear fuel rods include a cladding material comprising such metal alloy compositions, and nuclear reactors include such fuel rods. Methods are used to fabricate such zirconium-based metal alloy compositions.

  3. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    Full Text Available To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirement of the fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. The Allan variances of the signals within the time-variant window are then computed to obtain the DAVAR of the FOG signal and describe the dynamic characteristic of the time-varying FOG signal. Additionally, a performance evaluation index of the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using fixed window lengths, the proposed method identifies the change of the FOG signal with time effectively, and the performance evaluation index is enhanced by at least 30%.
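The sliding-window Allan variance underlying the DAVAR can be sketched as follows; the fixed window, cluster size, and synthetic gyro signal are illustrative assumptions (the paper's fuzzy controller would adapt the window length instead).

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of signal y at cluster size m."""
    n = len(y) // m
    means = y[:n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def dynamic_allan_variance(y, window, m):
    """Allan variance in a sliding window along the signal.

    A fixed window is shown; a fuzzy controller would instead adapt
    `window` from the signal's first and second derivatives.
    """
    half = window // 2
    return [allan_variance(y[c - half:c + half], m)
            for c in range(half, len(y) - half, half)]

rng = np.random.default_rng(1)
# White-noise gyro signal whose noise amplitude doubles halfway through.
sig = np.concatenate([rng.normal(0, 1, 4000), rng.normal(0, 2, 4000)])
davar = dynamic_allan_variance(sig, window=1000, m=10)
```

The windowed variance quadruples after the noise amplitude doubles, which is exactly the kind of time-varying behavior a single whole-record Allan variance would average away.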

  4. Time-Resolved Gravimetric Method To Assess Degassing of Roasted Coffee.

    Science.gov (United States)

    Smrke, Samo; Wellinger, Marco; Suzuki, Tomonori; Balsiger, Franz; Opitz, Sebastian E W; Yeretzian, Chahan

    2018-05-30

    During the roasting of coffee, thermally driven chemical reactions lead to the formation of gases, of which a large fraction is carbon dioxide (CO2). Part of these gases is released during roasting while part is retained inside the porous structure of the roasted beans and is steadily released during storage or more abruptly during grinding and extraction. The release of CO2 during the various phases from roasting to consumption is linked to many important properties and characteristics of coffee. It is an indicator for freshness, plays an important role in shelf life and in packaging, impacts the extraction process, is involved in crema formation, and may affect the sensory profile in the cup. Indeed, and in view of the multiple roles it plays, CO2 is a much underappreciated and little examined molecule in coffee. Here, we introduce an accurate, quantitative, and time-resolved method to measure the release kinetics of gases from whole beans and ground coffee using a gravimetric approach. Samples were placed in a container with a fitted capillary to allow gases to escape. The time-resolved release of gases was measured via the weight loss of the container filled with coffee. Long-term stability was achieved using a customized design of a semimicro balance, including periodic and automatic zero value measurements and calibration procedures. The novel gravimetric methodology was applied to a range of coffee samples: (i) whole Arabica beans and (ii) ground Arabica and Robusta, roasted to different roast degrees and at different speeds (roast air temperatures). Modeling the degassing rates allowed structural and mechanistic interpretation of the degassing process.
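The degassing kinetics recovered from such a weight-loss record can be illustrated with a simple first-order release model; the total releasable CO2 content and the rate constant below are assumed illustrative values, not the paper's measurements.

```python
import numpy as np

# First-order degassing model: cumulative weight loss approaches the
# total releasable CO2 content with a single rate constant.
t = np.linspace(0.0, 240.0, 241)        # storage time in hours
m_total = 8.0                            # assumed mg CO2 per g coffee
k = 1.0 / 48.0                           # assumed 48 h time constant
mass_loss = m_total * (1.0 - np.exp(-k * t))

# Instantaneous degassing rate recovered from the weight-loss record.
rate = np.gradient(mass_loss, t)
```

Differentiating the cumulative weight-loss curve yields the time-resolved degassing rate, which decays from its initial maximum as the bean's retained gas is depleted.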

  5. An Investigation of Pulse Transit Time as a Non-Invasive Blood Pressure Measurement Method

    International Nuclear Information System (INIS)

    McCarthy, B M; O'Flynn, B; Mathewson, A

    2011-01-01

    The objective of this paper is to examine the Pulse Transit Time (PTT) method as a non-invasive means to track blood pressure over a short period of time. PTT was measured as the time it takes for an ECG R-wave to propagate to the finger, where it is detected by a photoplethysmograph sensor. The PTT method is well suited to continuous 24-hour blood pressure measurement (BPM), since it is both cuff-less and non-invasive and therefore comfortable and unobtrusive for the patient. Other techniques, such as the oscillometric method, have been shown to be accurate and reliable but require a cuff for operation, making them unsuitable for long-term monitoring. Although a relatively new technique, the PTT method has been shown to track blood pressure changes accurately over short periods of time, after which re-calibration is necessary. The purpose of this study is to determine the accuracy of the method.
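The PTT computation described above, from an ECG R-peak to the following PPG peak, can be sketched on idealized synthetic signals; the waveform shapes, sampling rate, and the true transit time are all illustrative assumptions.

```python
import numpy as np

def peak_indices(x, height):
    """Indices of local maxima above `height` (simple peak picker)."""
    return [i for i in range(1, len(x) - 1)
            if x[i] > height and x[i] >= x[i - 1] and x[i] > x[i + 1]]

fs = 500.0                               # sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
beat_period = 0.8                        # 75 beats per minute
phase = (t % beat_period) / beat_period

# Idealized signals: a narrow R-wave spike and a broader, delayed PPG pulse.
ptt_true = 0.25                          # seconds from R-peak to PPG peak
ecg = np.exp(-((phase - 0.1) / 0.01) ** 2)
ppg = np.exp(-((phase - (0.1 + ptt_true / beat_period)) / 0.05) ** 2)

r_peaks = np.array(peak_indices(ecg, 0.9))
ppg_peaks = np.array(peak_indices(ppg, 0.9))

# Pair each R-peak with the first PPG peak that follows it.
ptt = [(p - r) / fs for r in r_peaks
       for p in ppg_peaks[ppg_peaks > r][:1]]
mean_ptt = float(np.mean(ptt))
```

On real recordings the R-peak and PPG landmark detection is the hard part; once the two event trains are available, the per-beat PTT is just their paired time difference.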

  6. Research on Control Method Based on Real-Time Operational Reliability Evaluation for Space Manipulator

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    2014-05-01

    Full Text Available A control method based on real-time operational reliability evaluation for a space manipulator is presented for improving the success rate of the manipulator during the execution of a task. In this paper, a method for quantitative analysis of operational reliability is given for a manipulator executing a specified task; a control model that regulates this quantitative operational reliability is then built. First, the control process is described using a state space equation. Second, process parameters are estimated in real time using a Bayesian method. Third, the expression of the system's real-time operational reliability is deduced based on the state space equation and the process parameters estimated using the Bayesian method. Finally, a control variable regulation strategy that considers the cost of control is given based on the theory of statistical process control. Simulations show that this method effectively improves the operational reliability of the space manipulator control system.

  7. Empirical method to measure stochasticity and multifractality in nonlinear time series

    Science.gov (United States)

    Lin, Chih-Hao; Chang, Chia-Seng; Li, Sai-Ping

    2013-12-01

    An empirical algorithm is used here to study the stochastic and multifractal nature of nonlinear time series. A parameter can be defined to quantitatively measure the deviation of a time series from a Wiener process, so that the stochasticity of different time series can be compared. The local volatility of the time series under study can be constructed using this algorithm, and the multifractal structure of the time series can be analyzed by using this local volatility. As an example, we employ this method to analyze financial time series from different stock markets. The result shows that while developed markets evolve very much like an Itô process, the emergent markets are far from efficient. Differences in the multifractal structures and leverage effects between developed and emergent markets are discussed. The algorithm used here can be applied in a similar fashion to study time series of other complex systems.
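One simple way to quantify deviation from a Wiener process, in the spirit of the abstract but not the authors' exact parameter, is the scaling exponent of increment variances: H ≈ 0.5 indicates diffusive, Wiener-like behavior, while mean reversion pushes H below 0.5.

```python
import numpy as np

def scaling_exponent(x, lags=(1, 2, 4, 8, 16, 32)):
    """Estimate H from Var[x(t+tau) - x(t)] ~ tau^(2H).

    H close to 0.5 indicates Wiener-like (diffusive) behavior; larger
    deviations from 0.5 signal persistence or mean reversion.
    """
    lags = np.asarray(lags)
    v = np.array([np.var(x[l:] - x[:-l]) for l in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2

rng = np.random.default_rng(2)
wiener = np.cumsum(rng.normal(size=20_000))      # discrete random walk

# Mean-reverting AR(1) series: increment variance saturates, so H < 0.5.
ar = np.zeros(20_000)
for i in range(1, len(ar)):
    ar[i] = 0.9 * ar[i - 1] + rng.normal()

h_wiener = scaling_exponent(wiener)
h_ar = scaling_exponent(ar)
```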

  8. Methods and tools to support real time risk-based flood forecasting - a UK pilot application

    Directory of Open Access Journals (Sweden)

    Brown Emma

    2016-01-01

    Full Text Available Flood managers have traditionally used probabilistic models to assess potential flood risk for strategic planning and non-operational applications. Computational restrictions on data volumes and simulation times have meant that information on the risk of flooding has not been available for operational flood forecasting purposes. In practice, however, the operational flood manager has probabilistic questions to answer that are not completely supported by the outputs of traditional, deterministic flood forecasting systems. In a collaborative approach, HR Wallingford and Deltares have developed methods, tools and techniques to extend existing flood forecasting systems with elements of strategic flood risk analysis, including probabilistic failure analysis, two-dimensional flood spreading simulation, and the analysis of flood impacts and consequences. This paper presents the results of applying these new operational flood risk management tools to a pilot catchment in the UK. It discusses the problems of performing probabilistic flood risk assessment in real time and how these have been addressed in this study. It also describes the challenges of communicating risk to operational flood managers and to the general public, and how these new methods and tools can provide risk-based supporting evidence to assist with this process.

  9. A time reversal damage imaging method for structure health monitoring using Lamb waves

    International Nuclear Information System (INIS)

    Zhang Hai-Yan; Cao Ya-Ping; Sun Xiu-Li; Chen Xian-Hua; Yu Jian-Bo

    2010-01-01

    This paper investigates a Lamb wave imaging method combined with time reversal for health monitoring of a metallic plate structure. The temporal focusing effect of time-reversed Lamb waves is investigated theoretically, demonstrating that the focusing effect is related to the frequency dependency of the time reversal operation. Numerical simulations are conducted to study the time reversal behaviour of Lamb wave modes under broadband and narrowband excitations. The results show that the reconstructed time-reversed wave exhibits close similarity to the reversed narrowband tone burst signal, validating the theoretical model. To enhance the similarity, the cycle number of the excitation signal should be increased. Experiments combined with a finite element model are then conducted to study the imaging method in the presence of damage, such as a hole, in the plate structure. In this work, the time reversal technique is used for the recompression of Lamb wave signals. Damage imaging results with time reversal using broadband and narrowband excitations are compared to those without time reversal. The comparison suggests that narrowband excitation combined with time reversal can locate and size structural damage more precisely, but the cycle number of the excitation signal should be chosen reasonably.
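The temporal refocusing produced by time reversal can be demonstrated with a toy nondispersive multipath channel (real Lamb wave modes are dispersive, which this sketch ignores); the tone burst and path delays are illustrative assumptions.

```python
import numpy as np

def propagate(signal, impulse_response):
    """Model the plate as an LTI multipath channel (a convolution)."""
    return np.convolve(signal, impulse_response)[:len(signal)]

n = 2000
# Narrowband excitation: a 5-cycle Hann-windowed tone burst.
m = 200
burst = np.zeros(n)
burst[:m] = np.sin(2 * np.pi * 5 * np.arange(m) / m) * np.hanning(m)

# Multipath 'plate': several delayed, attenuated arrivals.
h = np.zeros(800)
for delay, amp in [(100, 1.0), (340, 0.6), (610, 0.35)]:
    h[delay] = amp

received = propagate(burst, h)
# Time-reverse the received signal and send it back through the channel:
# the arrivals re-align and the energy refocuses into a single peak.
refocused = propagate(received[::-1], h)

peak = np.argmax(np.abs(refocused))
focus_quality = np.abs(refocused[peak]) / np.mean(np.abs(refocused))
```

The second pass through the same channel correlates the reversed multipath arrivals with themselves, recompressing the spread-out signal into a sharp focus, which is the effect the imaging method exploits.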

  10. Comparison of LMFBR piping response obtained using response spectrum and time history methods

    International Nuclear Information System (INIS)

    Hulbert, G.M.

    1981-04-01

    The dynamic response to a seismic event is calculated for a piping system using a response spectrum analysis method and two time history analysis methods. The results from the analytical methods are compared to identify causes for the differences between the sets of analytical results. Comparative methods are also presented which help to gain confidence in the accuracy of the analytical methods in predicting piping system structure response during seismic events

  11. Full-waveform detection of non-impulsive seismic events based on time-reversal methods

    Science.gov (United States)

    Solano, Ericka Alinne; Hjörleifsdóttir, Vala; Liu, Qinya

    2017-12-01

    We present a full-waveform detection method for non-impulsive seismic events, based on time-reversal principles. We use the strain Green's tensor as a matched filter, correlating it with continuous observed seismograms, to detect non-impulsive seismic events. We show that this is mathematically equivalent to an adjoint method for detecting earthquakes. We define the detection function, a scalar-valued function that depends on the stacked correlations for a group of stations. Event detections are given by the times at which the amplitude of the detection function exceeds a given value relative to the noise level. The method can make use of the whole seismic waveform or any combination of time windows with different filters. It is expected to have an advantage over traditional detection methods for events that do not produce energetic and impulsive P waves, for example glacial events, landslides, volcanic events and transform-fault earthquakes, provided the velocity structure along the path is relatively well known. Furthermore, the method has advantages over empirical Green's function template matching methods, as it does not depend on records from previously detected events, and is therefore not limited to events occurring in similar regions and with similar focal mechanisms as those events. The method is not specific to any particular way of calculating the synthetic seismograms, and therefore complicated structural models can be used. This is particularly beneficial for intermediate-size events that are registered on regional networks, for which the effect of lateral structure on the waveforms can be significant. To demonstrate the feasibility of the method, we apply it to two different areas located along the mid-oceanic ridge system west of Mexico where non-impulsive events have been reported. The first study area is between the Clipperton and Siqueiros transform faults (9°N), during the time of two earthquake swarms, occurring in March 2012 and May
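The detection-function idea, correlating a known waveform template against a continuous stream and thresholding relative to the noise level, can be sketched as follows; the synthetic wavelet, amplitudes, and threshold rule are illustrative assumptions rather than the strain-Green's-tensor filter of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
data = rng.normal(0.0, 1.0, n)

# Synthetic non-impulsive 'template': a slow, emergent damped wavelet.
k = np.arange(400)
tpl = np.exp(-k / 120.0) * np.sin(2 * np.pi * k / 80.0)

# Bury two weak copies of the template in the noise.
onsets = [12_000, 31_500]
for o in onsets:
    data[o:o + len(tpl)] += 1.5 * tpl

# Detection function: correlation of the template with the stream,
# normalized so that pure-noise samples have unit variance.
corr = np.correlate(data, tpl, mode="valid") / np.linalg.norm(tpl)
threshold = 5.0 * np.median(np.abs(corr)) / 0.6745   # robust noise scale
detections = np.flatnonzero(corr > threshold)
```

The matched filter integrates the event's energy over its whole duration, so emergent signals invisible to an amplitude trigger still stand well above the noise-derived threshold.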

  12. A systematic method for constructing time discretizations of integrable lattice systems: local equations of motion

    International Nuclear Information System (INIS)

    Tsuchida, Takayuki

    2010-01-01

    We propose a new method for discretizing the time variable in integrable lattice systems while maintaining the locality of the equations of motion. The method is based on the zero-curvature (Lax pair) representation and the lowest-order 'conservation laws'. In contrast to the pioneering work of Ablowitz and Ladik, our method allows the auxiliary dependent variables appearing in the stage of time discretization to be expressed locally in terms of the original dependent variables. The time-discretized lattice systems have the same set of conserved quantities and the same structures of the solutions as the continuous-time lattice systems; only the time evolution of the parameters in the solutions that correspond to the angle variables is discretized. The effectiveness of our method is illustrated using examples such as the Toda lattice, the Volterra lattice, the modified Volterra lattice, the Ablowitz-Ladik lattice (an integrable semi-discrete nonlinear Schroedinger system) and the lattice Heisenberg ferromagnet model. For the modified Volterra lattice, we also present its ultradiscrete analogue.

  13. A Timed Colored Petri Net Simulation-Based Self-Adaptive Collaboration Method for Production-Logistics Systems

    Directory of Open Access Journals (Sweden)

    Zhengang Guo

    2017-03-01

    Full Text Available Complex and customized manufacturing requires a high level of collaboration between production and logistics in a flexible production system. With the widespread use of Internet of Things technology in manufacturing, a great amount of real-time and multi-source manufacturing data and logistics data is created that can be used to support production-logistics collaboration. To address these challenges, this paper proposes a timed colored Petri net simulation-based self-adaptive collaboration method for Internet of Things-enabled production-logistics systems. The method combines the schedule of token sequences in the timed colored Petri net with the real-time status of key production and logistics equipment. The key equipment is made 'smart' to actively publish or request logistics tasks. An integrated framework based on a cloud service platform is introduced to provide the basis for self-adaptive collaboration of production-logistics systems. A simulation experiment is conducted using colored Petri nets (CPN Tools to validate the performance and applicability of the proposed method. Computational experiments demonstrate that the proposed method outperforms the event-driven method in terms of reduced waiting time, makespan, and electricity consumption. The proposed method is also applicable to other manufacturing systems for implementing production-logistics collaboration.

  14. A real-time neutron-gamma discriminator based on the support vector machine method for the time-of-flight neutron spectrometer

    Science.gov (United States)

    Wei, ZHANG; Tongyu, WU; Bowen, ZHENG; Shiping, LI; Yipo, ZHANG; Zejie, YIN

    2018-04-01

    A new neutron-gamma discriminator based on the support vector machine (SVM) method is proposed to improve the performance of the time-of-flight neutron spectrometer. The neutron detector is an EJ-299-33 plastic scintillator with pulse-shape discrimination (PSD) properties. The SVM algorithm is implemented in a field programmable gate array (FPGA) to carry out real-time sifting of neutrons in neutron-gamma mixed radiation fields. This study compares the discrimination ability of the pulse gradient analysis method and the SVM method. The results show that the SVM discriminator provides a better discrimination accuracy of 99.1%. The accuracy and performance of the FPGA-based SVM discriminator have been evaluated in experiments, achieving a figure of merit of 1.30.
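A minimal software sketch of SVM-based pulse-shape discrimination is shown below, assuming synthetic PSD features (total charge and tail-to-total charge ratio) and a linear soft-margin SVM trained by sub-gradient descent on the hinge loss; the FPGA implementation and the detector's real feature distributions are outside this toy example.

```python
import numpy as np

rng = np.random.default_rng(5)

def pulses(n, tail_frac, spread):
    """Synthetic PSD features: (total charge, tail-to-total charge ratio)."""
    total = rng.uniform(0.5, 2.0, n)
    ratio = rng.normal(tail_frac, spread, n)
    return np.column_stack([total, ratio])

# Neutrons deposit a larger fraction of charge in the pulse tail than gammas.
X = np.vstack([pulses(500, 0.30, 0.03), pulses(500, 0.18, 0.03)])
y = np.hstack([np.ones(500), -np.ones(500)])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize features

# Linear soft-margin SVM trained by sub-gradient descent on the hinge loss.
w, b, lr, lam = np.zeros(2), 0.0, 0.01, 0.01
for epoch in range(50):
    for i in rng.permutation(len(y)):
        w *= 1 - lr * lam                      # weight decay (regularizer)
        if y[i] * (Xs[i] @ w + b) < 1:         # margin violation
            w += lr * y[i] * Xs[i]
            b += lr * y[i]

accuracy = np.mean(np.sign(Xs @ w + b) == y)
```

A trained linear decision function reduces to a dot product and a comparison per pulse, which is what makes SVM-style discriminators attractive for real-time FPGA pipelines.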

  15. A method to increase optical timing spectra measurement rates using a multi-hit TDC

    International Nuclear Information System (INIS)

    Moses, W.W.

    1993-01-01

    A method is presented for using a modern time-to-digital converter (TDC) to increase the data collection rate for optical timing measurements such as scintillator decay times. It extends the conventional delayed-coincidence method, where a synchronization signal ''starts'' a TDC and a photomultiplier tube (PMT) sampling the optical signal ''stops'' the TDC. Data acquisition rates are low with the conventional method because ε, the light collection efficiency of the ''stop'' PMT, is artificially limited to ε ∼ 0.01 photons per ''start'' signal to reduce the probability of detecting more than one photon during the sampling period. With conventional TDCs, these multiple-photon events bias the time spectrum, since only the first ''stop'' pulse is digitized. The new method uses a modern TDC to detect whether additional ''stop'' signals occur during the sampling period and actively rejects these multiple-photon events. This allows ε to be increased to almost 1 photon per ''start'' signal, increasing the data acquisition rate by nearly a factor of 20. Multi-hit TDCs can digitize the arrival times of n ''stop'' signals per ''start'' signal, which allows ε to be increased to ∼3n/4. While overlap of the ''stop'' signals prevents the full gain in data collection rate from being realized, significant improvements are possible for most applications. (orig.)
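The first-photon bias and its removal can be demonstrated with a toy Monte Carlo: photons per ''start'' are Poisson-distributed, arrival times follow an exponential decay, and events with more than one detected photon are rejected. The decay constant and rates are illustrative, not values from the paper:

```python
# Sketch: first-photon bias in delayed-coincidence timing, and its removal by
# rejecting multi-photon events, for a toy exponential decay (tau = 10 ns).
import numpy as np

rng = np.random.default_rng(1)
tau, mean_photons, n_starts = 10.0, 1.0, 200_000
counts = rng.poisson(mean_photons, n_starts)     # detected photons per 'start'

first_hits, clean_hits = [], []
for k in counts[counts > 0]:
    t = np.sort(rng.exponential(tau, k))         # photon arrival times
    first_hits.append(t[0])                      # conventional TDC: first 'stop' only
    if k == 1:
        clean_hits.append(t[0])                  # multi-hit TDC: reject k > 1

print(f"biased mean:   {np.mean(first_hits):.2f} ns")   # pulled below tau
print(f"rejected mean: {np.mean(clean_hits):.2f} ns")   # consistent with tau
```

Keeping only single-photon events recovers the true decay statistics while the mean photon rate, and hence the event rate, stays near 1 per ''start''.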

  16. The dynamic method for time-of-flight measurement of thermal neutron spectra from pulsed sources

    International Nuclear Information System (INIS)

    Pepyolyshev, Yu.N.; Chuklyaev, S.V.; Tulaev, A.B.; Bobrakov, V.F.

    1995-01-01

    A time-of-flight method for measurement of thermal neutron spectra in pulsed neutron sources with an efficiency more than 10^5 times higher than the standard method is described. The main problems associated with the electric-current technique for time-of-flight spectrum measurement are examined. Methodological errors, problems of special neutron detector design, and other questions are discussed. Some experimental results for spectra from the surfaces of water and solid methane moderators obtained at the IBR-2 pulsed reactor (Dubna, Russia) are presented. (orig.)

  17. Correlation Coefficients Between Different Methods of Expressing Bacterial Quantification Using Real Time PCR

    Directory of Open Access Journals (Sweden)

    Bahman Navidshad

    2012-02-01

    Full Text Available The application of conventional culture-dependent assays to quantify bacterial populations is limited by dependence on the inconsistent success of the different culture steps involved. In addition, some bacteria can be pathogenic or a source of endotoxins and pose a health risk to researchers. Bacterial quantification based on the real-time PCR method can overcome these problems. However, quantification by this approach is commonly expressed in absolute quantities even though the composition of samples (like those of digesta) can vary widely; thus, the final results may be affected if the samples are not properly homogenized, especially when multiple samples are pooled before DNA extraction. The objective of this study was to determine the correlation coefficients between four different methods of expressing the output data of real-time PCR-based bacterial quantification: (i) the common absolute method, expressed as the cell number of specific bacteria per gram of digesta; (ii) the Livak and Schmittgen ΔΔCt method; (iii) the Pfaffl equation; and (iv) a simple relative method based on the ratio of the cell number of specific bacteria to the total bacterial cells. Because the total bacterial population affects the results obtained with the ΔCt-based methods (ΔΔCt and Pfaffl), these methods lack the consistency to be used as valid and reliable methods in real-time PCR-based bacterial quantification studies. On the other hand, because of the variable composition of digesta samples, a simple ratio of the cell number of specific bacteria to the corresponding total bacterial cells of the same sample can be a more accurate quantification method.
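The three relative-quantification formulas compared above are standard and compact enough to state directly. The Ct values, efficiencies, and cell counts in the example call are hypothetical, chosen only to show the calculations:

```python
# The three relative-quantification formulas compared in the study,
# evaluated on made-up example values.

def ddct_ratio(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Livak & Schmittgen ΔΔCt method: assumes ~100% efficiency (factor 2 per cycle)."""
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_control - ct_ref_control)
    return 2.0 ** (-ddct)

def pfaffl_ratio(e_target, e_ref, dct_target, dct_ref):
    """Pfaffl equation: corrects for measured amplification efficiencies E (1 < E <= 2)."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

def abundance_ratio(specific_cells, total_cells):
    """Simple relative method: specific bacteria as a fraction of total bacterial cells."""
    return specific_cells / total_cells

# Hypothetical example values:
print(ddct_ratio(24.0, 18.0, 26.5, 18.5))   # fold change, ΔΔCt method
print(pfaffl_ratio(1.95, 1.90, 2.5, 0.5))   # fold change, efficiency-corrected
print(abundance_ratio(3.2e7, 4.0e9))        # fraction of the total population
```

The last function is the ratio-based method the study recommends: because both the specific count and the total count come from the same DNA extract, sample-to-sample variation in digesta composition largely cancels.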

  18. 3D airborne EM modeling based on the spectral-element time-domain (SETD) method

    Science.gov (United States)

    Cao, X.; Yin, C.; Huang, X.; Liu, Y.; Zhang, B., Sr.; Cai, J.; Liu, L.

    2017-12-01

    In the field of 3D airborne electromagnetic (AEM) modeling, both the finite-difference time-domain (FDTD) method and the finite-element time-domain (FETD) method have limitations: FDTD depends heavily on the grids and time steps, while FETD requires a large number of grid cells for complex structures. We propose a spectral-element time-domain (SETD) method based on GLL interpolation basis functions for spatial discretization and the Backward Euler (BE) technique for time discretization. The spectral-element method is based on a weighted-residual technique with polynomials as vector basis functions; it can deliver accurate results by increasing the order of the polynomials while suppressing spurious solutions. The BE method is a stable time-discretization technique that places no limitation on the time step and guarantees higher accuracy during the iteration process. To minimize the number of non-zero entries in the sparse matrix and obtain a diagonal mass matrix, we apply a reduced-order integration technique. A direct solver, whose speed is independent of the condition number, is adopted to quickly solve the large-scale sparse linear system. To check the accuracy of our SETD algorithm, we compare our results with semi-analytical solutions for a three-layered earth model over the time range 10^-6 to 10^-2 s for different physical meshes and SE orders. The results show that the relative errors for the magnetic field B and its time derivative dB/dt are both around 3-5%. Further, we calculate AEM responses for an AEM system over a 3D earth model in Figure 1. From numerical experiments for both the 1D and 3D models, we draw the following conclusions: 1) SETD can deliver accurate results for both dB/dt and B; 2) increasing the SE order improves the modeling accuracy for early to middle time channels, when the EM field diffuses fast, so the high-order SE can model the detailed variation; 3) at very late time channels, increasing the SE order yields little improvement in modeling accuracy, but the time interval plays
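The unconditional stability of Backward Euler that the abstract relies on can be seen on a toy stiff diffusion system du/dt = -Au, where an explicit step of the same size would diverge. The matrix below is a 1-D Laplacian stand-in, not an actual SETD system matrix:

```python
# Sketch: Backward Euler time stepping (I + dt*A) u_{k+1} = u_k for the stiff
# system du/dt = -A u, with A a toy 1-D Laplacian. Illustrative values only.
import numpy as np

n, dt, steps = 50, 1e-3, 100
h = 1.0 / n
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1)) / h**2     # eigenvalues up to ~4/h^2 = 1e4

u = np.ones(n)                    # initial field
M = np.eye(n) + dt * A            # implicit system matrix
for _ in range(steps):
    u = np.linalg.solve(M, u)     # BE step: stable for any dt (dt*lambda_max = 10 here)

print(f"max |u| after {steps} steps: {np.abs(u).max():.3e}")
```

With dt·λ_max = 10, forward Euler would amplify the stiffest modes by a factor of ~9 per step; Backward Euler damps every mode, which is why the abstract can choose the time step by accuracy alone.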

  19. A method for real-time three-dimensional vector velocity imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Nikolov, Svetoslav

    2003-01-01

    The paper presents an approach for making real-time three-dimensional vector flow imaging. Synthetic aperture data acquisition is used, and the data are beamformed along the flow direction to yield signals usable for flow estimation. The signals are cross-correlated to determine the shift in position...... are done using 16 × 16 = 256 elements at a time and the received signals from the same elements are sampled. Access to the individual elements is done through 16-to-1 multiplexing, so that only a 256-channel transmit and receive system is needed. The method has been investigated using Field II......
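The cross-correlation step — finding the inter-emission displacement of the speckle pattern along the beamformed flow direction — can be sketched on synthetic signals. The signal model and shift value below are illustrative, not data from the paper:

```python
# Sketch: estimating scatterer displacement between two emissions by locating
# the peak of the cross-correlation of two beamformed lines. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n, true_shift = 256, 7                    # displacement in samples between emissions
line1 = rng.standard_normal(n)            # speckle signal along the flow direction
line2 = np.roll(line1, true_shift)        # same speckle, displaced by the flow

lags = np.arange(-n + 1, n)               # lag axis for 'full' correlation
xcorr = np.correlate(line2, line1, mode="full")
est_shift = lags[np.argmax(xcorr)]        # peak lag = displacement estimate
print(f"estimated shift: {est_shift} samples")
```

Dividing the estimated spatial shift by the time between emissions then yields the velocity magnitude along the beamformed direction.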

  20. Single photon imaging and timing array sensor apparatus and method

    Science.gov (United States)

    Smith, R. Clayton

    2003-06-24

    An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus comprises a photon source that emits photons at a target and a photon receiver that receives each photon reflected from the target. The photon receiver determines the reflection time of the photon and the arrival position of the photon on the receiver. An analyzer communicatively coupled to the photon receiver generates a three-dimensional image of the object based upon the reflection time and the arrival position.
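The underlying geometry is that the round-trip time gives range (r = c·t/2) and the arrival position on the sensor gives direction. The pinhole-style mapping and all numerical values below are illustrative assumptions, not details from the patent:

```python
# Sketch: converting a photon's round-trip time and sensor arrival position
# into a 3-D point via a simple pinhole direction model. Illustrative only.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_point(round_trip_s, px, py, focal_m=0.05):
    """Map a round-trip time and sensor hit (px, py, in metres) to an (x, y, z) point."""
    r = C * round_trip_s / 2.0                           # one-way range to the target
    norm = math.sqrt(px * px + py * py + focal_m * focal_m)
    scale = r / norm                                     # unit direction scaled by range
    return (px * scale, py * scale, focal_m * scale)

x, y, z = tof_point(round_trip_s=66.7e-9, px=0.0, py=0.0)
print(round(z, 2))   # an on-axis return roughly 10 m away
```

Repeating this for many emitted photons across the sensor builds up the three-dimensional image the analyzer produces.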