Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.
2016-01-01
Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine if sputum is an acceptable alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yielded similar results regardless of testing method (Fisher exact p-value = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine when BAL submission is not practical from patients being tested for Legionella. PMID:27630979
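The four reported measures follow directly from a 2×2 table of test results against the reference method. A minimal sketch in Python (the counts below are illustrative placeholders, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2-table diagnostic performance measures."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among reference-positives
        "specificity": tn / (tn + fp),  # true negatives among reference-negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only (not the study's raw data):
m = diagnostic_metrics(tp=60, fp=34, fn=9, tn=600)
```

Note how a low prevalence drives PPV down even when sensitivity and specificity are high, which is why the abstract's PPV (63.8%) is so much lower than its NPV (98.5%).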
Dechartres, Agnes; Trinquart, Ludovic; Atal, Ignacio; Moher, David; Dickersin, Kay; Boutron, Isabelle; Perrodeau, Elodie; Altman, Douglas G; Ravaud, Philippe
2017-06-08
Objective To examine how poor reporting and inadequate methods for key methodological features in randomised controlled trials (RCTs) have changed over the past three decades. Design Mapping of trials included in Cochrane reviews. Data sources Data from RCTs included in all Cochrane reviews published between March 2011 and September 2014 reporting an evaluation of the Cochrane risk of bias items: sequence generation, allocation concealment, blinding, and incomplete outcome data. Data extraction For each RCT, we extracted consensus on risk of bias made by the review authors and identified the primary reference to extract publication year and journal. We matched journal names with Journal Citation Reports to get 2014 impact factors. Main outcome measures We considered the proportions of trials rated by review authors at unclear and high risk of bias as surrogates for poor reporting and inadequate methods, respectively. Results We analysed 20 920 RCTs (from 2001 reviews) published in 3136 journals. The proportion of trials with unclear risk of bias was 48.7% for sequence generation and 57.5% for allocation concealment; the proportion of those with high risk of bias was 4.0% and 7.2%, respectively. For blinding and incomplete outcome data, 30.6% and 24.7% of trials were at unclear risk and 33.1% and 17.1% were at high risk, respectively. Higher journal impact factor was associated with a lower proportion of trials at unclear or high risk of bias. The proportion of trials at unclear risk of bias decreased over time, especially for sequence generation, which fell from 69.1% in 1986-1990 to 31.2% in 2011-2014, and for allocation concealment (70.1% to 44.6%). After excluding trials at unclear risk of bias, use of inadequate methods also decreased over time: from 14.8% to 4.6% for sequence generation and from 32.7% to 11.6% for allocation concealment. Conclusions Poor reporting and inadequate methods have decreased over time, especially for sequence generation.
Inferring time derivatives including cell growth rates using Gaussian processes
Swain, Peter S.; Stevenson, Keiran; Leary, Allen; Montano-Gutierrez, Luis F.; Clark, Ivan B. N.; Vogel, Jackie; Pilizota, Teuta
2016-12-01
Often the time derivative of a measured variable is of as much interest as the variable itself. For a growing population of biological cells, for example, the population's growth rate is typically more important than its size. Here we introduce a non-parametric method to infer first and second time derivatives as a function of time from time-series data. Our approach is based on Gaussian processes and applies to a wide range of data. In tests, the method is at least as accurate as others, but has several advantages: it estimates errors both in the inference and in any summary statistics, such as lag times, and allows interpolation with the corresponding error estimation. As illustrations, we infer growth rates of microbial cells, the rate of assembly of an amyloid fibril and both the speed and acceleration of two separating spindle pole bodies. Our algorithm should thus be broadly applicable.
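A minimal sketch of the underlying idea, assuming a squared-exponential kernel and using the fact that the posterior mean of a Gaussian process can be differentiated analytically (this illustrates the general approach, not the authors' released algorithm; error-bar estimation is omitted):

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix k(a_i, b_j)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_derivative(x, y, xs, ell=1.0, noise=1e-4):
    """Posterior means of f and f' at points xs, given noisy data (x, y)."""
    K = rbf(x, x, ell) + noise * np.eye(len(x))
    alpha = np.linalg.solve(K, y)
    Ks = rbf(xs, x, ell)
    mean_f = Ks @ alpha
    # d/dxs of the kernel: -(xs - x)/ell^2 * k(xs, x)
    dKs = -((xs[:, None] - x[None, :]) / ell**2) * Ks
    mean_df = dKs @ alpha
    return mean_f, mean_df

# Noisy samples of sin(t); the inferred derivative should track cos(t).
t = np.linspace(0, 2 * np.pi, 40)
y = np.sin(t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
f, df = gp_derivative(t, y, t)
```

Because differentiation is a linear operator, the derivative of the posterior mean needs no finite differencing, which is what makes the GP route attractive for noisy growth curves.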
Van Norman, Staci A.; Aston, Victoria J.; Weimer, Alan W.
2017-05-09
Structures, catalysts, and reactors suitable for use in a variety of applications, including gas-to-liquid and coal-to-liquid processes, and methods of forming the structures, catalysts, and reactors are disclosed. The catalyst material can be deposited onto an inner wall of a microtubular reactor and/or onto porous tungsten support structures using atomic layer deposition techniques.
Microfluidic devices and methods including porous polymer monoliths
Hatch, Anson V; Sommer, Gregory J; Singh, Anup K; Wang, Ying-Chih; Abhyankar, Vinay V
2014-04-22
Microfluidic devices and methods including porous polymer monoliths are described. Polymerization techniques may be used to generate porous polymer monoliths having pores defined by a liquid component of a fluid mixture. The fluid mixture may contain iniferters and the resulting porous polymer monolith may include surfaces terminated with iniferter species. Capture molecules may then be grafted to the monolith pores.
Methods of producing adsorption media including a metal oxide
Mann, Nicholas R; Tranter, Troy J
2014-03-04
Methods of producing a metal oxide are disclosed. The method comprises dissolving a metal salt in a reaction solvent to form a metal salt/reaction solvent solution. The metal salt is converted to a metal oxide and a caustic solution is added to the metal oxide/reaction solvent solution to adjust the pH of the metal oxide/reaction solvent solution to less than approximately 7.0. The metal oxide is precipitated and recovered. A method of producing adsorption media including the metal oxide is also disclosed, as is a precursor of an active component including particles of a metal oxide.
Unsteady panel method for complex configurations including wake modeling
CSIR Research Space (South Africa)
Van Zyl, Lourens H
2008-01-01
Full Text Available Implementations of the DLM are, however, not very versatile in terms of the geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...
Initiation devices, initiation systems including initiation devices and related methods
Energy Technology Data Exchange (ETDEWEB)
Daniels, Michael A.; Condit, Reston A.; Rasmussen, Nikki; Wallace, Ronald S.
2018-04-10
Initiation devices may include at least one substrate, an initiation element positioned on a first side of the at least one substrate, and a spark gap electrically coupled to the initiation element and positioned on a second side of the at least one substrate. Initiation devices may include a plurality of substrates where at least one substrate of the plurality of substrates is electrically connected to at least one adjacent substrate of the plurality of substrates with at least one via extending through the at least one substrate. Initiation systems may include such initiation devices. Methods of igniting energetic materials include passing a current through a spark gap formed on at least one substrate of the initiation device, passing the current through at least one via formed through the at least one substrate, and passing the current through an explosive bridge wire of the initiation device.
Time delayed Ensemble Nudging Method
An, Zhe; Abarbanel, Henry
Optimal nudging methods based on time-delayed embedding theory have shown potential for analysis and data assimilation in the previous literature. To extend their application and promote practical implementation, a new nudging assimilation method based on the time-delayed embedding space is presented, and its connection with other standard assimilation methods is studied. Results show that incorporating information from the time series of data can reduce the number of observations needed to preserve the quality of a numerical prediction, making the method a potential alternative in the field of data assimilation for large geophysical models.
A FILTRATION METHOD AND APPARATUS INCLUDING A ROLLER WITH PORES
DEFF Research Database (Denmark)
2008-01-01
The present invention offers a method for separating dry matter from a medium. A separation chamber is at least partly defined by a plurality of rollers (2,7) and is capable of being pressure regulated. At least one of the rollers is a pore roller (7) having a surface with pores allowing permeabi...
Composite material including nanocrystals and methods of making
Bawendi, Moungi G.; Sundar, Vikram C.
2010-04-06
Temperature-sensing compositions can include an inorganic material, such as a semiconductor nanocrystal. The nanocrystal can be a dependable and accurate indicator of temperature. The intensity of emission of the nanocrystal varies with temperature and can be highly sensitive to surface temperature. The nanocrystals can be processed with a binder to form a matrix, which can be varied by altering the chemical nature of the surface of the nanocrystal. A nanocrystal with a compatibilizing outer layer can be incorporated into a coating formulation and retain its temperature sensitive emissive properties.
Methods for forming complex oxidation reaction products including superconducting articles
International Nuclear Information System (INIS)
Rapp, R.A.; Urquhart, A.W.; Nagelberg, A.S.; Newkirk, M.S.
1992-01-01
This patent describes a method for producing a superconducting complex oxidation reaction product of two or more metals in an oxidized state. It comprises positioning at least one parent metal source comprising one of the metals adjacent to a permeable mass comprising at least one metal-containing compound capable of reacting to form the complex oxidation reaction product in the step below, the metal component of the at least one metal-containing compound comprising at least a second of the two or more metals, and orienting the parent metal source and the permeable mass relative to each other so that formation of the complex oxidation reaction product will occur in a direction towards and into the permeable mass; heating the parent metal source in the presence of an oxidant to a temperature region above its melting point to form a body of molten parent metal to permit infiltration and reaction of the molten parent metal into the permeable mass and with the oxidant and the at least one metal-containing compound to form the complex oxidation reaction product; progressively drawing the molten parent metal source through the complex oxidation reaction product towards the oxidant and towards and into the adjacent permeable mass so that fresh complex oxidation reaction product continues to form within the permeable mass; and recovering the resulting complex oxidation reaction product.
Lai, Zhiping; Huang, Kuo-Wei; Chen, Wei
2016-01-01
In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure provide membranes, methods of making the membrane, systems including the membrane, methods of separation, methods of desalination, and the like.
Force measuring valve assemblies, systems including such valve assemblies and related methods
DeWall, Kevin George [Pocatello, ID; Garcia, Humberto Enrique [Idaho Falls, ID; McKellar, Michael George [Idaho Falls, ID
2012-04-17
Methods of evaluating a fluid condition may include stroking a valve member and measuring a force acting on the valve member during the stroke. Methods of evaluating a fluid condition may include measuring a force acting on a valve member in the presence of fluid flow over a period of time and evaluating at least one of the frequency of changes in the measured force over the period of time and the magnitude of the changes in the measured force over the period of time to identify the presence of an anomaly in a fluid flow and, optionally, its estimated location. Methods of evaluating a valve condition may include directing a fluid flow through a valve while stroking a valve member, measuring a force acting on the valve member during the stroke, and comparing the measured force to a reference force. Valve assemblies and related systems are also disclosed.
A Time Series Forecasting Method
Directory of Open Access Journals (Sweden)
Wang Zhao-Yu
2017-01-01
Full Text Available This paper proposes a novel time series forecasting method based on a weighted self-constructing clustering technique. The weighted self-constructing clustering processes all the data patterns incrementally. If a data pattern is not similar enough to an existing cluster, it forms a new cluster of its own. However, if a data pattern is similar enough to an existing cluster, it is removed from the cluster it currently belongs to and added to the most similar cluster. During the clustering process, weights are learned for each cluster. Given a series of time-stamped data up to time t, we divide it into a set of training patterns. By using the weighted self-constructing clustering, the training patterns are grouped into a set of clusters. To estimate the value at time t + 1, we find the k nearest neighbors of the input pattern and use these k neighbors to decide the estimation. Experimental results are shown to demonstrate the effectiveness of the proposed approach.
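The final estimation step resembles window-based k-nearest-neighbour forecasting; a minimal sketch that omits the weighted self-constructing clustering stage (the function name and parameters are hypothetical, not from the paper):

```python
import numpy as np

def knn_forecast(series, window=3, k=3):
    """Forecast the next value from the k historical windows most similar
    to the latest window of the series."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])          # value that followed each window
    query = np.array(series[-window:])     # the most recent pattern
    d = np.linalg.norm(X - query, axis=1)  # distance to every past pattern
    nn = np.argsort(d)[:k]
    # average the values that followed the k most similar patterns
    return y[nn].mean()
```

On a strictly periodic series the nearest windows are exact matches, so the forecast recovers the next period's value.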
Fast-timing methods for semiconductor detectors
International Nuclear Information System (INIS)
Spieler, H.
1982-03-01
The basic parameters are discussed which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter
Fast timing methods for semiconductor detectors. Revision
International Nuclear Information System (INIS)
Spieler, H.
1984-10-01
This tutorial paper discusses the basic parameters which determine the accuracy of timing measurements and their effect in a practical application, specifically timing with thin-surface barrier detectors. The discussion focusses on properties of the detector, low-noise amplifiers, trigger circuits and time converters. New material presented in this paper includes bipolar transistor input stages with noise performance superior to currently available FETs, noiseless input terminations in sub-nanosecond preamplifiers and methods using transmission lines to couple the detector to remotely mounted preamplifiers. Trigger circuits are characterized in terms of effective rise time, equivalent input noise and residual jitter
The time domain triple probe method
International Nuclear Information System (INIS)
Meier, M.A.; Hallock, G.A.; Tsui, H.Y.W.; Bengtson, R.D.
1994-01-01
A new Langmuir probe technique based on the triple probe method is being developed to provide simultaneous measurement of plasma temperature, potential, and density with the temporal and spatial resolution required to accurately characterize plasma turbulence. When the conventional triple probe method is used in an inhomogeneous plasma, local differences in the plasma measured at each probe introduce significant error in the estimation of turbulence parameters. The Time Domain Triple Probe (TDTP) method uses high-speed switching of the Langmuir probe potential, rather than spatially separated probes, to gather the triple-probe information, thus avoiding these errors. Analysis indicates that plasma response times and recent electronics technology meet the requirements to implement the TDTP method. Data reduction techniques for TDTP data include linear and higher-order correlation analysis to estimate fluctuation-induced particle and thermal transport, as well as energy relationships between temperature, density, and potential fluctuations.
Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen
2012-04-30
Background Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. Methods We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Results Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Conclusions Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research. PMID:22545681
Time dependent view factor methods
International Nuclear Information System (INIS)
Kirkpatrick, R.C.
1998-03-01
View factors have been used for treating radiation transport between opaque surfaces bounding a transparent medium for several decades. However, in recent years they have been applied to problems involving intense bursts of radiation in enclosed volumes such as in the laser fusion hohlraums. In these problems, several aspects require treatment of time dependence
Beyond the sticker price: including and excluding time in comparing food prices.
Yang, Yanliang; Davis, George C; Muth, Mary K
2015-07-01
An ongoing debate in the literature is how to measure the price of food. Most analyses have not considered the value of time in measuring the price of food. Whether or not the value of time is included in measuring the price of a food may have important implications for classifying foods based on their relative cost. The purpose of this article is to compare prices that exclude time (time-exclusive price) with prices that include time (time-inclusive price) for 2 types of home foods: home foods using basic ingredients (home recipes) vs. home foods using more processed ingredients (processed recipes). The time-inclusive and time-exclusive prices are compared to determine whether the time-exclusive prices in isolation may mislead in drawing inferences regarding the relative prices of foods. We calculated the time-exclusive price and time-inclusive price of 100 home recipes and 143 processed recipes and then categorized them into 5 standard food groups: grains, proteins, vegetables, fruit, and dairy. We then examined the relation between the time-exclusive prices and the time-inclusive prices and dietary recommendations. For any food group, the processed food time-inclusive price was always less than the home recipe time-inclusive price, even if the processed food's time-exclusive price was more expensive. Time-inclusive prices for home recipes were especially higher for the more time-intensive food groups, such as grains, vegetables, and fruit, which are generally underconsumed relative to the guidelines. Focusing only on the sticker price of a food and ignoring the time cost may lead to different conclusions about relative prices and policy recommendations than when the time cost is included. © 2015 American Society for Nutrition.
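The comparison rests on simple arithmetic: a time-inclusive price adds the opportunity cost of preparation time to the sticker price. A sketch with an assumed wage rate (all numbers below are illustrative, not the article's data):

```python
def time_inclusive_price(ingredient_cost, prep_minutes, wage_per_hour=15.0):
    """Sticker price plus the opportunity cost of preparation time."""
    return ingredient_cost + (prep_minutes / 60.0) * wage_per_hour

# A home recipe can be cheaper at the register yet costlier once time counts:
home = time_inclusive_price(3.00, 40)        # cheap ingredients, long prep
processed = time_inclusive_price(4.50, 10)   # dearer ingredients, quick prep
```

Here the home recipe's sticker price is lower, but its time-inclusive price is higher, which is exactly the reversal the article warns about.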
International Nuclear Information System (INIS)
Isotalo, A.E.; Wieselquist, W.A.
2015-01-01
Highlights: • A method for handling external feed in depletion calculations with CRAM. • Source term can have polynomial or exponentially decaying time-dependence. • CRAM with source term and adjoint capability implemented in ORIGEN in SCALE. • The new solver is faster and more accurate than the original solver of ORIGEN. - Abstract: A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which show the new solver to be orders of magnitude more accurate. Furthermore, in most cases, the new solver is up to several times faster because it does not require the substepping that the original solver does.
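For a constant external feed, the problem CRAM approximates has a closed form in the matrix exponential: dN/dt = AN + f gives N(t) = e^{At}N₀ + A⁻¹(e^{At} − I)f when A is invertible. A sketch on a toy two-nuclide chain (this is the exact reference solution, not CRAM itself):

```python
import numpy as np
from scipy.linalg import expm

def deplete(A, n0, feed, t):
    """Exact solution of dN/dt = A N + f for constant feed f:
    N(t) = e^(At) n0 + A^-1 (e^(At) - I) f, valid when A is invertible."""
    n0 = np.asarray(n0, dtype=float)
    feed = np.asarray(feed, dtype=float)
    E = expm(A * t)
    return E @ n0 + np.linalg.solve(A, (E - np.eye(len(n0))) @ feed)

# Toy chain: nuclide 0 (decay const 0.1) feeds nuclide 1 (decay const 0.05),
# with constant external feed of nuclide 0.
A = np.array([[-0.1, 0.0],
              [0.1, -0.05]])
n = deplete(A, n0=[1.0, 0.0], feed=[0.01, 0.0], t=50.0)
```

CRAM replaces the matrix exponential with a rational approximation so that realistic burnup matrices (thousands of nuclides, huge eigenvalue spread) can be handled; the abstract's contribution extends this to polynomial time-dependent feed.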
Damped time advance methods for particles and EM fields
International Nuclear Information System (INIS)
Friedman, A.; Ambrosiano, J.J.; Boyd, J.K.; Brandon, S.T.; Nielsen, D.E. Jr.; Rambo, P.W.
1990-01-01
Recent developments in the application of damped time advance methods to plasma simulations include the synthesis of implicit and explicit "adjustably damped" second-order accurate methods for particle motion and electromagnetic field propagation. This paper discusses these methods.
State Space Methods for Timed Petri Nets
DEFF Research Database (Denmark)
Christensen, Søren; Jensen, Kurt; Mailund, Thomas
2001-01-01
We present two recently developed state space methods for timed Petri nets. The two methods reconcile state space methods and time concepts based on the introduction of a global clock and associating time stamps to tokens. The first method is based on an equivalence relation on states which makes it possible to condense the usually infinite state space of a timed Petri net into a finite condensed state space without losing analysis power. The second method supports on-the-fly verification of certain safety properties of timed systems. We discuss the application of the two methods in a number...
Lillo, Thomas M.; Chu, Henry S.; Harrison, William M.; Bailey, Derek
2013-01-22
Methods of forming composite materials include coating particles of titanium dioxide with a substance including boron (e.g., boron carbide) and a substance including carbon, and reacting the titanium dioxide with the substance including boron and the substance including carbon to form titanium diboride. The methods may be used to form ceramic composite bodies and materials, such as, for example, a ceramic composite body or material including silicon carbide and titanium diboride. Such bodies and materials may be used as armor bodies and armor materials. Such methods may include forming a green body and sintering the green body to a desirable final density. Green bodies formed in accordance with such methods may include particles comprising titanium dioxide and a coating at least partially covering exterior surfaces thereof, the coating comprising a substance including boron (e.g., boron carbide) and a substance including carbon.
Methods for determining time of death.
Madea, Burkhard
2016-12-01
Medicolegal death time estimation must estimate the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as ¹H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
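The two-exponential cooling model behind the nomogram method can be inverted numerically: the standardized temperature ratio Q is computed from the rectal and ambient temperatures and the cooling curve is traced back to the time matching it. A simplified sketch using the published Henssge constants for ambient temperatures up to about 23 °C (no corrective factors, so a didactic illustration only, not suitable for casework):

```python
import math

def henssge_Q(t, body_mass):
    """Standardized temperature ratio Q(t) from the two-exponential cooling
    model (constants for ambient temperature <= 23 C)."""
    B = -1.2815 * body_mass ** -0.625 + 0.0284
    return 1.25 * math.exp(B * t) - 0.25 * math.exp(5 * B * t)

def estimate_pmi(t_rectal, t_ambient, body_mass, t0=37.2):
    """Back-calculate the postmortem interval (hours) by bisection on Q,
    which decreases monotonically with time."""
    q = (t_rectal - t_ambient) / (t0 - t_ambient)
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if henssge_Q(mid, body_mass) > q:
            lo = mid   # body has cooled further than mid hours would allow
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

This is the "calculation of measurable data along a time-dependent curve back to the starting point" the review describes, reduced to a root-finding problem.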
International Nuclear Information System (INIS)
Akaoka, Katsuaki; Maruyama, Youichiro; Oba, Masaki; Miyabe, Masabumi; Otobe, Haruyoshi; Wakaida, Ikuo
2010-05-01
For the remote analysis of low-DF (decontamination factor) TRU (transuranic) fuel, laser-induced breakdown spectroscopy (LIBS) was applied to uranium oxide containing a small amount of calcium oxide. The characteristics, such as spectrum intensity and plasma excitation temperature, were measured using time-resolved spectroscopy. As a result, it was found that, in order to obtain a stable intensity of the calcium spectrum relative to the uranium spectrum, the optimum observation delay time is 4 microseconds or more after laser irradiation. (author)
Bakr, Osman; Peng, Wei; Wang, Lingfei
2017-01-01
Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making
Another method of dead time correction
International Nuclear Information System (INIS)
Sabol, J.
1988-01-01
A new method for correcting counting losses caused by the non-extended dead time of pulse detection systems is presented. The approach is based on the distribution of time intervals between pulses at the output of the system. The method was verified both experimentally and by using Monte Carlo simulations. The results show that the suggested technique is more reliable and accurate than other methods based on a separate measurement of the dead time. (author) 5 refs
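For contrast with the interval-distribution approach described here, the textbook correction for a non-extending (non-paralyzable) dead time τ recovers the true rate n from the measured rate m as n = m/(1 − mτ). A minimal sketch:

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Correct a measured count rate (counts/s) for a non-extending
    (non-paralyzable) dead time: n = m / (1 - m * tau)."""
    loss = measured_rate * dead_time   # fraction of time the system is dead
    if loss >= 1.0:
        raise ValueError("measured rate inconsistent with this dead time")
    return measured_rate / (1.0 - loss)
```

The abstract's technique avoids needing a separately measured τ at all; this classical formula is the baseline such methods are compared against.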
Generalized Time-Limited Balanced Reduction Method
DEFF Research Database (Denmark)
Shaker, Hamid Reza; Shaker, Fatemeh
2013-01-01
In this paper, a new method for model reduction of bilinear systems is presented. The proposed technique is from the family of gramian-based model reduction methods. The method uses time-interval generalized gramians in the reduction procedure rather than the ordinary generalized gramians, and in this way it improves the accuracy of the approximation within the time interval in which the method is applied. The time-interval generalized gramians are the solutions to the generalized time-interval Lyapunov equations. The conditions for these equations to be solvable are derived, and an algorithm…
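A time-interval (time-limited) controllability gramian can be sketched by direct quadrature; this is an illustrative linear special case only, since the paper treats bilinear systems and generalized gramians:

```python
import numpy as np
from scipy.linalg import expm

def time_limited_gramian(A, B, t1, t2, n_steps=400):
    """Time-limited controllability gramian
    W = integral_{t1}^{t2} exp(A t) B B^T exp(A^T t) dt,
    evaluated by trapezoidal quadrature (linear special case)."""
    ts = np.linspace(t1, t2, n_steps + 1)
    dt = (t2 - t1) / n_steps
    W = np.zeros((A.shape[0], A.shape[0]))
    for i, t in enumerate(ts):
        M = expm(A * t) @ B
        w = 0.5 * dt if i in (0, n_steps) else dt   # trapezoid end weights
        W += w * (M @ M.T)
    return W

# Stable diagonal test system (illustrative)
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
W = time_limited_gramian(A, B, 0.0, 1.0)
```

In the paper the corresponding gramians are instead obtained as solutions of generalized time-interval Lyapunov equations; quadrature is used here only to make the definition concrete.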
Flexible barrier film, method of forming same, and organic electronic device including same
Blizzard, John; Tonge, James Steven; Weidner, William Kenneth
2013-03-26
A flexible barrier film has a thickness of from greater than zero to less than 5,000 nanometers and a water vapor transmission rate of no more than 1×10⁻² g/m²/day at 22 °C and 47% relative humidity. The flexible barrier film is formed from a composition, which comprises a multi-functional acrylate. The composition further comprises the reaction product of an alkoxy-functional organometallic compound and an alkoxy-functional organosilicon compound. A method of forming the flexible barrier film includes the steps of disposing the composition on a substrate and curing the composition to form the flexible barrier film. The flexible barrier film may be utilized in organic electronic devices.
Time-efficient multidimensional threshold tracking method
DEFF Research Database (Denmark)
Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten
2015-01-01
Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of another…
Harst-Wintraecken, van der Eugenie; Potting, José; Kroeze, Carolien
2016-01-01
Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of
Holmberg, Andreas; Kierkegaard, Axel; Weng, Chenyang
2015-06-01
In this paper, a method for including damping of acoustic energy in regions of strong turbulence is derived for a linearized Navier-Stokes method in the frequency domain. The proposed method is validated and analyzed in 2D only, although the formulation is fully presented in 3D. The result is applied in a study of the linear interaction between the acoustic and the hydrodynamic field in a 2D T-junction, subject to grazing flow at Mach 0.1. Part of the acoustic energy at the upstream edge of the junction is shed as harmonically oscillating disturbances, which are conveyed across the shear layer over the junction, where they interact with the acoustic field. As the acoustic waves travel in regions of strong shear, there is a need to include the interaction between the background turbulence and the acoustic field. For this purpose, the oscillation of the background turbulence Reynolds stress, due to the acoustic field, is modeled using an eddy Newtonian model assumption. The time-averaged flow is first solved for using RANS along with a k-ε turbulence model. The spatially varying turbulent eddy viscosity is then added to the spatially invariant kinematic viscosity in the acoustic set of equations. The response of the 2D T-junction to an incident acoustic field is analyzed via a plane wave scattering matrix model, and the result is compared to experimental data for a T-junction of rectangular ducts. A strong improvement in the agreement between calculation and experimental data is found when the modification proposed in this paper is implemented. Remaining discrepancies are likely due to inaccuracies in the selected turbulence model, which is known to produce large errors, e.g. for flows with significant rotation, such as the grazing flow across the T-junction. A natural next step is therefore to test the proposed methodology together with more sophisticated turbulence models.
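The central modification, adding the RANS-computed eddy viscosity to the molecular viscosity in the acoustic equations, can be sketched with the standard k-ε closure; the field values below are illustrative:

```python
import numpy as np

def effective_viscosity(k, eps, nu, C_mu=0.09):
    """Effective kinematic viscosity for the acoustic set of equations:
    spatially varying eddy viscosity nu_t = C_mu * k^2 / eps (standard
    k-eps closure) added to the spatially invariant molecular viscosity."""
    nu_t = C_mu * k**2 / eps
    return nu + nu_t

# Illustrative turbulence fields at three grid points
k = np.array([0.5, 1.0, 2.0])        # turbulent kinetic energy [m^2/s^2]
eps = np.array([10.0, 10.0, 10.0])   # dissipation rate [m^2/s^3]
nu_eff = effective_viscosity(k, eps, nu=1.5e-5)
```

The eddy viscosity dominates the molecular value by orders of magnitude in the shear layer, which is why including it changes the computed acoustic damping so strongly.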
Cost and benefit including value of life, health and environmental damage measured in time units
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager; Friis-Hansen, Peter
2009-01-01
Key elements of the authors' work on money-equivalent time allocation to costs and benefits in risk analysis are put together as an entity. This includes the data-supported dimensionless analysis of an equilibrium relation between total population work time and gross domestic product, leading … of this societal value over the actual costs, used by the owner for economically optimizing an activity, motivates a simple risk acceptance criterion suited to be imposed on the owner by the public. An illustration is given concerning the allocation of economic means for mitigation of loss of life and health on a ferry…
Time-dependent shock acceleration of energetic electrons including synchrotron losses
International Nuclear Information System (INIS)
Fritz, K.; Webb, G.M.
1990-01-01
The present investigation of the time-dependent particle acceleration problem in strong shocks, including synchrotron radiation losses, solves the transport equation analytically by means of Laplace transforms. The particle distribution thus obtained is then transformed numerically into real space for the cases of continuous and impulsive injection of particles at the shock. While in the continuous case the spectrum evolves toward the steady state, impulsive injection is noted to yield such unexpected features as a pile-up of high-energy particles or a steep power law with time-dependent spectral index. The time-dependent calculations reveal varying spectral shapes and more complex features at the higher energies, which may be useful in the interpretation of outburst spectra. 33 refs
Multiple time scale methods in tokamak magnetohydrodynamics
International Nuclear Information System (INIS)
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/(2μ₀), which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed
Hofmann, Douglas C. (Inventor); Kennett, Andrew (Inventor)
2018-01-01
Systems and methods to fabricate objects including metallic glass-based materials using low-pressure casting techniques are described. In one embodiment, a method of fabricating an object that includes a metallic glass-based material includes: introducing molten alloy into a mold cavity defined by a mold using a low enough pressure such that the molten alloy does not conform to features of the mold cavity that are smaller than 100 microns; and cooling the molten alloy such that it solidifies, the solid including a metallic glass-based material.
Energy Technology Data Exchange (ETDEWEB)
Kivotides, Demosthenes, E-mail: demosthenes.kivotides@strath.ac.uk
2017-02-12
An asymptotically exact method for the direct computation of turbulent polymeric liquids that includes (a) fully resolved, creeping microflow fields due to hydrodynamic interactions between chains, (b) exact account of (subfilter) residual stresses, (c) polymer Brownian motion, and (d) direct calculation of chain entanglements, is formulated. Although developed in the context of polymeric fluids, the method is equally applicable to turbulent colloidal dispersions and aerosols.
Highlights:
• An asymptotically exact method for the computation of polymer and colloidal fluids is developed.
• The method is valid for all flow inertia and all polymer volume fractions.
• The method models entanglements and hydrodynamic interactions between polymer chains.
International Nuclear Information System (INIS)
Paixao, S.B.; Marzo, M.A.S.; Alvim, A.C.M.
1986-01-01
The calculation method used in the WIGLE code is studied. Because no detailed published account of this solution method was available, we have attempted to expound the method in detail. The developed method has been applied to the solution of the one-dimensional, two-group diffusion equations in slab geometry (axial analysis), including non-boiling heat transfer and accounting for feedback. A steady-state program (CITER-1D), written in FORTRAN IV, has been implemented, providing excellent results and confirming the quality of the work developed. (Author) [pt]
Time-dependent problems and difference methods
Gustafsson, Bertil; Oliger, Joseph
2013-01-01
Praise for the First Edition: ". . . fills a considerable gap in the numerical analysis literature by providing a self-contained treatment . . . this is an important work written in a clear style . . . warmly recommended to any graduate student or researcher in the field of the numerical solution of partial differential equations." (SIAM Review) Time-Dependent Problems and Difference Methods, Second Edition continues to provide guidance for the analysis of difference methods for computing approximate solutions to partial differential equations for time-dependent…
Bakr, Osman M.
2017-03-02
Embodiments of the present disclosure provide for solar cells including an organometallic halide perovskite monocrystalline film (see fig. 1.1B), other devices including the organometallic halide perovskite monocrystalline film, methods of making organometallic halide perovskite monocrystalline film, and the like.
Time Scale in Least Square Method
Directory of Open Access Journals (Sweden)
Özgür Yeniay
2014-01-01
Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scales try to build a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative is defined in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here, there exist two coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. The difference between them equals the total of the vertical deviations between the regression equations of the forward and backward jump operators and the observation values, divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory offers a new vision for least squares, especially when the assumptions of linear regression are violated.
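On the integer time scale the two derivative notions reduce to forward and backward differences; a minimal sketch of this standard time-scale calculus fact (not the paper's own code):

```python
def delta_derivative(f, t):
    """Delta (Hilger) derivative on the time scale Z: forward difference
    f(sigma(t)) - f(t), with forward jump operator sigma(t) = t + 1."""
    return f(t + 1) - f(t)

def nabla_derivative(f, t):
    """Nabla derivative on Z: backward difference f(t) - f(rho(t)),
    with backward jump operator rho(t) = t - 1."""
    return f(t) - f(t - 1)

f = lambda t: t**2
# On Z, the delta derivative of t^2 is 2t + 1 and the nabla derivative is 2t - 1,
# so the two operators genuinely disagree, just as the two regression fits do.
d = delta_derivative(f, 3)   # 16 - 9 = 7
n = nabla_derivative(f, 3)   # 9 - 4 = 5
```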
Pan, Feng; Tao, Guohua
2013-03-07
Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
Energy Technology Data Exchange (ETDEWEB)
Tavassoli, A.A.
1986-10-01
Dislocation substructures formed in austenitic stainless steel 304L and 316L, fatigued at 673 K, 823 K and 873 K under total imposed strain ranges of 0.7 to 2.25%, and their correlation with mechanical properties have been investigated. In addition, substructures formed at lower strain ranges have been examined using foils prepared from parts of the specimens with larger cross-sections. Investigation has also been extended to include the effect of intermittent hold-times up to 1.8 × 10⁴ s and sequential creep-fatigue and fatigue-creep. The experimental results obtained are analysed and their implications for current dislocation concepts and mechanical properties are discussed.
Kong, Peter C; Grandy, Jon D; Detering, Brent A; Zuck, Larry D
2013-09-17
Electrode assemblies for plasma reactors include a structure or device for constraining an arc endpoint to a selected area or region on an electrode. In some embodiments, the structure or device may comprise one or more insulating members covering a portion of an electrode. In additional embodiments, the structure or device may provide a magnetic field configured to control a location of an arc endpoint on the electrode. Plasma generating modules, apparatus, and systems include such electrode assemblies. Methods for generating a plasma include covering at least a portion of a surface of an electrode with an electrically insulating member to constrain a location of an arc endpoint on the electrode. Additional methods for generating a plasma include generating a magnetic field to constrain a location of an arc endpoint on an electrode.
Directory of Open Access Journals (Sweden)
Ruoyu Luo
Full Text Available Due to the complexity of biological systems, simulation of biological networks is necessary but sometimes complicated. The classic stochastic simulation algorithm (SSA) by Gillespie and its modified versions are widely used to simulate the stochastic dynamics of biochemical reaction systems. However, it has remained a challenge to implement accurate and efficient simulation algorithms for general reaction schemes in growing cells. Here, we present a modeling and simulation tool, called 'GeneCircuits', which is specifically developed to simulate gene regulation in exponentially growing bacterial cells (such as E. coli) with overlapping cell cycles. Our tool integrates three specific features of these cells that are not generally included in SSA tools: (1) the time delay between the regulation and synthesis of proteins that is due to transcription and translation processes; (2) cell cycle-dependent periodic changes of gene dosage; and (3) variations in the propensities of chemical reactions that have time-dependent reaction rates as a consequence of volume expansion and cell division. We give three biologically relevant examples to illustrate the use of our simulation tool in quantitative studies of systems biology and synthetic biology.
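The core Gillespie loop that such tools build on can be sketched for a simple birth-death gene expression model; the delay, gene-dosage, and volume effects described above are deliberately omitted from this minimal sketch:

```python
import math
import random

def gillespie_birth_death(k_syn, k_deg, x0, t_end, seed=1):
    """Minimal Gillespie SSA for a birth-death model: synthesis at constant
    rate k_syn, first-order degradation with rate k_deg * x.  Returns the
    copy number at time t_end.  (Illustrative core loop only.)"""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a_syn = k_syn
        a_deg = k_deg * x
        a_total = a_syn + a_deg
        # Time to next reaction: exponential with rate a_total
        t += -math.log(1.0 - rng.random()) / a_total
        if t > t_end:
            return x
        # Choose which reaction fires, proportional to its propensity
        if rng.random() * a_total < a_syn:
            x += 1
        else:
            x -= 1

x_final = gillespie_birth_death(k_syn=10.0, k_deg=1.0, x0=0, t_end=50.0)
```

The features listed in the abstract enter exactly here: delayed reactions require a pending-event queue, gene-dosage changes make a_syn time-dependent, and volume expansion rescales second-order propensities.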
Optimal Design and Real Time Implementation of Autonomous Microgrid Including Active Load
Directory of Open Access Journals (Sweden)
Mohamed A. Hassan
2018-05-01
Full Text Available Controller gains and power-sharing parameters are the main parameters that affect the dynamic performance of a microgrid. When an active load is added to an autonomous microgrid, the stability problem becomes more involved. In this paper, the effect of an active load on microgrid dynamic stability is explored. An autonomous microgrid including three inverter-based distributed generations (DGs) with an active load is modeled and the associated controllers are designed. Controller gains of the inverters and active load, as well as Phase Locked Loop (PLL) parameters, are optimally tuned to guarantee overall system stability. A weighted objective function is proposed to minimize the error in both measured active power and DC voltage based on time-domain simulations. Different AC and DC disturbances are applied to verify and assess the effectiveness of the proposed control strategy. The results demonstrate the potential of the proposed controller to enhance microgrid stability and to provide efficient damping characteristics. Additionally, the proposed controller is compared with the literature to demonstrate its superiority. Finally, the microgrid considered has been established and implemented on a real-time digital simulator (RTDS). The experimental results validate the simulation results and confirm the effectiveness of the proposed controllers in enhancing the stability of the considered microgrid.
Torres-Lapasió, J R; Pous-Torres, S; Ortiz-Bolsico, C; García-Alvarez-Coque, M C
2015-01-16
The optimisation of the resolution in high-performance liquid chromatography is traditionally performed attending only to the time information. However, even under optimal conditions, some peak pairs may remain unresolved. Such incomplete resolution can still be remedied by deconvolution, which can be carried out with more guarantees of success by including spectral information. In this work, two-way chromatographic objective functions (COFs) that incorporate both time and spectral information were tested, based on the concepts of peak purity (the analyte peak fraction free of overlapping) and multivariate selectivity (a figure of merit derived from the net analyte signal). These COFs are sensitive to situations where the components that coelute in a mixture show some spectral differences. Therefore, they are useful for finding experimental conditions under which the spectrochromatograms can be recovered by deconvolution. Two-way multivariate selectivity yielded the best performance and was applied to the separation, using diode-array detection, of a mixture of 25 phenolic compounds, which remained unresolved in the chromatographic order using linear and multi-linear gradients of acetonitrile-water. Peak deconvolution was carried out using the combination of the orthogonal projection approach and alternating least squares.
Robust scaling laws for energy confinement time, including radiated fraction, in Tokamaks
Murari, A.; Peluso, E.; Gaudio, P.; Gelfusa, M.
2017-12-01
In recent years, the limitations of scalings in power-law form that are obtained from traditional log regression have become increasingly evident in many fields of research. Given the wide gap in operational space between present-day and next-generation devices, robustness of the obtained models in guaranteeing reasonable extrapolability is a major issue. In this paper, a new technique, called symbolic regression, is reviewed, refined, and applied to the ITPA database for extracting scaling laws of the energy-confinement time at different radiated fraction levels. The main advantage of this new methodology is its ability to determine the most appropriate mathematical form of the scaling laws to model the available databases without the restriction of their having to be power laws. In a completely new development, this technique is combined with the concept of geodesic distance on Gaussian manifolds so as to take into account the error bars in the measurements and provide more reliable models. Robust scaling laws, including radiated fractions as regressor, have been found; they are not in power-law form, and are significantly better than the traditional scalings. These scaling laws, including radiated fractions, extrapolate quite differently to ITER, and therefore they require serious consideration. On the other hand, given the limitations of the existing databases, dedicated experimental investigations will have to be carried out to fully understand the impact of radiated fractions on the confinement in metallic machines and in the next generation of devices.
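The traditional log-regression baseline that symbolic regression relaxes can be sketched as ordinary least squares in log space; the variable names and exponents below are hypothetical, not the ITPA scalings:

```python
import numpy as np

def fit_power_law(X, y):
    """Traditional log-regression scaling: fit y = C * prod(x_i^a_i) by
    ordinary least squares on log-transformed data.  Returns (C, exponents).
    This is the baseline restricted to power-law form; symbolic regression
    drops that restriction."""
    logX = np.log(X)
    A = np.column_stack([np.ones(len(y)), logX])
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.exp(coef[0]), coef[1:]

# Synthetic data following tau = 0.5 * I**0.9 * P**-0.7 (hypothetical exponents)
rng = np.random.default_rng(0)
I = rng.uniform(1.0, 10.0, 200)   # plasma current surrogate
P = rng.uniform(1.0, 10.0, 200)   # heating power surrogate
tau = 0.5 * I**0.9 * P**-0.7
C, a = fit_power_law(np.column_stack([I, P]), tau)
```

On noiseless power-law data the fit recovers the exponents exactly; on real databases the point of the paper is precisely that no such power law need exist.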
Multiple Shooting and Time Domain Decomposition Methods
Geiger, Michael; Körkel, Stefan; Rannacher, Rolf
2015-01-01
This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms. The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics. This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...
A novel technique for including surface tension in PLIC-VOF methods
Energy Technology Data Exchange (ETDEWEB)
Meier, M.; Yadigaroglu, G. [Swiss Federal Institute of Technology, Nuclear Engineering Lab. ETH-Zentrum, CLT, Zurich (Switzerland); Smith, B. [Paul Scherrer Inst. (PSI), Villigen (Switzerland). Lab. for Thermal-Hydraulics
2002-02-01
Various versions of Volume-of-Fluid (VOF) methods have been used successfully for the numerical simulation of gas-liquid flows with explicit tracking of the phase interface. Of these, Piecewise-Linear Interface Construction (PLIC-VOF) appears as a fairly accurate, although somewhat more involved, variant. Including effects due to surface tension remains a problem, however. The most prominent methods, the Continuum Surface Force (CSF) method of Brackbill et al. and the method of Zaleski and co-workers (both referenced later), both induce spurious or 'parasitic' currents and offer only moderate accuracy in determining the curvature. We present here a new method to determine curvature accurately using an estimator function, which is tuned with a least-squares fit against reference data. Furthermore, we show how spurious currents may be drastically reduced using the reconstructed interfaces from the PLIC-VOF method. (authors)
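Curvature estimation from reconstructed interface fragments can be illustrated with a simple least-squares circle fit; this is a generic stand-in for the idea, not the paper's tuned estimator function:

```python
import numpy as np

def curvature_from_points(xs, ys):
    """Estimate interface curvature as 1/R from an algebraic (Kasa)
    least-squares circle fit x^2 + y^2 + D x + E y + F = 0 to a set of
    reconstructed interface points."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return 1.0 / radius

# Points sampled on a circle of radius 2 -> curvature 0.5
theta = np.linspace(0.0, np.pi / 2.0, 20)
kappa = curvature_from_points(2.0 * np.cos(theta), 2.0 * np.sin(theta))
```

In a PLIC-VOF code the input points would come from the piecewise-linear interface segments in a stencil of cells around the cell whose curvature is needed.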
Probabilistic real-time contingency ranking method
International Nuclear Information System (INIS)
Mijuskovic, N.A.; Stojnic, D.
2000-01-01
This paper describes a real-time contingency ranking method based on a probabilistic index: expected energy not supplied (EENS). In this way it is possible to take into account the stochastic nature of electric power system equipment outages. This approach enables a more comprehensive ranking of contingencies, and makes it possible to form reliability cost values that can serve as the basis for hourly spot price calculations. The electric power system of Serbia is used as an example for the proposed method. (author)
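A minimal sketch of ranking by an expected-energy-not-supplied index, with each contingency weighted by its outage probability; the contingency data below are hypothetical:

```python
def rank_by_eens(contingencies):
    """Rank contingencies by EENS = outage_probability * energy_not_supplied.
    Input: list of (name, outage_probability, energy_not_supplied_MWh).
    Returns (name, EENS) pairs sorted by decreasing EENS contribution."""
    ranked = [(name, p * ens) for name, p, ens in contingencies]
    return sorted(ranked, key=lambda item: item[1], reverse=True)

ranking = rank_by_eens([
    ("line A-B", 0.01, 120.0),        # hypothetical outage data
    ("transformer T1", 0.002, 900.0),
    ("line C-D", 0.05, 10.0),
])
```

Note how the probabilistic index reorders the list: the rare but severe transformer outage ranks above the frequent but mild line outage, which a deterministic severity-only ranking would miss.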
van der Harst, Eugenie; Potting, José; Kroeze, Carolien
2016-02-01
Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. The results for the PS cup are additionally influenced by the correction factor or credits for the alternative material accounting for the drop in PS quality, the waste treatment management (recycling rate, incineration rate, landfilling rate), and the
Rollins, Harry W [Idaho Falls, ID; Petkovic, Lucia M [Idaho Falls, ID; Ginosar, Daniel M [Idaho Falls, ID
2011-02-01
Catalytic structures include a catalytic material disposed within a zeolite material. The catalytic material may be capable of catalyzing a formation of methanol from carbon monoxide and/or carbon dioxide, and the zeolite material may be capable of catalyzing a formation of hydrocarbon molecules from methanol. The catalytic material may include copper and zinc oxide. The zeolite material may include a first plurality of pores substantially defined by a crystal structure of the zeolite material and a second plurality of pores dispersed throughout the zeolite material. Systems for synthesizing hydrocarbon molecules also include catalytic structures. Methods for synthesizing hydrocarbon molecules include contacting hydrogen and at least one of carbon monoxide and carbon dioxide with such catalytic structures. Catalytic structures are fabricated by forming a zeolite material at least partially around a template structure, removing the template structure, and introducing a catalytic material into the zeolite material.
Including foreshocks and aftershocks in time-independent probabilistic seismic hazard analyses
Boyd, Oliver S.
2012-01-01
Time‐independent probabilistic seismic‐hazard analysis treats each source as being temporally and spatially independent; hence foreshocks and aftershocks, which are both spatially and temporally dependent on the mainshock, are removed from earthquake catalogs. Yet, intuitively, these earthquakes should be considered part of the seismic hazard, capable of producing damaging ground motions. In this study, I consider the mainshock and its dependents as a time‐independent cluster, each cluster being temporally and spatially independent from any other. The cluster has a recurrence time of the mainshock; and, by considering the earthquakes in the cluster as a union of events, dependent events have an opportunity to contribute to seismic ground motions and hazard. Based on the methods of the U.S. Geological Survey for a high‐hazard site, the inclusion of dependent events causes ground motions that are exceeded at probability levels of engineering interest to increase by about 10% but could be as high as 20% if variations in aftershock productivity can be accounted for reliably.
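Treating a mainshock and its dependents as a union of events can be sketched as follows; the per-event exceedance probabilities are hypothetical, and independence within the cluster is an assumption of this sketch:

```python
def cluster_exceedance_probability(p_events):
    """Probability that at least one event in a mainshock-aftershock cluster
    exceeds a given ground-motion level, treating cluster members as
    independent: P = 1 - prod(1 - p_i)."""
    q = 1.0
    for p in p_events:
        q *= (1.0 - p)
    return 1.0 - q

# Mainshock plus two dependent events (hypothetical per-event probabilities
# of exceeding the ground-motion level, per cluster occurrence)
p_cluster = cluster_exceedance_probability([0.10, 0.03, 0.01])
```

The cluster probability necessarily exceeds the mainshock-only value, which is the mechanism behind the roughly 10% increase in exceeded ground motions reported in the abstract.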
A consistent causality-based view on a timed process algebra including urgent interactions
Katoen, Joost P.; Latella, Diego; Langerak, Romanus; Brinksma, Hendrik; Bolognesi, Tommaso
1998-01-01
This paper discusses a timed variant of a process algebra akin to LOTOS, baptized UPA, in a causality-based setting. Two timed features are incorporated—a delay function which constrains the occurrence time of atomic actions and an urgency operator that forces (local or synchronized) actions to
Complete Tangent Stiffness for eXtended Finite Element Method by including crack growth parameters
DEFF Research Database (Denmark)
Mougaard, J.F.; Poulsen, P.N.; Nielsen, L.O.
2013-01-01
The eXtended Finite Element Method (XFEM) is a useful tool for modeling the growth of discrete cracks in structures made of concrete and other quasi-brittle and brittle materials. However, in a standard application of XFEM, the tangent stiffness is not complete. This is a result of not including the crack geometry parameters, such as the crack length and the crack direction, directly in the virtual work formulation. For efficiency, it is essential to obtain a complete tangent stiffness. A new method is presented in this work to include, in incremental form, the crack growth parameters on equal terms with the degrees of freedom in the FEM equations. The complete tangential stiffness matrix is based on the virtual work together with the constitutive conditions at the crack tip. Introducing the crack growth parameters as direct unknowns, both the equilibrium equations and the crack tip criterion can be handled…
Earthquake analysis of structures including structure-soil interaction by a substructure method
International Nuclear Information System (INIS)
Chopra, A.K.; Guttierrez, J.A.
1977-01-01
A general substructure method for analysis of the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum, for example as a viscoelastic halfspace, or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods would lead to equivalent results, but the latter provides the better alternative. Treating the free-field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumption (regarding the type and direction of earthquake waves) required in the direct method. The substructure method is computationally efficient because the two substructures (the structure and the soil region) are analyzed separately; and, more important, it permits taking advantage of the important feature that the response to earthquake ground motion is essentially contained in the lower few natural modes of vibration of the structure on a fixed base. For sites where essentially similar soils extend to large depths and there is no obvious rigid boundary such as a soil-rock interface, numerical results for the earthquake response of a nuclear reactor structure are presented to demonstrate that the commonly used finite element method may lead to unacceptable errors, while the substructure method leads to reliable results
Method for Determining the Time Parameter
Directory of Open Access Journals (Sweden)
K. P. Baslyk
2014-01-01
Full Text Available This article proposes a method for calculating one of the characteristics that defines the flight program of the first stage of a ballistic rocket, i.e. the time parameter of the angle-of-attack program. In simulating payload insertion for the first stage, a flight program is used that consists of three segments, namely a vertical climb of the rocket, a segment of programmed turn by angle of attack, and a segment of gravity turn with zero angle of attack. The programmed turn by angle of attack is modeled as a rapidly decreasing and then increasing function. This function depends on the angle-of-attack amplitude, time, and the time parameter. If the design and ballistic parameters and the angle-of-attack amplitude are given, this coefficient is calculated from the constraint that the rocket velocity equals 0.8 of the speed of sound (0.264 km/s) when the angle of attack becomes zero. This constraint reduces to a nonlinear equation, which can be solved using Newton's method. The angle-of-attack amplitude is unknown at the design-analysis stage. Exceeding some maximum admissible value of this parameter may lead to excessive trajectory foreshortening, which shows up as a negative trajectory angle. Consequently, it is necessary to compute the maximum value of the angle-of-attack amplitude subject to the following constraints: the trajectory angle is positive during the entire first-stage flight, and the rocket velocity equals 0.264 km/s by the end of the angle-of-attack program. The problem can be formulated as a task of nonlinear programming, minimization of the modified Lagrange function, which is solved using the method of multipliers. If the multipliers and the penalty parameter are held constant, an unconstrained optimization problem results. Using the coordinate descent method allows solving the unconstrained minimization of the modified Lagrange function with fixed
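The nonlinear velocity constraint can be solved with Newton's method as described; the velocity profile below is a hypothetical toy function, not the actual rocket model:

```python
def newton_solve(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0, as used to pin the time parameter to
    the velocity constraint v(t*) = 0.264 km/s."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Toy velocity profile v(t) = 0.02*t + 0.001*t**2 km/s (hypothetical);
# solve v(t) = 0.264 km/s for the constraint time
f = lambda t: 0.02 * t + 0.001 * t**2 - 0.264
df = lambda t: 0.02 + 0.002 * t
t_star = newton_solve(f, df, x0=10.0)
```

In the actual method, f would be evaluated by integrating the trajectory equations, and df either derived analytically or approximated by finite differences.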
Palmprint Verification Using Time Series Method
Directory of Open Access Journals (Sweden)
A. A. Ketut Agung Cahyawan Wiranatha
2013-11-01
Full Text Available The use of biometrics as an automatic recognition system is growing rapidly in solving security problems, and palmprint recognition is one of the most frequently used biometric modalities. This paper uses the center-of-mass moment method for region of interest (ROI) segmentation and applies the time series method combined with the block window method for feature representation. Normalized Euclidean Distance is used to measure the degree of similarity between two palmprint feature vectors. System testing was done using 500 palm samples, with 4 samples as reference images and 6 samples as test images. Experiment results show that this system can achieve high performance, with a success rate of about 97.33% (FNMR = 1.67%, FMR = 1.00%, T = 0.036).
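A minimal sketch of the matching step described above. The normalization convention (dividing by the vector length) and the helper names are assumptions; only the decision threshold T = 0.036 appears in the abstract.

```python
import math

def normalized_euclidean_distance(a, b):
    """Normalized Euclidean distance between two equal-length feature vectors.
    Dividing by the vector length is one common normalization convention."""
    if len(a) != len(b):
        raise ValueError("feature vectors must have equal length")
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return d / len(a)

def is_match(a, b, threshold=0.036):
    # Declare a match when the distance falls below the decision threshold T
    return normalized_euclidean_distance(a, b) <= threshold
```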
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-difference scheme. The proposed physical model thus overcomes the limitations of one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, it also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power-law attenuation and dispersion, as observed in biological media, the relaxation parameters are fitted both to exact frequency power-law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite-difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g., axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
International Nuclear Information System (INIS)
Kim, Man Cheol
2004-02-01
Conventional PSA (probabilistic safety analysis) is performed in the framework of event tree analysis and fault tree analysis. In conventional PSA, I and C systems and human operators are assumed to be independent for simplicity. But the dependency of human operators on I and C systems, and the dependency of I and C systems on human operators, are gradually being recognized as significant. I believe that it is time to consider the interdependency between I and C systems and human operators in the framework of PSA. But, unfortunately, it seems that we do not have appropriate methods for incorporating this interdependency in the framework of PSA. Conventional human reliability analysis (HRA) methods were not developed to consider the interdependency, and modeling the interdependency using conventional event tree analysis and fault tree analysis seems, even though it does not seem to be impossible, quite complex. To incorporate the interdependency between I and C systems and human operators, we need a new method for HRA and a new method for modeling the I and C systems, man-machine interface (MMI), and human operators for quantitative safety assessment. As a new method for modeling the I and C systems, MMI, and human operators, I develop a new system reliability analysis method, reliability graph with general gates (RGGG), which can substitute for conventional fault tree analysis. RGGG is an intuitive and easy-to-use method for system reliability analysis, while as powerful as conventional fault tree analysis. To demonstrate the usefulness of the RGGG method, it is applied to the reliability analysis of the Digital Plant Protection System (DPPS), the actual plant protection system of the Ulchin 5 and 6 nuclear power plants located in the Republic of Korea. The latest version of the fault tree for DPPS, developed by the Integrated Safety Assessment team at the Korea Atomic Energy Research Institute (KAERI), consists of 64
Park, Jeong Yoon; Kim, Kyung Hyun; Kuh, Sung Uk; Chin, Dong Kyu; Kim, Keun Su; Cho, Yong Eun
2014-05-01
Surgeon spine angle during surgery has been studied ergonomically, and the kinematics of the surgeon's spine have been related to musculoskeletal fatigue and pain. Spine angles varied depending on operating table height and visualization method, and in a previous paper we showed that the use of a loupe and a table height at the midpoint between the umbilicus and the sternum are optimal for reducing musculoskeletal loading. However, no studies have previously included a microscope as a possible visualization method. The objective of this study is to assess differences in surgeon spine angles depending on operating table height and visualization method, including the microscope. We enrolled 18 experienced spine surgeons for this study, who each performed a discectomy using a spine surgery simulator. Three different methods were used to visualize the surgical field (naked eye, loupe, microscope) and three different operating table heights (anterior superior iliac spine, umbilicus, the midpoint between the umbilicus and the sternum) were studied. Whole spine angles were compared for three different views during the discectomy simulation: midline, ipsilateral, and contralateral. A 16-camera optoelectronic motion analysis system was used, and 16 markers were placed from the head to the pelvis. Lumbar lordosis, thoracic kyphosis, cervical lordosis, and occipital angle were compared between the different operating table heights and visualization methods, as well as with a natural standing position. Whole spine angles differed significantly depending on visualization method. All parameters were closer to natural standing values when discectomy was performed with a microscope, and there were no differences between the naked eye and the loupe. Whole spine angles also differed from the natural standing position depending on operating table height, and became closer to natural standing position values as the operating table height increased, independent of the visualization method
System and method for traffic signal timing estimation
Dumazert, Julien; Claudel, Christian G.
2015-01-01
A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
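One way to sketch the cycle-estimation idea: map each observed green-start transition time onto the unit circle for a candidate cycle length and score how tightly the resulting phases cluster. This circular-clustering score is an illustrative assumption, not the patent's exact scoring function.

```python
import cmath
import math

def cycle_score(transition_times, cycle):
    """Mean resultant length of the transition times mapped to phases for a
    candidate cycle length: close to 1 when the cycle explains the data."""
    phases = [cmath.exp(2j * math.pi * t / cycle) for t in transition_times]
    return abs(sum(phases)) / len(phases)

def estimate_cycle(transition_times, candidate_cycles):
    # Choose the candidate cycle length that maximizes the score
    return max(candidate_cycles, key=lambda c: cycle_score(transition_times, c))
```

With probe-vehicle start times falling a fixed offset after each green, the true cycle length produces near-identical phases and hence the highest score.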
System and method for traffic signal timing estimation
Dumazert, Julien
2015-12-30
A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.
Earthquake analysis of structures including structure-soil interaction by a substructure method
International Nuclear Information System (INIS)
Chopra, A.K.; Guttierrez, J.A.
1977-01-01
A general substructure method for analyzing the response of nuclear power plant structures to earthquake ground motion, including the effects of structure-soil interaction, is summarized. The method is applicable to complex structures idealized as finite element systems, with the soil region treated either as a continuum (for example, as a viscoelastic halfspace) or idealized as a finite element system. The halfspace idealization permits reliable analysis for sites where essentially similar soils extend to large depths and there is no rigid boundary such as a soil-rock interface. For sites where layers of soft soil are underlain by rock at shallow depth, finite element idealization of the soil region is appropriate; in this case, the direct and substructure methods lead to equivalent results, but the latter provides the better alternative. Treating the free-field motion directly as the earthquake input in the substructure method eliminates the deconvolution calculations and the related assumptions (regarding the type and direction of earthquake waves) required in the direct method. (Auth.)
Method and apparatus for controlling a powertrain system including a multi-mode transmission
Hessell, Steven M.; Morris, Robert L.; McGrogan, Sean W.; Heap, Anthony H.; Mendoza, Gil J.
2015-09-08
A powertrain including an engine and torque machines is configured to transfer torque through a multi-mode transmission to an output member. A method for controlling the powertrain includes employing a closed-loop speed control system to control torque commands for the torque machines in response to a desired input speed. Upon approaching a power limit of a power storage device transferring power to the torque machines, power limited torque commands are determined for the torque machines in response to the power limit and the closed-loop speed control system is employed to determine an engine torque command in response to the desired input speed and the power limited torque commands for the torque machines.
International Nuclear Information System (INIS)
Tokuyasu, Yoshiki; Kusakabe, Kiyoko; Yamazaki, Toshio
1981-01-01
Electrocardiography (ECG), echocardiography, nuclear methods, cardiac catheterization, left ventriculography, and endomyocardial biopsy (biopsy) were performed in 40 cases of cardiomyopathy (CM), 9 of endocardial fibroelastosis, and 19 of specific heart muscle disease, and the usefulness and limitations of each method were comparatively evaluated. In CM, various methods including biopsy were performed. The 40 patients were classified into 3 groups, i.e., hypertrophic (17), dilated (20), and non-hypertrophic, non-dilated (3), on the basis of left ventricular ejection fraction and hypertrophy of the ventricular wall. The hypertrophic group was divided into 4 subgroups: 9 septal, 4 apical, 2 posterior, and 2 anterior. The nuclear study is useful in assessing the site of abnormal ventricular thickening, perfusion defects, and ventricular function. Echocardiography is most useful in detecting asymmetric septal hypertrophy. Biopsy gives the sole diagnostic clue, especially in non-hypertrophic, non-dilated cardiomyopathy. ECG is useful in all cases, but correlation with the site of disproportional hypertrophy was not obtained. (J.P.N.)
Directory of Open Access Journals (Sweden)
A.M. Yu
2012-01-01
Full Text Available Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the warping effect upon natural frequencies and vibrating mode shapes is studied for the first time, in addition to including the rotary inertia and the shear and axial deformation influences. The natural frequencies of the springs are determined using the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the scaling and squaring method with Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under clamped-clamped boundary conditions. The accuracy of the proposed method has been verified against FEM results using three-dimensional solid elements (Solid 45) in the ANSYS code. Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical ones, and should be taken into consideration in the free vibration analysis of such springs.
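The element transfer matrix in such formulations is a matrix exponential; a compact sketch of the scaling-and-squaring evaluation with a diagonal Padé approximant it relies on (order selection simplified versus production implementations) is:

```python
import numpy as np

def expm_pade(A, m=6):
    """Matrix exponential by scaling and squaring with a diagonal [m/m] Pade
    approximant, the same ingredients as the transfer-matrix evaluation above.
    Order/scale selection is simplified compared with production codes."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Scale A down until its 1-norm is at most about 1
    norm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 0 else 0
    As = A / (2.0 ** s)
    # Build the diagonal Pade approximant exp(X) ~= Q^{-1} P
    X = np.eye(n)
    P = np.eye(n)
    Q = np.eye(n)
    c = 1.0
    for k in range(1, m + 1):
        c *= (m - k + 1) / (k * (2 * m - k + 1))
        X = X @ As
        P += c * X
        Q += ((-1) ** k) * c * X
    F = np.linalg.solve(Q, P)
    # Undo the scaling by repeated squaring
    for _ in range(s):
        F = F @ F
    return F
```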
Validating the 5Fs mnemonic for cholelithiasis: time to include family history.
LENUS (Irish Health Repository)
Bass, Gary
2013-11-01
The time-honoured mnemonic of '5Fs' is a reminder to students that patients with upper abdominal pain and who conform to a profile of 'fair, fat, female, fertile and forty' are likely to have cholelithiasis. We feel, however, that a most important 'F' (that for 'family history') is overlooked and should be introduced to enhance the value of a useful aide memoire.
Optimal Design and Real Time Implementation of Autonomous Microgrid Including Active Load
Mohamed A. Hassan; Muhammed Y. Worku; Mohamed A. Abido
2018-01-01
Controller gains and power-sharing parameters are the main parameters that affect the dynamic performance of a microgrid. When an active load is added to the autonomous microgrid, the stability problem becomes more involved. In this paper, the effect of an active load on microgrid dynamic stability is explored. An autonomous microgrid including three inverter-based distributed generations (DGs) with an active load is modeled and the associated controllers are designed. Controller gains of the inverters ...
Change of time methods in quantitative finance
Swishchuk, Anatoliy
2016-01-01
This book is devoted to the history of Change of Time Methods (CTM), the connections of CTM to stochastic volatilities and finance, fundamental aspects of the theory of CTM, basic concepts, and its properties. An emphasis is given on many applications of CTM in financial and energy markets, and the presented numerical examples are based on real data. The change of time method is applied to derive the well-known Black-Scholes formula for European call options, and to derive an explicit option pricing formula for a European call option for a mean-reverting model for commodity prices. Explicit formulas are also derived for variance and volatility swaps for financial markets with a stochastic volatility following a classical and delayed Heston model. The CTM is applied to price financial and energy derivatives for one-factor and multi-factor alpha-stable Levy-based models. Readers should have a basic knowledge of probability and statistics, and some familiarity with stochastic processes, such as Brownian motion, ...
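The Black-Scholes European call formula mentioned above, in a minimal self-contained form (this is the textbook formula; the parameter names follow the usual conventions):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call: spot S, strike K, maturity T
    (years), risk-free rate r, volatility sigma."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
```

The change-of-time derivations in the book recover this same closed form, and extend the approach to mean-reverting and stochastic-volatility models.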
SU-F-J-86: Method to Include Tissue Dose Response Effect in Deformable Image Registration
Energy Technology Data Exchange (ETDEWEB)
Zhu, J; Liang, J; Chen, S; Qin, A; Yan, D [Beaumont Health System, Royal Oak, MI (United States)]
2016-06-15
Purpose: Organs change shape and size during radiation treatment due to both mechanical stress and radiation dose response. However, dose-response-induced deformation has not been considered in conventional deformable image registration (DIR). A novel DIR approach is proposed that includes both tissue elasticity and radiation-dose-induced organ deformation. Methods: Assuming that organ sub-volume shrinkage is proportional to radiation-dose-induced cell killing/absorption, the dose-induced organ volume change was simulated by applying a virtual temperature to each sub-volume. Hence, both mechanical stress and the heterogeneous temperature field induce organ deformation. A thermal-stress finite element method with an organ-surface boundary condition was used to solve for the deformation. The initial boundary correspondence on the organ surface was created from conventional DIR. The boundary condition was updated by an iterative optimization scheme to minimize the elastic deformation energy. The registration was validated on a numerical phantom. Treatment dose was reconstructed applying both the conventional DIR and the proposed method, using daily CBCT images obtained from a head-and-neck (HN) treatment. Results: The phantom study showed 2.7% maximal discrepancy with respect to the actual displacement. Compared with conventional DIR, the sub-volume displacement difference in a right parotid had mean±SD (min, max) of 1.1±0.9 (−0.4 to 4.8), −0.1±0.9 (−2.9 to 2.4), and −0.1±0.9 (−3.4 to 1.9) mm in the RL/PA/SI directions, respectively. Mean parotid dose and V30 reconstructed including the dose-response-induced shrinkage were 6.3% and 12.0% higher than those from the conventional DIR. Conclusion: A heterogeneous dose distribution in a normal organ causes non-uniform sub-volume shrinkage. A sub-volume in a high-dose region shrinks more than one in a low-dose region, causing more sub-volumes to move into the high-dose area during the treatment course. This leads to an unfavorable dose-volume relationship for the normal organ
Space Weather opportunities from the Swarm mission including near real time applications
DEFF Research Database (Denmark)
Stolle, Claudia; Floberghagen, Rune; Luehr, Hermann
2013-01-01
Sophisticated space weather monitoring aims at nowcasting and predicting solar-terrestrial interactions because their effects on the ionosphere and upper atmosphere may seriously impact advanced technology. Operating alert infrastructures rely heavily on ground-based measurements and satellite...... these products in a timely manner will add significant value in monitoring present space weather and helping to predict the evolution of several magnetic and ionospheric events. Swarm will be a demonstrator mission for the valuable application of LEO satellite observations for space weather monitoring tools.
A model for Huanglongbing spread between citrus plants including delay times and human intervention
Vilamiu, Raphael G. d'A.; Ternes, Sonia; Braga, Guilherme A.; Laranjeira, Francisco F.
2012-09-01
The objective of this work was to present a compartmental deterministic mathematical model for representing the dynamics of HLB disease in a citrus orchard, including delay in the disease's incubation phase in the plants, and a delay period on the nymphal stage of Diaphorina citri, the most important HLB insect vector in Brazil. Numerical simulations were performed to assess the possible impacts of human detection efficiency of symptomatic plants, as well as the influence of a long incubation period of HLB in the plant.
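A toy discrete-time version of such a delayed compartmental model can illustrate the incubation delay. The mass-action infection term, the parameter values, and the function name are illustrative assumptions, not the authors' equations.

```python
def simulate_hlb(beta, tau, days, n_plants, i0=1):
    """Discrete-time toy model: susceptible plants enter an incubation queue on
    infection and only become infectious tau days later (the delay described
    in the abstract). beta and the mass-action force of infection are assumed."""
    S = float(n_plants - i0)
    incubating = [0.0] * tau  # infections waiting out the incubation delay
    I = float(i0)             # infectious (symptomatic) plants
    history = []
    for _ in range(days):
        force = beta * I * S / n_plants  # new infections this day (assumed form)
        new = min(S, force)
        S -= new
        incubating.append(new)
        I += incubating.pop(0)  # cohort infected tau days ago turns infectious
        history.append(I)
    return history
```

Removing symptomatic plants earlier (human intervention) would be modeled by draining `I`, which is the lever whose detection efficiency the simulations assess.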
Andersson, P. B. U.; Kropp, W.
2008-11-01
Rolling resistance, traction, wear, excitation of vibrations, and noise generation are all attributes to consider in optimisation of the interaction between automotive tyres and wearing courses of roads. The key to understand and describe the interaction is to include a wide range of length scales in the description of the contact geometry. This means including scales on the order of micrometres that have been neglected in previous tyre/road interaction models. A time domain contact model for the tyre/road interaction that includes interfacial details is presented. The contact geometry is discretised into multiple elements forming pairs of matching points. The dynamic response of the tyre is calculated by convolving the contact forces with pre-calculated Green's functions. The smaller-length scales are included by using constitutive interfacial relations, i.e. by using nonlinear contact springs, for each pair of contact elements. The method is presented for normal (out-of-plane) contact and a method for assessing the stiffness of the nonlinear springs based on detailed geometry and elastic data of the tread is suggested. The governing equations of the nonlinear contact problem are solved with the Newton-Raphson iterative scheme. Relations between force, indentation, and contact stiffness are calculated for a single tread block in contact with a road surface. The calculated results have the same character as results from measurements found in literature. Comparison to traditional contact formulations shows that the effect of the small-scale roughness is large; the contact stiffness is only up to half of the stiffness that would result if contact is made over the whole element directly to the bulk of the tread. It is concluded that the suggested contact formulation is a suitable model to include more details of the contact interface. Further, the presented result for the tread block in contact with the road is a suitable input for a global tyre/road interaction model
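The Newton-Raphson solve of the nonlinear contact springs can be sketched as follows; the power-law contact stiffness is a hypothetical stand-in for the geometry-based stiffness assessment described above.

```python
def solve_indentation(force_law, F, x0=1e-3, tol=1e-9, max_iter=100):
    """Find the indentation d with force_law(d) = F by Newton-Raphson,
    using a central-difference derivative of the (possibly nonlinear) law."""
    d = x0
    for _ in range(max_iter):
        r = force_law(d) - F
        if abs(r) <= tol * max(abs(F), 1.0):
            break
        h = 1e-8 * max(abs(d), 1.0)
        slope = (force_law(d + h) - force_law(d - h)) / (2.0 * h)
        d -= r / slope
        d = max(d, 1e-12)  # indentation stays non-negative
    return d

def law(d):
    # Hypothetical stiffening contact law, F = k0 * d**1.5 (Hertz-like exponent)
    return 1.0e6 * d ** 1.5
```

In the full model one such solve runs per pair of matching contact points, with the resulting forces convolved with the tyre's Green's functions.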
Engine including hydraulically actuated valvetrain and method of valve overlap control
Cowgill, Joel [White Lake, MI
2012-05-08
An exhaust valve control method may include displacing an exhaust valve in communication with the combustion chamber of an engine to an open position using a hydraulic exhaust valve actuation system and returning the exhaust valve to a closed position using the hydraulic exhaust valve actuation assembly. During closing, the exhaust valve may be displaced for a first duration from the open position to an intermediate closing position at a first velocity by operating the hydraulic exhaust valve actuation assembly in a first mode. The exhaust valve may be displaced for a second duration greater than the first duration from the intermediate closing position to a fully closed position at a second velocity at least eighty percent less than the first velocity by operating the hydraulic exhaust valve actuation assembly in a second mode.
Merkin, V. G.; Wiltberger, M. J.; Zhang, B.; Liu, J.; Wang, W.; Dimant, Y. S.; Oppenheim, M. M.; Lyon, J.
2017-12-01
During geomagnetic storms the magnetosphere-ionosphere-thermosphere system becomes activated in ways that are unique to disturbed conditions. This leads to emergence of physical feedback loops that provide tighter coupling between the system elements, often operating across disparate spatial and temporal scales. One such process that has recently received renewed interest is the generation of microscopic ionospheric turbulence in the electrojet regions (electrojet turbulence, ET) that results from strong convective electric fields imposed by the solar wind-magnetosphere interaction. ET leads to anomalous electron heating and generation of non-linear Pedersen current - both of which result in significant increases in effective ionospheric conductances. This, in turn, provides strong non-linear feedback on the magnetosphere. Recently, our group has published two studies aiming at a comprehensive analysis of the global effects of this microscopic process on the magnetosphere-ionosphere-thermosphere system. In one study, ET physics was incorporated in the TIEGCM model of the ionosphere-thermosphere. In the other study, ad hoc corrections to the ionospheric conductances based on ET theory were incorporated in the conductance module of the Lyon-Fedder-Mobarry (LFM) global magnetosphere model. In this presentation, we make the final step toward the full coupling of the microscopic ET physics within our global coupled model including LFM, the Rice Convection Model (RCM) and TIEGCM. To this end, ET effects are incorporated in the TIEGCM model and propagate throughout the system via thus modified TIEGCM conductances. The March 17, 2013 geomagnetic storm is used as a testbed for these fully coupled simulations, and the results of the model are compared with various ionospheric and magnetospheric observatories, including DMSP, AMPERE, and Van Allen Probes. Via these comparisons, we investigate, in particular, the ET effects on the global magnetosphere indicators such as the
Mariani, Robert Dominick
2014-09-09
Zirconium-based metal alloy compositions comprise zirconium, a first additive in which the permeability of hydrogen decreases with increasing temperatures at least over a temperature range extending from 350°C to 750°C, and a second additive having a solubility in zirconium over the temperature range extending from 350°C to 750°C. At least one of a solubility of the first additive in the second additive over the temperature range extending from 350°C to 750°C and a solubility of the second additive in the first additive over the temperature range extending from 350°C to 750°C is higher than the solubility of the second additive in zirconium over the temperature range extending from 350°C to 750°C. Nuclear fuel rods include a cladding material comprising such metal alloy compositions, and nuclear reactors include such fuel rods. Methods are used to fabricate such zirconium-based metal alloy compositions.
Time to consider sharing data extracted from trials included in systematic reviews
Directory of Open Access Journals (Sweden)
Luke Wolfenden
2016-11-01
Full Text Available Abstract Background While the debate regarding shared clinical trial data has shifted from whether such data should be shared to how this is best achieved, the sharing of data collected as part of systematic reviews has received little attention. In this commentary, we discuss the potential benefits of coordinated efforts to share data collected as part of systematic reviews. Main body There are a number of potential benefits of systematic review data sharing. Shared information and data obtained as part of the systematic review process may reduce unnecessary duplication, reduce demand on trialist to service repeated requests from reviewers for data, and improve the quality and efficiency of future reviews. Sharing also facilitates research to improve clinical trial and systematic review methods and supports additional analyses to address secondary research questions. While concerns regarding appropriate use of data, costs, or the academic return for original review authors may impede more open access to information extracted as part of systematic reviews, many of these issues are being addressed, and infrastructure to enable greater access to such information is being developed. Conclusion Embracing systems to enable more open access to systematic review data has considerable potential to maximise the benefits of research investment in undertaking systematic reviews.
ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION
Energy Technology Data Exchange (ETDEWEB)
Chen, Bin; Maddumage, Prasad [Research Computing Center, Department of Scientific Computing, Florida State University, Tallahassee, FL 32306 (United States); Kantowski, Ronald; Dai, Xinyu; Baron, Eddie, E-mail: bchen3@fsu.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)
2015-05-15
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.
ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION
International Nuclear Information System (INIS)
Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie
2015-01-01
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python
Liu, Jinxing
2013-04-24
When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of T_elem, the characteristic releasing time of the internal forces of the broken elements, and T_lattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with T_load, the characteristic loading period. The load-unload (L-U) method is used for one extreme, T_elem << T_lattice, whereas the force-release (F-R) method is used for the other, T_elem >> T_lattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of T_elem to T_lattice by adjusting their contributions in the trial displacement field. The material dependence of the snap-back instabilities is thereby implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, the effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).
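The combination of trial fields can be sketched as a convex blend controlled by the snap-back parameter γ. The simple linear weighting here is an assumption about the form of the combination, not the paper's exact construction.

```python
def combined_trial_field(u_lu, u_fr, gamma):
    """Blend the load-unload (L-U) and force-release (F-R) trial displacement
    fields: gamma = 0 recovers pure L-U behaviour, gamma = 1 pure F-R."""
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("gamma must lie in [0, 1]")
    return [(1.0 - gamma) * a + gamma * b for a, b in zip(u_lu, u_fr)]
```

Sweeping γ between 0 and 1 on the same specimen then probes how the T_elem/T_lattice ratio shifts the failure behaviour between the two extremes.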
Second-principles method for materials simulations including electron and lattice degrees of freedom
García-Fernández, Pablo; Wojdeł, Jacek C.; Íñiguez, Jorge; Junquera, Javier
2016-05-01
We present a first-principles-based (second-principles) scheme that permits large-scale materials simulations including both atomic and electronic degrees of freedom on the same footing. The method is based on a predictive quantum-mechanical theory—e.g., density functional theory—and its accuracy can be systematically improved at a very modest computational cost. Our approach is based on dividing the electron density of the system into a reference part—typically corresponding to the system's neutral, geometry-dependent ground state—and a deformation part—defined as the difference between the actual and reference densities. We then take advantage of the fact that the bulk part of the system's energy depends on the reference density alone; this part can be efficiently and accurately described by a force field, thus avoiding explicit consideration of the electrons. Then, the effects associated to the difference density can be treated perturbatively with good precision by working in a suitably chosen Wannier function basis. Further, the electronic model can be restricted to the bands of interest. All these features combined yield a very flexible and computationally very efficient scheme. Here we present the basic formulation of this approach, as well as a practical strategy to compute model parameters for realistic materials. We illustrate the accuracy and scope of the proposed method with two case studies, namely, the relative stability of various spin arrangements in NiO (featuring complex magnetic interactions in a strongly-correlated oxide) and the formation of a two-dimensional electron gas at the interface between band insulators LaAlO3 and SrTiO3 (featuring subtle electron-lattice couplings and screening effects). We conclude by discussing ways to overcome the limitations of the present approach (most notably, the assumption of a fixed bonding topology), as well as its many envisioned possibilities and future extensions.
Klinkusch, Stefan; Saalfrank, Peter; Klamroth, Tillmann
2009-09-21
We report simulations of laser-pulse driven many-electron dynamics by means of a simple, heuristic extension of the time-dependent configuration interaction singles (TD-CIS) approach. The extension allows for the treatment of ionizing states as nonstationary states with a finite, energy-dependent lifetime to account for above-threshold ionization losses in laser-driven many-electron dynamics. The extended TD-CIS method is applied to the following specific examples: (i) state-to-state transitions in the LiCN molecule which correspond to intramolecular charge transfer, (ii) creation of electronic wave packets in LiCN including wave packet analysis by pump-probe spectroscopy, and, finally, (iii) the effect of ionization on the dynamic polarizability of H(2) when calculated nonperturbatively by TD-CIS.
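The heuristic at the heart of the extension can be stated in one line: a CIS state lying above the ionization threshold is assigned a finite lifetime, i.e. a complex energy E − iΓ/2, so its population decays during the propagation. A minimal sketch with illustrative values (the paper's actual energy-dependent Γ is not reproduced here):

```python
import numpy as np

# Sketch (assumed form of the heuristic lifetime model): a state with
# energy E = 1 a.u. above threshold gets the complex energy E - i*Gamma/2,
# so its population |c(t)|^2 decays as exp(-Gamma * t).
def propagate_population(gamma, t):
    amp = np.exp(-1j * (1.0 - 1j * gamma / 2.0) * t)
    return abs(amp) ** 2

print(propagate_population(gamma=0.1, t=10.0))  # exp(-1) ~ 0.3679
```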
Applicability of a panel method, which includes nonlinear effects, to a forward-swept-wing aircraft
Ross, J. C.
1984-01-01
The ability of a lower-order panel method, VSAERO, to accurately predict the lift and pitching moment of a complete forward-swept-wing/canard configuration was investigated. The program can simulate nonlinear effects including boundary-layer displacement thickness, wake roll-up, and, to a limited extent, separated wakes. The predictions were compared with experimental data obtained using a small-scale model in the 7- by 10-Foot Wind Tunnel at NASA Ames Research Center. For the particular configuration under investigation, wake roll-up had only a small effect on the force and moment predictions. The effect of the displacement thickness modeling was to reduce the lift curve slope slightly, thus bringing the predicted lift into good agreement with the measured value. Pitching moment predictions were also improved by the boundary-layer simulation. The separation modeling was found to be sensitive to user inputs, but appears to give a reasonable representation of a separated wake. In general, the nonlinear capabilities of the code were found to improve the agreement with experimental data. The usefulness of the code would be enhanced by improving the reliability of the separated wake modeling and by the addition of a leading-edge separation model.
Chu, Henry Shiu-Hung [Idaho Falls, ID]; Lillo, Thomas Martin [Idaho Falls, ID]
2008-12-02
The invention includes methods of forming an aluminum oxynitride-comprising body. For example, a mixture is formed which comprises A:B:C in a respective molar ratio in the range of 9:3.6-6.2:0.1-1.1, where "A" is Al2O3, "B" is AlN, and "C" is a total of one or more of B2O3, SiO2, Si-Al-O-N, and TiO2. The mixture is sintered at a temperature of at least 1,600°C at a pressure of no greater than 500 psia, effective to form an aluminum oxynitride-comprising body which is at least internally transparent and has at least 99% of maximum theoretical density.
A Blade Tip Timing Method Based on a Microwave Sensor
Directory of Open Access Journals (Sweden)
Jilong Zhang
2017-05-01
Full Text Available Blade tip timing is an effective method for blade vibration measurements in turbomachinery. This method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising potential solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an auto-gain control circuit to reduce the effect of tip clearance change. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.
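The quantity any tip timing system recovers can be summarized in one line: a vibrating blade arrives at the sensor slightly early or late, and the timing deviation maps to a tip deflection through rotor radius and speed. The sketch below uses hypothetical numbers (not from this article); taking the midpoint of the rising and falling edges as the arrival time is one common way to reduce sensitivity to clearance-induced amplitude changes.

```python
import math

# Sketch (hypothetical numbers, not this article's system).
def arrival_time(t_rise, t_fall):
    # Midpoint of the rising and falling edges: less sensitive to
    # signal-amplitude changes caused by tip clearance variation.
    return 0.5 * (t_rise + t_fall)

def tip_deflection(radius_m, rpm, dt_s):
    """Deflection = R * omega * (t_actual - t_expected)."""
    omega = rpm * 2.0 * math.pi / 60.0   # shaft speed in rad/s
    return radius_m * omega * dt_s

# 0.5 m blade radius at 3000 rpm; blade arrives 2 microseconds early.
dt = arrival_time(9.0e-6, 11.0e-6) - 12.0e-6   # -2 us vs. expected
print(tip_deflection(0.5, 3000.0, dt))          # about -0.31 mm
```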
Implementation aspects of the Boundary Element Method including viscous and thermal losses
DEFF Research Database (Denmark)
Cutanda Henriquez, Vicente; Juhl, Peter Møller
2014-01-01
The implementation of viscous and thermal losses using the Boundary Element Method (BEM) is based on Kirchhoff's dispersion relation and has been tested in previous work using analytical test cases and comparison with measurements. Numerical methods that can simulate sound fields in fluids...
Time series analysis: methods and applications
Rao, Tata Subba; Rao, C R
2012-01-01
The field of statistics not only affects all areas of scientific activity, but also many other matters such as public policy. It is branching rapidly into so many different subjects that a series of handbooks is the only way of comprehensively presenting the various aspects of statistical methodology, applications, and recent developments. The Handbook of Statistics is a series of self-contained reference books. Each volume is devoted to a particular topic in statistics, with Volume 30 dealing with time series. The series is addressed to the entire community of statisticians and scientists in various disciplines who use statistical methodology in their work. At the same time, special emphasis is placed on applications-oriented techniques, with the applied statistician in mind as the primary audience. Comprehensively presents the various aspects of statistical methodology. Discusses a wide variety of diverse applications and recent developments. Contributors are internationally renowned experts in their respective fields...
Timing, methods and perspectives in citizenship training
Directory of Open Access Journals (Sweden)
Alessia Carta
2010-07-01
Full Text Available The current models of development are changing the balance between human activity and Nature on a local and global level, and the urgent need to establish a new relationship between Man and the environment is increasingly apparent. The move towards a more caring approach to the planet, introducing concepts such as limits, impact on future generations, regeneration of resources, social and environmental justice and the right to citizenship, should make us consider (aside from international undertakings by governments) exactly how we can promote a culture of sustainability in schools in terms of methods, time scales, and location. Schools are directly involved in these processes of change; however, it is necessary to plan carefully and establish situations that will result in greater attention being paid to the interaction between man and the environment, highlighting the lifestyles and attitudes that are currently incompatible with a sustainable future. These solutions, although based on technical-scientific knowledge, cannot be brought about without the involvement of the individual and local agencies working together. We have chosen to concentrate on the links between educational policies and local areas, interpreting declarations made by international bodies such as UNESCO and suggestions aimed at bringing sustainability to the centre of specific policies. Bringing about these aims requires great educational effort that goes well beyond simple environmental education, since it requires a permanent process for educating adults. Looking at stages of the history of the theories regarding the development and education of adults shows how the topic of sustainability made its entry into the debate about permanent education and how in the last ten years it has taken on an unrivalled importance as a point of reference for educational policies and pedagogical reflection. The origin of the concept of sustainability, although belonging to natural
A coupling method for a cardiovascular simulation model which includes the Kalman filter.
Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya
2012-01-01
Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate different phenomena. However, coupling methods require a significant amount of calculation, since a system of non-linear equations must be solved for each timestep. Therefore, we propose a coupling method which decreases the amount of calculation by using the Kalman filter. In our method, the Kalman filter calculates approximations for the solution to the system of non-linear equations at each timestep. The approximations are then used as initial values for solving the system of non-linear equations. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method. When compared with a smoothing spline predictor, the proposed method required 49.4% fewer iterations.
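The mechanism by which a predictor saves work can be shown on a toy problem. The sketch below is not the paper's cardiovascular model and uses plain linear extrapolation in place of the Kalman filter, but it demonstrates the same principle: a better initial guess for the per-timestep Newton solve cuts the iteration count.

```python
# Toy sketch: at each "timestep" a nonlinear equation x^3 = c(t) must
# be solved.  A predictor supplies Newton's initial guess; a good
# predictor (here: linear extrapolation of the last two solutions,
# standing in for the Kalman filter) typically needs fewer iterations
# than restarting from the previous solution.
def newton(f, df, x0, tol=1e-10, max_iter=100):
    x, n = x0, 0
    while abs(f(x)) > tol and n < max_iter:
        x -= f(x) / df(x)
        n += 1
    return x, n

targets = [1.0 + 0.1 * k for k in range(20)]   # slowly varying c(t)
sols, naive_iters, pred_iters = [], 0, 0
for c in targets:
    f = lambda x, c=c: x ** 3 - c
    df = lambda x: 3.0 * x ** 2
    x0_naive = sols[-1] if sols else 1.0                     # previous solution
    x0_pred = (2 * sols[-1] - sols[-2]) if len(sols) >= 2 else x0_naive
    naive_iters += newton(f, df, x0_naive)[1]
    pred_iters += newton(f, df, x0_pred)[1]
    sols.append(newton(f, df, x0_pred)[0])

print(naive_iters, pred_iters)   # the predictor needs fewer iterations
```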
Turbomachine combustor nozzle including a monolithic nozzle component and method of forming the same
Stoia, Lucas John; Melton, Patrick Benedict; Johnson, Thomas Edward; Stevenson, Christian Xavier; Vanselow, John Drake; Westmoreland, James Harold
2016-02-23
A turbomachine combustor nozzle includes a monolithic nozzle component having a plate element and a plurality of nozzle elements. Each of the plurality of nozzle elements includes a first end extending from the plate element to a second end. The plate element and plurality of nozzle elements are formed as a unitary component. A plate member is joined with the nozzle component. The plate member includes an outer edge that defines first and second surfaces and a plurality of openings extending between the first and second surfaces. The plurality of openings are configured and disposed to register with and receive the second end of corresponding ones of the plurality of nozzle elements.
International Nuclear Information System (INIS)
Lewandowski, E.F.; Peterson, L.L.
1985-01-01
This invention teaches a method of cutting a narrow slot in an extrusion die with an electrical discharge machine by first drilling spaced holes at the ends of where the slot will be, whereby the oil can flow through the holes and slot to flush the material eroded away as the slot is being cut. The invention further teaches a method of extruding a very thin ribbon of solid highly reactive material such as lithium or sodium through the die in an inert atmosphere of nitrogen, argon or the like, as in a glovebox. The invention further teaches a method of stamping out sample discs from the ribbon and of packaging each disc by sandwiching it between two aluminum sheets and cold welding the sheets together along an annular seam beyond the outer periphery of the disc. This provides a sample of high purity reactive material that can have a long shelf life.
Method for including detailed evaluation of daylight levels in Be06
DEFF Research Database (Denmark)
Petersen, Steffen
2008-01-01
Good daylight conditions in office buildings have become an important issue due to new European regulatory demands which include energy consumption for electrical lighting in the building energy frame. Good daylight conditions in offices are thus in increased focus as an energy conserving measure. In order to evaluate whether a certain design is good daylight design or not, building designers must perform detailed evaluation of daylight levels, including the daylight performance of dynamic solar shadings, and include these in the energy performance evaluation. However, the mandatory national calculation tool in Denmark (Be06) for evaluating the energy performance of buildings is currently using a simple representation of available daylight in a room and simple assumptions regarding the control of shading devices. In a case example, this is leading to an overestimation of the energy consumption...
A novel method of including Landau level mixing in numerical studies of the quantum Hall effect
International Nuclear Information System (INIS)
Wooten, Rachel; Quinn, John; Macek, Joseph
2013-01-01
Landau level mixing should influence the quantum Hall effect for all except the strongest applied magnetic fields. We propose a simple method for examining the effects of Landau level mixing by incorporating multiple Landau levels into the Haldane pseudopotentials through exact numerical diagonalization. Some of the resulting pseudopotentials for the lowest and first excited Landau levels will be presented
Yanagihara, Kota; Kubo, Shin; Dodin, Ilya; Nakamura, Hiroaki; Tsujimura, Toru
2017-10-01
Geometrical optics ray-tracing is a reasonable numerical approach for describing the electron cyclotron resonance wave (ECW) in slowly varying, spatially inhomogeneous plasma. It is well known that the results of this conventional method are adequate in most cases. However, in the case of helical fusion plasma, which has a complicated magnetic structure, strong magnetic shear with a large density scale length can cause mode coupling of waves outside the last closed flux surface, and a complicated absorption structure requires a strongly focused wave for ECH. Since the conventional ray equations describing the ECW do not have any terms for diffraction, polarization, and wave decay effects, we cannot accurately describe mode coupling of waves, strongly focused waves, the behavior of waves in an inhomogeneous absorption region, and so on. As a fundamental solution to these problems, we consider an extension of the ray-tracing method. The specific process is planned as follows. First, calculate the reference ray by the conventional method, and define the local ray-based coordinate system along the reference ray. Then, calculate the evolution of the distributions of amplitude and phase on the ray-based coordinates step by step. The progress of our extended method will be presented.
Indication of Importance of Including Soil Microbial Characteristics into Biotope Valuation Method.
Czech Academy of Sciences Publication Activity Database
Trögl, J.; Pavlorková, Jana; Packová, P.; Seják, J.; Kuráň, P.; Kuráň, J.; Popelka, J.; Pacina, J.
2016-01-01
Vol. 8, No. 3 (2016), article no. 253. ISSN 2071-1050. Institutional support: RVO:67985858. Keywords: biotope assessment; biotope valuation method; soil microbial communities. Subject RIV: DJ - Water Pollution; Quality. Impact factor: 1.789, year: 2016
Ale, B.J.M.; Van Gulijk, C.; Hanea, D.M.; Hudson, P.; Lin, P.H.; Sillem, S.; Steenhoek, M.; Ababei, D.
2013-01-01
An integrated model for risk in a real-time environment for the hydrocarbon industry based on the CATS model for commercial aviation safety has been further developed. The approach described in earlier papers required Bayesian Belief Nets (BBN) to be developed for each process unit separately. A
Tomczuk, Zygmunt; Olszanski, Theodore W.; Battles, James E.
1977-03-08
A negative electrode that includes a lithium alloy as active material is prepared by briefly submerging a porous, electrically conductive substrate within a melt of the alloy. Prior to solidification, excess melt can be removed by vibrating or otherwise manipulating the filled substrate to expose interstitial surfaces. Electrodes of such as solid lithium-aluminum filled within a substrate of metal foam are provided.
Thick electrodes including nanoparticles having electroactive materials and methods of making same
Xiao, Jie; Lu, Dongping; Liu, Jun; Zhang, Jiguang; Graff, Gordon L.
2017-02-21
Electrodes having nanostructure and/or utilizing nanoparticles of active materials and having high mass loadings of the active materials can be made to be physically robust and free of cracks and pinholes. The electrodes include nanoparticles having electroactive material, which nanoparticles are aggregated with carbon into larger secondary particles. The secondary particles can be bound with a binder to form the electrode.
Explicit time marching methods for the time-dependent Euler computations
International Nuclear Information System (INIS)
Tai, C.H.; Chiang, D.C.; Su, Y.P.
1997-01-01
Four explicit-type time marching methods, including one proposed by the authors, are examined. The TVD conditions of this method are analyzed with the linear conservation law as the model equation. The performance of these methods when applied to the Euler equations is numerically tested. Seven examples are tested; the main concern is the performance of the methods when discontinuities of different strengths are encountered. As the discontinuity becomes stronger, spurious oscillations show up for the three existing methods, while the method proposed by the authors always gives satisfactory results. The effect of the limiter is also investigated. To put these methods on the same basis for the comparison, the same spatial discretization is used. Roe's solver is used to evaluate the fluxes at the cell interface; spatially second-order accuracy is achieved by the MUSCL reconstruction. 19 refs., 8 figs
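The flavor of such schemes can be shown on the linear model equation mentioned in the abstract. The sketch below is not the authors' method, only a generic explicit MUSCL/minmod/upwind (Roe) update for u_t + u_x = 0 on a periodic grid, demonstrating the TVD property numerically on a discontinuous initial condition.

```python
import numpy as np

# Explicit time marching for linear advection u_t + u_x = 0 with MUSCL
# slope reconstruction, a minmod limiter, and the upwind (Roe) flux.
# For a TVD scheme the total variation of the solution never increases.
def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def step(u, cfl):
    slope = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    u_left = u + 0.5 * slope     # reconstructed state at each right face
    flux = u_left                 # upwind flux for unit wave speed
    return u - cfl * (flux - np.roll(flux, 1))

n = 200
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)   # square discontinuity
tv0 = np.abs(u - np.roll(u, 1)).sum()            # periodic total variation
for _ in range(100):
    u = step(u, cfl=0.5)
tv = np.abs(u - np.roll(u, 1)).sum()
print(tv0, tv)   # the total variation does not grow
```

Without the limiter (i.e. taking the raw slopes), the same update produces the spurious oscillations at the discontinuity that the abstract describes for non-TVD schemes.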
Method for pulse control in a laser including a stimulated brillouin scattering mirror system
Dane, C. Brent; Hackel, Lloyd; Harris, Fritz B.
2007-10-23
A laser system, such as a master oscillator/power amplifier system, comprises a gain medium and a stimulated Brillouin scattering (SBS) mirror system. The SBS mirror system includes an in situ filtered SBS medium that comprises a compound having a small negative non-linear index of refraction, such as a perfluoro compound. An SBS relay telescope having a telescope focal point includes a baffle at the telescope focal point which blocks off-angle beams. A beam splitter is placed between the SBS mirror system and the SBS relay telescope, directing a fraction of the beam to an alternate beam path for an alignment fiducial. The SBS mirror system has a collimated SBS cell and a focused SBS cell. An adjustable attenuator is placed between the collimated SBS cell and the focused SBS cell, by which the pulse width of the reflected beam can be adjusted.
Slater, T. F.; Elfring, L.; Novodvorsky, I.; Talanquer, V.; Quintenz, J.
2007-12-01
Science education reform documents universally call for students to have authentic and meaningful experiences using real data in the context of their science education. The underlying philosophical position is that students analyzing data can have experiences that mimic actual research. In short, research experiences that reflect the scientific spirit of inquiry potentially can: prepare students to address real-world complex problems; develop students' ability to use scientific methods; prepare students to critically evaluate the validity of data or evidence and of the consequent interpretations or conclusions; teach quantitative skills, technical methods, and scientific concepts; increase verbal, written, and graphical communication skills; and train students in the values and ethics of working with scientific data. However, it is unclear what the broader pre-service teacher preparation community is doing to prepare future teachers to promote, manage, and successfully facilitate their own students in conducting authentic scientific inquiry. Surveys of undergraduates in secondary science education programs suggest that students have had almost no experiences themselves in conducting open scientific inquiry where they develop researchable questions, design strategies to pursue evidence, and communicate data-based conclusions. In response, the College of Science Teacher Preparation Program at the University of Arizona requires all students enrolled in its various science teaching methods courses to complete an open inquiry research project and defend their findings at a specially designed inquiry science mini-conference at the end of the term. End-of-term surveys show that students enjoy their research experience and believe that this experience enhances their ability to facilitate their own future students in conducting open inquiry.
Sidik, S. M.
1975-01-01
Ridge, Marquardt's generalized inverse, shrunken, and principal components estimators are discussed in terms of the objectives of point estimation of parameters, estimation of the predictive regression function, and hypothesis testing. It is found that as the normal equations approach singularity, more consideration must be given to estimable functions of the parameters as opposed to estimation of the full parameter vector; that biased estimators all introduce constraints on the parameter space; that adoption of mean squared error as a criterion of goodness should be independent of the degree of singularity; and that ordinary least-squares subset regression is the best overall method.
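The ridge estimator discussed above has a closed form that makes the singularity point concrete: beta(k) = (X'X + kI)^(-1) X'y, reducing to ordinary least squares at k = 0. The sketch below (illustrative synthetic data, not from the report) shows how a nearly singular X'X inflates the OLS coefficients while a small ridge constant stabilizes them, at the price of the bias the abstract analyzes.

```python
import numpy as np

# Nearly collinear design: the two columns differ only by tiny noise,
# so X'X is close to singular and OLS coefficients are unstable.
rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=n)

def ridge(X, y, k):
    """Ridge estimator beta(k) = (X'X + k I)^(-1) X'y; k = 0 gives OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

for k in (0.0, 0.1):
    b = ridge(X, y, k)
    print(k, b, np.linalg.norm(b))   # ridge norm is much smaller
```

The shrinkage acts as the constraint on the parameter space noted in the abstract: the ill-determined direction of the nearly singular normal equations is suppressed, while the well-determined combination (here roughly b1 + b2) is preserved.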
Wiebe, Nicholas J P; Meyer, Irmtraud M
2010-06-24
The prediction of functional RNA structures has attracted increased interest, as it allows us to study the potential functional roles of many genes. RNA structure prediction methods, however, assume that there is a unique functional RNA structure and also do not predict functional features required for in vivo folding. In order to understand how functional RNA structures form in vivo, we require sophisticated experiments or reliable prediction methods. So far, there exist only a few experimentally validated transient RNA structures. On the computational side, there exist several computer programs which aim to predict the co-transcriptional folding pathway in vivo, but these make a range of simplifying assumptions and do not capture all features known to influence RNA folding in vivo. We want to investigate if evolutionarily related RNA genes fold in a similar way in vivo. To this end, we have developed a new computational method, Transat, which detects conserved helices of high statistical significance. We introduce the method, present a comprehensive performance evaluation and show that Transat is able to predict the structural features of known reference structures, including pseudo-knotted ones, as well as those of known alternative structural configurations. Transat can also identify unstructured sub-sequences bound by other molecules and provides evidence for new helices which may define folding pathways, supporting the notion that homologous RNA sequences not only assume a similar reference RNA structure, but also fold similarly. Finally, we show that the structural features predicted by Transat differ from those assuming thermodynamic equilibrium. Unlike the existing methods for predicting folding pathways, our method works in a comparative way. This has the disadvantage of not being able to predict features as a function of time, but has the considerable advantage of highlighting conserved features and of not requiring a detailed knowledge of the cellular
A method for generating high resolution satellite image time series
Guo, Tao
2014-10-01
There is an increasing demand for satellite remote sensing data with both high spatial and high temporal resolution in many applications, but it is still a challenge to simultaneously improve spatial resolution and temporal frequency due to the technical limits of current satellite observation systems. To this end, much R&D effort has been ongoing for years and has led to some successes in roughly two areas. One includes super-resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and produce good visual effects but hardly preserve spectral signatures, resulting in limited analytical value. On the other hand, temporal interpolation is a straightforward method to increase temporal frequency, but it adds little informative content. In this paper we present a novel method to simulate high-resolution time series data by combining low-resolution time series data with only a very small number of high-resolution images. Our method starts with a pair of high- and low-resolution data sets, and a spatial registration is done by introducing an LDA model to map high- and low-resolution pixels correspondingly. Afterwards, temporal change information is captured through a comparison of the low-resolution time series data, projected onto the high-resolution data plane, and assigned to each high-resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high-resolution data are generated. A preliminary experiment shows that our method can simulate high-resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low-resolution images only, so that the use of costly high-resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forest and land use investigation.
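The projection idea can be caricatured in a few lines. The sketch below is far simpler than the paper's method (nearest-neighbour upsampling stands in for the LDA-based pixel mapping and per-class change patterns), but it shows the basic simulation step: add the low-resolution temporal change, resampled to the fine grid, onto a single high-resolution base image.

```python
import numpy as np

# Highly simplified sketch (not the paper's LDA-based method): take one
# high-resolution image hr0 at time 0, capture the temporal change from
# the low-resolution series, and project it onto the fine grid.
def upsample(lr, factor):
    # Nearest-neighbour upsampling: repeat each coarse pixel as a block.
    return np.kron(lr, np.ones((factor, factor)))

def simulate_hr(hr0, lr0, lr_t, factor):
    return hr0 + upsample(lr_t - lr0, factor)

hr0 = np.arange(16.0).reshape(4, 4)      # 4x4 high-resolution base image
lr0 = np.full((2, 2), 5.0)               # 2x2 low-resolution, time 0
lr_t = np.full((2, 2), 7.0)              # 2x2 low-resolution, time t (+2)
print(simulate_hr(hr0, lr0, lr_t, 2))    # hr0 shifted by +2 everywhere
```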
International Nuclear Information System (INIS)
Tereshin, G.S.; Kharitonova, L.K.; Kuznetsova, O.B.
1979-01-01
Heterogeneous systems Y(NO3)3 (YCl3)-HnL-KNO3 (KCl)-H2O are investigated by potentiometric titration (with coulometric generation of OH- ions). HnL is one of the following: oxyethylidenediphosphonic, aminobenzylidenediphosphonic, glycine-bis-methylphosphonic, nitrilotrimethylphosphonic (H6L) and ethylenediaminetetramethylphosphonic acids. The range of existence of YH(n-3)L·yH2O has been determined. The possibility of using potentiometric titration for investigating heterogeneous systems is demonstrated by the study of the system Y(NO3)3-H6L-KOH-H2O by the method of residual concentration. The two methods have shown that at low pH, YH3L·yH2O forms; at pH = 6, KYH2L·y'H2O; and at pH = 7, K2YHL·y''H2O. The complete solubility products of the nitrilotrimethylphosphonates are evaluated
A convolution method for predicting mean treatment dose including organ motion at imaging
International Nuclear Information System (INIS)
Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA
2000-01-01
Full text: The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
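The convolution itself is compact enough to sketch. The 1-D example below uses hypothetical grid and sigma values (not the paper's data): it convolves a 60 Gy static slab dose with a Gaussian variation kernel whose variance is the sum of the organ-motion and set-up error variances, as the abstract describes.

```python
import numpy as np

# Mean treatment dose = static dose convolved with a Gaussian variation
# kernel; the kernel variance is the sum of the organ-motion and set-up
# error variances (1-D sketch, hypothetical numbers).
def mean_dose(static_dose, dx, sigma_motion, sigma_setup):
    sigma = np.hypot(sigma_motion, sigma_setup)   # combined std deviation
    x = np.arange(-5.0 * sigma, 5.0 * sigma + dx, dx)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                        # discrete normalization
    return np.convolve(static_dose, kernel, mode="same")

# 60 Gy slab on a 1 mm-per-sample grid; 3 mm motion, 4 mm set-up error.
dose = np.where(abs(np.arange(-50, 50)) < 20, 60.0, 0.0)
blurred = mean_dose(dose, dx=1.0, sigma_motion=3.0, sigma_setup=4.0)
print(blurred.max())   # penumbra is smeared; plateau stays near 60 Gy
```

Because the kernel is normalized, the integral dose is preserved; only the dose gradients at the field edges are blurred, which is exactly the averaging effect of random geometric errors over a course of treatment.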
Energy Technology Data Exchange (ETDEWEB)
Koh, Chung-Yan; Piccini, Matthew E.; Singh, Anup K.
2017-09-19
Examples are described including measurement systems for conducting competition assays. A first chamber of an assay device may be loaded with a sample containing a target antigen. The target antigen in the sample may be allowed to bind to antibody-coated beads in the first chamber. A control layer separating the first chamber from a second chamber may then be opened to allow a labeling agent loaded in a first portion of the second chamber to bind to any unoccupied sites on the antibodies. A centrifugal force may then be applied to transport the beads through a density media to a detection region for measurement by a detection unit.
Koh, Chung-Yan; Piccini, Matthew E.; Singh, Anup K.
2017-07-11
Examples are described including measurement systems for conducting competition assays. A first chamber of an assay device may be loaded with a sample containing a target antigen. The target antigen in the sample may be allowed to bind to antibody-coated beads in the first chamber. A control layer separating the first chamber from a second chamber may then be opened to allow a labeling agent loaded in a first portion of the second chamber to bind to any unoccupied sites on the antibodies. A centrifugal force may then be applied to transport the beads through a density media to a detection region for measurement by a detection unit.
Milton, Kimball A
2006-01-01
This is a graduate level textbook on the theory of electromagnetic radiation and its application to waveguides, transmission lines, accelerator physics and synchrotron radiation. It has grown out of lectures and manuscripts by Julian Schwinger prepared during the war at MIT's Radiation Laboratory, updated with material developed by Schwinger at UCLA in the 1970s and 1980s, and by Milton at the University of Oklahoma since 1994. The book includes a great number of straightforward and challenging exercises and problems. It is addressed to students in physics, electrical engineering, and applied mathematics seeking a thorough introduction to electromagnetism with emphasis on radiation theory and its applications.
DEFF Research Database (Denmark)
Xiao, Zhao xia; Nan, Jiakai; Guerrero, Josep M.
2017-01-01
A multiple time-scale optimization scheduling scheme, covering both day-ahead and short-term horizons, for an islanded microgrid is presented. In this paper, the microgrid under study includes photovoltaics (PV), wind turbine (WT), diesel generator (DG), batteries, and shiftable loads. The study considers the maximum efficiency operation area for the diesel engine and the cost of the battery charge/discharge cycle losses. The day-ahead generation scheduling takes into account the minimum operational cost and the maximum load satisfaction as the objective function. Short-term optimal dispatch is based on minimizing...
Garg, Anil K; Garg, Seema
2017-01-01
The evidence suggests that our perception of physical beauty is based on how closely the features of one's face reflect phi (the golden ratio) in their proportions. By that extension, it must certainly be possible to use a mathematical parameter to design an anterior hairline in all faces. To establish a user-friendly method to design an anterior hairline in cases of male pattern alopecia. We need a flexible measuring tape and skin marker. A reference point A (glabella) is taken in between eyebrows. Mark point E, near the lateral canthus, 8 cm horizontal on either side from the central point A. A mid-frontal point (point B) is marked 8 cm from point A on the forehead in a mid-vertical plane. The frontotemporal points (C and C') are marked on the frontotemporal area, 8 cm in a horizontal plane from point B and 8 cm in a vertical plane from point E. The temporal peak points (D and D') are marked on the line joining the frontotemporal point C to the lateral canthus point E, slightly more than halfway toward lateral canthus, usually 5 cm from the frontotemporal point C. This line makes an anterior border of the temporal triangle. We have conducted a study with 431 cases of male pattern alopecia. The average distance of the mid-frontal point from glabella was 7.9 cm. The patient satisfaction reported was 94.7%. Our method gives a skeletal frame of the anterior hairline with minimal criteria, with no need of visual imagination and experience of the surgeon. It automatically takes care of the curvature of the forehead and is easy to use for a novice surgeon.
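The construction above can be sketched as coordinates. This is a minimal sketch assuming a simplified flat 2-D front view with the glabella at the origin and distances in centimetres; the actual method uses a tape measure on the curved forehead, so these coordinates are illustrative only.

```python
# Flat 2-D front view, right side only, glabella at the origin.
# Point labels follow the abstract: A glabella, B mid-frontal,
# C frontotemporal, D temporal peak, E near the lateral canthus.

def hairline_points():
    A = (0.0, 0.0)                      # glabella, between the eyebrows
    B = (0.0, 8.0)                      # mid-frontal point, 8 cm above A
    E = (8.0, 0.0)                      # lateral canthus point, 8 cm lateral to A
    C = (8.0, 8.0)                      # frontotemporal point: 8 cm from B and from E
    t = 5.0 / 8.0                       # D sits ~5 cm from C along the C-E line
    D = (C[0] + t * (E[0] - C[0]), C[1] + t * (E[1] - C[1]))
    return {"A": A, "B": B, "C": C, "D": D, "E": E}
```

The C-E segment sketched here is the anterior border of the temporal triangle described in the abstract.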
Kauvar, Arielle N B; Cronin, Terrence; Roenigk, Randall; Hruza, George; Bennett, Richard
2015-05-01
Basal cell carcinoma (BCC) is the most common cancer in the US population affecting approximately 2.8 million people per year. Basal cell carcinomas are usually slow-growing and rarely metastasize, but they do cause localized tissue destruction, compromised function, and cosmetic disfigurement. To provide clinicians with guidelines for the management of BCC based on evidence from a comprehensive literature review, and consensus among the authors. An extensive review of the medical literature was conducted to evaluate the optimal treatment methods for cutaneous BCC, taking into consideration cure rates, recurrence rates, aesthetic and functional outcomes, and cost-effectiveness of the procedures. Surgical approaches provide the best outcomes for BCCs. Mohs micrographic surgery provides the highest cure rates while maximizing tissue preservation, maintenance of function, and cosmesis. Mohs micrographic surgery is an efficient and cost-effective procedure and remains the treatment of choice for high-risk BCCs and for those in cosmetically sensitive locations. Nonsurgical modalities may be used for low-risk BCCs when surgery is contraindicated or impractical, but the cure rates are lower.
International Nuclear Information System (INIS)
Hwang, I.T.; Ting, K.
1987-01-01
The dynamic response of liquid storage tanks, including hydrodynamic interactions due to earthquake ground motion, has been extensively studied. Several finite element procedures, such as those of Balendra et al. (1982) and Haroun (1983), have been devoted to investigating the dynamic interaction between the deformable tank wall and the liquid. Further, if the geometry of the storage tank cannot be described as axisymmetric, the tank wall and the fluid domain must be discretized by three-dimensional finite elements to investigate the fluid-structure interactions; the large computer memory and vast computing time required usually make such an analysis impractical. To demonstrate the accuracy and reliability of the solution technique developed herein, the dynamic behavior of the ground-supported, deformable, cylindrical tank with incompressible fluid studied by Haroun (1983) is analyzed. Good agreement between the computed hydrodynamic pressure distributions and the reference solutions is noted. Fluid compressibility significantly affects the hydrodynamic pressures of the liquid-tank interactions, yet this effect has so far received little attention. The influence of liquid compressibility on the response of liquid storage tanks to ground motion is therefore examined. In addition, the complex-valued frequency response functions for the hydrodynamic forces of Haroun's problem are displayed. (orig./GL)
Simultaneous real-time data collection methods
Klincsek, Thomas
1992-01-01
This paper describes the development of electronic test equipment which executes, supervises, and reports on various tests. This validation process uses computers to analyze test results and report conclusions. The test equipment consists of an electronics component and the data collection and reporting unit. The PC software, display screens, and real-time database are described. Pass-fail procedures and data replay are discussed. The OS/2 operating system and Presentation Manager user interface were used to create a highly interactive automated system. The system outputs are hardcopy printouts and MS-DOS format files which may be used as input for other PC programs.
Time dependent variational method in quantum mechanics
International Nuclear Information System (INIS)
Torres del Castillo, G.F.
1987-01-01
Using the fact that solutions to the time-dependent Schrödinger equation can be obtained from a variational principle, approximations to the exact solutions can be obtained by restricting the evolution of the state vector to some surface in the corresponding Hilbert space; these approximations are determined by equations similar to Hamilton's equations. It is shown that, in order for the approximate evolution to be well defined on a given surface, the imaginary part of the inner product restricted to the surface must be non-singular. (author)
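Schematically, the construction can be written out as follows (a generic rendering of the time-dependent variational principle, not the author's exact notation). The exact dynamics makes the action stationary,

```latex
S[\psi] = \int \langle \psi(t) \,|\, i\hbar\,\partial_t - H \,|\, \psi(t) \rangle \, dt ,
\qquad \delta S = 0 ,
```

and restricting $|\psi\rangle = |\psi(z_1,\dots,z_n)\rangle$ to a surface parametrized by real coordinates $z_k(t)$ yields Hamilton-like equations

```latex
\sum_l \omega_{kl}\,\dot z_l = \frac{\partial}{\partial z_k}\,\langle \psi | H | \psi \rangle ,
\qquad
\omega_{kl} = 2\hbar\,\operatorname{Im}\!\left\langle \frac{\partial \psi}{\partial z_k} \,\middle|\, \frac{\partial \psi}{\partial z_l} \right\rangle ,
```

which are well defined precisely when the antisymmetric matrix $\omega$, built from the imaginary part of the restricted inner product, is non-singular.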
Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery
International Nuclear Information System (INIS)
Ustinov, A; Khayrullina, A; Khmelik, M; Sveshnikova, A; Borzenko, V
2016-01-01
Development of fuel cell (FC) and hydrogen metal-hydride storage (MH) technologies continuously demonstrates higher efficiency and safety, as hydrogen is stored in a bound state at low pressures of about 2 bar. Combining a FC/MH system with an electrolyser powered by a renewable source allows creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of power supply for the real town of Batamai in Russia. (paper)
Development method of Hybrid Energy Storage System, including PEM fuel cell and a battery
Ustinov, A.; Khayrullina, A.; Borzenko, V.; Khmelik, M.; Sveshnikova, A.
2016-09-01
Development of fuel cell (FC) and hydrogen metal-hydride storage (MH) technologies continuously demonstrates higher efficiency and safety, as hydrogen is stored in a bound state at low pressures of about 2 bar. Combining a FC/MH system with an electrolyser powered by a renewable source allows creation of an almost fully autonomous power system, which could potentially replace a diesel generator as a back-up power supply. However, the system must be extended with an electrochemical battery to start up the FC and compensate the electric load when the FC fails to deliver the necessary power. The present paper delivers the results of experimental and theoretical investigation of a hybrid energy system including a proton exchange membrane (PEM) FC, an MH accumulator and an electrochemical battery, a development methodology for such systems, and the modelling of different battery types using a hardware-in-the-loop approach. The economic efficiency of the proposed solution is discussed using the example of power supply for the real town of Batamai in Russia.
Hano, Mitsuo; Hotta, Masashi
A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term using a null space of the coefficient matrix is also described. For three-dimensional magnetostatic field analysis, the convergence time and iteration count of this multigrid method are compared with those of the conventional ICCG method.
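As a concrete illustration of the two-level idea (smoothing plus coarse-grid correction), here is a minimal sketch for a 1-D Poisson model problem. The paper itself works with high- and low-order vector finite elements and ICCG, which are not reproduced here; the exact coarse solve below merely stands in for the lowest-level ICCG solve.

```python
import numpy as np

def poisson(n, h):
    """Standard 3-point finite-difference Laplacian on n interior points."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def gauss_seidel(A, b, x, sweeps):
    """Forward Gauss-Seidel sweeps, used as the smoother."""
    for _ in range(sweeps):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def two_grid(nf=15, cycles=20):
    """Two-level cycles for -u'' = pi^2 sin(pi x), u(0) = u(1) = 0."""
    hf = 1.0 / (nf + 1)
    nc = (nf - 1) // 2                       # coarse grid: every other point
    Af, Ac = poisson(nf, hf), poisson(nc, 2.0 * hf)
    xs = np.linspace(hf, 1.0 - hf, nf)
    b = np.pi**2 * np.sin(np.pi * xs)        # exact solution: sin(pi x)
    x = np.zeros(nf)
    for _ in range(cycles):
        x = gauss_seidel(Af, b, x, 3)        # pre-smoothing
        r = b - Af @ x
        rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # full weighting
        ec = np.linalg.solve(Ac, rc)         # coarse solve (ICCG in the paper)
        e = np.zeros(nf)                     # linear interpolation back to fine grid
        e[1::2] = ec
        e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
        e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
        x = gauss_seidel(Af, b, x + e, 3)    # post-smoothing
    res = np.max(np.abs(b - Af @ x))
    err = np.max(np.abs(x - np.sin(np.pi * xs)))
    return res, err
```

After a handful of cycles the algebraic residual is driven far below the discretization error, which is the behavior a working multigrid hierarchy should show.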
Time series analysis methods and applications for flight data
Zhang, Jianye
2017-01-01
This book focuses on different facets of flight data analysis, including the basic goals, methods, and implementation techniques. As mass flight data possesses the typical characteristics of time series, the time series analysis methods and their application for flight data have been illustrated from several aspects, such as data filtering, data extension, feature optimization, similarity search, trend monitoring, fault diagnosis, and parameter prediction, etc. An intelligent information-processing platform for flight data has been established to assist in aircraft condition monitoring, training evaluation and scientific maintenance. The book will serve as a reference resource for people working in aviation management and maintenance, as well as researchers and engineers in the fields of data analysis and data mining.
Winter Holts Oscillatory Method: A New Method of Resampling in Time Series.
Directory of Open Access Journals (Sweden)
Muhammad Imtiaz Subhani
2016-12-01
Full Text Available The core proposition behind this research is to create innovative methods of bootstrapping that can be applied to time series data. In order to find new methods of bootstrapping, various methods were reviewed. Data on automotive sales, market shares and net exports of the top 10 countries, which include China, Europe, the United States of America (USA), Japan, Germany, South Korea, India, Mexico, Brazil, Spain and Canada, from 2002 to 2014 were collected through various sources including UN Comtrade, Index Mundi and the World Bank. The findings of this paper confirm that bootstrapping for resampling through Winter forecasting by oscillation and average methods gives more robust results than Winter forecasting by any general method.
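The paper's "Winter Holts oscillatory" resampling is not specified in detail in the abstract, so the sketch below shows only the general shape of such schemes: a residual bootstrap around a Holt (level plus trend) smoother. The smoothing constants, initialisation, and the i.i.d. resampling rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def holt_residual_bootstrap(y, n_boot=200, alpha=0.5, beta=0.3, seed=0):
    """Residual bootstrap around Holt's linear (level + trend) smoother."""
    y = np.asarray(y, dtype=float)
    level, trend = y[0], y[1] - y[0]         # simple initialisation
    fitted = np.empty_like(y)
    fitted[0] = y[0]
    for t in range(1, len(y)):
        fitted[t] = level + trend            # one-step-ahead forecast
        new_level = alpha * y[t] + (1.0 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1.0 - beta) * trend
        level = new_level
    resid = y - fitted
    rng = np.random.default_rng(seed)
    # resample residuals with replacement and add them back onto the fitted path
    samples = fitted + rng.choice(resid, size=(n_boot, len(y)), replace=True)
    return fitted, samples
```

Each row of `samples` is one bootstrap replicate of the series, usable for assessing forecast variability.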
International Nuclear Information System (INIS)
Santos, Rui C.; Leal, Joao P.; Martinho Simoes, Jose A.
2009-01-01
A revised parameterization of the extended Laidler method for predicting standard molar enthalpies of atomization and standard molar enthalpies of formation at T = 298.15 K for several families of hydrocarbons (alkanes, alkenes, alkynes, polyenes, poly-ynes, cycloalkanes, substituted cycloalkanes, cycloalkenes, substituted cycloalkenes, benzene derivatives, and bi- and polyphenyls) is presented. Data for a total of 265 gas-phase and 242 liquid-phase compounds were used for the calculation of the parameters. Comparison of the experimental values with those obtained using the additive scheme led to an average absolute difference of 0.73 kJ·mol⁻¹ for the gas-phase standard molar enthalpy of formation and 0.79 kJ·mol⁻¹ for the liquid-phase standard molar enthalpy of formation. The database used to establish the parameters was carefully reviewed by using, whenever possible, the original publications. A worksheet to simplify the calculation of standard molar enthalpies of formation and standard molar enthalpies of atomization at T = 298.15 K based on the extended Laidler parameters defined in this paper is provided as supplementary material.
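An additive scheme of this kind is applied by summing group contributions. The sketch below shows the mechanics only; the parameter values are hypothetical stand-ins, not the revised Laidler parameters from the paper or its supplementary worksheet.

```python
# Hypothetical group contributions in kJ/mol (illustrative numbers only).
GROUP_PARAMS = {
    "C-(C)(H)3": -42.2,    # primary CH3 group
    "C-(C)2(H)2": -20.6,   # secondary CH2 group
}

def enthalpy_of_formation(group_counts, params=GROUP_PARAMS):
    """Additive estimate: DfH ~ sum over groups of n_i * p_i."""
    return sum(n * params[g] for g, n in group_counts.items())

# n-butane decomposes into 2 CH3 + 2 CH2 groups
butane = {"C-(C)(H)3": 2, "C-(C)2(H)2": 2}
```

With these made-up parameters the butane estimate is simply 2(-42.2) + 2(-20.6) = -125.6 kJ/mol.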
The time-dependent density matrix renormalisation group method
Ma, Haibo; Luo, Zhen; Yao, Yao
2018-04-01
Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method over the past 15 years is reviewed in this paper. By integrating time evolution with the sweep procedures of the density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of the quantum dynamics of one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in the pyrazine molecule.
Neumann, Rebecca B; Cardon, Zoe G; Teshera-Levye, Jennifer; Rockwell, Fulton E; Zwieniecki, Maciej A; Holbrook, N Michele
2014-04-01
The movement of water from moist to dry soil layers through the root systems of plants, referred to as hydraulic redistribution (HR), occurs throughout the world and is thought to influence carbon and water budgets and ecosystem functioning. The realized hydrologic, biogeochemical and ecological consequences of HR depend on the amount of redistributed water, whereas the ability to assess these impacts requires models that correctly capture HR magnitude and timing. Using several soil types and two ecotypes of sunflower (Helianthus annuus L.) in split-pot experiments, we examined how well the widely used HR modelling formulation developed by Ryel et al. matched experimental determination of HR across a range of water potential driving gradients. H. annuus carries out extensive night-time transpiration, and although over the last decade it has become more widely recognized that night-time transpiration occurs in multiple species and many ecosystems, the original Ryel et al. formulation does not include the effect of night-time transpiration on HR. We developed and added a representation of night-time transpiration into the formulation, and only then was the model able to capture the dynamics and magnitude of HR we observed as soils dried and night-time stomatal behaviour changed, both influencing HR. © 2013 John Wiley & Sons Ltd.
International Nuclear Information System (INIS)
Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon
2014-01-01
The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible
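A minimal numerical sketch of the single-coordinate Langevin picture is given below: overdamped dynamics on a harmonic free-energy surface integrated by Euler-Maruyama. The parameters, reduced units, and the omission of the inertial and multi-timescale terms discussed above are simplifying assumptions for illustration.

```python
import numpy as np

def simulate_solvent_coordinate(n_steps=20000, dt=0.01, tau=1.0, kT=1.0, seed=0):
    """Euler-Maruyama integration of an overdamped Langevin equation
    tau * dz/dt = -dF/dz + random force, with F(z) = z^2 / 2."""
    rng = np.random.default_rng(seed)
    kick = np.sqrt(2.0 * kT * dt / tau) * rng.standard_normal(n_steps)
    z = np.empty(n_steps)
    z[0] = 0.0
    for t in range(1, n_steps):
        z[t] = z[t - 1] - (z[t - 1] / tau) * dt + kick[t]
    return z
```

For this harmonic surface the trajectory should equilibrate with variance close to kT, which is a quick sanity check on the integrator.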
Software Design Methods for Real-Time Systems
1989-12-01
This module describes the concepts and methods used in the software design of real-time systems. It outlines the characteristics of real-time systems, describes the role of software design in real-time system development, surveys and compares some software design methods for real-time systems, and
A method for investigating relative timing information on phylogenetic trees.
Ford, Daniel; Matsen, Frederick A; Stadler, Tanja
2009-04-01
In this paper, we present a new way to describe the timing of branching events in phylogenetic trees. Our description is in terms of the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots which consider diversification in aggregate. The method can be applied to look for evidence of diversification happening in lineage-specific "bursts", or the opposite, where diversification between 2 clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we discuss 2 classes of neutral models on trees with relative timing information and develop a statistical framework for testing these models. These model classes include both the coalescent with ancestral population size variation and global rate speciation-extinction models. We end the paper with 2 example applications: first, we show that the evolution of the hepatitis C virus deviates from the coalescent with arbitrary population size. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.
Determination of beta attenuation coefficients by means of timing method
International Nuclear Information System (INIS)
Ermis, E.E.; Celiktas, C.
2012-01-01
Highlights: Beta attenuation coefficients of absorber materials were found in this study. For this process, a new method (the timing method) was suggested. The obtained beta attenuation coefficients were compatible with the results from the traditional method. The timing method can be used to determine beta attenuation coefficients. Abstract: Using a counting system with a plastic scintillation detector, beta linear and mass attenuation coefficients were determined for bakelite, Al, Fe and plexiglass absorbers by means of the timing method. To show the accuracy and reliability of the results obtained through this method, the coefficients were also found via the conventional energy method. The beta attenuation coefficients obtained from both methods were compared with each other and with literature values. Beta attenuation coefficients obtained through the timing method were found to be compatible with the values obtained from the conventional energy method and the literature.
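Whichever measurement method is used, the coefficient itself comes from the exponential attenuation law I = I0 exp(-mu x). The small fit below uses synthetic, noise-free counts; the thicknesses and the value mu = 5 cm^-1 are made-up numbers for illustration.

```python
import numpy as np

def linear_attenuation_coefficient(x_cm, counts):
    """Fit ln(I) = ln(I0) - mu * x; the slope gives the linear coefficient mu.
    Dividing mu by the absorber density would give the mass attenuation
    coefficient."""
    slope, _ = np.polyfit(x_cm, np.log(counts), 1)
    return -slope

x = np.array([0.0, 0.1, 0.2, 0.3, 0.4])       # absorber thicknesses, cm
I = 1000.0 * np.exp(-5.0 * x)                 # synthetic counts, mu = 5 cm^-1
mu = linear_attenuation_coefficient(x, I)
```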
An Efficient Explicit-time Description Method for Timed Model Checking
Directory of Open Access Journals (Sweden)
Hao Wang
2009-12-01
Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick to simulate the passage of time together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable were proposed; they both achieve better modularity than Lamport's method in modeling the real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
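The leaping idea can be caricatured in a few lines: rather than one Tick per time unit, the clock jumps directly to the next pending deadline. The deadline list below is a hypothetical stand-in for the model's timer variables, not the paper's actual Tick process.

```python
def run_ticks(deadlines):
    """Advance a discrete clock over event deadlines in single leaps."""
    now, ticks = 0, 0
    for d in sorted(deadlines):
        if d > now:
            now = d          # one leap covers (d - now) time units at once
            ticks += 1
    return now, ticks

# unit-step ticking would need max(deadlines) ticks; leaping needs one tick
# per distinct future deadline, shrinking the explored state space
```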
Seismic assessment of a site using the time series method
International Nuclear Information System (INIS)
Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.
1997-01-01
To increase the safety of an NPP located on a seismic site, the seismic acceleration level to which the NPP should be qualified must be as representative as possible for that site, with a conservative but not exaggerated degree of safety. Treating the seismic events affecting the site as independent events and using statistical methods to define safety levels with very low annual occurrence probability (10⁻⁴) may lead to some exaggeration of the seismic safety level. The use of very high values for the seismic acceleration imposed by the seismic safety levels required by the hazard analysis may lead to very costly technical solutions that can make plant operation more difficult and increase maintenance costs. Considering seismic events as a time series with dependence among the events may lead to a more representative assessment of the seismic activity of an NPP site and consequently to a prognosis of the seismic level values against which the NPP would be ensured throughout its life-span. That prognosis should consider the actual seismic activity (including small earthquakes in real time) of the foci that affect the plant site. The paper proposes the application of autoregressive time series to issue a prognosis on the seismic activity of a focus, and presents an analysis by this method of the Vrancea focus, which affects the NPP Cernavoda site. The paper also presents the manner of analysing the focus activity under the new approach, and assesses the maximum seismic acceleration that may affect NPP Cernavoda throughout its life-span (∼30 years). Development and application of new mathematical analysis methods, for both long and short time intervals, may lead to important contributions to forecasting future seismic events. (authors)
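The mechanics of an autoregressive prognosis can be sketched in a few lines: fit a low-order recursion to the observed series and iterate it forward. The data below are synthetic; the Vrancea series and the paper's model order are not reproduced here.

```python
import numpy as np

def fit_ar1(y):
    """Least-squares AR(1) fit: y[t] = c + phi * y[t-1] + noise."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

def forecast(y, steps, c, phi):
    """Iterate the fitted recursion forward to issue a prognosis."""
    out, last = [], y[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return np.array(out)

# synthetic annual series relaxing towards a background level of 5
y = 5.0 + 0.5 ** np.arange(12.0)
c, phi = fit_ar1(y)
```

For this series the recursion y[t] = 2.5 + 0.5 y[t-1] holds exactly, so the fit recovers those coefficients and the forecast decays to the background level.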
Singular perturbation methods for nonlinear dynamic systems with time delays
International Nuclear Information System (INIS)
Hu, H.Y.; Wang, Z.H.
2009-01-01
This review article surveys the recent advances in the dynamics and control of time-delay systems, with emphasis on the singular perturbation methods, such as the method of multiple scales, the method of averaging, and two newly developed methods, the energy analysis and the pseudo-oscillator analysis. Some examples are given to demonstrate the advantages of the methods. The comparisons with other methods show that these methods lead to easier computations and higher accurate prediction on the local dynamics of time-delay systems near a Hopf bifurcation.
Advances in Time Estimation Methods for Molecular Data.
Kumar, Sudhir; Hedges, S Blair
2016-04-01
Molecular dating has become central to placing a temporal dimension on the tree of life. Methods for estimating divergence times have been developed for over 50 years, beginning with the proposal of the molecular clock in 1962. We categorize the chronological development of these methods into four generations based on the timing of their origin. In first generation approaches (1960s-1980s), a strict molecular clock was assumed to date divergences. In second generation approaches (1990s), the equality of evolutionary rates between species was first tested and then a strict molecular clock applied to estimate divergence times. Third generation approaches (since ∼2000) account for differences in evolutionary rates across the tree by using a statistical model, obviating the need to assume a clock or to test the equality of evolutionary rates among species. Bayesian methods in the third generation require a specific or uniform prior on the speciation process and enable the inclusion of uncertainty in clock calibrations. Fourth generation approaches (since 2012) allow rates to vary from branch to branch, but do not need prior selection of a statistical model to describe the rate variation or specification of a speciation model. With high accuracy, comparable to Bayesian approaches, and speeds that are orders of magnitude faster, fourth generation methods are able to produce reliable timetrees of thousands of species using genome-scale data. We found that early time estimates from second generation studies are similar to those of third and fourth generation studies, indicating that methodological advances have not fundamentally altered the timetree of life, but rather have facilitated time estimation by enabling the inclusion of more species. Nonetheless, we feel an urgent need for testing the accuracy and precision of third and fourth generation methods, including their robustness to misspecification of priors in the analysis of large phylogenies and data
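The first-generation (strict clock) calculation is simple enough to state in one line: a pairwise genetic distance d accumulating at rate r per lineage gives a divergence time t = d / (2r). The numbers below are illustrative only.

```python
def strict_clock_time(distance, rate_per_lineage):
    """Strict-molecular-clock dating: t = d / (2 r), since both lineages
    accumulate substitutions independently after the split."""
    return distance / (2.0 * rate_per_lineage)

# e.g. 2% sequence divergence at 1% per lineage per Myr -> 1 Myr
t = strict_clock_time(0.02, 0.01)
```

The later generations described above relax exactly the assumption baked into this formula: that r is the same on every branch.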
Jia, Shouqing; La, Dongsheng; Ma, Xuelian
2018-04-01
The finite difference time domain (FDTD) algorithm and a Green function algorithm are implemented in the numerical simulation of electromagnetic waves in Schwarzschild space-time. The FDTD method in curved space-time is developed by filling the flat space-time with an equivalent medium. The Green function in curved space-time is obtained by solving transport equations. Simulation results validate both the FDTD code and the Green function code. The methods developed in this paper offer a tool for solving electromagnetic scattering problems.
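The equivalent-medium trick means a standard flat-space FDTD update loop with a spatially varying material profile. The sketch below is a plain 1-D Yee scheme with an arbitrary permittivity jump standing in for the Schwarzschild-derived medium, whose actual profile is not reproduced here.

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300):
    """1-D Yee scheme in normalized units (c = 1, dx = 1, Courant number 0.5)."""
    dt = 0.5
    eps = np.ones(n_cells)              # equivalent-medium permittivity profile
    eps[n_cells // 2:] = 2.0            # arbitrary index jump for illustration
    Ez = np.zeros(n_cells)              # E nodes; ends held at 0 (PEC walls)
    Hy = np.zeros(n_cells - 1)          # H nodes, staggered between E nodes
    for step in range(n_steps):
        Hy += dt * (Ez[1:] - Ez[:-1])                      # update H from curl E
        Ez[1:-1] += dt * (Hy[1:] - Hy[:-1]) / eps[1:-1]    # update E from curl H
        Ez[40] += np.exp(-((step - 30) / 10.0) ** 2)       # soft Gaussian source
    return Ez, Hy
```

A pulse launched in the left half partially reflects and partially transmits at the interface, the basic behavior any medium-filling scheme must reproduce.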
Directory of Open Access Journals (Sweden)
Eila Jeronen
2016-12-01
Full Text Available There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education in several scientific databases. The article provides an overview of 24 selected articles published in peer-reviewed scientific journals from 2006–2016. The data was analyzed using qualitative content analysis. Altogether, 16 journals were selected and 24 articles were analyzed in detail. The foci of the analyses were teaching methods, learning environments, knowledge and thinking skills, psychomotor skills, emotions and attitudes, and evaluation methods. Additionally, features of good methods were investigated and their implications for teaching were emphasized. In total, 22 different teaching methods were found to improve sustainability education in different ways. The most emphasized teaching methods were those in which students worked in groups and participated actively in learning processes. Research points toward the value of teaching methods that provide a good introduction and supportive guidelines and include active participation and interactivity.
Jeronen, Eila; Palmberg, Irmeli; Yli-Panula, Eija
2017-01-01
There are very few studies concerning the importance of teaching methods in biology education and environmental education including outdoor education for promoting sustainability at the levels of primary and secondary schools and pre-service teacher education. The material was selected using special keywords from biology and sustainable education…
Cederkvist, Karin; Jensen, Marina B; Holm, Peter E
2017-08-01
Stormwater treatment facilities (STFs) are becoming increasingly widespread, but knowledge of their performance is limited. This is due to difficulties in obtaining representative samples during storm events and documenting removal of the broad range of contaminants found in stormwater runoff. This paper presents a method to evaluate STFs by addition of synthetic runoff with representative concentrations of contaminant species, including the use of a tracer to correct removal rates for losses not caused by the STF. A list of organic and inorganic contaminant species, including trace elements representative of road runoff, is suggested, as well as relevant concentration ranges. The method was used for adding contaminants to three different STFs: a curbstone extension with filter soil, a dual porosity filter, and six different permeable pavements. Evaluation of the method showed that it is possible to add a well-defined mixture of contaminants under different field conditions by having a flexible system, mixing different stock solutions on site, and using a bromide tracer for correction of outlet concentrations. Bromide recovery ranged from only 12% in one of the permeable pavements to 97% in the dual porosity filter, stressing the importance of including a conservative tracer for correction of contaminant retention values. The method is considered useful in future treatment performance testing of STFs. The observed performance of the STFs is presented in coming papers. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Masternak Sebastian
2016-06-01
Full Text Available Alcohol dependence and its treatment is not a fully resolved problem. Based on the EZOP [Epidemiology of Mental Disorders and Accessibility of Mental Health Care] survey, which included a systematic analysis of the incidence of mental disorders in the adult Polish population, it is estimated that the problem of alcohol abuse at some period of life affects as many as 10.9% of the population aged 18-64 years, and those addicted represent 2.2% of the country's population. The typical symptoms of alcohol dependence according to ICD-10 include alcohol craving, impaired ability to control alcohol consumption, withdrawal symptoms which appear when a heavy drinker stops drinking, changing alcohol tolerance, growing neglect of other areas of life, and persistent alcohol intake despite clear evidence of its destructive effect on life. At present, the primary method of alcoholism treatment is psychotherapy, which aims to change the patient's habits, behaviours, relationships, or way of thinking. Psychotherapy appears irreplaceable in the treatment of alcoholism, but for many years attempts have been made to increase the effectiveness of treatment with pharmacological agents. In this article we provide a description of medications which help patients sustain abstinence in alcoholism therapy, with particular emphasis on baclofen.
20 CFR 617.35 - Time and method of payment.
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Time and method of payment. 617.35 Section 617.35 Employees' Benefits EMPLOYMENT AND TRAINING ADMINISTRATION, DEPARTMENT OF LABOR TRADE ADJUSTMENT ASSISTANCE FOR WORKERS UNDER THE TRADE ACT OF 1974 Job Search Allowances § 617.35 Time and method...
Real-time hybrid simulation using the convolution integral method
International Nuclear Information System (INIS)
Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A
2011-01-01
This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
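The convolution-integral idea above can be illustrated on a single-degree-of-freedom oscillator: the response is obtained by convolving the force history with the system's unit-impulse response and compared against a conventional integration time-stepping scheme. This is a minimal sketch with assumed parameters (natural frequency, damping, forcing), not the paper's hybrid-simulation implementation.

```python
import math

# Assumed SDOF oscillator parameters (illustrative, not from the paper)
wn, zeta, m = 2.0 * math.pi, 0.05, 1.0        # natural freq [rad/s], damping ratio, mass
wd = wn * math.sqrt(1.0 - zeta ** 2)          # damped natural frequency
c, k = 2.0 * zeta * wn * m, wn ** 2 * m       # damping and stiffness
dt, n_steps = 0.002, 1000                     # time step and horizon (2 s)

t = [i * dt for i in range(n_steps)]
f = [math.sin(3.0 * ti) for ti in t]          # external force history

def h(tau):
    """Unit-impulse response of the damped SDOF system."""
    return math.exp(-zeta * wn * tau) * math.sin(wd * tau) / (m * wd)

# Convolution-integral (Duhamel) solution: x(t) = integral of f(s) h(t - s) ds
x_ci = [dt * sum(f[j] * h((i - j) * dt) for j in range(i + 1))
        for i in range(n_steps)]

# Reference: explicit central-difference time stepping of m x'' + c x' + k x = f
x_ts = [0.0] * n_steps
for i in range(1, n_steps - 1):
    v = (x_ts[i] - x_ts[i - 1]) / dt
    a = (f[i] - c * v - k * x_ts[i]) / m
    x_ts[i + 1] = 2.0 * x_ts[i] - x_ts[i - 1] + dt * dt * a

err = max(abs(p - q) for p, q in zip(x_ci, x_ts)) / max(abs(v) for v in x_ts)
```

Because the convolution only needs the precomputed impulse response, it has no time-step stability limit, which is the property the CI method exploits for stiff or high-frequency numerical substructures.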
Scattering in an intense radiation field: Time-independent methods
International Nuclear Information System (INIS)
Rosenberg, L.
1977-01-01
The standard time-independent formulation of nonrelativistic scattering theory is here extended to take into account the presence of an intense external radiation field. In the case of scattering by a static potential the extension is accomplished by the introduction of asymptotic states and intermediate-state propagators which account for the absorption and induced emission of photons by the projectile as it propagates through the field. Self-energy contributions to the propagator are included by a systematic summation of forward-scattering terms. The self-energy analysis is summarized in the form of a modified perturbation expansion of the type introduced by Watson some time ago in the context of nuclear-scattering theory. This expansion, which has a simple continued-fraction structure in the case of a single-mode field, provides a generally applicable successive approximation procedure for the propagator and the asymptotic states. The problem of scattering by a composite target is formulated using the effective-potential method. The modified perturbation expansion which accounts for self-energy effects is applicable here as well. A discussion of a coupled two-state model is included to summarize and clarify the calculational procedures
Methods optimization for the first time core critical
International Nuclear Information System (INIS)
Yan Liang
2014-01-01
The PWR reactor core commissioning program specifies the content of the first-criticality reactor physics experiments and describes the physical test methods. However, the methods are not all identical, nor equally efficient. This article aims to enhance reactor safety during the first approach to criticality, shorten the overall duration of the first-criticality physical tests, and improve the completeness and accuracy of the first-criticality test data, ultimately improving the economic benefit of plant operation, by adopting improved physical test methods such as sectional dilution and power feedback for Doppler-point determination. (author)
Finite element method for time-space-fractional Schrodinger equation
Directory of Open Access Journals (Sweden)
Xiaogang Zhu
2017-07-01
Full Text Available In this article, we develop a fully discrete finite element method for the nonlinear Schrodinger equation (NLS) with time- and space-fractional derivatives. The time-fractional derivative is described in Caputo's sense and the space-fractional derivative in Riesz's sense. Stability of the scheme is derived, and the convergence estimate is established using an orthogonal projection operator. We also extend the method to the two-dimensional time-space-fractional NLS and, to avoid iterative solvers at each time step, further construct a linearized scheme. Finally, several numerical examples are implemented, which confirm the theoretical results and illustrate the accuracy of our methods.
Seismic assessment of a site using the time series method
International Nuclear Information System (INIS)
Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.
2001-01-01
1. To increase the safety of a NPP located on a seismic site, the seismic acceleration level to which the NPP should be qualified must be as representative as possible for that site, with a conservative but not exaggerated degree of safety. 2. The consideration of the seismic events affecting the site as independent events and the use of statistical methods to define safety levels with very low annual occurrence probabilities (10⁻⁴) may lead to some exaggeration of the seismic safety level. 3. The use of very high values for the seismic accelerations imposed by the seismic safety levels required by the hazard analysis may lead to very expensive technical solutions that can make plant operation more difficult and increase maintenance costs. 4. The consideration of seismic events as a time series with dependence among the events may lead to a more representative assessment of a NPP site's seismic activity and consequently to a prognosis of the seismic level values for which the NPP should be ensured throughout its life-span. That prognosis should consider the actual seismic activity (including small earthquakes in real time) of the focuses that affect the plant site. The method is useful for two purposes: a) research, i.e. homogenizing the historical database by generating earthquakes for periods lacking information and correlating them with the existing records, the aim being to perform the hazard analysis on a homogeneous data set in order to determine the seismic design data for a site; b) operation, i.e. producing a prognosis of the seismic activity at a certain site and considering preventive measures to minimize the possible effects of an earthquake. 5. The paper proposes the application of Autoregressive Time Series models to issue a prognosis on the seismic activity of a focus and presents the analysis by this method of the Vrancea focus, which affects the Cernavoda NPP site. 6. The paper also presents the
A time-delayed method for controlling chaotic maps
International Nuclear Information System (INIS)
Chen Maoyin; Zhou Donghua; Shang Yun
2005-01-01
Combining the repetitive learning strategy and the optimality principle, this Letter proposes a time-delayed method to control chaotic maps. This method can effectively stabilize unstable periodic orbits within chaotic attractors in the sense of least mean square. Numerical simulations of some chaotic maps verify the effectiveness of this method
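As a rough illustration of delayed-feedback stabilization of a chaotic map (not the Letter's exact least-mean-square scheme), the sketch below applies a Pyragas-style control force u_n = K(x_{n-1} - x_n) to the chaotic logistic map; the gain K and the activation window are illustrative choices.

```python
# Delayed-feedback stabilization of the period-1 orbit of the logistic map
# x_{n+1} = r x_n (1 - x_n) at r = 3.9 (chaotic regime).
r, K = 3.9, -0.6                      # illustrative gain, chosen so the
                                      # controlled fixed point is linearly stable

def f(x):
    return r * x * (1.0 - x)

x_prev = 0.3
x = f(x_prev)
for _ in range(100000):
    # engage the small control force only when the orbit revisits the
    # neighbourhood of a period-1 point (x_n close to x_{n-1})
    u = K * (x_prev - x) if abs(x - x_prev) < 0.1 else 0.0
    x_prev, x = x, f(x) + u

x_star = 1.0 - 1.0 / r                # unstable fixed point inside the attractor
```

Without control the orbit wanders chaotically; once it passes near the fixed point the delayed term captures it, since only knowledge of the previous iterate (not of x_star itself) is required.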
Novel crystal timing calibration method based on total variation
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. Because the proposed method was developed for a system with a large number of crystals, it can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and a TV constraint is added to the linear equation to robustly optimize the timing resolution. Moreover, to solve the computer-memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. In contrast with other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a ²²Na point source, located in the field of view (FOV) of the brain PET system, for various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
Sun, Dan; Garmory, Andrew; Page, Gary J.
2017-02-01
For flows where the particle number density is low and the Stokes number is relatively high, as found when sand or ice is ingested into aircraft gas turbine engines, streams of particles can cross each other's path or bounce from a solid surface without being influenced by inter-particle collisions. The aim of this work is to develop an Eulerian method to simulate these types of flow. To this end, a two-node quadrature-based moment method using 13 moments is proposed. In the proposed algorithm thirteen moments of particle velocity, including cross-moments of second order, are used to determine the weights and abscissas of the two nodes and to set up the association between the velocity components in each node. Previous Quadrature Method of Moments (QMOM) algorithms either use more than two nodes, leading to increased computational expense, or are shown here to give incorrect results under some circumstances. This method gives the computational efficiency advantages of only needing two particle phase velocity fields whilst ensuring that a correct combination of weights and abscissas is returned for any arbitrary combination of particle trajectories without the need for any further assumptions. Particle crossing and wall bouncing with arbitrary combinations of angles are demonstrated using the method in a two-dimensional scheme. The ability of the scheme to include the presence of drag from a carrier phase is also demonstrated, as is bouncing off surfaces with inelastic collisions. The method is also applied to the Taylor-Green vortex flow test case and is found to give results superior to the existing two-node QMOM method and is in good agreement with results from Lagrangian modelling of this case.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Chaoyang Shi; Bi Yu Chen; William H. K. Lam; Qingquan Li
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are f...
Verifying Real-Time Systems using Explicit-time Description Methods
Directory of Open Access Journals (Sweden)
Hao Wang
2009-12-01
Full Text Available Timed model checking has been extensively researched in recent years. Many new formalisms with time extensions, and tools based on them, have been presented. On the other hand, Explicit-Time Description Methods aim to verify real-time systems with general untimed model checkers. Lamport presented an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables for time requirements. This paper proposes a new explicit-time description method with no reliance on global variables. Instead, it uses rendezvous synchronization steps between the Tick process and each system process to simulate time. This new method achieves better modularity and facilitates the use of more complex timing constraints. The two explicit-time description methods are implemented in DIVINE, a well-known distributed-memory model checker. Preliminary experimental results show that our new method, with better modularity, is comparable to Lamport's method with respect to time and memory efficiency.
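The Tick-process idea can be sketched as an explicit-state search in which time passes only through discrete tick transitions and urgency prevents ticking past a handler's budget; a deadline property is then checked on all reachable states. The model below (a single request that must be answered within a deadline) is invented for illustration and is far simpler than a DIVINE model.

```python
from collections import deque

def explore(max_delay, deadline):
    """BFS over explicit states (clock, responded).

    Time passes only via the Tick transition, one unit at a time, and only
    while the handler's nondeterministic delay budget (max_delay) remains.
    """
    start = (0, False)
    seen = {start}
    frontier = deque([start])
    while frontier:
        clock, responded = frontier.popleft()
        succs = []
        if not responded:
            # handler action: respond now (any delay up to max_delay)
            succs.append((clock, True))
            # Tick action: let one time unit pass while budget remains
            if clock < max_delay:
                succs.append((clock + 1, False))
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    # Property: every reachable state meets the deadline.
    return all(responded or clock <= deadline
               for clock, responded in seen)

safe = explore(3, 3)      # handler always answers within the 3-tick deadline
unsafe = explore(5, 3)    # a 5-tick handler can miss the 3-tick deadline
```

In a real untimed model checker the Tick process would be a separate process synchronizing (by rendezvous, as in the paper's method) with each system process; here the synchronization is folded into one transition relation to keep the sketch short.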
A high-order time-accurate interrogation method for time-resolved PIV
International Nuclear Information System (INIS)
Lynch, Kyle; Scarano, Fulvio
2013-01-01
In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and with no increase of the truncation error. The increase of velocity dynamic range scales more than linearly with the number of frames included in the analysis, which surpasses by one order of magnitude the pair correlation by window deformation. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion throughout the trajectory. (paper)
A comparison of three time-domain anomaly detection methods
Energy Technology Data Exchange (ETDEWEB)
Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E. [Delft University of Technology (Netherlands). Interfaculty Reactor Institute
1996-01-01
Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance - the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author).
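Of the three methods, the sequential probability ratio test is the easiest to sketch. Below, a one-sided restarting SPRT monitors residuals for a step change in standard deviation; the threshold follows Wald's bound, and the residual sequence is stylised (deterministic unit-magnitude samples) so that the alarm point is reproducible, whereas real autoregressive residuals would of course be random.

```python
import math

# One-sided, restarting SPRT for a step change in residual standard
# deviation sigma0 -> sigma1. All numbers are illustrative.
sigma0, sigma1 = 1.0, 2.0
threshold = math.log(1000.0)          # Wald-style bound on the per-run false-alarm rate

def sprt_alarm(residuals):
    llr = 0.0
    for i, x in enumerate(residuals):
        # log-likelihood ratio increment for N(0, sigma1^2) vs N(0, sigma0^2)
        llr += math.log(sigma0 / sigma1) + 0.5 * x * x * (sigma0 ** -2 - sigma1 ** -2)
        llr = max(llr, 0.0)           # restart whenever evidence favours H0
        if llr > threshold:
            return i                  # alarm index (average time to alarm follows)
    return None

# Stylised residuals: unit magnitude, then the standard deviation doubles at n = 100
residuals = [(-1.0) ** n for n in range(100)] + [2.0 * (-1.0) ** n for n in range(100)]
alarm = sprt_alarm(residuals)
```

With these numbers each post-change sample adds about 0.81 to the log-likelihood ratio, so the alarm fires nine samples after the change, illustrating the short average time to alarm that makes the SPRT superior in the comparison above.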
A comparison of three time-domain anomaly detection methods
International Nuclear Information System (INIS)
Schoonewelle, H.; Hagen, T.H.J.J. van der; Hoogenboom, J.E.
1996-01-01
Three anomaly detection methods based on a comparison of signal values with predictions from an autoregressive model are presented. These methods are: the extremes method, the χ² method and the sequential probability ratio test. The methods are used to detect a change of the standard deviation of the residual noise obtained from applying an autoregressive model. They are fast and can be used in on-line applications. For each method some important anomaly detection parameters are determined by calculation or simulation. These parameters are: the false alarm rate, the average time to alarm and - being of minor importance - the alarm failure rate. Each method is optimized with respect to the average time to alarm for a given value of the false alarm rate. The methods are compared with each other, resulting in the sequential probability ratio test being clearly superior. (author)
Endurance time method for Seismic analysis and design of structures
International Nuclear Information System (INIS)
Estekanchi, H.E.; Vafai, A.; Sadeghazar, M.
2004-01-01
In this paper, a new method for performance-based earthquake analysis and design is introduced. In this method, the structure is subjected to accelerograms that impose increasing dynamic demand on the structure with time. Specified damage indexes are monitored up to the collapse level or another performance limit that defines the endurance limit point for the structure. A method for generating standard intensifying accelerograms is also described, and three accelerograms have been generated using this method. Furthermore, the concept of Endurance Time is illustrated by applying these accelerograms to single- and multi-degree-of-freedom linear systems. The application of this method to the analysis of complex nonlinear systems is explained. The Endurance Time method provides a uniform approach to seismic analysis and design of complex structures that can be applied in numerical and experimental investigations
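A crude sketch of an intensifying accelerogram of the kind described above: broadband noise under a linearly growing envelope, so that the dynamic demand imposed on a structure increases steadily with time. The envelope shape and parameters are illustrative, not the paper's standard templates.

```python
import math
import random

random.seed(7)  # fixed seed for reproducibility of the sketch

# Intensifying accelerogram: white noise modulated by a linear ramp t/T,
# sampled at dt over a duration T (all values illustrative).
dt, T = 0.01, 20.0
n = int(T / dt)
acc = [(i * dt / T) * random.gauss(0.0, 1.0) for i in range(n)]

def rms(segment):
    """Root-mean-square intensity of a record segment."""
    return math.sqrt(sum(a * a for a in segment) / len(segment))

early = rms(acc[: n // 4])        # intensity in the first quarter
late = rms(acc[3 * n // 4:])      # intensity in the last quarter
```

In an Endurance Time analysis, the time at which a monitored damage index first exceeds its limit under such a record serves directly as the structure's endurance measure.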
A Novel Time Synchronization Method for Dynamic Reconfigurable Bus
Directory of Open Access Journals (Sweden)
Zhang Weigong
2016-01-01
Full Text Available UM-BUS is a novel dynamically reconfigurable high-speed serial bus for embedded systems. It can achieve fault tolerance by detecting the channel status in real time and reconfiguring dynamically at run-time. The bus supports direct interconnections between up to eight master nodes and multiple slave nodes. In order to solve the time synchronization problem among master nodes, this paper proposes a novel time synchronization method which can meet the time-precision requirement of UM-BUS. In the proposed method, time is first broadcast through time broadcast packets. The transmission delay and time deviations are then worked out via three handshakes during link self-checking and channel detection, referring to the IEEE 1588 protocol. Thereby, each node calibrates its own time according to the broadcast time. The proposed method has been shown to meet the requirement of real-time synchronization. The experimental results show that the synchronization precision can achieve a bias of less than 20 ns.
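The delay/offset computation referred to above follows the familiar IEEE 1588 two-way exchange: under a symmetric-path assumption, the four timestamps of a sync/reply handshake determine both the path delay and the clock offset. A minimal sketch with fabricated timestamps (UM-BUS packet formats are not reproduced here):

```python
# IEEE 1588-style two-way timestamp exchange. t1 and t4 are read on the
# master clock, t2 and t3 on the slave clock. The true values below are
# fabricated for illustration.
true_offset = 125e-9                     # slave clock ahead of master by 125 ns
true_delay = 40e-9                       # one-way path delay, assumed symmetric

t1 = 1.000000000                         # master sends sync
t2 = t1 + true_delay + true_offset       # slave receives (slave clock)
t3 = t2 + 10e-9                          # slave replies after 10 ns (slave clock)
t4 = t3 + true_delay - true_offset       # master receives (master clock)

# Symmetric-path estimates used for calibration
delay = ((t2 - t1) + (t4 - t3)) / 2.0
offset = ((t2 - t1) - (t4 - t3)) / 2.0
```

Each node can then subtract its estimated offset from the broadcast time, which is how per-node calibration to the broadcast reference works in protocols of this family.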
A novel weight determination method for time series data aggregation
Xu, Paiheng; Zhang, Rong; Deng, Yong
2017-09-01
Aggregation in time series is of great importance in time series smoothing, prediction and other analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced into the weight determination of time series, through which the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution in the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the time series. The proposed method is applied to two practical datasets to illustrate its merits. The aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
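The VGA side of the method can be sketched directly: build the natural visibility graph of the series, take normalized node degrees as weights, and blend them linearly with decay-based (IOWA-like) weights. The combination coefficient and decay factor below are illustrative; the paper's exact operators may differ.

```python
def visibility_degrees(y):
    """Node degrees of the natural visibility graph of series y."""
    n = len(y)
    deg = [0] * n
    for a in range(n):
        for b in range(a + 1, n):
            # samples a and b are mutually visible iff every sample between
            # them lies strictly below the straight line connecting them
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                deg[a] += 1
                deg[b] += 1
    return deg

def combined_weights(y, alpha=0.5, decay=0.9):
    """Blend normalized visibility-graph degrees with exponential-decay weights."""
    deg = visibility_degrees(y)
    vga = [d / sum(deg) for d in deg]
    raw = [decay ** (len(y) - 1 - i) for i in range(len(y))]  # newer samples heavier
    iowa = [w / sum(raw) for w in raw]
    return [alpha * v + (1 - alpha) * w for v, w in zip(vga, iowa)]

degrees = visibility_degrees([3.0, 1.0, 2.0])
weights = combined_weights([3.0, 1.0, 2.0])
```

High-degree vertices (samples visible from many others, typically local extrema) receive larger VGA weight, while the decay term preserves the recency emphasis of the IOWA operator; both components are normalized, so the blend still sums to one.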
Time evolution of the wave equation using rapid expansion method
Pestana, Reynam C.; Stoffa, Paul L.
2010-01-01
Forward modeling of seismic data and reverse time migration are based on the time evolution of wavefields. For the case of spatially varying velocity, we have worked on two approaches to evaluate the time evolution of seismic wavefields. An exact solution for the constant-velocity acoustic wave equation can be used to simulate the pressure response at any time. For a spatially varying velocity, a one-step method can be developed where no intermediate time responses are required. Using this approach, we have solved for the pressure response at intermediate times and have developed a recursive solution. The solution has a very high degree of accuracy and can be reduced to various finite-difference time-derivative methods, depending on the approximations used. Although the two approaches are closely related, each has advantages, depending on the problem being solved. © 2010 Society of Exploration Geophysicists.
Time evolution of the wave equation using rapid expansion method
Pestana, Reynam C.
2010-07-01
Forward modeling of seismic data and reverse time migration are based on the time evolution of wavefields. For the case of spatially varying velocity, we have worked on two approaches to evaluate the time evolution of seismic wavefields. An exact solution for the constant-velocity acoustic wave equation can be used to simulate the pressure response at any time. For a spatially varying velocity, a one-step method can be developed where no intermediate time responses are required. Using this approach, we have solved for the pressure response at intermediate times and have developed a recursive solution. The solution has a very high degree of accuracy and can be reduced to various finite-difference time-derivative methods, depending on the approximations used. Although the two approaches are closely related, each has advantages, depending on the problem being solved. © 2010 Society of Exploration Geophysicists.
Fischbach, Jens; Xander, Nina Carolin; Frohme, Marcus; Glökler, Jörn Felix
2015-04-01
The need for simple and effective assays for detecting nucleic acids by isothermal amplification reactions has led to a great variety of end point and real-time monitoring methods. Here we tested direct and indirect methods to visualize the amplification of potato spindle tuber viroid (PSTVd) by loop-mediated isothermal amplification (LAMP) and compared features important for one-pot in-field applications. We compared the performance of magnesium pyrophosphate, hydroxynaphthol blue (HNB), calcein, SYBR Green I, EvaGreen, and berberine. All assays could be used to distinguish between positive and negative samples in visible or UV light. Precipitation of magnesium-pyrophosphate resulted in a turbid reaction solution. The use of HNB resulted in a color change from violet to blue, whereas calcein induced a change from orange to yellow-green. We also investigated berberine as a nucleic acid-specific dye that emits a fluorescence signal under UV light after a positive LAMP reaction. It has a comparable sensitivity to SYBR Green I and EvaGreen. Based on our results, an optimal detection method can be chosen easily for isothermal real-time or end point screening applications.
Impact of Rainfall, Sales Method, and Time on Land Prices
Stephens, Steve; Schurle, Bryan
2013-01-01
Land prices in Western Kansas are analyzed using regression to estimate the influence of rainfall, sales method, and time of sale. The regression estimates indicate that land prices decreased about $27 for each range farther west, which can be converted to about $75 per inch of average rainfall. In addition, the influence of method of sale (private sale or auction) is estimated along with the impact of time of sale. Auction sale prices are approximately $100 higher per acre than...
Comparative study of on-line response time measurement methods for platinum resistance thermometer
International Nuclear Information System (INIS)
Zwingelstein, G.; Gopal, R.
1979-01-01
This study deals with the in-situ determination of the response time of platinum resistance sensors. In the first part of this work, two methods furnishing the reference response time of the sensors are studied. In the second part, two methods for obtaining the response time without dismounting the sensor are studied. A comparative study of the performance of these methods is included for fluid velocities varying from 0 to 10 m/sec, in both laboratory and plant conditions
Time interval approach to the pulsed neutron logging method
International Nuclear Information System (INIS)
Zhao Jingwu; Su Weining
1994-01-01
The time interval between neighbouring neutrons emitted from a steady-state neutron source can be treated as that from a time-dependent neutron source. In the rock space, the neutron flux is given by the neutron diffusion equation and is composed of an infinite number of terms, each of which consists of two die-away curves. The delay action is discussed and used to measure the time interval with only one detector in the experiment. Nuclear reactions with the time distributions due to the different types of radiation observed in neutron well-logging methods are presented, with a view to obtaining the rock nuclear parameters from the time interval technique
Highly comparative time-series analysis: the empirical structure of time series and their methods.
Fulcher, Ben D; Little, Max A; Jones, Nick S
2013-06-06
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
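A toy version of the reduced-representation idea: summarize each series by a small feature vector and compare series in feature space rather than sample space. The three features used here (mean, spread, lag-1 autocorrelation) are a minimal stand-in for the thousands of features analysed in the paper.

```python
import math

def features(y):
    """Reduced representation: (mean, standard deviation, lag-1 autocorrelation)."""
    n = len(y)
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    ac1 = (sum((y[i] - mu) * (y[i + 1] - mu) for i in range(n - 1)) / n) / var
    return (mu, math.sqrt(var), ac1)

def feature_distance(a, b):
    """Euclidean distance between two series in feature space."""
    return math.dist(features(a), features(b))

smooth = [math.sin(0.1 * i) for i in range(200)]        # strongly autocorrelated
shifted = [math.sin(0.1 * i + 0.5) for i in range(200)]  # same dynamics, phase shift
jagged = [(-1.0) ** i for i in range(200)]               # anti-correlated at lag 1
```

Even this tiny representation groups the two sinusoids together and separates the alternating series, which is the mechanism by which feature-based organization can retrieve "alternative methods" and drive classification and regression tasks as described above.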
A pseudospectral collocation time-domain method for diffractive optics
DEFF Research Database (Denmark)
Dinesen, P.G.; Hesthaven, J.S.; Lynov, Jens-Peter
2000-01-01
We present a pseudospectral method for the analysis of diffractive optical elements. The method computes a direct time-domain solution of Maxwell's equations and is applied to solving wave propagation in 2D diffractive optical elements. (C) 2000 IMACS. Published by Elsevier Science B.V. All rights...
An iterated Radau method for time-dependent PDE's
S. Pérez-Rodríguez; S. González-Pinto; B.P. Sommeijer (Ben)
2008-01-01
This paper is concerned with the time integration of semi-discretized, multi-dimensional PDEs of advection-diffusion-reaction type. To cope with the stiffness of these ODEs, an implicit method has been selected, viz., the two-stage, third-order Radau IIA method. The main topic of this
DRK methods for time-domain oscillator simulation
Sevat, M.F.; Houben, S.H.M.J.; Maten, ter E.J.W.; Di Bucchianico, A.; Mattheij, R.M.M.; Peletier, M.A.
2006-01-01
This paper presents a new Runge-Kutta type integration method that is well-suited for time-domain simulation of oscillators. A unique property of the new method is that its damping characteristics can be controlled by a continuous parameter.
Dead time corrections using the backward extrapolation method
Energy Technology Data Exchange (ETDEWEB)
Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)
2017-05-11
Dead time losses in neutron detection, caused by both detector and electronics dead time, are a highly nonlinear effect, known to create high biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Count Per Second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
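The backward-extrapolation idea can be sketched on synthetic data: impose increasingly large artificial dead times on a recorded Poisson event train, then extrapolate the measured rate back to zero imposed dead time. For the simple non-paralyzing model m = n/(1 + nτ), 1/m is linear in τ, so an ordinary least-squares fit of 1/m against τ recovers the dead-time-free rate from the intercept. This is an illustration under that assumed model only; the actual measurements involve correlated fission chains and mixed dead-time behaviour.

```python
import random

random.seed(1)

# Synthetic Poisson event train: true rate 100 counts/s over 600 s
true_rate, T = 100.0, 600.0
events, t = [], 0.0
while True:
    t += random.expovariate(true_rate)
    if t > T:
        break
    events.append(t)

def rate_with_dead_time(events, tau):
    """Apply a non-paralyzing dead time tau and return the measured CPS."""
    count, last = 0, -1e9
    for ti in events:
        if ti - last >= tau:
            count += 1
            last = ti
    return count / T

# Impose increasingly large artificial dead times (1..5 ms), then fit
# 1/m = 1/n + tau by least squares and read the intercept at tau = 0.
taus = [0.001 * (i + 1) for i in range(5)]
inv_m = [1.0 / rate_with_dead_time(events, tau) for tau in taus]
tbar = sum(taus) / len(taus)
ybar = sum(inv_m) / len(inv_m)
slope = (sum((x - tbar) * (y - ybar) for x, y in zip(taus, inv_m))
         / sum((x - tbar) ** 2 for x in taus))
corrected_rate = 1.0 / (ybar - slope * tbar)   # intercept -> dead-time-free CPS
```

The appeal, as the highlights note, is that nothing but the recorded event train is needed: no detector calibration, and the functional form used for extrapolation can be chosen empirically.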
Directory of Open Access Journals (Sweden)
F. Saez de Adana
2009-01-01
Full Text Available This paper presents an efficient application of the Time-Domain Uniform Theory of Diffraction (TD-UTD for the analysis of Ultra-Wideband (UWB mobile communications for indoor environments. The classical TD-UTD formulation is modified to include the contribution of lossy materials and multiple-ray interactions with the environment. The electromagnetic analysis is combined with a ray-tracing acceleration technique to treat realistic and complex environments. The validity of this method is tested with measurements performed inside the Polytechnic building of the University of Alcala and shows good performance of the model for the analysis of UWB propagation.
Immersed Boundary-Lattice Boltzmann Method Using Two Relaxation Times
Directory of Open Access Journals (Sweden)
Kosuke Hayashi
2012-06-01
Full Text Available An immersed boundary-lattice Boltzmann method (IB-LBM) using a two-relaxation time model (TRT) is proposed. The collision operator in the lattice Boltzmann equation is modeled using two relaxation times. One of them is used to set the fluid viscosity and the other is for numerical stability and accuracy. A direct-forcing method is utilized for treatment of the immersed boundary. A multi-direct forcing method is also implemented to precisely satisfy the boundary conditions at the immersed boundary. Circular Couette flows between a stationary cylinder and a rotating cylinder are simulated for validation of the proposed method. The method is also validated through simulations of circular and spherical falling particles. Effects of the functional forms of the direct-forcing term and the smoothed-delta function, which interpolates the fluid velocity to the immersed boundary and distributes the forcing term to fixed Eulerian grid points, are also examined. As a result, the following conclusions are obtained: (1) the proposed method does not cause non-physical velocity distribution in circular Couette flows even at high relaxation times, whereas the single-relaxation time (SRT) model causes a large non-physical velocity distortion at a high relaxation time, (2) the multi-direct forcing reduces the errors in the velocity profile of a circular Couette flow at a high relaxation time, (3) the two-point delta function is better than the four-point delta function at low relaxation times, but worse at high relaxation times, (4) the functional form of the direct-forcing term does not affect predictions, and (5) circular and spherical particles falling in liquids are well predicted by using the proposed method both for two-dimensional and three-dimensional cases.
Spectral methods for time dependent partial differential equations
Gottlieb, D.; Turkel, E.
1983-01-01
The theory of spectral methods for time-dependent partial differential equations is reviewed. When the domain is periodic, Fourier methods are presented, while for nonperiodic problems both Chebyshev and Legendre methods are discussed. The theory is presented for both hyperbolic and parabolic systems using both Galerkin and collocation procedures. While most of the review considers problems with constant coefficients, the extension to nonlinear problems is also discussed. Some results for problems with shocks are presented.
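For the periodic case mentioned above, a Fourier collocation derivative is a few lines of FFT work. The sketch below (illustrative, not from the review) shows the spectral accuracy that motivates these methods: a smooth function is differentiated to near machine precision on a coarse grid.

```python
import numpy as np

def fourier_derivative(u):
    """Differentiate a periodic sample (period 2*pi) via the FFT:
    multiply each Fourier mode by i*k and transform back."""
    n = u.size
    k = 1j * np.fft.fftfreq(n, d=1.0/n)   # integer wavenumbers times i
    return np.real(np.fft.ifft(k * np.fft.fft(u)))

n = 32
x = 2*np.pi*np.arange(n)/n
u = np.sin(3*x)
du = fourier_derivative(u)
err = np.max(np.abs(du - 3*np.cos(3*x)))   # near machine precision
```

A second-order finite difference on the same 32-point grid would give an error of order 1e-2; the spectral derivative resolves the mode exactly, which is the "spectral accuracy" the review's convergence theory formalizes.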
A finite element method for SSI time history calculation
International Nuclear Information System (INIS)
Ni, X.; Gantenbein, F.; Petit, M.
1989-01-01
The proposed method is based on a finite element modeling of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method is presented, then applications are given: first a linear calculation, for which results are compared to those obtained by standard methods; then results for a nonlinear behavior are described.
A finite element method for SSI time history calculations
International Nuclear Information System (INIS)
Ni, X.M.; Gantenbein, F.; Petit, M.
1989-01-01
The proposed method is based on a finite element modeling of the soil and the structure and a time history calculation. It has been developed for plane and axisymmetric geometries. The principle of this method will be presented, then applications will be given: first a linear calculation, for which results will be compared to those obtained by standard methods; then results for a nonlinear behavior will be described.
Ban, Chunmei; Wu, Zhuangchun; Dillon, Anne C.
2017-01-10
An electrode (110) is provided that may be used in an electrochemical device (100) such as an energy storage/discharge device, e.g., a lithium-ion battery, or an electrochromic device, e.g., a smart window. Hydrothermal techniques and vacuum filtration methods were applied to fabricate the electrode (110). The electrode (110) includes an active portion (140) that is made up of electrochemically active nanoparticles, with one embodiment utilizing 3d-transition metal oxides to provide the electrochemical capacity of the electrode (110). The active material (140) may include other electrochemical materials, such as silicon, tin, lithium manganese oxide, and lithium iron phosphate. The electrode (110) also includes a matrix or net (170) of electrically conductive nanomaterial that acts to connect and/or bind the active nanoparticles (140) such that no binder material is required in the electrode (110), which allows more active materials (140) to be included to improve energy density and other desirable characteristics of the electrode. The matrix material (170) may take the form of carbon nanotubes, such as single-wall, double-wall, and/or multi-wall nanotubes, and be provided as about 2 to 30 percent weight of the electrode (110) with the rest being the active material (140).
Introduction to numerical methods for time dependent differential equations
Kreiss, Heinz-Otto
2014-01-01
Introduces both the fundamentals of time dependent differential equations and their numerical solutions Introduction to Numerical Methods for Time Dependent Differential Equations delves into the underlying mathematical theory needed to solve time dependent differential equations numerically. Written as a self-contained introduction, the book is divided into two parts to emphasize both ordinary differential equations (ODEs) and partial differential equations (PDEs). Beginning with ODEs and their approximations, the authors provide a crucial presentation of fundamental notions, such as the t
Liu, Jinxing; El Sayed, Tamer S.
2013-01-01
When the brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of Telem, the characteristic releasing time of the internal forces of the broken elements and Tlattice
International Nuclear Information System (INIS)
Hoffman, Adam J.; Lee, John C.
2016-01-01
A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
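The backward differentiation formula used by SDP to approximate the source time derivatives can be illustrated with the second-order member of the BDF family (a generic sketch, not the DeCART implementation):

```python
import numpy as np

def bdf2_derivative(y_n, y_nm1, y_nm2, dt):
    """Second-order backward differentiation formula for y'(t_n):
    y'(t_n) ~ (3*y_n - 4*y_{n-1} + y_{n-2}) / (2*dt), error O(dt^2)."""
    return (3*y_n - 4*y_nm1 + y_nm2) / (2*dt)

dt = 1e-3
t = 1.0
y = np.exp                       # test function with known derivative y' = y
approx = bdf2_derivative(y(t), y(t - dt), y(t - 2*dt), dt)
error = abs(approx - np.exp(t))  # should scale like dt**2
```

Only past values enter the formula, which is exactly what makes it attractive here: the derivative at the current time can be propagated without storing the full angular flux history.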
Energy Technology Data Exchange (ETDEWEB)
Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu
2016-02-15
A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
Financial time series analysis based on information categorization method
Tian, Qiang; Shang, Pengjian; Feng, Guochen
2014-12-01
The paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and report results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between stock markets differs across time periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, showing that the method can distinguish different areas' markets from the phylogenetic trees. These results show that satisfactory information can be extracted from financial markets by this method, which can be applied not only to physiologic time series but also to financial time series.
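The distance-based similarity measurement can be sketched roughly as follows, using a plain Euclidean distance between normalized return series as a stand-in for the paper's information-categorization distance; the three "markets" below are synthetic, with two of them built from a shared component.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.standard_normal(500)                    # shared driving factor
series = {
    "market_A": base + 0.1*rng.standard_normal(500),   # coupled to base
    "market_B": base + 0.1*rng.standard_normal(500),   # also coupled
    "market_C": rng.standard_normal(500),              # independent
}

def normalize(x):
    return (x - x.mean()) / x.std()

def distance(x, y):
    """Per-sample Euclidean distance between normalized sequences."""
    return np.linalg.norm(normalize(x) - normalize(y)) / np.sqrt(x.size)

d_ab = distance(series["market_A"], series["market_B"])
d_ac = distance(series["market_A"], series["market_C"])
```

A pairwise matrix of such distances is what a phylogenetic-tree construction (as used in the paper for the 10 indices) takes as input: coupled markets end up close together, independent ones far apart.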
Crystal timing offset calibration method for time of flight PET scanners
Ye, Jinghan; Song, Xiyun
2016-03-01
In time-of-flight (TOF) positron emission tomography (PET), precise calibration of the timing offset of each crystal of a PET scanner is essential. Conventionally this calibration requires a specially designed tool. In this study, a method that uses a planar source to measure the crystal timing offsets (CTO) is developed. The method uses list mode acquisitions of a planar source placed at multiple orientations inside the PET scanner field-of-view (FOV). The placement of the planar source in each acquisition is determined automatically from the measured data, so that a fixture for exactly placing the source is not required. The expected coincidence time difference for each detected list mode event can be found from the planar source placement and the detector geometry; a deviation of the measured time difference from the expected one is due to the CTO of the two crystals. The least squares solution for the CTO is found iteratively using the list mode events. The effectiveness of the calibration is demonstrated using phantom images generated by placing each list mode event back into the image space with the timing offset applied to each event: the zigzagged outlines of the phantoms in the images become smooth after the crystal timing calibration is applied. In conclusion, a crystal timing calibration method is developed that uses multiple list mode acquisitions of a planar source to find the least squares solution for the crystal timing offsets.
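The least squares structure of such a calibration can be sketched as follows: each list mode event between crystals i and j contributes one measurement of (offset_i - offset_j), and a simple Jacobi-style iteration recovers the offsets up to a global constant (removed by subtracting the mean). Everything below is synthetic and only illustrates the estimation idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_crystals = 20
true_off = rng.normal(0.0, 0.5, n_crystals)   # ground-truth offsets (ns)
true_off -= true_off.mean()                   # gauge: zero-mean offsets

# Synthetic events: random crystal pairs; "residual" is measured minus
# expected time difference, i.e. offset_i - offset_j plus timing noise.
i = rng.integers(0, n_crystals, 20000)
j = rng.integers(0, n_crystals, 20000)
i, j = i[i != j], j[i != j]
residual = true_off[i] - true_off[j] + rng.normal(0.0, 0.2, i.size)

off = np.zeros(n_crystals)
for _ in range(50):                            # Jacobi iteration on the
    r = residual - (off[i] - off[j])           # least squares normal equations
    grad = np.bincount(i, r, n_crystals) - np.bincount(j, r, n_crystals)
    counts = np.bincount(i, minlength=n_crystals) + np.bincount(j, minlength=n_crystals)
    off += grad / counts
    off -= off.mean()                          # re-impose the gauge each sweep

max_err = np.max(np.abs(off - true_off))
```

With thousands of events per crystal the noise averages down and the recovered offsets match the true ones to a few picoseconds in this toy setup, mirroring the paper's iterative least squares over list mode events.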
Improved methods for nightside time domain Lunar Electromagnetic Sounding
Fuqua-Haviland, H.; Poppe, A. R.; Fatemi, S.; Delory, G. T.; De Pater, I.
2017-12-01
Time Domain Electromagnetic (TDEM) Sounding isolates induced magnetic fields to remotely deduce material properties at depth. The first step of performing TDEM Sounding at the Moon is to fully characterize the dynamic plasma environment and isolate geophysically induced currents from concurrently present plasma currents. The transfer function method requires a two-point measurement: an upstream reference measuring the pristine solar wind, and one downstream near the Moon. This method was last performed during Apollo, assuming the induced fields on the nightside of the Moon expand as in an undisturbed vacuum within the wake cavity [1]. Here we present an approach to isolating induction and performing TDEM with any two-point magnetometer measurement at or near the surface of the Moon. Our models include a plasma induction model capturing the kinetic plasma environment within the wake cavity around a conducting Moon, and a geophysical forward model capturing induction in a vacuum. The combination of these two models enables the analysis of magnetometer data within the wake cavity. Plasma hybrid models use the upstream plasma conditions and interplanetary magnetic field (IMF) to capture the wake current systems formed around the Moon. The plasma kinetic equations are solved for ion particles with electrons as a charge-neutralizing fluid. These models accurately capture the large scale lunar wake dynamics for a variety of solar wind conditions: ion density, temperature, solar wind velocity, and IMF orientation [2]. Given the 3D orientation variability coupled with the large range of conditions seen within the lunar plasma environment, we characterize the environment one case at a time. The global electromagnetic induction response of the Moon in a vacuum has been solved numerically for a variety of electrical conductivity models using the finite-element method implemented within the COMSOL software. This model solves for the geophysically induced response in vacuum to
Barkeshli, Kasra; Volakis, John L.
1991-01-01
The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
Mathematical methods in time series analysis and digital image processing
Kurths, J; Maass, P; Timmer, J
2008-01-01
The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.
Multiple time-scale methods in particle simulations of plasmas
International Nuclear Information System (INIS)
Cohen, B.I.
1985-01-01
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
Limitations of the time slide method of background estimation
International Nuclear Information System (INIS)
Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis
2010-01-01
Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.
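The time-slide estimate can be illustrated with a toy coincidence search (event rates, duration and window below are invented for the example): shifting one detector's event stream by lags much larger than the coincidence window destroys any true coincidences, so the mean count over many slides estimates the accidental background.

```python
import numpy as np

rng = np.random.default_rng(3)
T, rate, window = 10000.0, 0.05, 0.5   # duration [s], event rate [1/s], window [s]
t1 = np.sort(rng.uniform(0, T, rng.poisson(rate*T)))   # detector 1 events
t2 = np.sort(rng.uniform(0, T, rng.poisson(rate*T)))   # detector 2 events

def count_coincidences(a, b, w):
    """Number of b-events with at least one a-event within +-w (a sorted)."""
    idx = np.searchsorted(a, b)
    left = np.clip(idx - 1, 0, a.size - 1)
    right = np.clip(idx, 0, a.size - 1)
    dmin = np.minimum(np.abs(a[left] - b), np.abs(a[right] - b))
    return int((dmin <= w).sum())

# Slide detector 2 by multiples of 37 s (>> window), wrapping periodically.
slides = [count_coincidences(t1, (t2 + s*37.0) % T, window) for s in range(1, 101)]
background = float(np.mean(slides))
expected = (1 - np.exp(-2*window*rate)) * t2.size   # analytic accidental count
```

The paper's point about non-stationarity shows up in exactly this construction: if the rates fluctuate on time scales shorter than the slide lag, the slid streams no longer share the fluctuation and the estimate is biased, which is why the fluctuation time scale constrains the usable lags.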
Limitations of the time slide method of background estimation
Energy Technology Data Exchange (ETDEWEB)
Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis, E-mail: mwas@lal.in2p3.f [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France)
2010-10-07
Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.
Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam
2012-02-01
To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.
[A new measurement method of time-resolved spectrum].
Shi, Zhi-gang; Huang, Shi-hua; Liang, Chun-jun; Lei, Quan-sheng
2007-02-01
A new method for measuring time-resolved spectra (TRS) is put forward. A program written in assembly language controls the micro-controller (AT89C51), and a peripheral circuit constitutes the drive circuit, which drives the stepping motor that runs the monochromator, so that light of the desired wavelengths can be obtained. The optical signal is converted to an electrical signal by a photomultiplier tube (Hamamatsu 1P28). The electrical spectrum signal is transmitted to an oscilloscope; over an RS232 serial connection between the oscilloscope and a computer, the spectrum data are then transferred to the computer, where software draws the decay curve and the time-resolved spectrum of the sample. The method features parallel measurement on the time scale but serial measurement on the wavelength scale. The time-resolved spectrum and integrated emission spectrum of Tb3+ in the sample Tb(o-BBA)3 phen were measured using this method; compared with the real time-resolved spectrum, the method was validated to be feasible, reliable and convenient. The 3D spectra of fluorescence intensity-wavelength-time and the integrated spectrum of the sample Tb(o-BBA)3 phen are given.
A Comparison of Iterative 2D-3D Pose Estimation Methods for Real-Time Applications
DEFF Research Database (Denmark)
Grest, Daniel; Krüger, Volker; Petersen, Thomas
2009-01-01
This work compares iterative 2D-3D pose estimation methods for use in real-time applications. The compared methods are publicly available as C++ code. One method is part of the OpenCV library, namely POSIT. Because POSIT is not applicable to planar 3D point configurations, we include the planar P...
An Optimization Method of Time Window Based on Travel Time and Reliability
Directory of Open Access Journals (Sweden)
Fengjie Fu
2015-01-01
Full Text Available The dynamic change of urban road travel time was analyzed using video image detector data and showed cyclic variation, so the signal cycle length at the upstream intersection was used as the basic unit of the time window. There was evidence of bimodality in the actual travel time distributions, so the parameters of a bimodal travel time distribution were estimated using the EM algorithm. The weighted average of the two component means was taken as the travel time estimate, and the Modified Buffer Time Index (MBIT) expressed travel time variability. Based on the characteristics of travel time change and the MBIT over different time windows, the time window was optimized dynamically to minimize the MBIT, subject to the travel time change remaining below a threshold value and traffic incidents being detectable in real time. Finally, travel times on Shandong Road in Qingdao were estimated every 10 s, 120 s, optimal time windows, and 480 s; the comparisons demonstrated that travel time estimation over optimal time windows reflects real-time traffic exactly and stably, verifying the effectiveness of the optimization method.
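The EM fit of a two-component Gaussian mixture, as used above for the bimodal travel time distribution, can be sketched in a few lines (the travel times, initial guesses and component parameters below are synthetic, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic bimodal travel times [s]: a fast mode and a slow (congested) mode.
data = np.concatenate([rng.normal(60, 5, 400), rng.normal(95, 8, 600)])

mu, sigma, pi = np.array([50.0, 100.0]), np.array([10.0, 10.0]), np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each observation.
    pdf = pi / (sigma*np.sqrt(2*np.pi)) * np.exp(-0.5*((data[:, None]-mu)/sigma)**2)
    r = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations.
    nk = r.sum(axis=0)
    pi = nk / data.size
    mu = (r * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (data[:, None]-mu)**2).sum(axis=0) / nk)

# Weighted average of the two component means = travel time estimate.
estimate = float((pi * mu).sum())
```

The weighted mean of the fitted component means is exactly the travel time estimation value described in the abstract; the fitted weights also indicate how much traffic falls in the congested mode.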
Reduction Methods for Real-time Simulations in Hybrid Testing
DEFF Research Database (Denmark)
Andersen, Sebastian
2016-01-01
Hybrid testing constitutes a cost-effective experimental full scale testing method. The method was introduced in the 1960s by Japanese researchers as an alternative to conventional full scale testing and small scale material testing, such as shake table tests. The principle of the method is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test; if the test is conducted in real time it is referred to as real-time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960s, both with respect ... A test is performed on a glass fibre reinforced polymer composite box girder, serving as a pilot test for prospective real-time tests on a wind turbine blade. The Taylor basis is implemented in the test and used to perform the numerical simulations. Despite a number of introduced errors in the real ...
Real time simulation method for fast breeder reactors dynamics
International Nuclear Information System (INIS)
Miki, Tetsushi; Mineo, Yoshiyuki; Ogino, Takamichi; Kishida, Koji; Furuichi, Kenji.
1985-01-01
Multi-purpose real-time simulator models with suitable plant dynamics were developed; these models can be used not only for training operators but also for designing control systems, operation sequences and many other items which must be studied for the development of new types of reactors. The prototype fast breeder reactor ''Monju'' is taken as an example. Various factors affecting the accuracy and computational load of its dynamic simulation are analyzed. A method is presented which determines the optimum number of nodes in distributed systems and the optimum time steps. Oscillations due to numerical instability are observed in the dynamic simulation of evaporators with a small number of nodes, and a method to cancel these oscillations is proposed. It has been verified through the development of plant dynamics simulation codes that these methods can provide efficient real-time dynamics models of fast breeder reactors. (author)
Fault detection of gearbox using time-frequency method
Widodo, A.; Satrijo, Dj.; Prahasto, T.; Haryanto, I.
2017-04-01
This research deals with fault detection and diagnosis of a gearbox using its vibration signature. Fault detection and diagnosis are approached with time-frequency methods, and the results are compared with cepstrum analysis. Experimental work was conducted to acquire vibration signals from a self-designed gearbox test rig, which is able to demonstrate normal and faulty gearboxes, i.e., wear and tooth breakage. Three accelerometers were used for vibration signal acquisition from the gearbox, and an optical tachometer was used for shaft rotation speed measurement. The results show that frequency-domain analysis using the fast Fourier transform was less sensitive to wear and tooth breakage. However, the short-time Fourier transform was able to monitor the faults in the gearbox. The wavelet transform (WT) method also showed good performance in gearbox fault detection from the vibration signal after applying time synchronous averaging (TSA).
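A short-time Fourier transform of the kind used above can be sketched with plain numpy (the sampling rate, tone frequencies and fault model below are invented for illustration): a fault tone appearing halfway through the record is invisible in early frames and dominant in late ones, which is exactly what a plain full-record FFT blurs away.

```python
import numpy as np

fs = 2000.0                                    # sampling rate [Hz]
t = np.arange(0, 4.0, 1/fs)
signal = np.sin(2*np.pi*50*t)                  # steady mesh tone at 50 Hz
signal[t >= 2.0] += 0.8*np.sin(2*np.pi*300*t[t >= 2.0])  # fault tone appears

# Hann-windowed sliding frames -> magnitude spectrogram (time x frequency).
nper, hop = 256, 128
win = np.hanning(nper)
frames = [signal[s:s+nper]*win for s in range(0, signal.size - nper, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))
freqs = np.fft.rfftfreq(nper, 1/fs)

def dominant_high_freq(frame_mag):
    """Strongest frequency above the mesh tone in one frame."""
    band = freqs > 100.0
    return float(freqs[band][np.argmax(frame_mag[band])])

early = dominant_high_freq(spec[len(spec)//4])    # ~t = 1 s, before the fault
late = dominant_high_freq(spec[3*len(spec)//4])   # ~t = 3 s, after the fault
```

The time axis of `spec` is what lets the onset of the fault be localized; the frequency resolution is fs/nper, so the 300 Hz tone lands within one bin width of its true value.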
International Nuclear Information System (INIS)
Cohen, M.
1984-01-01
A great deal of thought has been given in recent years to the documentation of individual patients and their diseases, especially since the computerization of registry systems facilitates the storage and retrieval of large amounts of data, but the documentation of radiation treatment methods has received surprisingly little attention. The guidelines which follow are intended for use both internally (within radiotherapy centres) and externally when a treatment method is reported in the literature or transferred from one centre to another. The amount of detail reported externally will, of course, depend on the circumstances: for example, a published paper will usually mention only the most important of the radiation and physical parameters, but it is important for the department of origin to list all parameters in a separate document, available on request. These guidelines apply specifically to the documentation of treatment by external radiation beams, although many of the suggestions would also apply to treatment by small sealed sources (brachytherapy) and by unsealed radionuclides. Treatment techniques which involve a combination of external and internal sources (e.g. Ca. cervix uteri treated by intracavitary sources plus external beam therapy) require particularly careful documentation to indicate the relationship between dose distribution (in both space and time) achieved by the two modalities
A time-domain method to generate artificial time history from a given reference response spectrum
Energy Technology Data Exchange (ETDEWEB)
Shin, Gang Sik [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Song, Oh Seop [Dept. of Mechanical Engineering, Chungnam National University, Daejeon (Korea, Republic of)
2016-06-15
Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance.
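The "straightforward" direction noted above, computing a response spectrum from a time history, amounts to sweeping a damped single-degree-of-freedom oscillator over natural frequencies and recording the peak response to the base acceleration. The sketch below (synthetic input and central-difference integration, not the paper's procedure) shows the expected resonance peak:

```python
import numpy as np

dt = 0.002
t = np.arange(0, 4.0, dt)
accel = np.sin(2*np.pi*5*t) * (t < 2.0)   # synthetic base acceleration: 5 Hz burst

def peak_response(freq, zeta=0.05):
    """Peak displacement of a damped SDOF oscillator,
    u'' + 2*zeta*w*u' + w^2*u = -a(t), by central differences."""
    w = 2*np.pi*freq
    c1 = 1/dt**2 + zeta*w/dt
    u_prev = u = 0.0
    peak = 0.0
    for a in accel:
        u_next = (-a - w*w*u + (2*u - u_prev)/dt**2 + zeta*w*u_prev/dt) / c1
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak

# Response spectrum ordinate at three oscillator frequencies [Hz].
spectrum = {f: peak_response(f) for f in [2.0, 5.0, 12.0]}
```

The oscillator tuned to the excitation frequency responds far more strongly than the off-resonance ones, producing the spectral peak; the hard inverse problem the paper addresses is constructing a time history whose peaks match a whole prescribed spectrum.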
A time-domain method to generate artificial time history from a given reference response spectrum
International Nuclear Information System (INIS)
Shin, Gang Sik; Song, Oh Seop
2016-01-01
Seismic qualification by test is widely used as a way to show the integrity and functionality of equipment that is related to the overall safety of nuclear power plants. Another means of seismic qualification is by direct integration analysis. Both approaches require a series of time histories as an input. However, in most cases, the possibility of using real earthquake data is limited. Thus, artificial time histories are widely used instead. In many cases, however, response spectra are given. Thus, most of the artificial time histories are generated from the given response spectra. Obtaining the response spectrum from a given time history is straightforward. However, the procedure for generating artificial time histories from a given response spectrum is difficult and complex to understand. Thus, this paper presents a simple time-domain method for generating a time history from a given response spectrum; the method was shown to satisfy conditions derived from nuclear regulatory guidance
Richard D Bergman
2012-01-01
Greenhouse gases (GHGs) trap infrared radiation emitted from the Earth's surface to generate the "greenhouse effect", thus keeping the planet warm. Many natural activities, including rotting vegetation, emit GHGs such as carbon dioxide to produce this natural effect. However, in the last 200 years or so, human activity has increased the atmospheric concentrations of GHGs...
Exact methods for time constrained routing and related scheduling problems
DEFF Research Database (Denmark)
Kohl, Niklas
1995-01-01
This dissertation presents a number of optimization methods for the Vehicle Routing Problem with Time Windows (VRPTW). The VRPTW is a generalization of the well-known capacity constrained Vehicle Routing Problem (VRP), where a fleet of vehicles based at a central depot must service a set of customers. In the VRPTW, customers must be serviced within a given time period, a so-called time window. The objective can be to minimize operating costs (e.g. distance travelled), fixed costs (e.g. the number of vehicles needed) or a combination of these component costs. During the last decade optimization... of Jørnsten, Madsen and Sørensen (1986), which has been tested computationally by Halse (1992). Both methods decompose the problem into a series of time and capacity constrained shortest path problems. This yields a tight lower bound on the optimal objective, and the dual gap can often be closed...
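The time-constrained shortest path subproblem that both decompositions rely on can be illustrated with a toy label-setting search. The graph, costs and windows below are invented, and the dominance rule is deliberately simplified to one label per node; real labeling algorithms keep all Pareto-incomparable (cost, time) labels.

```python
import heapq

# Toy network: node -> list of (next_node, travel_time, cost).
edges = {
    "depot": [("A", 2, 3), ("B", 3, 1)],
    "A": [("C", 2, 1)],
    "B": [("C", 4, 1)],
    "C": [("end", 1, 2)],
}
# Each node may only be entered within its [open, close] time window;
# arriving early means waiting until the window opens.
window = {"depot": (0, 0), "A": (0, 5), "B": (0, 5), "C": (6, 8), "end": (0, 99)}

def best_path(start, goal):
    """Cheapest time-window-feasible path via a cost-ordered label search."""
    heap = [(0, 0, start, [start])]        # (cost, time, node, path)
    best = {}                               # node -> best (cost, time) seen
    while heap:
        cost, time, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, time, path
        for nxt, tt, c in edges.get(node, []):
            arrive = max(time + tt, window[nxt][0])   # wait if early
            if arrive > window[nxt][1]:
                continue                               # window missed
            if best.get(nxt, (1e9, 1e9)) <= (cost + c, arrive):
                continue                               # simplified dominance
            best[nxt] = (cost + c, arrive)
            heapq.heappush(heap, (cost + c, arrive, nxt, path + [nxt]))
    return None

result = best_path("depot", "end")
```

Note how the time window at C forces waiting on the route through A; the route through B arrives later but cheaper, which is the kind of cost/time trade-off the column generation subproblem has to resolve.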
Pinson, Paul A.
1998-01-01
A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.
Radhakrishna, K.; Bowles, K.; Zettek-Sumner, A.
2013-01-01
Summary Background Telehealth data overload through high alert generation is a significant barrier to sustained adoption of telehealth for managing HF patients. Objective To explore the factors contributing to frequent telehealth alerts including false alerts for Medicare heart failure (HF) patients admitted to a home health agency. Materials and Methods A mixed methods design that combined quantitative correlation analysis of patient characteristic data with number of telehealth alerts and qualitative analysis of telehealth and visiting nurses’ notes on follow-up actions to patients’ telehealth alerts was employed. All the quantitative and qualitative data was collected through retrospective review of electronic records of the home heath agency. Results Subjects in the study had a mean age of 83 (SD = 7.6); 56% were female. Patient co-morbidities (ppatient characteristics along with establishing patient-centered telehealth outcome goals may allow meaningful generation of telehealth alerts. Reducing avoidable telehealth alerts could vastly improve the efficiency and sustainability of telehealth programs for HF management. PMID:24454576
International Nuclear Information System (INIS)
Pinson, P.A.
1998-01-01
A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention. 3 figs
Evaluation of the filtered leapfrog-trapezoidal time integration method
International Nuclear Information System (INIS)
Roache, P.J.; Dietrich, D.E.
1988-01-01
An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed.
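The FLT weighting itself is specific to Dietrich's formulation, but the leapfrog family it belongs to is easy to sketch. Below is a minimal Python illustration of leapfrog time integration for the one-dimensional constant-coefficient advection equation, with a Robert-Asselin-style time filter standing in for the paper's filtering step; the exact FLT/FLTW combination is not reproduced here.

```python
import numpy as np

def ddx_periodic(u, dx):
    """Second-order centered difference on a periodic grid."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def advect_filtered_leapfrog(u0, c, dx, dt, steps, alpha=0.05):
    """Integrate u_t + c u_x = 0 with leapfrog plus a Robert-Asselin
    time filter (a stand-in for FLT filtering; not Dietrich's exact scheme)."""
    u_prev = u0.copy()
    u = u0 - c * dt * ddx_periodic(u0, dx)          # forward-Euler starter step
    for _ in range(steps - 1):
        u_next = u_prev - 2.0 * c * dt * ddx_periodic(u, dx)   # leapfrog step
        # the filter damps the spurious computational mode of leapfrog
        u_prev = u + alpha * (u_next - 2.0 * u + u_prev)
        u = u_next
    return u

if __name__ == "__main__":
    n, c = 64, 1.0
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = float(x[1] - x[0])
    dt = 0.5 * dx                                    # CFL number 0.5
    steps = int(round(2.0 * np.pi / (c * dt)))       # one full advection period
    u = advect_filtered_leapfrog(np.sin(x), c, dx, dt, steps)
    print(np.max(np.abs(u - np.sin(x))))             # small phase/amplitude error
```

After one full period the sine wave returns nearly to its initial state; the residual error reflects the scheme's second-order dispersion plus the weak damping introduced by the filter.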
Novel Verification Method for Timing Optimization Based on DPSO
Directory of Open Access Journals (Sweden)
Chuandong Chen
2018-01-01
Timing optimization for logic circuits is one of the key steps in logic synthesis. Extant research results are mainly based on various intelligence algorithms. Hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms to the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of the mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared on MCNC benchmark circuits. Compared with DC, DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether the intelligence algorithm has a better timing optimization ability than DC.
The RATIO method for time-resolved Laue crystallography
International Nuclear Information System (INIS)
Coppens, P.; Pitak, M.; Gembicky, M.; Messerschmidt, M.; Scheins, S.; Benedict, J.; Adachi, S.-I.; Sato, T.; Nozawa, S.; Ichiyanagi, K.; Chollet, M.; Koshihara, S.-Y.
2009-01-01
A RATIO method for analysis of intensity changes in time-resolved pump-probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam.
System and method for time synchronization in a wireless network
Gonia, Patrick S.; Kolavennu, Soumitri N.; Mahasenan, Arun V.; Budampati, Ramakrishna S.
2010-03-30
A system includes multiple wireless nodes forming a cluster in a wireless network, where each wireless node is configured to communicate and exchange data wirelessly based on a clock. One of the wireless nodes is configured to operate as a cluster master. Each of the other wireless nodes is configured to (i) receive time synchronization information from a parent node, (ii) adjust its clock based on the received time synchronization information, and (iii) broadcast time synchronization information based on the time synchronization information received by that wireless node. The time synchronization information received by each of the other wireless nodes is based on time synchronization information provided by the cluster master so that the other wireless nodes substantially synchronize their clocks with the clock of the cluster master.
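The patent text above describes tree-structured dissemination of the cluster master's clock. A toy Python sketch of that propagation logic follows; the node names, the tree shape and the simplification that a child adopts its parent's synchronized clock exactly (ignoring propagation delay and drift) are illustrative assumptions, not taken from the patent.

```python
from collections import deque

def propagate_sync(children, clock_offsets, master):
    """Breadth-first dissemination of time synchronization from the cluster
    master: each node adjusts its clock to its parent's already-synchronized
    clock, so every reachable node ends up aligned with the master."""
    synced = {master: clock_offsets[master]}
    queue = deque([master])
    while queue:
        parent = queue.popleft()
        for child in children.get(parent, []):
            # the child receives the parent's broadcast and adopts its clock
            synced[child] = synced[parent]
            queue.append(child)
    return synced

if __name__ == "__main__":
    tree = {"master": ["n1", "n2"], "n1": ["n3"]}
    offsets = {"master": 0.0, "n1": 3.2, "n2": -1.1, "n3": 7.5}
    print(propagate_sync(tree, offsets, "master"))
```

Because each broadcast is derived from information that ultimately originated at the master, all nodes converge to the master's clock, mirroring the claim in the abstract.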
Directory of Open Access Journals (Sweden)
F. Radicioni
2017-05-01
The Tempio della Consolazione in Todi (16th cent.) has always been one of the most significant symbols of the Umbrian landscape. Since the first years after its completion (1606), the structure has exhibited evidence of instability, due to foundation subsidence and/or seismic activity. Structural and geotechnical countermeasures have been undertaken on the Tempio and its surroundings from the 17th century until recent times. Until now, a truly satisfactory analysis of the overall deformation and attitude of the building has not been performed, since the existing surveys record the overhangs of the pillars, the crack pattern or the subsidence only over limited time spans. Describing the attitude of the whole church is in fact a complex operation due to the architectural character of the building, consisting of four apses (three polygonal and one semicircular) covered with half domes, which surround the central area with the large dome. The present research aims to fill this gap of knowledge with a global study based on geomatic techniques for an accurate 3D reconstruction of geometry and attitude, integrated with historical research on damage and interventions and a geotechnical analysis. The geomatic survey results from the integration of different techniques: GPS-GNSS for global georeferencing, laser scanning and digital photogrammetry for an accurate 3D reconstruction, and high-precision total station and geometric leveling for a direct survey of deformations and cracks and for the alignment of the laser scans. The above analysis allowed us to assess the dynamics of the cracks that have occurred in the last 25 years by comparison with a previous survey. From the photographic colour associated with the point cloud it was also possible to map the damp patches showing on the intrados of the domes and their evolution over the last years.
Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe; Khalil, Azza Ahmed; Holt, Marianne Ingerslev; Kandi, Maria; Hoffmann, Lone
2017-11-01
Minimizing the planning target volume (PTV) while ensuring sufficient target coverage during the entire respiratory cycle is essential for free-breathing radiotherapy of lung cancer. Different methods are used to incorporate the respiratory motion into the PTV. Fifteen patients were analyzed. Respiration can be included in the target delineation process, creating a respiratory GTV, denoted iGTV. Alternatively, the respiratory amplitude (A) can be measured on the 4D-CT and incorporated in the margin expansion. The GTV expanded by A yielded GTV+resp, which was compared to iGTV in terms of overlap. Three methods for PTV generation were compared: PTVdel (delineated iGTV expanded to CTV plus PTV margin), PTVσ (GTV expanded to CTV, with A included as a random uncertainty in the CTV-to-PTV margin) and PTVΣ (GTV expanded to CTV, succeeded by linear expansion of the CTV by A to CTV+resp, which was finally expanded to PTVΣ). Deformation of tumor and lymph nodes during respiration resulted in volume changes between the respiratory phases. The overlap between iGTV and GTV+resp showed that on average 7% of iGTV was outside GTV+resp, implying that GTV+resp did not capture the tumor during the full deformable respiration cycle. A comparison of the PTV volumes showed that PTVσ was smallest and PTVΣ largest for all patients. PTVσ was on average 14% (31 cm³) smaller than PTVdel, while PTVdel was 7% (20 cm³) smaller than PTVΣ. PTVσ yields the smallest volumes but does not ensure coverage of the tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTVdel) takes into account the entire respiratory cycle including deformation, but at the cost of larger treatment volumes. PTVΣ should not be used, since it incorporates the disadvantages of both PTVdel and PTVσ.
Novel methods for real-time 3D facial recognition
Rodrigues, Marcos; Robinson, Alan
2010-01-01
In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention on some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...
Neutron spectrum measurement using rise-time discrimination method
International Nuclear Information System (INIS)
Luo Zhiping; Suzuki, C.; Kosako, T.; Ma Jizeng
2009-01-01
The PSD method can be used to measure the fast neutron spectrum in an n/γ mixed field. A set of assemblies for measuring the pulse height distribution of neutrons is built up, based on a large-volume NE213 liquid scintillator and standard NIM circuits, through the rise-time discrimination method. After that, the response matrix is calculated using the Monte Carlo method. The energy calibration of the pulse height distribution is accomplished using a ⁶⁰Co radioisotope. The neutron spectrum of the mono-energetic accelerator neutron source is achieved by an unfolding process. Suggestions for further improvement of the system are presented at last. (authors)
Minimum entropy density method for the time series analysis
Lee, Jeong Won; Park, Joongwoo Brian; Jo, Hang-Hyun; Yang, Jae-Suk; Moon, Hie-Tae
2009-01-01
The entropy density is an intuitive and powerful concept to study the complicated nonlinear processes derived from physical systems. We develop the minimum entropy density method (MEDM) to detect the structure scale of a given time series, which is defined as the scale in which the uncertainty is minimized, hence the pattern is revealed most. The MEDM is applied to the financial time series of Standard and Poor’s 500 index from February 1983 to April 2006. Then the temporal behavior of structure scale is obtained and analyzed in relation to the information delivery time and efficient market hypothesis.
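One plausible reading of the entropy-density computation (the paper's exact estimator may differ): binarize the increments of the series and measure the Shannon entropy per symbol of length-s patterns; the structure scale is the s at which this per-symbol uncertainty is minimized. A hedged sketch under that reading:

```python
import math
from collections import Counter

def entropy_density(series, scale):
    """Shannon entropy per symbol of length-`scale` up/down patterns.
    A simplified stand-in for the paper's entropy density estimator."""
    moves = [1 if b > a else 0 for a, b in zip(series, series[1:])]
    windows = [tuple(moves[i:i + scale]) for i in range(len(moves) - scale + 1)]
    counts = Counter(windows)
    n = len(windows)
    return -sum((c / n) * math.log2(c / n) for c in counts.values()) / scale

def structure_scale(series, scales):
    """Scale at which the uncertainty per symbol is minimized."""
    return min(scales, key=lambda s: entropy_density(series, s))
```

For a strictly periodic toy series the entropy density drops once the window spans a full period, while a purely random series stays near one bit per symbol at every scale, which is the intuition behind "the scale at which the pattern is revealed most".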
Formal methods for dependable real-time systems
Rushby, John
1993-01-01
The motivation for using formal methods to specify and reason about real-time properties is outlined, and approaches that have been proposed and used are sketched. The formal verifications of clock synchronization algorithms show that mechanically supported reasoning about complex real-time behavior is feasible. Moreover, there has been a significant increase in the effectiveness of verification systems since those verifications were performed, and it is to be expected that verifications of comparable difficulty will become fairly routine. The current challenge lies in developing perspicuous and economical approaches to the formalization and specification of real-time properties.
Method for determining thermal neutron decay times of earth formations
International Nuclear Information System (INIS)
Arnold, D.M.
1976-01-01
A method is disclosed for measuring the thermal neutron decay time of earth formations in the vicinity of a well borehole. A harmonically intensity modulated source of fast neutrons is used to irradiate the earth formations with fast neutrons at three different intensity modulation frequencies. The tangents of the relative phase angles of the fast neutrons and the resulting thermal neutrons at each of the three frequencies of modulation are measured. First and second approximations to the earth formation thermal neutron decay time are derived from the three tangent measurements. These approximations are then combined to derive a value for the true earth formation thermal neutron decay time
Adding Timing Requirements to the CODARTS Real-Time Software Design Method
DEFF Research Database (Denmark)
Bach, K.R.
The CODARTS software design method considers how concurrent, distributed and real-time applications can be designed. Although accounting for the important issues of tasking and communication, the method does not provide means for expressing the timeliness of the tasks and communication directly...
Roussel, Sophie; Felix, Benjamin; Vingadassalon, Noémie; Grout, Joël; Hennekinne, Jacques-Antoine; Guillier, Laurent; Brisabois, Anne; Auvray, Fréderic
2015-01-01
Staphylococcal food poisoning outbreaks (SFPOs) are frequently reported in France. However, most of them remain unconfirmed, highlighting a need for a better characterization of isolated strains. Here we analyzed the genetic diversity of 112 Staphylococcus aureus strains isolated from 76 distinct SFPOs that occurred in France over the last 30 years. We used a recently developed multiple-locus variable-number tandem-repeat analysis (MLVA) protocol and compared this method with pulsed-field gel electrophoresis (PFGE), spa-typing and carriage of genes (se genes) coding for 11 staphylococcal enterotoxins (i.e., SEA, SEB, SEC, SED, SEE, SEG, SEH, SEI, SEJ, SEP, SER). The strains known to have an epidemiological association with one another had identical MLVA types, PFGE profiles, spa-types or se gene carriage. MLVA, PFGE and spa-typing divided 103 epidemiologically unrelated strains into 84, 80, and 50 types respectively, demonstrating the high genetic diversity of S. aureus strains involved in SFPOs. Each MLVA type shared by more than one strain corresponded to a single spa-type, except for one MLVA type represented by four strains that showed two different, but closely related, spa-types. The 87 enterotoxigenic strains were distributed across 68 distinct MLVA types, all of which correlated with se gene carriage except for four MLVA types. The most frequently detected se gene was sea, followed by seg and sei, and the most frequently associated se genes were sea-seh and sea-sed-sej-ser. The discriminatory ability of MLVA was similar to that of PFGE and higher than that of spa-typing. This MLVA protocol was found to be compatible with high-throughput analysis, and was also faster and less labor-intensive than PFGE. MLVA holds promise as a suitable method for investigating SFPOs and tracking the source of contamination in food processing facilities in real time. PMID:26441849
Statistical methods of parameter estimation for deterministically chaotic time series
Pisarenko, V. F.; Sornette, D.
2004-03-01
We discuss the possibility of applying some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for estimation of parameters) to a deterministically chaotic low-dimensional dynamic system (the logistic map) containing an observational noise. A “segmentation fitting” maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x1, considered as an additional unknown parameter. The segmentation fitting method, called “piece-wise” ML, is similar in spirit to the “multiple shooting” method previously proposed, but is simpler and has a smaller bias. Comparisons with different previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Besides, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need of using a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories with exponentially fast loss of memory of the initial condition. The method of statistical moments for the estimation of the parameter of the logistic map is discussed. This method seems to be the unique method whose consistency for deterministically chaotic time series has been proved so far theoretically (not only numerically).
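As a much-simplified illustration of parameter estimation for the logistic map x_{n+1} = a·x_n(1−x_n): in the noise-free limit the conditional least-squares estimator for a has a closed form. This is only a baseline; the segmentation-fitting ML method of the abstract, designed for observational noise, is considerably more elaborate.

```python
import numpy as np

def simulate_logistic(a, x0, n):
    """Generate n iterates of the logistic map x_{k+1} = a * x_k * (1 - x_k)."""
    x = np.empty(n)
    x[0] = x0
    for k in range(n - 1):
        x[k + 1] = a * x[k] * (1.0 - x[k])
    return x

def estimate_a(y):
    """Conditional least squares: minimize sum (y_{k+1} - a*y_k*(1-y_k))^2
    over a.  Exact only for noise-free observations; with noise this
    estimator is biased, which motivates the paper's piece-wise ML method."""
    z = y[:-1] * (1.0 - y[:-1])
    return float(np.dot(y[1:], z) / np.dot(z, z))
```

Running `estimate_a(simulate_logistic(3.8, 0.3, 500))` recovers a = 3.8 to machine precision, while adding even modest observational noise visibly biases the same estimator, which is the difficulty the paper addresses.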
On the solution of high order stable time integration methods
Czech Academy of Sciences Publication Activity Database
Axelsson, Owe; Blaheta, Radim; Sysala, Stanislav; Ahmad, B.
2013-01-01
Roč. 108, č. 1 (2013), s. 1-22 ISSN 1687-2770 Institutional support: RVO:68145535 Keywords : evolution equations * preconditioners for quadratic matrix polynomials * a stiffly stable time integration method Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2013 http://www.boundaryvalueproblems.com/content/2013/1/108
Non-linear shape functions over time in the space-time finite element method
Directory of Open Access Journals (Sweden)
Kacprzyk Zbigniew
2017-01-01
This work presents a generalisation of the space-time finite element method proposed by Kączkowski in his seminal works of the 1970s and early 1980s. Kączkowski used linear shape functions in time. The recurrence formula obtained by Kączkowski was conditionally stable. In this paper, non-linear shape functions in time are proposed.
Real-Time Pore Pressure Detection: Indicators and Improved Methods
Directory of Open Access Journals (Sweden)
Jincai Zhang
2017-01-01
High uncertainties may exist in the predrill pore pressure prediction in new prospects and deepwater subsalt wells; therefore, real-time pore pressure detection is highly needed to reduce drilling risks. The methods for pore pressure detection (the resistivity, sonic, and corrected d-exponent methods are improved using the depth-dependent normal compaction equations to adapt to the requirements of the real-time monitoring. A new method is proposed to calculate pore pressure from the connection gas or elevated background gas, which can be used for real-time pore pressure detection. The pore pressure detection using the logging-while-drilling, measurement-while-drilling, and mud logging data is also implemented and evaluated. Abnormal pore pressure indicators from the well logs, mud logs, and wellbore instability events are identified and analyzed to interpret abnormal pore pressures for guiding real-time drilling decisions. The principles for identifying abnormal pressure indicators are proposed to improve real-time pore pressure monitoring.
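The abstract does not give its equations, but the classic resistivity-based detection it builds on is Eaton's relation combined with a depth-dependent normal compaction trend. The sketch below is written under that assumption; the exponent, trend constants and gradient units are illustrative placeholders, not the paper's calibrated values.

```python
import math

def normal_resistivity(depth_m, r0=0.8, b=0.0004):
    """Depth-dependent normal compaction trend Rn(Z) = R0 * exp(b * Z).
    R0 and b are illustrative calibration constants, not from the paper."""
    return r0 * math.exp(b * depth_m)

def pore_pressure_gradient(obg, pn, r_measured, r_normal, exponent=1.2):
    """Eaton-style relation: P = OBG - (OBG - Pn) * (R / Rn)**n,
    with all gradients in consistent units.  Resistivity falling below
    the normal trend indicates overpressure."""
    return obg - (obg - pn) * (r_measured / r_normal) ** exponent

if __name__ == "__main__":
    rn = normal_resistivity(3000.0)
    # a measured resistivity at half the normal trend signals overpressure
    print(pore_pressure_gradient(obg=1.0, pn=0.465, r_measured=0.5 * rn, r_normal=rn))
```

On the normal trend (R = Rn) the formula returns the normal (hydrostatic) gradient; as R drops below Rn the estimated pore pressure rises toward the overburden gradient, which is what a real-time monitor watches for.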
Real-time earthquake monitoring using a search engine method.
Zhang, Jie; Zhang, Haijiang; Chen, Enhong; Zheng, Yi; Kuang, Wenhuan; Zhang, Xiong
2014-12-04
When an earthquake occurs, seismologists want to use recorded seismograms to infer its location, magnitude and source-focal mechanism as quickly as possible. If such information could be determined immediately, timely evacuations and emergency actions could be undertaken to mitigate earthquake damage. Current advanced methods can report the initial location and magnitude of an earthquake within a few seconds, but estimating the source-focal mechanism may require minutes to hours. Here we present an earthquake search engine, similar to a web search engine, that we developed by applying a computer fast search method to a large seismogram database to find waveforms that best fit the input data. Our method is several thousand times faster than an exact search. For an Mw 5.9 earthquake on 8 March 2012 in Xinjiang, China, the search engine can infer the earthquake's parameters in <1 s after receiving the long-period surface wave data.
Seasonal adjustment methods and real time trend-cycle estimation
Bee Dagum, Estela
2016-01-01
This book explores widely used seasonal adjustment methods and recent developments in real time trend-cycle estimation. It discusses in detail the properties and limitations of X12ARIMA, TRAMO-SEATS and STAMP - the main seasonal adjustment methods used by statistical agencies. Several real-world cases illustrate each method and real data examples can be followed throughout the text. The trend-cycle estimation is presented using nonparametric techniques based on moving averages, linear filters and reproducing kernel Hilbert spaces, taking recent advances into account. The book provides a systematical treatment of results that to date have been scattered throughout the literature. Seasonal adjustment and real time trend-cycle prediction play an essential part at all levels of activity in modern economies. They are used by governments to counteract cyclical recessions, by central banks to control inflation, by decision makers for better modeling and planning and by hospitals, manufacturers, builders, transportat...
Perfectly matched layer for the time domain finite element method
International Nuclear Information System (INIS)
Rylander, Thomas; Jin Jianming
2004-01-01
A new perfectly matched layer (PML) formulation for the time domain finite element method is described and tested for Maxwell's equations. In particular, we focus on the time integration scheme which is based on Galerkin's method with a temporally piecewise linear expansion of the electric field. The time stepping scheme is constructed by forming a linear combination of exact and trapezoidal integration applied to the temporal weak form, which reduces to the well-known Newmark scheme in the case without PML. Extensive numerical tests on scattering from infinitely long metal cylinders in two dimensions show good accuracy and no signs of instabilities. For a circular cylinder, the proposed scheme indicates the expected second order convergence toward the analytic solution and gives less than 2% root-mean-square error in the bistatic radar cross section (RCS) for resolutions with more than 10 points per wavelength. An ogival cylinder, which has sharp corners supporting field singularities, shows similar accuracy in the monostatic RCS
Method to implement the CCD timing generator based on FPGA
Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin
2010-07-01
With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which is the controller of this generator. Some test results are presented at the end.
A method for untriggered time-dependent searches for multiple flares from neutrino point sources
International Nuclear Information System (INIS)
Gora, D.; Bernardini, E.; Cruz Silva, A.H.
2011-04-01
A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
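The core of a time-clustering search can be sketched as a scan over event-bounded windows, scoring each window by its excess of observed events over the expected background. This counting-statistics score is a stand-in: the actual method uses an unbinned likelihood with a signal term summing many small clusters, which is not reproduced here.

```python
def best_flare_window(event_times, background_rate):
    """Scan all windows bounded by event times and return the (i, j) index
    pair maximizing observed counts minus expected background counts,
    together with that score.  O(n^2) brute force for clarity."""
    times = sorted(event_times)
    best_score, best = float("-inf"), None
    for i in range(len(times)):
        for j in range(i, len(times)):
            n_obs = j - i + 1
            expected = background_rate * (times[j] - times[i])
            score = n_obs - expected
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score

if __name__ == "__main__":
    # sparse background events plus a tight cluster of four events near t = 5
    events = [0.5, 2.0, 4.0, 5.0, 5.02, 5.04, 5.06, 8.0]
    print(best_flare_window(events, background_rate=2.0))
```

The scan singles out the short, dense window even though no single event is significant on its own, which is the behavior the abstract describes for flares shorter than the observation period.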
Le Goff, Alain; Cathala, Thierry; Latger, Jean
2015-10-01
To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it completed the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is now integrating the real-time EO/IR rendering engine of SE-Workbench called SE-FAST-IR. This engine is a set of physics-based software and libraries that allows preparing and visualizing a 3D scene for the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. The recent evolutions concern mainly the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of plume static signatures and, lastly for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high-speed track tests. It is based on particle system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.
Energy Technology Data Exchange (ETDEWEB)
Snel, H. [Netherlands Energy Research Foundation ECN, Renewable Energy, Wind Energy (Netherlands)
1997-08-01
Recently the Blade Element Momentum (BEM) method has been made more versatile. Inclusion of rotational effects on time-averaged profile coefficients has improved its achievements for performance calculations in stalled flow. Time dependence as a result of turbulent inflow, pitching actions and yawed operation is now treated more correctly (although more improvement is needed) than before. It is of interest to note that adaptations in modelling of unsteady or periodic induction stem from qualitative and quantitative insights obtained from free vortex models. Free vortex methods and, further into the future, Navier-Stokes (NS) calculations, together with wind tunnel and field experiments, can be very useful in enhancing the potential of BEM for aero-elastic response calculations. It must be kept in mind, however, that extreme caution must be used with free vortex methods, as will be discussed in the following chapters. A discussion of the shortcomings and the strengths of BEM and of vortex wake models is given. Some ideas are presented on how BEM might be improved without too much loss of efficiency. (EG)
A simple time-delayed method to control chaotic systems
International Nuclear Information System (INIS)
Chen Maoyin; Zhou Donghua; Shang Yun
2004-01-01
Based on the adaptive iterative learning strategy, a simple time-delayed controller is proposed to stabilize unstable periodic orbits (UPOs) embedded in chaotic attractors. This controller includes two parts: one is a linear feedback part; the other is an adaptive iterative learning estimation part. Theoretical analysis and numerical simulation show the effectiveness of this controller
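The controller in the abstract couples linear delayed feedback with an adaptive iterative-learning estimator; the sketch below keeps only the first ingredient, classic time-delayed (Pyragas-type) feedback u_n = K·(x_n − x_{n−1}) applied to the logistic map, which already stabilizes the unstable fixed point for a suitable gain. The map, the gain and all parameter values are illustrative, not taken from the paper.

```python
def logistic(x, a):
    return a * x * (1.0 - x)

def delayed_feedback_orbit(a, k_gain, x0, x1, steps):
    """Iterate x_{n+1} = f(x_n) + K * (x_n - x_{n-1}).  The control term
    vanishes on any fixed point, so a stabilized orbit is an orbit of the
    uncontrolled map (the noninvasive property of delayed feedback)."""
    xs = [x0, x1]
    for _ in range(steps):
        xs.append(logistic(xs[-1], a) + k_gain * (xs[-1] - xs[-2]))
    return xs

if __name__ == "__main__":
    a = 3.9                       # chaotic regime of the uncontrolled map
    x_star = 1.0 - 1.0 / a        # unstable fixed point, about 0.7436
    orbit = delayed_feedback_orbit(a, k_gain=0.5, x0=0.70, x1=0.72, steps=400)
    print(orbit[-1], x_star)
```

Linearizing around the fixed point gives the characteristic equation μ² − (λ + K)μ + K = 0 with λ = 2 − a; for a = 3.9 and K = 0.5 both roots lie inside the unit circle (|μ| ≈ 0.71), so the orbit spirals into the UPO and the control signal decays to zero.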
Long-memory time series theory and methods
Palma, Wilfredo
2007-01-01
Wilfredo Palma, PhD, is Chairman and Professor of Statistics in the Department of Statistics at Pontificia Universidad Católica de Chile. Dr. Palma has published several refereed articles and has received over a dozen academic honors and awards. His research interests include time series analysis, prediction theory, state space systems, linear models, and econometrics.
A Multivariate Time Series Method for Monte Carlo Reactor Analysis
International Nuclear Information System (INIS)
Taro Ueki
2008-01-01
A robust multivariate time series method has been established for the Monte Carlo calculation of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit in the signal processing discipline and the neutron multiplication eigenvalue problem in the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous-energy Monte Carlo calculation. CMPM was incorporated in the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional Fission Matrix method is demonstrated for the three-dimensional spatial modeling of the initial core of a pressurized water reactor.
Limitations in simulator time-based human reliability analysis methods
International Nuclear Information System (INIS)
Wreathall, J.
1989-01-01
Developments in human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical.
Multiple-time-stepping generalized hybrid Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Escribano, Bruno, E-mail: bescribano@bcamath.org [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); Akhmatskaya, Elena [BCAM—Basque Center for Applied Mathematics, E-48009 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, E-48013 Bilbao (Spain); Reich, Sebastian [Universität Potsdam, Institut für Mathematik, D-14469 Potsdam (Germany); Azpiroz, Jon M. [Kimika Fakultatea, Euskal Herriko Unibertsitatea (UPV/EHU) and Donostia International Physics Center (DIPC), P.K. 1072, Donostia (Spain)
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2–4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only improve the performance of GSHMC itself but also outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes in the simulation of complex systems.
Single photon imaging and timing array sensor apparatus and method
Smith, R. Clayton
2003-06-24
An apparatus and method are disclosed for generating a three-dimensional image of an object or target. The apparatus is comprised of a photon source for emitting a photon at a target. The emitted photons, when reflected from the target, are received by a photon receiver. The photon receiver determines a reflection time of the photon and further determines an arrival position of the photon on the photon receiver. An analyzer is communicatively coupled to the photon receiver, wherein the analyzer generates a three-dimensional image of the object based upon the reflection time and the arrival position.
Super-nodal methods for space-time kinetics
Mertyurek, Ugur
The purpose of this research has been to develop an advanced Super-Nodal method to reduce the run time of 3-D core neutronics models, such as in the NESTLE reactor core simulator and FORMOSA nuclear fuel management optimization codes. Computational performance of the neutronics model is increased by reducing the number of spatial nodes used in the core modeling. However, as the number of spatial nodes decreases, the error in the solution increases. The Super-Nodal method reduces the error associated with the use of coarse nodes in the analyses by providing a new set of cross sections and ADFs (Assembly Discontinuity Factors) for the new nodalization. These so-called homogenization parameters are obtained by employing a consistent collapsing technique. During this research a new type of singularity, namely the "fundamental mode singularity", is addressed in the ANM (Analytical Nodal Method) solution. The "Coordinate Shifting" approach is developed as a method to address this singularity. Also, the "Buckling Shifting" approach is developed as an alternative and more accurate method to address the zero buckling singularity, which is a more common and well-known singularity problem in the ANM solution. In the course of addressing the treatment of these singularities, an effort was made to provide better and more robust results from the Super-Nodal method by developing several new methods for determining the transverse leakage and collapsed diffusion coefficient, which generally are the two main approximations in the ANM methodology. Unfortunately, the proposed new transverse leakage and diffusion coefficient approximations failed to provide a consistent improvement to the current methodology. However, improvement in the Super-Nodal solution is achieved by updating the homogenization parameters at several time points during a transient. The update is achieved by employing a refinement technique similar to pin-power reconstruction. A simple error analysis based on the relative
Which DTW Method Applied to Marine Univariate Time Series Imputation
Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André
2017-01-01
International audience; Missing data are ubiquitous in all domains of applied science. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
Method of parallel processing in SANPO real time system
International Nuclear Information System (INIS)
Ostrovnoj, A.I.; Salamatin, I.M.
1981-01-01
A method of parallel processing in the SANPO real-time system is described. Algorithms for data accumulation and preliminary processing in this system, implemented as parallel processes using a specialized high-level programming language, are described, as is the hierarchy of elementary processes. The approach provides synchronization of concurrent processes without semaphores. The developed means are applied to experiment automation systems using SM-3 minicomputers [ru
Normalization methods in time series of platelet function assays
Van Poucke, Sven; Zhang, Zhongheng; Roest, Mark; Vukicevic, Milan; Beran, Maud; Lauwereins, Bart; Zheng, Ming-Hua; Henskens, Yvonne; Lancé, Marcus; Marcus, Abraham
2016-01-01
Abstract Platelet function can be quantitatively assessed by specific assays such as light-transmission aggregometry, multiple-electrode aggregometry measuring the response to adenosine diphosphate (ADP), arachidonic acid, collagen, and thrombin-receptor activating peptide, and viscoelastic tests such as rotational thromboelastometry (ROTEM). The task of extracting meaningful statistical and clinical information from high-dimensional data spaces in temporal multivariate clinical data represented in multivariate time series is complex. Building insightful visualizations for multivariate time series demands adequate usage of normalization techniques. In this article, various methods for data normalization (z-transformation, range transformation, proportion transformation, and interquartile range) are presented and visualized, discussing the most suitable approach for platelet function data series. Normalization was calculated per assay (test) for all time points and per time point for all tests. Interquartile range, range transformation, and z-transformation preserved the correlations (as calculated by the Spearman correlation test) when normalization was performed per assay (test) for all time points. When normalizing per time point for all tests, no correlation could be extracted from the charts, as was also the case when using all data as one dataset for normalization. PMID:27428217
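The per-assay versus per-time-point distinction described above can be sketched as follows. This is an illustrative sketch assuming NumPy is available; the data matrix is made up for demonstration and is not the article's platelet dataset.

```python
import numpy as np

def z_transform(x):
    """Z-transformation: zero mean, unit variance along the last axis."""
    return (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)

def range_transform(x):
    """Range transformation: rescale to [0, 1] along the last axis."""
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    return (x - lo) / (hi - lo)

# Rows = assays (tests), columns = time points (hypothetical values).
data = np.array([[10.0, 12.0, 15.0, 11.0],
                 [200.0, 260.0, 300.0, 240.0]])

per_assay = z_transform(data)          # normalize each assay over its time points
per_timepoint = z_transform(data.T).T  # normalize each time point over all assays
```

Normalizing per assay preserves the shape of each test's time course (and hence rank correlations between tests), whereas normalizing per time point mixes scales across tests, which matches the article's observation.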
Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics
Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.
2018-02-01
The time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.
Computational electrodynamics the finite-difference time-domain method
Taflove, Allen
2005-01-01
This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
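The core FDTD update the book covers can be illustrated with a minimal one-dimensional Yee scheme. This is a generic textbook sketch in normalized units (not code from the book), assuming NumPy; the grid size, step count, and Courant number are illustrative choices.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) update for Maxwell's equations in vacuum,
# in normalized units with Courant number S = c*dt/dx = 0.5 (stable).
nx, nt, S = 200, 300, 0.5
ez = np.zeros(nx)       # electric field at integer grid points
hy = np.zeros(nx - 1)   # magnetic field, staggered half a cell

for n in range(nt):
    hy += S * np.diff(ez)        # H update from the spatial curl of E
    ez[1:-1] += S * np.diff(hy)  # E update from the spatial curl of H
    # Soft Gaussian source injected at the grid center:
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)
```

The fixed zero values of `ez[0]` and `ez[-1]` act as perfectly conducting walls, so the injected pulse reflects at the boundaries rather than leaving the domain.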
Iterative Refinement Methods for Time-Domain Equalizer Design
Directory of Open Access Journals (Sweden)
Evans Brian L
2006-01-01
Full Text Available Commonly used time domain equalizer (TEQ design methods have been recently unified as an optimization problem involving an objective function in the form of a Rayleigh quotient. The direct generalized eigenvalue solution relies on matrix decompositions. To reduce implementation complexity, we propose an iterative refinement approach in which the TEQ length starts at two taps and increases by one tap at each iteration. Each iteration involves matrix-vector multiplications and vector additions with matrices and two-element vectors. At each iteration, the optimization of the objective function either improves or the approach terminates. The iterative refinement approach provides a range of communication performance versus implementation complexity tradeoffs for any TEQ method that fits the Rayleigh quotient framework. We apply the proposed approach to three such TEQ design methods: maximum shortening signal-to-noise ratio, minimum intersymbol interference, and minimum delay spread.
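The Rayleigh-quotient framework referenced above reduces TEQ design to a generalized symmetric eigenproblem. A minimal sketch follows, assuming NumPy/SciPy; the matrices here are random symmetric positive definite stand-ins, not an actual channel's "signal" and "wall" matrices.

```python
import numpy as np
from scipy.linalg import eigh

# Maximize the Rayleigh quotient (w^T A w)/(w^T B w) over equalizer taps w.
# The optimum is the generalized eigenvector of (A, B) with largest eigenvalue.
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8))
A = M @ M.T + 8 * np.eye(8)   # stand-in SPD "objective" matrix
N = rng.normal(size=(8, 8))
B = N @ N.T + 8 * np.eye(8)   # stand-in SPD "constraint" matrix

vals, vecs = eigh(A, B)        # generalized symmetric-definite eigenproblem
w = vecs[:, -1]                # eigenvector of the largest eigenvalue
rq = (w @ A @ w) / (w @ B @ w) # attained Rayleigh quotient, equals vals[-1]
```

The iterative refinement approach in the paper avoids this full decomposition by growing the tap vector one element per iteration; the eigen-solution above is the reference it trades against.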
Efficient methods for time-absorption (α) eigenvalue calculations
International Nuclear Information System (INIS)
Hill, T.R.
1983-01-01
The time-absorption eigenvalue (α) calculation is one of the options found in most discrete-ordinates transport codes. Several methods have been developed at Los Alamos to improve the efficiency of this calculation. Two procedures, based on coarse-mesh rebalance, to accelerate the α eigenvalue search are derived. A hybrid scheme to automatically choose the more-effective rebalance method is described. The α rebalance scheme permits some simple modifications to the iteration strategy that eliminates many unnecessary calculations required in the standard search procedure. For several fast supercritical test problems, these methods resulted in convergence with one-fifth the number of iterations required for the conventional eigenvalue search procedure
Formal methods for discrete-time dynamical systems
Belta, Calin; Aydin Gol, Ebru
2017-01-01
This book bridges fundamental gaps between control theory and formal methods. Although it focuses on discrete-time linear and piecewise affine systems, it also provides general frameworks for abstraction, analysis, and control of more general models. The book is self-contained, and while some mathematical knowledge is necessary, readers are not expected to have a background in formal methods or control theory. It rigorously defines concepts from formal methods, such as transition systems, temporal logics, model checking and synthesis. It then links these to the infinite state dynamical systems through abstractions that are intuitive and only require basic convex-analysis and control-theory terminology, which is provided in the appendix. Several examples and illustrations help readers understand and visualize the concepts introduced throughout the book.
BOX-COX REGRESSION METHOD IN TIME SCALING
Directory of Open Access Journals (Sweden)
ATİLLA GÖKTAŞ
2013-06-01
Full Text Available The Box-Cox regression method with power transformations λj, for j = 1, 2, ..., k, can be used when the dependent variable and the error term of the linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y, which yields the smallest mean square error, is discussed. The Box-Cox regression method is especially appropriate for adjusting for skewness or heteroscedasticity of the error terms in a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of the Box-Cox regression method are discussed for differentiation and differential analysis of the time scale concept.
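The maximum-likelihood choice of the power λ can be sketched with SciPy's standard Box-Cox routine. This is a generic illustration on synthetic skewed data (not the study's data), assuming SciPy is available; for lognormal input the fitted λ lands near 0, which corresponds to the log transform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.6, size=500)  # positive, right-skewed data

# With lmbda=None, stats.boxcox returns the transformed series together with
# the lambda that maximizes the log-likelihood of a normal fit.
y_t, lam = stats.boxcox(y)
```

After transformation the skewness of `y_t` is far smaller than that of `y`, which is exactly the adjustment for skewed error terms the abstract describes.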
The method of covariant symbols in curved space-time
International Nuclear Information System (INIS)
Salcedo, L.L.
2007-01-01
Diagonal matrix elements of pseudodifferential operators are needed in order to compute effective Lagrangians and currents. For this purpose the method of symbols is often used, which however lacks manifest covariance. In this work the method of covariant symbols, introduced by Pletnev and Banin, is extended to curved space-time with arbitrary gauge and coordinate connections. For the Riemannian connection we compute the covariant symbols corresponding to external fields, the covariant derivative and the Laplacian, to fourth order in a covariant derivative expansion. This allows one to obtain the covariant symbol of general operators to the same order. The procedure is illustrated by computing the diagonal matrix element of a nontrivial operator to second order. Applications of the method are discussed. (orig.)
The Application of Time-Frequency Methods to HUMS
Pryor, Anna H.; Mosher, Marianne; Lewicki, David G.; Norvig, Peter (Technical Monitor)
2001-01-01
This paper reports the study of four time-frequency transforms applied to vibration signals and presents a new metric for comparing them for fault detection. The four methods to be described and compared are the Short Time Frequency Transform (STFT), the Choi-Williams Distribution (WV-CW), the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). Vibration data of bevel gear tooth fatigue cracks, under a variety of operating load levels, are analyzed using these methods. The new metric for automatic fault detection is developed and can be produced from any systematic numerical representation of the vibration signals. This new metric reveals indications of gear damage with all of the methods on this data set. Analysis with the CWT detects mechanical problems with the test rig not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic fault detection and to develop methods of setting the threshold for the metric.
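A metric of the kind described, computed from a systematic time-frequency representation, can be sketched with the STFT. This is a hedged illustration assuming SciPy; the signal, the 200 Hz "fault" band, and the band-energy metric are invented stand-ins, not the paper's gear vibration data or its specific metric.

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Vibration-like signal: a steady mesh tone plus a transient burst near t = 1 s,
# standing in for a tooth-fault signature.
x = np.sin(2 * np.pi * 50 * t)
x[1000:1100] += 0.8 * np.sin(2 * np.pi * 200 * t[1000:1100])

f, tt, Z = signal.stft(x, fs=fs, nperseg=256)
power = np.abs(Z) ** 2
# Simple condition metric: energy in the band around 200 Hz, per time frame.
band = (f > 180) & (f < 220)
metric = power[band].sum(axis=0)
```

Thresholding such a metric over time is one way to turn any of the four transforms into an automatic fault indicator; here the metric peaks at the frames covering the burst.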
Design of time interval generator based on hybrid counting method
Energy Technology Data Exchange (ETDEWEB)
Yao, Yuan [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Zhaoqi [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Lu, Houbing [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hefei Electronic Engineering Institute, Hefei 230037 (China); Chen, Lian [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jin, Ge, E-mail: goldjin@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics, University of Science and Technology of China, Hefei, Anhui 230026 (China)
2016-10-01
Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some “off-the-shelf” TIGs can be employed, the need for a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with such fine delay steps are therefore preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.
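The arithmetic behind hybrid counting (a coarse clock counter spanning the wide range, plus fine delay taps supplying sub-clock resolution) can be sketched numerically. The 11 ps step comes from the abstract; the 4 ns (250 MHz) coarse clock is an assumed illustrative value, and this model ignores the routing-delay calibration the paper's multiplexer addresses.

```python
# Hypothetical hybrid-counting TIG model: coarse clock periods plus fine taps.
T_CLK_PS = 4000  # assumed coarse clock period, ps (250 MHz)
TAP_PS = 11      # fine delay step, ps (figure from the abstract)

def make_interval(target_ps):
    """Split a target interval into (coarse clock counts, fine tap counts)."""
    coarse = target_ps // T_CLK_PS
    fine = round((target_ps % T_CLK_PS) / TAP_PS)
    return coarse, fine

def realized_interval(coarse, fine):
    """Interval actually generated by the two counters combined."""
    return coarse * T_CLK_PS + fine * TAP_PS

coarse, fine = make_interval(123_456)
err = realized_interval(coarse, fine) - 123_456  # bounded by ~half a tap step
```

The coarse counter alone gives the wide range (many seconds at 64-bit widths); the taps alone give the resolution; combining them yields both, which is the point of the hybrid scheme.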
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...
DEFF Research Database (Denmark)
Pires, Sara Monteiro
2013-01-01
on the public health question being addressed, on the data requirements, on advantages and limitations of the method, and on the data availability of the country or region in question. Previous articles have described available methods for source attribution, but have focused only on foodborne microbiological...
An Efficient Integer Coding and Computing Method for Multiscale Time Segment
Directory of Open Access Journals (Sweden)
TONG Xiaochong
2016-12-01
Full Text Available This article focuses on the problems and status of current time segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and size ordering formed among integers to reflect the relationships among multi-scale time segments (order, inclusion/containment, intersection, etc.), achieving a unified integer coding scheme for multi-scale time. On this foundation, the research also studies computing methods for the time relationships of MTSIC, to support efficient calculation and query based on time segments, and preliminarily discusses the application and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, that transformation between it and the traditional method is straightforward, and that it achieves very high efficiency in query and calculation.
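The idea of encoding a scale hierarchy into integer order can be sketched with a heap-style binary coding, where segment `index` at scale `level` gets the code `2**level + index`. This is a generic illustration of the principle, assuming a binary subdivision; it is not the paper's exact MTSIC scheme.

```python
def encode(level, index):
    """Integer code for segment `index` at scale `level` (heap-style)."""
    return (1 << level) + index

def level_of(code):
    """Recover the scale level from the code's bit length."""
    return code.bit_length() - 1

def contains(a, b):
    """True if segment a contains (or equals) segment b at an equal/finer scale.
    Shifting b's code up to a's level walks to its ancestor in the tree."""
    la, lb = level_of(a), level_of(b)
    return lb >= la and (b >> (lb - la)) == a

day = encode(0, 0)           # the whole span
morning = encode(1, 0)       # first half
late_morning = encode(2, 1)  # second quarter, inside the first half
```

Containment and ordering thus become pure integer operations (shifts and comparisons), which is what makes this style of coding efficient for query and calculation.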
A Novel Time-Varying Friction Compensation Method for Servomechanism
Directory of Open Access Journals (Sweden)
Bin Feng
2015-01-01
Full Text Available Friction is an inevitable nonlinear phenomenon in servomechanisms. Friction errors often degrade motion and contour accuracies during reverse motion. To reduce friction errors, a novel time-varying friction compensation method is proposed to address a problem that traditional friction compensation methods handle poorly, which otherwise leads to unsatisfactory compensation performance and loss of motion and contour accuracy. In this method, a trapezoidal compensation pulse is adopted to compensate for the friction errors. A generalized regression neural network algorithm is used to generate the optimal pulse amplitude function. The optimal pulse duration function and the pulse amplitude function are established by learning the pulse characteristic parameters, and the optimal friction compensation pulse can then be generated. The feasibility of the friction compensation method was verified on a high-precision X-Y worktable. The experimental results indicated that motion and contour accuracies were greatly improved, with reduced friction errors, under different working conditions. Moreover, the overall friction compensation performance indicators decreased by more than 54%, and this friction compensation method can be implemented easily on most servomechanisms in industry.
A window-based time series feature extraction method.
Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife
2017-10-01
This study proposes a robust similarity score-based time series feature extraction method that is termed as Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has a potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
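The core of a window-based similarity feature (as in WTC or shapelet-style methods) is the best-match distance of a candidate window against each series. The sketch below is a generic minimal version assuming NumPy, with a synthetic bump template, not the study's AP/ECG data or its exact scoring.

```python
import numpy as np

def window_score(series, window):
    """Smallest Euclidean distance from `window` to any same-length
    subsequence of `series` -- one similarity feature per series."""
    w = len(window)
    dists = [np.linalg.norm(series[i:i + w] - window)
             for i in range(len(series) - w + 1)]
    return min(dists)

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, np.pi, 20))  # candidate window (a "bump")

with_bump = rng.normal(0, 0.1, 100)
with_bump[40:60] += template                  # series containing the pattern
without = rng.normal(0, 0.1, 100)             # series without it

score_pos = window_score(with_bump, template)
score_neg = window_score(without, template)
```

A classifier then uses these per-window scores as features; the naive O(n·w) scan above is where the computational-complexity gains of WTC over shapelet transforms would matter on dense datasets.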
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Larsen, Edward W.
2004-01-01
The equations of nonlinear, time-dependent radiative transfer are known to yield the equilibrium diffusion equation as the leading-order solution of an asymptotic analysis when the mean-free path and mean-free time of a photon become small. We apply this same analysis to the Fleck-Cummings, Carter-Forest, and N'kaoua Monte Carlo approximations for grey (frequency-independent) radiative transfer. Although Monte Carlo simulation usually does not require the discretizations found in deterministic transport techniques, Monte Carlo methods for radiative transfer require a time discretization due to the nonlinearities of the problem. If an asymptotic analysis of the equations used by a particular Monte Carlo method yields an accurate time-discretized version of the equilibrium diffusion equation, the method should generate accurate solutions if a time discretization is chosen that resolves temperature changes, even if the time steps are much larger than the mean-free time of a photon. This analysis is of interest because in many radiative transfer problems, it is a practical necessity to use time steps that are large compared to a mean-free time. Our asymptotic analysis shows that: (i) the N'kaoua method has the equilibrium diffusion limit, (ii) the Carter-Forest method has the equilibrium diffusion limit if the material temperature change during a time step is small, and (iii) the Fleck-Cummings method does not have the equilibrium diffusion limit. We include numerical results that verify our theoretical predictions
Creep behavior of bone cement: a method for time extrapolation using time-temperature equivalence.
Morgan, R L; Farrar, D F; Rose, J; Forster, H; Morgan, I
2003-04-01
The clinical lifetime of poly(methyl methacrylate) (PMMA) bone cement is considerably longer than the time over which it is convenient to perform creep testing. Consequently, it is desirable to be able to predict the long term creep behavior of bone cement from the results of short term testing. A simple method is described for prediction of long term creep using the principle of time-temperature equivalence in polymers. The use of the method is illustrated using a commercial acrylic bone cement. A creep strain of approximately 0.6% is predicted after 400 days under a constant flexural stress of 2 MPa. The temperature range and stress levels over which it is appropriate to perform testing are described. Finally, the effects of physical aging on the accuracy of the method are discussed and creep data from aged cement are reported.
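The time-temperature extrapolation step can be sketched with an Arrhenius-form shift factor. All numbers below are illustrative assumptions (the activation energy in particular is not a measured value for any bone cement); the paper's method would fit the shift from short-term tests at several temperatures.

```python
import numpy as np

R = 8.314   # gas constant, J/(mol K)
EA = 80e3   # assumed activation energy, J/mol (illustrative only)

def log10_shift(T, T_ref):
    """log10 of the horizontal shift factor a_T (Arrhenius form):
    creep time t measured at T corresponds to t / a_T at T_ref."""
    return (EA / (np.log(10) * R)) * (1.0 / T - 1.0 / T_ref)

# A short test at 350 K stands in for a much longer test at body
# temperature (310 K): here one day at 350 K maps to roughly 35 days.
shift = log10_shift(350.0, 310.0)        # negative: higher T speeds up creep
equivalent_days = 1.0 / (10.0 ** shift)
```

Testing at several elevated temperatures and shifting each short creep curve along the log-time axis builds the master curve from which multi-year predictions (such as the 400-day figure above) are read off.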
FREEZING AND THAWING TIME PREDICTION METHODS OF FOODS II: NUMERICAL METHODS
Directory of Open Access Journals (Sweden)
Yahya TÜLEK
1999-03-01
Full Text Available Freezing is an excellent method for the preservation of foods. If the freezing and thawing processes and frozen storage are carried out correctly, the original characteristics of the food can remain almost unchanged over extended periods of time. It is very important to determine the freezing and thawing times of foods, as they strongly influence both the quality of the food material and process productivity and economy. To develop a simple and effective mathematical model, few process parameters and physical properties should be required in the calculations, but it is difficult to achieve all of this in one prediction method. For this reason, various freezing and thawing time prediction methods have been proposed in the literature, and research studies are ongoing.
Hybrid perturbation methods based on statistical time series models
San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario
2016-04-01
In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theories, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
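The additive Holt-Winters component used as the prediction technique can be sketched with a minimal hand-rolled implementation (level, trend, and additive seasonal terms). This is a generic textbook form, not the authors' tuned propagator; the smoothing constants and the pure-seasonal demo series are illustrative.

```python
def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.1):
    """Minimal additive Holt-Winters: returns one-step-ahead forecasts
    for a series y with seasonal period m (requires len(y) >= 2*m)."""
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m ** 2
    season = [y[i] - level for i in range(m)]
    forecasts = []
    for t, obs in enumerate(y):
        s = season[t % m]
        forecasts.append(level + trend + s)          # one-step-ahead forecast
        new_level = alpha * (obs - s) + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * (obs - new_level) + (1 - gamma) * s
        level = new_level
    return forecasts

# Periodic residual series (period m=4), like the J2-driven error pattern
# a hybrid propagator models: the method locks on immediately here.
y = [float(t % 4) for t in range(40)]
fc = holt_winters_additive(y, m=4)
```

In the hybrid scheme, `y` would be the time series of differences between the analytical approximation and reference orbits, and the Holt-Winters forecast supplies the missing dynamics added back onto the analytical propagation.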
Full Waveform Inversion Using Oriented Time Migration Method
Zhang, Zhendong
2016-04-12
Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge to what we refer to as local minima of the objective function. In this thesis, I first look into the subject of full model wavenumber to analyze the root of local minima and suggest possible ways to avoid this problem. I then analyze the possibility of recovering the corresponding wavenumber components through existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high-wavenumber part of the model. The conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and model parameters (full wavenumber) and thus provides a possible cascaded inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, considering a mild lateral variation in the model, I find an analytical Frechet derivative corresponding to the new objective function, and the gradient is given by the oriented time-domain imaging method, which is independent of the background velocity. Specifically, I apply oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) to the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but also of retrieving anisotropic parameters relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I
Chen, Ye; Khashab, Niveen M.; Tao, Jing
2017-01-01
Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets
Method for Hot Real-Time Sampling of Gasification Products
Energy Technology Data Exchange (ETDEWEB)
Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-09-29
The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot-scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification primarily creates syngas (a mixture of hydrogen and carbon monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain sulfur and nitrogen species that can act as poisons for tar-reforming and synthesis catalysts. Hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS) and gas chromatographs with sulfur- and nitrogen-specific detectors, can provide real-time analysis and operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to limit further reaction chemistry. Solids from ash and char contribute to plugging and must be filtered at temperature. Experience at NREL has identified several key factors to consider when designing and installing an analytical sampling system for biomass gasification products: minimizing sampling distance, effective filtering as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.
Time-Frequency Methods for Structural Health Monitoring
Directory of Open Access Journals (Sweden)
Alexander L. Pyayt
2014-03-01
Detection of early warning signals for the imminent failure of large and complex engineered structures is a daunting challenge with many open research questions. In this paper we report on novel ways to perform Structural Health Monitoring (SHM) of flood protection systems (levees, earthen dikes and concrete dams) using sensor data. We present a robust data-driven anomaly detection method that combines time-frequency feature extraction, using wavelet analysis and phase shift, with one-sided classification techniques to identify the onset of failure anomalies in real-time sensor measurements. The methodology has been successfully tested at three operational levees. We detected a leakage in a retaining dam (Germany) and “strange” behaviour of sensors installed in a Boston levee (UK) and a Rhine levee (Germany).
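The combination the abstract describes (features extracted from windows of sensor data, plus a one-sided classifier calibrated only on normal-condition measurements) can be sketched in a few lines. The sketch below is a deliberate simplification, not the paper's method: it substitutes a rolling standard deviation for the wavelet/phase-shift features and a mean-plus-k-sigma bound for the one-sided classifier, and all signals and parameters are synthetic.

```python
import math, random

def window_std(signal, window):
    """Rolling standard deviation: a stand-in feature for the paper's
    wavelet/phase-shift features (hypothetical simplification)."""
    feats = []
    for i in range(window, len(signal) + 1):
        w = signal[i - window:i]
        mu = sum(w) / window
        feats.append(math.sqrt(sum((x - mu) ** 2 for x in w) / window))
    return feats

def one_sided_threshold(train_feats, k=6.0):
    """Calibrate an upper bound from normal-condition data only."""
    mu = sum(train_feats) / len(train_feats)
    sd = math.sqrt(sum((f - mu) ** 2 for f in train_feats) / len(train_feats))
    return mu + k * sd

random.seed(0)
normal = [random.gauss(0.0, 0.05) for _ in range(500)]                # healthy sensor noise
faulty = normal[:400] + [random.gauss(0.0, 0.5) for _ in range(100)]  # failure onset at t=400

threshold = one_sided_threshold(window_std(normal, 25))
flags = [f > threshold for f in window_std(faulty, 25)]
print(sum(flags[:100]), sum(flags[390:]))
```

Windows drawn entirely from the healthy regime stay below the calibrated bound, while windows overlapping the failure onset are flagged.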
Recommender engine for continuous-time quantum Monte Carlo methods
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
R package imputeTestbench to compare imputation methods for univariate time series
Bokde, Neeraj; Kulat, Kishore; Beck, Marcus W; Asencio-Cortés, Gualberto
2016-01-01
This paper describes the R package imputeTestbench that provides a testbench for comparing imputation methods for missing data in univariate time series. The imputeTestbench package can be used to simulate the amount and type of missing data in a complete dataset and compare filled data using different imputation methods. The user has the option to simulate missing data by removing observations completely at random or in blocks of different sizes. Several default imputation methods are includ...
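A minimal testbench in the spirit of the workflow just described, sketched in Python rather than R (the function names and the two imputation methods are illustrative, not the package's API), simulates missingness in a complete series and scores each method against the known truth:

```python
import random

def simulate_missing(series, frac, seed=1):
    """Drop a fraction of points completely at random (kept as None).
    Index 0 is always kept so carry-forward imputation is well defined."""
    rng = random.Random(seed)
    out = list(series)
    for i in rng.sample(range(1, len(series)), int(frac * len(series))):
        out[i] = None
    return out

def impute_mean(series):
    """Replace missing points with the mean of the observed points."""
    obs = [x for x in series if x is not None]
    m = sum(obs) / len(obs)
    return [m if x is None else x for x in series]

def impute_locf(series):
    """Last observation carried forward."""
    out, last = [], None
    for x in series:
        last = x if x is not None else last
        out.append(last)
    return out

def rmse(truth, filled):
    return (sum((t - f) ** 2 for t, f in zip(truth, filled)) / len(truth)) ** 0.5

truth = [0.1 * t for t in range(100)]            # a smooth upward trend
gappy = simulate_missing(truth, 0.2)
scores = {name: rmse(truth, fn(gappy))
          for name, fn in [("mean", impute_mean), ("locf", impute_locf)]}
print(scores)
```

On a trending series, carrying the last observation forward beats mean imputation, which is exactly the kind of comparison such a testbench automates.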
Cutibacterium acnes molecular typing: time to standardize the method.
Dagnelie, M-A; Khammari, A; Dréno, B; Corvec, S
2018-03-12
The Gram-positive, anaerobic/aerotolerant bacterium Cutibacterium acnes is a commensal of healthy human skin; it is subdivided into six main phylogenetic groups or phylotypes: IA1, IA2, IB, IC, II and III. To decipher how far specific subgroups of C. acnes are involved in disease physiopathology, different molecular typing methods have been developed to identify these subgroups: i.e. phylotypes, clonal complexes, and types defined by single-locus sequence typing (SLST). However, as several molecular typing methods have been developed over the last decade, it has become a difficult task to compare the results from one article to another. Based on the scientific literature, the aim of this narrative review is to propose a standardized method to perform molecular typing of C. acnes, according to the degree of resolution needed (phylotypes, clonal complexes, or SLST types). We discuss the existing different typing methods from a critical point of view, emphasizing their advantages and drawbacks, and we identify the most frequently used methods. We propose a consensus algorithm according to the needed phylogeny resolution level. We first propose to use multiplex PCR for phylotype identification, MLST9 for clonal complex determination, and SLST for phylogeny investigation including numerous isolates. There is an obvious need to create a consensus about molecular typing methods for C. acnes. This standardization will facilitate the comparison of results between one article and another, and also the interpretation of clinical data. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
DEFF Research Database (Denmark)
Garde, A H; Hansen, Åse Marie; Kristiansen, J
2003-01-01
The aims of this study were to elucidate to what extent storage and repeated freezing and thawing influenced the concentration of creatinine in urine samples and to evaluate the method for determination of creatinine in urine. The creatinine method was based on the well-known Jaffe's reaction...... and measured on a COBAS Mira autoanalyser from Roche. The main findings were that samples for analysis of creatinine should be kept at a temperature of -20 degrees C or lower and frozen and thawed only once. The limit of detection, determined as 3 x SD of 20 determinations of a sample at a low concentration (6...
Time Delay Systems Methods, Applications and New Trends
Vyhlídal, Tomáš; Niculescu, Silviu-Iulian; Pepe, Pierdomenico
2012-01-01
This volume is concerned with the control and dynamics of time delay systems; a research field with at least six-decade long history that has been very active especially in the past two decades. In parallel to the new challenges emerging from engineering, physics, mathematics, and economics, the volume covers several new directions including topology induced stability, large-scale interconnected systems, roles of networks in stability, and new trends in predictor-based control and consensus dynamics. The associated applications/problems are described by highly complex models, and require solving inverse problems as well as the development of new theories, mathematical tools, numerically-tractable algorithms for real-time control. The volume, which is targeted to present these developments in this rapidly evolving field, captures a careful selection of the most recent papers contributed by experts and collected under five parts: (i) Methodology: From Retarded to Neutral Continuous Delay Models, (ii) Systems, S...
Guillemin, Ernst A
2013-01-01
An eminent electrical engineer and authority on linear system theory presents this advanced treatise, which approaches the subject from the viewpoint of classical dynamics and covers Fourier methods. This volume will assist upper-level undergraduates and graduate students in moving from introductory courses toward an understanding of advanced network synthesis. 1963 edition.
Time-of-flight cameras principles, methods and applications
Hansard, Miles; Choi, Ouk; Horaud, Radu
2012-01-01
Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture and face detection. It is already possible to use multiple TOF cameras in order to increase the scene coverage, and to combine the depth data with images from several colour cameras.
Crandall, David Lynn
2011-08-16
Sighting optics include a front sight and a rear sight positioned in a spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus for a user images of the front sight and the target.
Chen, Ye
2017-01-26
Composition comprising at least one graphene material and at least one metal. The metal can be in the form of nanoparticles as well as microflakes, including single crystal microflakes. The metal can be intercalated in the graphene sheets. The composition has high conductivity and flexibility. The composition can be made by a one-pot synthesis in which a graphene material precursor is converted to the graphene material, and the metal precursor is converted to the metal. A reducing solvent or dispersant such as NMP can be used. Devices made from the composition include a pressure sensor which has high sensitivity. Two two-dimensional materials can be combined to form a hybrid material.
DEFF Research Database (Denmark)
Sloth Møller, Ditte; Knap, Marianne Marquard; Nyeng, Tine Bisballe
2017-01-01
PTVσ yields the smallest volumes but does not ensure coverage of the tumor during the full respiratory motion due to tumor deformation. Incorporating the respiratory motion in the delineation (PTVdel) takes the entire respiratory cycle, including deformation, into account, but at the cost of larger...
Imaging Method Based on Time Reversal Channel Compensation
Directory of Open Access Journals (Sweden)
Bing Li
2015-01-01
The conventional time reversal imaging (TRI) method builds its imaging function from the maximal value of the signal amplitude. In this circumstance, some remote targets are missed (the near-far problem) or low resolution is obtained in lossy and/or dispersive media, and too many transceivers are employed to locate targets, which increases the complexity and cost of the system. To solve these problems, a novel TRI algorithm is presented in this paper. To achieve high resolution, the signal amplitude corresponding to the focal time observed at the target position is used to reconstruct the target image. To deal with the near-far problem and suppress spurious images, a channel compensation function (CCF) combining cross-correlation and amplitude compensation is introduced. Moreover, the complexity and cost of the system are reduced by employing only five transceivers to detect four targets, a number close to that of the transceivers. To demonstrate the practicability of the proposed analytical framework, numerical experiments are carried out in both nondispersive-lossless (NDL) media and dispersive-conductive (DPC) media. Results show that the performance of the proposed method is superior to that of the conventional TRI algorithm even with few echo signals.
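The time-reversal focusing principle the TRI method builds on can be illustrated with a toy 1D example: each recorded trace is evaluated at the back-propagation travel time for a candidate position, and the terms add coherently only at the true source. This shows only the focusing principle, not the paper's CCF algorithm; the geometry, wavelet, and grid are invented for illustration.

```python
import math

c = 1.0                                # wave speed (arbitrary units)
receivers = [0.0, 3.0, 7.0, 10.0]      # transceiver positions (hypothetical)
source = 4.2                           # true target position, to be recovered

def pulse(t):
    """Wavelet emitted at t = 1 (a Gaussian, chosen for illustration)."""
    return math.exp(-40.0 * (t - 1.0) ** 2)

# Recorded trace at each receiver: the pulse delayed by the travel time.
traces = [lambda t, d=abs(r - source): pulse(t - d / c) for r in receivers]

def image(x):
    """Back-propagation with known emission time: evaluate each trace at
    the travel time to candidate position x and sum. All terms peak
    simultaneously only when x is the true source."""
    return sum(tr(1.0 + abs(r - x) / c) for tr, r in zip(traces, receivers))

grid = [i * 0.01 for i in range(1001)]
best = max(grid, key=image)
print(best)
```

The imaging function peaks at the grid point nearest the true source position.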
Energy Technology Data Exchange (ETDEWEB)
Beal, D. [BA-PIRC, Cocoa, FL (United States); McIlvaine, J. [BA-PIRC, Cocoa, FL (United States); Fonorow, K. [BA-PIRC, Cocoa, FL (United States); Martin, E. [BA-PIRC, Cocoa, FL (United States)
2011-11-01
This document illustrates guidelines for the efficient installation of interior duct systems in new housing, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces.
Directory of Open Access Journals (Sweden)
H. O. Bakodah
2013-01-01
A method of lines approach to the numerical solution of nonlinear wave equations typified by the regularized long wave (RLW) equation is presented. The method uses a finite-difference discretization in space; the resulting system is solved by applying a fourth-order Runge-Kutta time discretization. Using von Neumann stability analysis, it is shown that the proposed method is marginally stable. To test its accuracy, numerical experiments on test problems are presented, including solitary wave motion, two-solitary-wave interaction, and the temporal evolution of a Maxwellian initial pulse. The accuracy of the present method is assessed with error norms and with the conservation properties of mass, energy, and momentum under the RLW equation.
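The method-of-lines structure (discretize space with finite differences, then integrate the resulting ODE system with fourth-order Runge-Kutta) can be sketched on a simpler model problem. The sketch below uses linear advection with periodic boundaries instead of the RLW equation, and checks the discrete mass conservation the abstract mentions; the grid parameters are arbitrary.

```python
import math

def rhs(u, dx):
    """Semi-discrete right-hand side of u_t = -u_x with periodic
    central differences (the spatial half of the method of lines)."""
    n = len(u)
    return [-(u[(i + 1) % n] - u[(i - 1) % n]) / (2 * dx) for i in range(n)]

def rk4_step(u, dt, dx):
    """One classical fourth-order Runge-Kutta step for the ODE system."""
    k1 = rhs(u, dx)
    k2 = rhs([x + 0.5 * dt * k for x, k in zip(u, k1)], dx)
    k3 = rhs([x + 0.5 * dt * k for x, k in zip(u, k2)], dx)
    k4 = rhs([x + dt * k for x, k in zip(u, k3)], dx)
    return [x + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(u, k1, k2, k3, k4)]

n, dx, dt = 100, 0.1, 0.02                                 # CFL number dt/dx = 0.2
u = [math.exp(-((i * dx - 5.0) ** 2)) for i in range(n)]   # Maxwellian initial pulse
mass0 = sum(u) * dx                                        # discrete mass
for _ in range(200):
    u = rk4_step(u, dt, dx)
drift = abs(sum(u) * dx - mass0)
print(drift)
```

Because the periodic central difference sums to zero, the discrete mass is conserved to rounding error, mirroring the conservation checks used in the paper.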
Radiographic apparatus and method for monitoring film exposure time
International Nuclear Information System (INIS)
Vatne, R.S.; Woodmansee, W.E.
1981-01-01
In connection with radiographic inspection of structural and industrial materials, method and apparatus are disclosed for automatically determining and displaying the time required to expose a radiographic film positioned to receive radiation passed by a test specimen, so that the finished film is exposed to an optimum blackening (density) for maximum film contrast. A plot is made of the variations in a total exposure parameter (representing the product of detected radiation rate and time needed to cause optimum film blackening) as a function of the voltage level applied to an X-ray tube. An electronic function generator storing the shape of this plot is incorporated into an exposure monitoring apparatus, such that for a selected tube voltage setting, the function generator produces an electrical analog signal of the corresponding exposure parameter. During the exposure, another signal is produced representing the rate of radiation as monitored by a diode detector positioned so as to receive the same radiation that is incident on the film. The signal representing the detected radiation rate is divided, by an electrical divider circuit into the signal representing total exposure, and the resulting quotient is an electrical signal representing the required exposure time. (author)
Directory of Open Access Journals (Sweden)
A. Becker
2007-06-01
In this paper a hybrid method combining the Time-Domain Method of Moments (TD-MoM), the Time-Domain Uniform Theory of Diffraction (TD-UTD) and the Finite-Difference Time-Domain (FDTD) method is presented. When applying this hybrid method, thin-wire antennas are modeled with the TD-MoM, inhomogeneous bodies with the FDTD and large perfectly conducting plates with the TD-UTD. All inhomogeneous bodies are enclosed in a so-called FDTD volume, and the thin-wire antennas can be embedded in this volume or lie outside it. The latter avoids simulating white space between antennas and inhomogeneous bodies. If the antennas are positioned inside the FDTD volume, their discretization does not need to agree with the FDTD grid. By using the TD-UTD, large perfectly conducting plates can be treated efficiently in the solution procedure. Thus this hybrid method allows time-domain simulations of problems including very different classes of objects, applying the most appropriate numerical technique to each object.
Directory of Open Access Journals (Sweden)
Thomas Acher
2014-12-01
A simulation model for 3D polydisperse bubble column flows in an Eulerian/Eulerian framework is presented. A computationally efficient and numerically stable algorithm is created by making use of quadrature method of moments (QMOM) functionalities, in conjunction with appropriate breakup and coalescence models. To account for size-dependent bubble motion, the constituent moments of the bubble size distribution function are transported with individual velocities. Validation of the simulation results against the experimental and numerical data of Hansen [1] shows the capability of the present model to accurately predict complex gas-liquid flows.
International Nuclear Information System (INIS)
Frenzel, H.; Scherpenberg, H. van; Zimmer, R.
1980-01-01
360 hard-rock miners were examined for respiratory health, working conditions, smoking habits, and in part for chromosome aberrations (115 persons, 100 cells each). Long-time exposure to 1 to 4 WL Rn daughters and/or occasional >= 1000 pCi Rn/l air increased the rates of S2-type aberrations (x 3.4). From dose estimations, bronchial radiation conditions appear as a bronchitis cofactor; the lymphocyte dose, but not the lymph node dose, fairly fits the S2 rate after 20 years' exposure. (orig.) [de
Tanioka, Yuichiro; Miranda, Greyving Jose Arguello; Gusman, Aditya Riadi; Fujii, Yushiro
2017-08-01
Large earthquakes, such as the Mw 7.7 1992 Nicaragua earthquake, have occurred off the Pacific coasts of El Salvador and Nicaragua in Central America and have generated destructive tsunamis along these coasts. It is necessary to determine appropriate fault models before large tsunamis hit the coast. In this study, fault parameters were first estimated from W-phase inversion, and then an appropriate fault model was determined from the fault parameters and scaling relationships with a depth-dependent rigidity. The method was tested on four large earthquakes that occurred off El Salvador and Nicaragua: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3). Tsunami numerical simulations were carried out from the determined fault models. We found that the observed tsunami heights, run-up heights, and inundation areas were reasonably well explained by the computed ones. Our method should therefore work for tsunami early warning, estimating a fault model that reproduces tsunami heights near the coasts of El Salvador and Nicaragua due to large earthquakes in the subduction zone.
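The scaling step (going from an inverted magnitude to fault slip with a rigidity that varies with depth) rests on the standard moment relations Mw = (log10 M0 - 9.1)/1.5 and M0 = mu * A * D. A sketch with hypothetical fault dimensions and rigidity values, not the paper's actual scaling relationships:

```python
def moment_from_mw(mw):
    """Seismic moment M0 in N*m from moment magnitude (standard definition)."""
    return 10 ** (1.5 * mw + 9.1)

def average_slip(m0, length_km, width_km, rigidity_pa):
    """Average slip D in metres from M0 = mu * A * D."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    return m0 / (rigidity_pa * area_m2)

m0 = moment_from_mw(7.7)                          # magnitude of the 1992 Nicaragua event
slip_crustal = average_slip(m0, 200, 50, 30e9)    # typical crustal rigidity (assumed)
slip_shallow = average_slip(m0, 200, 50, 10e9)    # reduced shallow rigidity (assumed)
print(slip_crustal, slip_shallow)
```

With a rigidity three times lower near the trench, the same moment requires three times more slip on the same fault, which is why a depth-dependent rigidity matters for tsunami forecasting of shallow events.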
2018-01-30
home range maintenance or attraction to or avoidance of landscape features, including roads (Morales et al. 2004, McClintock et al. 2012). For example...radiotelemetry and extensive road survey data are used to generate the first density estimates available for the species. The results show that southern...secretive snakes that combines behavioral observations of snake road crossing speed, systematic road survey data, and simulations of spatial
Moor, C C; Wapenaar, M; Miedema, J R; Geelhoed, J J M; Chandoesing, P P; Wijsenbeek, M S
2018-05-29
In idiopathic pulmonary fibrosis (IPF), home monitoring experiences are limited, not yet available in real time, nor implemented in daily care. We evaluated the feasibility and potential barriers of a new home monitoring program with real-time wireless home spirometry in IPF. Ten patients with IPF were asked to test this home monitoring program, including daily home spirometry, for four weeks. Measurements of home and hospital spirometry showed good agreement. All patients considered real-time wireless spirometry useful and highly feasible. Both patients and researchers suggested relatively easy solutions for the identified potential barriers regarding real-time home monitoring in IPF.
Time-domain Green's Function Method for three-dimensional nonlinear subsonic flows
Tseng, K.; Morino, L.
1978-01-01
The Green's Function Method for linearized 3D unsteady potential flow (embedded in the computer code SOUSSA P) is extended to include the time-domain analysis as well as the nonlinear term retained in the transonic small disturbance equation. The differential-delay equations in time, as obtained by applying the Green's Function Method (in a generalized sense) and the finite-element technique to the transonic equation, are solved directly in the time domain. Comparisons are made with both linearized frequency-domain calculations and existing nonlinear results.
Two methods of space-time energy densification
International Nuclear Information System (INIS)
Sahlin, R.L.
1976-01-01
With a view to the goal of net energy production from a DT microexplosion, we study two ideas (methods) through which (separately or in combination) energy may be "concentrated" into a small volume and short period of time, the so-called space-time energy densification or compression. We first discuss the advantages and disadvantages of lasers and relativistic electron-beam (E-beam) machines as the sources of such energy and identify the amplification of laser pulses as a key factor in energy compression. The pulse length of present relativistic E-beam machines is the most serious limitation of this pulsed-power source. The first energy-compression idea we discuss is the reasonably efficient production of short-duration, high-current relativistic electron pulses by the self-interruption and restrike of a current in a plasma pinch due to the rapid onset of strong turbulence. A 1-MJ plasma focus based on this method is nearing completion at this Laboratory. The second energy-compression idea is based on laser-pulse production through the parametric amplification of a self-similar or solitary wave pulse, for which analogs can be found in other wave processes. Specifically, it is a proposal for parametric amplification of a solitary, transverse magnetic pulse in a coaxial cavity with a Bennett dielectric rod as an inner coax. Amplifiers of this type can be driven by the pulsed power from a relativistic E-beam machine. If the end of the inner dielectric coax is made of LiDT or another fusionable material, the amplified pulse can directly drive a fusion reaction; there would be no need to switch the pulse out of the system toward a remote target
Two methods of space-time energy densification
International Nuclear Information System (INIS)
Sahlin, H.L.
1975-01-01
With a view to the goal of net energy production from a DT microexplosion, two ideas (methods) are studied through which (separately or in combination) energy may be "concentrated" into a small volume and short period of time, the so-called space-time energy densification or compression. The advantages and disadvantages of lasers and relativistic electron-beam (E-beam) machines as the sources of such energy are studied, and the amplification of laser pulses is discussed as a key factor in energy compression. The pulse length of present relativistic E-beam machines is the most serious limitation of this pulsed-power source. The first energy-compression idea discussed is the reasonably efficient production of short-duration, high-current relativistic electron pulses by the self-interruption and restrike of a current in a plasma pinch due to the rapid onset of strong turbulence. A 1-MJ plasma focus based on this method is nearing completion at this Laboratory. The second energy-compression idea is based on laser-pulse production through the parametric amplification of a self-similar or solitary wave pulse, for which analogs can be found in other wave processes. Specifically, it is a proposal for parametric amplification of a solitary, transverse magnetic pulse in a coaxial cavity with a Bennett dielectric rod as an inner coax. Amplifiers of this type can be driven by the pulsed power from a relativistic E-beam machine. If the end of the inner dielectric coax is made of LiDT or another fusionable material, the amplified pulse can directly drive a fusion reaction; there would be no need to switch the pulse out of the system toward a remote target. (auth)
A new method of detection for a positron emission tomograph using a time of flight method
International Nuclear Information System (INIS)
Gresset, Christian.
1981-05-01
The first chapter shows the advantages of short-lived positron emitters (β+) and the essential characteristics of the positron tomographs realized to date. The second chapter presents the interest of an original image reconstruction technique: the time-of-flight technique. The third chapter describes the characterization methods set up to verify the feasibility of cesium fluoride in tomography. Chapter four presents the results obtained by these methods. Cesium fluoride currently appears to be the best positron-emission detector material for use with the time-of-flight technique. The hypotheses made on the possible performance of such machines are validated by phantom experiments. Bismuth germanate detectors nevertheless retain all their interest for skull tomography [fr
International Nuclear Information System (INIS)
Gupta, B.L.
2000-01-01
Our laboratory maintains standards for high doses in India. The glutamine powder dosimeter (spectrophotometric readout) is used for this purpose. The present studies show that 20 mg of unirradiated/irradiated glutamine dissolved in 10 ml of freshly prepared aerated aqueous acidic FX solution, containing 2 × 10⁻³ mol dm⁻³ ferrous ammonium sulphate and 10⁻⁴ mol dm⁻³ xylenol orange in 0.033 mol dm⁻³ sulphuric acid, is suitable for dosimetry in the dose range of 0.1-100 kGy. Normally no corrections are required for post-irradiation fading of the irradiated glutamine. The response of the glutamine dosimeter is independent of irradiation temperature in the range of about 23-30 deg. C; at other temperatures a correction is necessary. Dose intercomparison results for photon, electron and bremsstrahlung radiations show that glutamine can be used as a reference standard dosimeter. The use of flat polyethylene bags containing glutamine powder has proved very successful for electron dosimetry over a wide range of energies. Several other amino acids, such as alanine, valine and threonine, can also be used to cover a wide range of doses using the spectrophotometric readout method. (author)
International Nuclear Information System (INIS)
Smith, D.R.; Luna, R.E.; Taylor, J.M.
1978-01-01
Two studies were completed which evaluate the environmental impact of radioactive material transport. The first was a generic study which evaluated all radioactive materials and all transportation modes; the second addressed spent fuel and fuel-cycle wastes shipped by truck, rail and barge. A portion of each of those studies, dealing with the change in impact resulting from alternative shipping methods, is presented in this paper. Alternatives evaluated in each study were mode shifts, operational constraints and, in the generic case, changes in material properties and package capabilities. Data for the analyses were obtained from a shipper survey and from projections of shipments that would occur in an equilibrium fuel cycle supporting one hundred 1000-MW(e) reactors. Population exposures were deduced from point-source radiation formulae using separation distances derived for scenarios appropriate to each shipping mode and to each exposed population group. Fourteen alternatives were investigated for the generic impact case. All showed relatively minor changes in the overall radiological impact. Since the impact of radioactive material transport is estimated at fewer than 3 latent cancer fatalities (LCF) per shipment year (compared to some 300,000 yearly cancer fatalities, or 5000 LCFs calculated for background radiation using the same radiological effects model), a 15% decrease caused by shifting from passenger air to cargo air is a relatively small effect. Eleven alternatives were considered for the fuel cycle/special train study, but only one produced a reduction in total special train baseline LCFs (0.047) that was larger than 5%
International Nuclear Information System (INIS)
Kim, Yun Goo; Seong, Poong Hyun
2012-01-01
The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays the procedure on the computer screen in the form of a flow chart, together with plant operating information and procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error would lead to operator misjudgement and inadequate control. In this paper we present a model of the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri Nets when applied to an emergency operating computerized procedure. A program for converting a Computerized Procedure (CP) to STPN has also been developed. These formal verification and validation methods for CPs with STPN increase the safety of a nuclear power plant and provide the digital quality assurance means that are needed as the role and function of the CPS increase.
Dobie, Robert A; Wojcik, Nancy C
2015-07-13
The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents two options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as two options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers.
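The regression step described above (fitting median thresholds to a simple polynomial in age, then reading off corrections relative to a reference age) can be sketched with an ordinary quadratic least-squares fit. The threshold values below are synthetic and merely assumed to follow a quadratic; they are not NHANES data.

```python
def polyfit2(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]                 # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    coefs = []
    for col in range(3):                       # Cramer's rule, column by column
        m = [row[:] for row in A]
        for r in range(3):
            m[r][col] = t[r]
        coefs.append(det3(m) / d)
    return coefs                               # a, b, c

# Hypothetical median thresholds (dB HL) at ages 20..75, following a known quadratic
ages = list(range(20, 76, 5))
thresholds = [0.005 * a * a - 0.1 * a + 2.0 for a in ages]
a, b, c = polyfit2(ages, thresholds)

def age_correction(age, ref=20):
    """Correction = predicted threshold at `age` minus threshold at `ref`."""
    def f(x):
        return a + b * x + c * x * x
    return f(age) - f(ref)

print(round(age_correction(60), 2))
```

Because the synthetic data lie exactly on a quadratic, the fit recovers it and the derived correction at age 60 matches the generating polynomial.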
International Nuclear Information System (INIS)
Guardiola, Carlos; Climent, Héctor; Pla, Benjamín; Reig, Alberto
2017-01-01
Highlights: • Optimal Control is applied for heat release shaping in internal combustion engines. • Optimal Control allows the engine performance to be assessed against a realistic reference. • The proposed method gives a target heat release law to define control strategies. - Abstract: The present paper studies the optimal heat release law in a Diesel engine to maximise the indicated efficiency subject to different constraints, namely: maximum cylinder pressure, maximum cylinder pressure derivative, and NO_x emission restrictions. With this objective, a simple but representative model of the combustion process has been implemented. The model consists of a 0D energy balance model aimed at providing the pressure and temperature evolutions in the high pressure loop of the engine thermodynamic cycle from the gas conditions at intake valve closing and the heat release law. The gas pressure and temperature evolutions allow computation of the engine efficiency and NO_x emissions. The comparison between model and experimental results shows that despite the model's simplicity, it is able to reproduce the engine efficiency and NO_x emissions. After the model identification and validation, the optimal control problem is posed and solved by means of Dynamic Programming (DP). Also, if only pressure constraints are considered, the paper proposes a solution that reduces the computation cost of the DP strategy by two orders of magnitude for the case being analysed. The solution provides not only a target heat release law to define injection strategies but also a more realistic maximum efficiency boundary than the ideal thermodynamic cycles usually employed to estimate the maximum engine efficiency.
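The DP formulation can be illustrated with a toy resource-allocation analogue. Everything below is hypothetical: fuel is released in discrete units over a few crank-angle stages, a per-stage cap stands in for the pressure-derivative constraint, and the stage reward is a made-up proxy for indicated work; it is not the paper's engine model.

```python
# Toy dynamic-programming sketch of heat-release shaping under a constraint.
from functools import lru_cache

STAGES = 5            # crank-angle intervals (hypothetical)
CHOICES = [0, 1, 2]   # heat-release units per stage
CAP = 2               # per-stage cap, mimicking a pressure-derivative limit
TOTAL = 5             # total units of fuel to burn

def reward(stage, units):
    """Hypothetical indicated-work reward: earlier release is worth more."""
    return units * (STAGES - stage)

@lru_cache(maxsize=None)
def best(stage, remaining):
    """Maximum total reward from `stage` on, with `remaining` fuel to burn."""
    if stage == STAGES:
        return 0.0 if remaining == 0 else float("-inf")  # all fuel must burn
    return max(reward(stage, u) + best(stage + 1, remaining - u)
               for u in CHOICES if u <= min(CAP, remaining))

optimum = best(0, TOTAL)
```

Backward recursion over a small state space is the same mechanism DP uses on the real problem; the paper's contribution includes reducing that cost dramatically when only pressure constraints are active.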
A prediction method based on wavelet transform and multiple models fusion for chaotic time series
International Nuclear Information System (INIS)
Zhongda, Tian; Shujiang, Li; Yanhong, Wang; Yi, Sha
2017-01-01
In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multiple-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, a least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm utilized to optimize the predictive model parameters. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictions of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is less than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass series. The simulation results show that the prediction method in this paper achieves better prediction accuracy.
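The Gauss–Markov fusion step admits a compact sketch: for unbiased predictors, the minimum-variance combination is the inverse-variance weighted average, and the fused variance is below that of every input. The predictions and error variances below are invented.

```python
# Minimal sketch of Gauss-Markov (minimum-variance) fusion of unbiased
# predictors, as used to combine the component forecasts; values are made up.

def gauss_markov_fuse(preds, variances):
    """Inverse-variance weighted fusion; returns fused value and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * p for w, p in zip(weights, preds)) / total
    return fused, 1.0 / total

# Two hypothetical model outputs with different error variances:
fused, var = gauss_markov_fuse([1.2, 0.8], [0.04, 0.01])
```

The fused variance 1/Σ(1/σ²) is necessarily smaller than the smallest input variance, which is exactly the accuracy-improvement claim in the abstract.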
Barkaoui, Abdelwahed; Chamekh, Abdessalem; Merzouki, Tarek; Hambli, Ridha; Mkaddem, Ali
2014-03-01
The complexity and heterogeneity of bone tissue require multiscale modeling to understand its mechanical behavior and its remodeling mechanisms. In this paper, a novel multiscale hierarchical approach including the microfibril scale, based on hybrid neural network (NN) computation and homogenization equations, was developed to link nanoscopic and macroscopic scales to estimate the elastic properties of human cortical bone. The multiscale model is divided into three main phases: (i) in step 0, the elastic constants of collagen-water and mineral-water composites are calculated by averaging the upper and lower Hill bounds; (ii) in step 1, the elastic properties of the collagen microfibril are computed using a trained NN simulation. Finite element calculation is performed at nanoscopic levels to provide a database to train an in-house NN program; and (iii) in steps 2-10, from fibril to continuum cortical bone tissue, homogenization equations are used to perform the computation at the higher scales. The NN outputs (elastic properties of the microfibril) are used as inputs for the homogenization computation to determine the properties of the mineralized collagen fibril. The mechanical and geometrical properties of bone constituents (mineral, collagen, and cross-links) as well as the porosity were taken into consideration. This paper aims to predict analytically the effective elastic constants of cortical bone by modeling its elastic response at these different scales, ranging from the nanostructural to mesostructural levels. The outputs of the lowest scale were well integrated with the higher levels and serve as inputs for the modeling at the next higher scale. Good agreement was obtained between our predicted results and literature data. Copyright © 2013 John Wiley & Sons, Ltd.
Parker, Sherwood
1995-01-01
A filmless X-ray imaging system includes at least one X-ray source, upper and lower collimators, and a solid-state detector array, and can provide three-dimensional imaging capability. The X-ray source plane is distance z.sub.1 above the upper collimator plane, distance z.sub.2 above the lower collimator plane, and distance z.sub.3 above the plane of the detector array. The object to be X-rayed is located between the upper and lower collimator planes. The upper and lower collimators and the detector array are moved horizontally with scanning velocities v.sub.1, v.sub.2, v.sub.3 proportional to z.sub.1, z.sub.2 and z.sub.3, respectively. The pattern and size of the openings in the collimators, and the spacing between detector positions, are proportioned such that similar triangles are always defined relative to the location of the X-ray source. X-rays that pass through openings in the upper collimator will always pass through corresponding and similar openings in the lower collimator, and thence to a corresponding detector in the underlying detector array. Substantially 100% of the X-rays irradiating the object (and neither absorbed nor scattered) pass through the lower collimator openings and are detected, which promotes enhanced sensitivity. A computer system coordinates repositioning of the collimators and detector array, and X-ray source locations. The computer system can store detector array output, and can associate a known X-ray source location with detector array output data, to provide three-dimensional imaging. Detector output may be viewed instantly, stored digitally, and/or transmitted electronically for image viewing at a remote site.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Directory of Open Access Journals (Sweden)
Chaoyang Shi
2017-12-01
Full Text Available Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
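The Dempster–Shafer fusion step can be sketched with toy mass functions; the hypotheses and masses below are invented, not real detector evidence.

```python
# Hedged sketch of Dempster's rule of combination, the evidence-fusion step
# used to merge link- and path-level travel time information.

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Toy hypotheses: travel time is "short" or "long".
short, long_, either = frozenset({"s"}), frozenset({"l"}), frozenset({"s", "l"})
m_point = {short: 0.6, either: 0.4}                 # point-detector evidence
m_interval = {short: 0.5, long_: 0.3, either: 0.2}  # interval-detector evidence
fused = dempster_combine(m_point, m_interval)
```

Agreeing evidence reinforces the shared hypothesis while conflicting mass is renormalised away, which is how the two detector types are reconciled.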
Accessible methods for the dynamic time-scale decomposition of biochemical systems.
Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula
2009-11-01
The growing complexity of biochemical models calls for means to rationally dissect the networks into meaningful and rather independent subnetworks. Such a decomposition should ensure an understanding of the system without any heuristics employed. Important for the success of such an approach is its accessibility and the clarity of the presentation of the results. In order to achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method as well as the more classical approach have been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results including 3D-visualization. The methods are included in COPASI which is free for academic use and available at www.copasi.org. irina.surovtsova@bioquant.uni-heidelberg.de Supplementary data are available at Bioinformatics online.
Directory of Open Access Journals (Sweden)
Stefan eSchinkel
2012-11-01
Full Text Available Complex networks provide an excellent framework for studying the function of human brain activity. Yet estimating functional networks from measured signals is not trivial, especially if the data is non-stationary and noisy, as is often the case with physiological recordings. In this article we propose a method that uses the local rank structure of the data to define functional links in terms of identical rank structures. The method yields temporal sequences of networks, which permits tracing the evolution of the functional connectivity during the time course of the observation. We demonstrate the potential of this approach with model data as well as with experimental data from an electrophysiological study on language processing.
Directory of Open Access Journals (Sweden)
Людмила Мар’янівна Іщенко
2016-11-01
Full Text Available Diagnostic tests for species identification of beef, pork and chicken by the real-time polymerase chain reaction method were evaluated. Meat products, including heat-treated products and animal feed, were used for the research. Inconsistencies were revealed between the product composition declared by the manufacturer and the actual composition of some meat products
Interval-Censored Time-to-Event Data Methods and Applications
Chen, Ding-Geng
2012-01-01
Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, biopharmaceutical industries, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data
DEFF Research Database (Denmark)
Nielsen, J. Rasmus; Kristensen, Kasper; Lewy, Peter
2014-01-01
Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes...
Scoping reviews: time for clarity in definition, methods, and reporting.
Colquhoun, Heather L; Levac, Danielle; O'Brien, Kelly K; Straus, Sharon; Tricco, Andrea C; Perrier, Laure; Kastner, Monika; Moher, David
2014-12-01
The scoping review has become increasingly popular as a form of knowledge synthesis. However, a lack of consensus on scoping review terminology, definition, methodology, and reporting limits the potential of this form of synthesis. In this article, we propose recommendations to further advance the field of scoping review methodology. We summarize current understanding of scoping review publication rates, terms, definitions, and methods. We propose three recommendations for clarity in term, definition and methodology. We recommend adopting the terms "scoping review" or "scoping study" and the use of a proposed definition. Until such time as further guidance is developed, we recommend the use of the methodological steps outlined in the Arksey and O'Malley framework and further enhanced by Levac et al. The development of reporting guidance for the conduct and reporting of scoping reviews is underway. Consistency in the proposed domains and methodologies of scoping reviews, along with the development of reporting guidance, will facilitate methodological advancement, reduce confusion, facilitate collaboration and improve knowledge translation of scoping review findings. Copyright © 2014 Elsevier Inc. All rights reserved.
2010-07-01
... 41 Public Contracts and Property Management 4 2010-07-01 false Does the 2-year time period in § 302-2.8 include time that I cannot travel and/or transport my household effects due to shipping restrictions to or from...
Vidal-Acuña, M Reyes; Ruiz-Pérez de Pipaón, Maite; Torres-Sánchez, María José; Aznar, Javier
2017-12-08
An expanded library of matrix assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) has been constructed using the spectra generated from 42 clinical isolates and 11 reference strains, including 23 different species from 8 sections (16 cryptic plus 7 noncryptic species). Out of a total of 379 strains of Aspergillus isolated from clinical samples, 179 strains were selected to be identified by sequencing of beta-tubulin or calmodulin genes. Protein spectra of 53 strains, cultured in liquid medium, were used to construct an in-house reference database in the MALDI-TOF MS. One hundred ninety strains (179 clinical isolates previously identified by sequencing and the 11 reference strains), cultured on solid medium, were blindly analyzed by the MALDI-TOF MS technology to validate the generated in-house reference database. A 100% correlation was obtained between the two identification methods, gene sequencing and MALDI-TOF MS, with no discordant identifications. The HUVR database provided species-level identification (score of ≥2.0) in 165 isolates (86.84%) and for the remaining 25 (13.16%) a genus-level identification (score between 1.7 and 2.0) was obtained. The routine MALDI-TOF MS analysis with the new database was then challenged with 200 Aspergillus clinical isolates grown on solid medium in a prospective evaluation. A species identification was obtained in 191 strains (95.5%), and only nine strains (4.5%) could not be identified at the species level. Among the 200 strains, A. tubingensis was the only cryptic species identified. We demonstrated the feasibility and usefulness of the new HUVR database in MALDI-TOF MS by the use of a standardized procedure for the identification of Aspergillus clinical isolates, including cryptic species, grown either on solid or liquid media. © The Author 2017. Published by Oxford University Press on behalf of The International Society for Human and Animal Mycology. All rights reserved. For
Barik, Mayadhar; Bajpai, Minu; Patnaik, Santosh; Mishra, Pravash; Behera, Priyamadhaba; Dwivedi, Sada Nanda
2016-01-01
Cryopreservation is best suited to thin samples or small clumps of cells that can be cooled quickly without loss. Our main objective is to establish and formulate an innovative method and protocol for cryopreservation as a gold standard for clinical use in laboratory practice and treatment. Knowledge of the usefulness of cryopreservation in clinical practice is essential to carry forward clinical practice and research. We compare different methods of cryopreservation (across two dozen cell types) and, at the same time, compare embryo and oocyte freezing in terms of fertilization rate according to the international standard protocol. The combination of cryoprotectants and regimes of rapid cooling and rinsing during warming often allows successful cryopreservation of biological materials, particularly cell suspensions or thin tissue samples. Examples include semen, blood, tissue samples like tumors, histological cross-sections, human eggs and human embryos. Many studies have reported that children born from frozen embryos, or "frosties," show consistently positive outcomes with no increase in birth defects or developmental abnormalities, consistent with our study (50-85%). We found that cryopreservation technology provides useful cell survivability and tissue and organ preservation, although results vary with laboratory conditions; it is certainly beneficial for patient treatment and research. Further studies are needed for standardization and development of new protocols.
Kang, Hye-In; Shin, Ho-Sang
2015-01-20
A novel derivatization method of free cyanide (HCN + CN(-)) including cyanogen chloride in chlorinated drinking water was developed with D-cysteine and hypochlorite. The optimum conditions (0.5 mM D-cysteine, 0.5 mM hypochlorite, pH 4.5, and a reaction time of 10 min at room temperature) were established by the variation of parameters. Isotopically labelled cyanide ((13)C(15)N(-)) was chosen as an internal standard. The formed β-thiocyanoalanine was directly injected into a liquid chromatography-tandem mass spectrometer without any additional extraction or purification procedures. Under the established conditions, the limits of detection and the limits of quantification were 0.07 and 0.2 μg/L, respectively, and the interday relative standard deviation was less than 4% at concentrations of 4.0, 20.0, and 100.0 μg/L. The method was successfully applied to determine CN(-) in chlorinated water samples. The detected concentration range and detection frequency of CN(-) were 0.20-8.42 μg/L (14/24) in source drinking water and 0.21-1.03 μg/L (18/24) in chlorinated drinking water.
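The reported interday precision can be reproduced in miniature; the replicate concentrations below are invented, and the sketch only shows how an RSD figure of this kind is computed.

```python
# Sketch of the relative standard deviation (RSD) check used to report
# interday precision; the replicate measurements are hypothetical.

from statistics import mean, stdev

def rsd_percent(values):
    """Relative standard deviation: sample std dev as a percentage of the mean."""
    return 100.0 * stdev(values) / mean(values)

replicates = [20.1, 19.8, 20.4, 20.0, 19.7]  # µg/L, invented interday runs
interday_rsd = rsd_percent(replicates)
```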
Saeed, Saqib; Darwish, Ashraf; Abraham, Ajith
2014-01-01
Nowadays embedded and real-time systems contain complex software. The complexity of embedded systems is increasing, and the amount and variety of software in the embedded products are growing. This creates a big challenge for embedded and real-time software development processes and there is a need to develop separate metrics and benchmarks. “Embedded and Real Time System Development: A Software Engineering Perspective: Concepts, Methods and Principles” presents practical as well as conceptual knowledge of the latest tools, techniques and methodologies of embedded software engineering and real-time systems. Each chapter includes an in-depth investigation regarding the actual or potential role of software engineering tools in the context of the embedded system and real-time system. The book presents state-of-the art and future perspectives with industry experts, researchers, and academicians sharing ideas and experiences including surrounding frontier technologies, breakthroughs, innovative solutions and...
2015-12-01
whereas VFA suffers inaccuracies due to an assumption about FA. In this article, we propose an efficient method to tackle the quantification of T1 and ... from a reduced number of VFA SPGR measurements and a gain in T1 precision from simultaneous least squares fitting. As we confirmed in this article ...
Simulation of three-dimensional, time-dependent, incompressible flows by a finite element method
International Nuclear Information System (INIS)
Chan, S.T.; Gresho, P.M.; Lee, R.L.; Upson, C.D.
1981-01-01
A finite element model has been developed for simulating the dynamics of problems encountered in atmospheric pollution and safety assessment studies. The model is based on solving the set of three-dimensional, time-dependent, conservation equations governing incompressible flows. Spatial discretization is performed via a modified Galerkin finite element method, and time integration is carried out via the forward Euler method (pressure is computed implicitly, however). Several cost-effective techniques (including subcycling, mass lumping, and reduced Gauss-Legendre quadrature) which have been implemented are discussed. Numerical results are presented to demonstrate the applicability of the model
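As a minimal illustration of the explicit (forward Euler) time integration mentioned above, the sketch below advances a 1-D diffusion problem; the grid, diffusivity, and initial condition are illustrative, not the paper's 3-D Galerkin model.

```python
# Minimal forward Euler time stepping for 1-D diffusion with fixed ends.
# Parameters are invented; the scheme is stable for dt <= 0.5 * dx^2 / nu.

def step_forward_euler(u, dt, dx, nu):
    """One forward Euler step of du/dt = nu * d2u/dx2, Dirichlet boundaries."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + dt * nu * (u[i - 1] - 2 * u[i] + u[i + 1]) / dx ** 2
    return new

u = [0.0] * 21
u[10] = 1.0                      # initial spike
dx, nu = 1.0 / 20, 1e-3
dt = 0.4 * dx * dx / nu          # within the explicit stability limit
for _ in range(100):
    u = step_forward_euler(u, dt, dx, nu)
```

The explicit update is cheap per step but bounded by a stability limit, which is why such models pair it with cost-saving techniques like subcycling and mass lumping.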
Method and apparatus for real-time measurement of fuel gas compositions and heating values
Zelepouga, Serguei; Pratapas, John M.; Saveliev, Alexei V.; Jangale, Vilas V.
2016-03-22
An exemplary embodiment can be an apparatus for real-time, in situ measurement of gas compositions and heating values. The apparatus includes a near infrared sensor for measuring concentrations of hydrocarbons and carbon dioxide, a mid infrared sensor for measuring concentrations of carbon monoxide and a semiconductor based sensor for measuring concentrations of hydrogen gas. A data processor having a computer program for reducing the effects of cross-sensitivities of the sensors to components other than target components of the sensors is also included. Also provided are corresponding or associated methods for real-time, in situ determination of a composition and heating value of a fuel gas.
Directory of Open Access Journals (Sweden)
J Rasmus Nielsen
Full Text Available Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size and includes zero observations and over-dispersion. The model utilises the fact that the correlation between numbers of fish caught increases when the distance in space and time between the fish decreases, and the correlation between size groups in a haul increases when the difference in size decreases. Here the model is extended in two ways. Instead of assuming a natural scale size correlation, the model is further developed to allow for a transformed length scale. Furthermore, in the present application, the spatial- and size-dependent correlation between species was included. For cod (Gadus morhua and whiting (Merlangius merlangus, a common structured size correlation was fitted, and a separable structure between the time and space-size correlation was found for each species, whereas more complex structures were required to describe the correlation between species (and space-size. The within-species time correlation is strong, whereas the correlations between the species are weaker over time but strong within the year.
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
International Nuclear Information System (INIS)
Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H
2012-01-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
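The Time Feature idea can be sketched in a few lines: a Sequence supplies time values, either at equal increments or uniformly at random, and each time-dependent quantity is just a function evaluated at those times. The names and feature functions below are illustrative, not the TOPAS API.

```python
# Hedged sketch of the "Time Feature" grammar: Sequence samples times,
# and every time-dependent quantity evaluates its own function of time.

import math
import random

def sequential_times(t_end, n):
    """Sample n time values at equal increments over [0, t_end)."""
    return [i * t_end / n for i in range(n)]

def random_times(t_end, n, seed=0):
    """Sample n time values uniformly at random over [0, t_end)."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, t_end) for _ in range(n)]

# Illustrative time-dependent quantities (a spinning wheel, a modulated beam):
features = {
    "wheel_angle_deg": lambda t: (360.0 * t / 0.1) % 360.0,
    "beam_current_mA": lambda t: 1.0 + 0.5 * math.sin(2 * math.pi * t / 0.1),
}

def snapshot(t):
    """Evaluate every time-dependent quantity at time t."""
    return {name: f(t) for name, f in features.items()}

history = [snapshot(t) for t in sequential_times(t_end=0.1, n=4)]
```

Because the sampler and the per-quantity functions are independent modules, any number of time-dependent quantities can share one simulation at any time resolution, which is the point of the design.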
Methods of Run-Time Error Detection in Distributed Process Control Software
DEFF Research Database (Denmark)
Drejer, N.
In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high level software specification. This has the purpose of illustrating that the design can be used in practice....
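A minimal sketch of the monitoring approach for timing errors (type c) might look as follows; the event log and the deadline constraint are invented for illustration.

```python
# Illustrative run-time timing monitor: system software checks that
# distributed events emitted by the application respect a spacing constraint.

def check_timing(events, max_interval):
    """Return indices of events whose gap to the previous one exceeds max_interval."""
    violations = []
    for i in range(1, len(events)):
        if events[i] - events[i - 1] > max_interval:
            violations.append(i)
    return violations

timestamps = [0.0, 0.10, 0.21, 0.55, 0.65]  # seconds, hypothetical task events
late = check_timing(timestamps, max_interval=0.25)
```

A real monitor would run concurrently with the application and also evaluate the data and execution constraints (types a and b), but the constraint-evaluation pattern is the same.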
Calculation method for control rod dropping time in reactor
International Nuclear Information System (INIS)
Nogami, Takeki; Kato, Yoshifumi; Ishino, Jun-ichi; Doi, Isamu.
1996-01-01
When a control rod starts dropping, its speed increases rapidly, then settles to a substantially constant value, and decreases rapidly when the rod reaches the dash pot. A second detection signal, generated by removing the AC component from a first detection signal, is differentiated twice. The time when the maximum of the twice-differentiated values occurs is determined as the time when the control rod starts dropping. The time when the minimum of the twice-differentiated values occurs is determined as the time when the control rod reaches the dash pot of the reactor. The measuring time is then determined as the interval from when the control rod starts dropping to when it reaches the dash pot. As a result, the calculation of the dropping start time and the dash-pot arrival time of the control rod can be automated. Further, it suffices to differentiate twice up to the arrival time, which simplifies the processing and enables a reliable time range to be determined. (N.H.)
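The described procedure reduces to a few lines of signal processing: difference the (AC-filtered) signal twice, then take the argmax and argmin of the second difference as the drop start and the dash-pot arrival. The rod-position samples below are synthetic.

```python
# Sketch of the drop-time determination: twice-differentiate the signal,
# locate its maximum (drop start) and minimum (dash-pot entry).

def second_difference(signal):
    """Discrete second derivative of a uniformly sampled signal."""
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

def drop_interval(signal, dt):
    """Time from drop start (max of 2nd diff) to dash-pot entry (min)."""
    d2 = second_difference(signal)
    start = d2.index(max(d2)) + 1   # +1 compensates the index lost to differencing
    stop = d2.index(min(d2)) + 1
    return (stop - start) * dt

# Synthetic rod position: at rest, constant-speed drop, then dash-pot stop.
pos = [0, 0, 0, 0, 1, 2, 3, 4, 5, 6, 6, 6, 6]
elapsed = drop_interval(pos, dt=0.01)
```

The acceleration spike at release gives the positive extremum and the deceleration in the dash pot gives the negative one, matching the record's description.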
International Nuclear Information System (INIS)
Lepetit-Coiffe, Matthieu; Quesson, Bruno; Moonen, Chrit T.W.; Laumonier, Herve; Trillaud, Herve; Seror, Olivier; Sesay, Musa-Bahazid; Grenier, Nicolas
2010-01-01
To assess the practical feasibility and effectiveness of real-time magnetic resonance (MR) temperature monitoring for the radiofrequency (RF) ablation of liver tumours in a clinical setting, nine patients (aged 49-87 years, five men and four women) with one malignant tumour (14-50 mm, eight hepatocellular carcinomas and one colorectal metastasis) were treated by 12-min RF ablation using a 1.5-T closed magnet for real-time temperature monitoring. The clinical monopolar RF device was filtered at 64 MHz to avoid electromagnetic interference. Real-time computation of thermal-dose (TD) maps, based on Sapareto and Dewey's equation, was studied to determine its ability to provide a clear end-point of the RF procedure. Absence of local recurrence on follow-up MR images obtained 45 days after the RF ablation was used to assess the apoptotic and necrotic prediction obtained by real-time TD maps. Seven out of nine tumours were completely ablated according to the real-time TD maps. Compared with 45-day follow-up MR images, TD maps accurately predicted two primary treatment failures, but were not relevant in the later progression of one case of secondary local tumour. The real-time TD concept is a feasible and promising monitoring method for the RF ablation of liver tumours. (orig.)
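The thermal-dose maps are based on Sapareto and Dewey's cumulative-equivalent-minutes formulation, which can be sketched directly. The temperature samples below are invented; the breakpoint constants R = 0.5 (above 43 °C) and 0.25 (below) are the values commonly used with this model.

```python
# Hedged sketch of the Sapareto-Dewey thermal dose: cumulative equivalent
# minutes at 43 C (CEM43) from a sampled temperature history.

def cem43(temps_c, dt_min):
    """CEM43 for temperatures sampled every dt_min minutes."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += (r ** (43.0 - t)) * dt_min
    return dose

# One minute held at 47 C is 2^4 = 16 equivalent minutes at 43 C:
dose = cem43([47.0] * 60, dt_min=1.0 / 60.0)
```

Accumulating this dose voxel by voxel during ablation is what yields a map with a clear treatment end-point.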
Energy Technology Data Exchange (ETDEWEB)
Lepetit-Coiffe, Matthieu; Quesson, Bruno; Moonen, Chrit T.W. [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Laumonier, Herve; Trillaud, Herve [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Service de Radiologie, Hopital Saint-Andre, CHU Bordeaux, Bordeaux (France); Seror, Olivier [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Service de Radiologie, Hopital Jean Verdier, Bondy (France); Sesay, Musa-Bahazid [Service d' Anesthesie Reanimation III, Hopital Pellegrin, CHU Bordeaux, Bordeaux (France); Grenier, Nicolas [Universite Victor Segalen Bordeaux 2, Laboratoire Imagerie Moleculaire et Fonctionnelle: de la physiologie a la therapie CNRS UMR 5231, Bordeaux Cedex (France); Service d' Imagerie Diagnostique et Therapeutique de l' Adulte, Hopital Pellegrin, CHU Bordeaux, Bordeaux (France)
2010-01-15
To assess the practical feasibility and effectiveness of real-time magnetic resonance (MR) temperature monitoring for the radiofrequency (RF) ablation of liver tumours in a clinical setting, nine patients (aged 49-87 years, five men and four women) with one malignant tumour (14-50 mm, eight hepatocellular carcinomas and one colorectal metastasis) were treated by 12-min RF ablation using a 1.5-T closed magnet for real-time temperature monitoring. The clinical monopolar RF device was filtered at 64 MHz to avoid electromagnetic interference. Real-time computation of thermal-dose (TD) maps, based on Sapareto and Dewey's equation, was studied to determine its ability to provide a clear end-point of the RF procedure. Absence of local recurrence on follow-up MR images obtained 45 days after the RF ablation was used to assess the apoptotic and necrotic prediction obtained by real-time TD maps. Seven out of nine tumours were completely ablated according to the real-time TD maps. Compared with 45-day follow-up MR images, TD maps accurately predicted two primary treatment failures, but were not relevant in the later progression of one case of secondary local tumour. The real-time TD concept is a feasible and promising monitoring method for the RF ablation of liver tumours. (orig.)
El-Amin, Mohamed
2017-11-23
In this article, we consider two-phase immiscible incompressible flow, including nanoparticle transport, in fractured heterogeneous porous media. The system of governing equations consists of water saturation, Darcy's law, nanoparticle concentration in water, deposited nanoparticle concentration on the pore walls, and entrapped nanoparticle concentration in the pore throats, as well as porosity and permeability variation due to nanoparticle deposition/entrapment on/in the pores. The discrete-fracture model (DFM) is used to describe the flow and transport in fractured porous media. Moreover, a multiscale time-splitting strategy is employed to manage different time-step sizes for different physics, such as saturation and concentration. Numerical examples are provided to demonstrate the efficiency of the proposed multiscale time-splitting approach.
El-Amin, Mohamed; Kou, Jisheng; Sun, Shuyu
2017-01-01
In this article, we consider two-phase immiscible incompressible flow, including nanoparticle transport, in fractured heterogeneous porous media. The system of governing equations consists of water saturation, Darcy's law, nanoparticle concentration in water, deposited nanoparticle concentration on the pore walls, and entrapped nanoparticle concentration in the pore throats, as well as porosity and permeability variation due to nanoparticle deposition/entrapment on/in the pores. The discrete-fracture model (DFM) is used to describe the flow and transport in fractured porous media. Moreover, a multiscale time-splitting strategy is employed to manage different time-step sizes for different physics, such as saturation and concentration. Numerical examples are provided to demonstrate the efficiency of the proposed multiscale time-splitting approach.
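The multirate idea behind the time-splitting strategy can be sketched with a toy two-rate integrator. The scalar right-hand sides below are placeholders standing in for the fast physics (saturation-like) and the slow physics (concentration-like); they are illustrative assumptions, not the authors' DFM system.

```python
import numpy as np

def multirate_step(s, c, dt_coarse, n_sub):
    """One coarse step: the fast variable s is sub-stepped, the slow variable c is not."""
    dt_fine = dt_coarse / n_sub
    for _ in range(n_sub):
        s = s + dt_fine * (1.0 - 2.0 * s)      # placeholder fast dynamics (saturation-like)
    c = c + dt_coarse * (0.05 * s - 0.1 * c)   # placeholder slow dynamics (concentration-like)
    return s, c

s, c = 0.0, 0.0
for _ in range(100):                           # 100 coarse steps of size 0.1
    s, c = multirate_step(s, c, dt_coarse=0.1, n_sub=10)
```

The fast variable sees 10 small steps per coarse step, so it can use a step size respecting its stiffer dynamics while the slow variable advances cheaply once per coarse step.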
Time reversal method with stabilizing boundary conditions for Photoacoustic tomography
International Nuclear Information System (INIS)
Chervova, Olga; Oksanen, Lauri
2016-01-01
We study an inverse initial source problem that models photoacoustic tomography measurements with array detectors, and introduce a method that can be viewed as a modification of the so-called back-and-forth nudging method. We show that the method converges at an exponential rate under a natural visibility condition, with data given only on a part of the boundary of the domain of wave propagation. In this paper we consider the case of noiseless measurements. (paper)
A moving mesh method with variable relaxation time
Soheili, Ali Reza; Stockie, John M.
2006-01-01
We propose a moving mesh adaptive approach for solving time-dependent partial differential equations. The motion of spatial grid points is governed by a moving mesh PDE (MMPDE) in which a mesh relaxation time τ is employed as a regularization parameter. Previously reported results on MMPDEs have invariably employed a constant value of the parameter τ. We extend this standard approach by incorporating a variable relaxation time that is calculated adaptively alongside the solution in orde...
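A minimal sketch of the underlying mechanism, assuming a standard MMPDE5-type form x_t = (1/τ)(M x_ξ)_ξ with a constant relaxation time τ (the adaptive-τ variant of the abstract is not reproduced here). The monitor function is the usual arc-length monitor for a tanh-type layer at x = 0.5; all numerical values are illustrative.

```python
import numpy as np

def mmpde5_step(x, tau, dt, monitor):
    """Explicit Euler step of x_t = (1/tau) d/dxi (M dx/dxi) on a uniform xi grid."""
    M = monitor(0.5 * (x[:-1] + x[1:]))        # monitor evaluated at cell midpoints
    xnew = x.copy()
    xnew[1:-1] += (dt / tau) * (M[1:] * (x[2:] - x[1:-1])
                                - M[:-1] * (x[1:-1] - x[:-2]))
    return xnew

# arc-length monitor sqrt(1 + u'^2) for u(x) = tanh(20 (x - 0.5))
monitor = lambda x: np.sqrt(1.0 + (20.0 / np.cosh(20.0 * (x - 0.5)) ** 2) ** 2)
x = np.linspace(0.0, 1.0, 41)
tau = 1e-2                                     # relaxation time of the mesh motion
for _ in range(2000):
    x = mmpde5_step(x, tau, dt=1e-4, monitor=monitor)
```

Smaller τ makes the mesh respond faster to the monitor; the loop relaxes the initially uniform mesh so that nodes cluster where the monitor (solution gradient) is large.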
Part-Time Sick Leave as a Treatment Method?
Andrén D; Andrén T
2009-01-01
This paper analyzes the effects of being on part-time sick leave compared to full-time sick leave on the probability of recovering (i.e., returning to work with full recovery of lost work capacity). Using a discrete choice one-factor model, we estimate mean treatment parameters and distributional treatment parameters from a common set of structural parameters. Our results show that part-time sick leave increases the likelihood of recovering and dominates full-time sick leave for sickness spel...
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
A simple method for one-loop renormalization in curved space-time
Energy Technology Data Exchange (ETDEWEB)
Markkanen, Tommi [Helsinki Institute of Physics and Department of Physics, P.O. Box 64, FI-00014, University of Helsinki (Finland); Tranberg, Anders, E-mail: tommi.markkanen@helsinki.fi, E-mail: anders.tranberg@uis.no [Niels Bohr International Academy and Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen (Denmark)
2013-08-01
We present a simple method for deriving the renormalization counterterms from the components of the energy-momentum tensor in curved space-time. This method allows control over the finite parts of the counterterms and provides explicit expressions for each term separately. As an example, the method is used for the self-interacting scalar field in a Friedmann-Robertson-Walker metric in the adiabatic approximation, where we calculate the renormalized equation of motion for the field and the renormalized components of the energy-momentum tensor to fourth adiabatic order while including interactions to one-loop order. Within this formalism the trace anomaly, including contributions from interactions, is shown to have a simple derivation. We compare our results to those obtained by two standard methods, finding agreement with the Schwinger-DeWitt expansion but disagreement with adiabatic subtractions for interacting theories.
Directory of Open Access Journals (Sweden)
Guan Lian
2018-01-01
Full Text Available Accurate prediction of taxi-out time is a significant precondition for improving the operational efficiency of the departure process at an airport, as well as for reducing long taxi-out times, congestion, and excessive emission of greenhouse gases. Unfortunately, several of the traditional methods of predicting taxi-out time perform unsatisfactorily at congested airports. This paper describes and tests three of those conventional methods, which include the Generalized Linear Model, the Softmax Regression Model, and the Artificial Neural Network method, and two improved Support Vector Regression (SVR) approaches based on swarm intelligence algorithm optimization, which include Particle Swarm Optimization (PSO) and the Firefly Algorithm. In order to improve the global searching ability of the Firefly Algorithm, an adaptive step factor and Lévy flight are implemented simultaneously when updating the location function. Six factors are analysed, of which delay is identified as one significant factor in congested airports. Through a series of specific dynamic analyses, a case study of Beijing International Airport (PEK) is tested with historical data. The performance measures show that the proposed two SVR approaches, especially the Improved Firefly Algorithm (IFA) optimization-based SVR method, not only achieve the best modelling measures and accuracy rates compared with the representative forecast models, but also achieve better predictive performance when dealing with abnormal taxi-out time states.
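The swarm-optimization component can be illustrated independently of the SVR model. Below is a generic PSO sketch; the quadratic objective is a hypothetical stand-in for an SVR cross-validation error surface over hyperparameters such as (log C, log gamma), not the paper's actual objective or algorithm settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Particle Swarm Optimization of f over the box `bounds` (shape (dim, 2))."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()                              # per-particle best positions
    pval = np.array([f(p) for p in x])            # per-particle best values
    g = pbest[pval.argmin()].copy()               # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# hypothetical stand-in for an SVR cross-validation error surface over (log C, log gamma)
cv_error = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2 + 0.5
best, err = pso_minimize(cv_error, np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

In the paper's setting, evaluating `f` would mean training an SVR with the candidate hyperparameters and returning its validation error; the Firefly Algorithm variant replaces the velocity update with attractiveness-based moves.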
Cholinesterase assay by an efficient fixed time endpoint method
Directory of Open Access Journals (Sweden)
Mónica Benabent
2014-01-01
The method may be adapted to the user's needs by modifying the enzyme concentration and applied for simultaneously testing many samples in parallel, i.e. for complex experiments such as kinetics assays with organophosphate inhibitors in different tissues.
Real-time trajectory analysis using stacked invariance methods
Kitts, B.
1998-01-01
Invariance methods are used widely in pattern recognition as a preprocessing stage before algorithms such as neural networks are applied to the problem. A pattern recognition system has to be able to recognise objects invariant to scale, translation, and rotation. Presumably the human eye implements some of these preprocessing transforms in making sense of incoming stimuli, for example, placing signals onto a log scale. This paper surveys many of the commonly used invariance methods, and asse...
A novel time series link prediction method: Learning automata approach
Moradabadi, Behnaz; Meybodi, Mohammad Reza
2017-09-01
Link prediction is a main social network challenge that uses the network structure to predict future links. Common link prediction approaches predict hidden links from a static graph representation, in which a snapshot of the network is analyzed to find hidden or future links. For example, similarity-metric-based link prediction is a common traditional approach that calculates a similarity metric for each non-connected link, sorts the links by their similarity metrics, and labels the links with higher similarity scores as future links. Because people's activities in social networks are dynamic and uncertain, and the structure of the networks changes over time, using deterministic graphs for modeling and analysis of a social network may not be appropriate. In the time-series link prediction problem, the time series of link occurrences is used to predict future links. In this paper, we propose a new time-series link prediction method based on learning automata. In the proposed algorithm, for each link that must be predicted there is one learning automaton, and each learning automaton tries to predict the existence or non-existence of the corresponding link. To predict the link occurrence at time T, there is a chain consisting of stages 1 through T - 1, and the learning automaton passes through these stages to learn the existence or non-existence of the corresponding link. Our preliminary link prediction experiments with co-authorship and email networks have provided satisfactory results when time series of link occurrences are considered.
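A minimal sketch of one per-link automaton, assuming a standard two-action linear reward-inaction (L_RI) update; the paper's chain-of-stages construction is not reproduced, and the toy occurrence history below is illustrative.

```python
import random
random.seed(1)

class LinkAutomaton:
    """Two-action (absent/present) linear reward-inaction learning automaton."""
    def __init__(self, lr=0.1):
        self.p = [0.5, 0.5]   # probabilities of predicting absent (0) / present (1)
        self.lr = lr

    def predict(self):
        return 1 if random.random() < self.p[1] else 0

    def reward(self, action):
        # L_RI update: reinforce the rewarded action and renormalize the other
        self.p[action] += self.lr * (1.0 - self.p[action])
        self.p[1 - action] = 1.0 - self.p[action]

history = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # toy link occurrences over time
la = LinkAutomaton()
for observed in history * 20:              # replay the series; reward correct guesses
    guess = la.predict()
    if guess == observed:
        la.reward(guess)

prediction = 1 if la.p[1] > 0.5 else 0     # predicted link state at time T
```

Under L_RI, the action probabilities drift toward the action that is rewarded most often, so an automaton attached to a frequently occurring link tends to predict its presence.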
Time Series Analysis of Insar Data: Methods and Trends
Osmanoglu, Batuhan; Sunar, Filiz; Wdowinski, Shimon; Cano-Cabral, Enrique
2015-01-01
Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to solve for this ambiguity. These algorithms may employ different models for time series analysis, but they all generate a first-order deformation rate, which can be compared to each other. However, there is no single algorithm that can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data using an example set of results for measuring subsidence rates in Mexico City.
MO-FG-BRA-03: A Novel Method for Characterizing Gating Response Time in Radiation Therapy
Energy Technology Data Exchange (ETDEWEB)
Wiersma, R; McCabe, B; Belcher, A; Jenson, P [The University of Chicago, Chicago, IL (United States); Smith, B [University Illinois at Chicago, Orland Park, IL (United States); Aydogan, B [The University of Chicago, Chicago, IL (United States); University Illinois at Chicago, Orland Park, IL (United States)
2016-06-15
Purpose: Low temporal latency between a gating ON/OFF signal and the LINAC beam ON/OFF during respiratory gating is critical for patient safety. Current film-based methods to assess gating response have poor temporal resolution and are highly qualitative. We describe a novel method to precisely measure gating lag times at high temporal resolutions and use it to characterize the temporal response of several gating systems. Methods: A respiratory gating simulator with an oscillating platform was modified to include a linear potentiometer for position measurement. A photon diode was placed at linear accelerator isocenter for beam output measurement. The output signals of the potentiometer and diode were recorded simultaneously at 2500 Hz (0.4 millisecond (ms) sampling interval) with an analogue-to-digital converter (ADC). The technique was used on three commercial respiratory gating systems. The beam signal ON and OFF transitions were located and compared to the expected gating window for both phase- and position-based gating, and the temporal lag times were extracted using a polynomial fit method. Results: A Varian RPM system with a monoscopic IR camera was measured to have mean beam ON and OFF lag times of 98.2 ms and 89.6 ms, respectively. A Varian RPM system with a stereoscopic IR camera was measured to have mean beam ON and OFF lag times of 86.0 ms and 44.0 ms, respectively. A Calypso magnetic fiducial tracking system was measured to have mean beam ON and OFF lag times of 209.0 ms and 60.0 ms, respectively. Conclusions: A novel method allowed for quantitative determination of gating timing accuracy for several clinically used gating systems. All gating systems met the 100 ms TG-142 criteria for mean beam OFF times. For beam ON response, the Calypso system exceeded the recommended response time.
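The core lag computation can be sketched on synthetic traces sampled at 2500 Hz. The abstract's polynomial-fit extraction is replaced here by simple first-threshold-crossing detection for brevity; the signal shapes and numbers are illustrative, not measured data.

```python
import numpy as np

def lag_ms(gate, beam, fs=2500.0, threshold=0.5):
    """Lag from gate opening to beam-on, via first threshold crossings."""
    g_on = int(np.argmax(gate > threshold))    # first sample where gate is high
    b_on = int(np.argmax(beam > threshold))    # first sample where beam is on
    return (b_on - g_on) / fs * 1000.0         # convert samples to milliseconds

# synthetic traces at 2500 Hz (0.4 ms sampling interval)
idx = np.arange(2500)
gate = (idx >= 500).astype(float)              # gate opens at sample 500
beam = (idx >= 745).astype(float)              # beam on 245 samples later
lag = lag_ms(gate, beam)
```

At this sampling rate each sample is 0.4 ms, so a 245-sample offset corresponds to a 98 ms beam-ON lag, comparable to the monoscopic RPM result reported above.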
DEFF Research Database (Denmark)
Tanderup, Kari; Fokdal, Lars Ulrik; Sturdza, Alina
2016-01-01
-center patient series (retroEMBRACE). Materials and methods This study analyzed 488 locally advanced cervical cancer patients treated with external beam radiotherapy ± chemotherapy combined with IGABT. Brachytherapy contouring and reporting was according to ICRU/GEC-ESTRO recommendations. The Cox Proportional...... Hazards model was applied to analyze the effect on local control of dose-volume metrics as well as overall treatment time (OTT), dose rate, chemotherapy, and tumor histology. Results With a median follow up of 46 months, 43 local failures were observed. Dose (D90) to the High Risk Clinical Target Volume...
Energy Technology Data Exchange (ETDEWEB)
Gabano, J.
1983-03-01
An electrolyte for an electric cell whose negative active material is constituted by lithium and whose positive active material is constituted by thionyl chloride. The electrolyte contains at least one solvent and at least one solute, said solvent being thionyl chloride and said solute being chosen from the group which includes lithium tetrachloroaluminate and lithium hexachloroantimonate. According to the invention said electrolyte further includes a complex chosen from the group which includes AlCl3·SO2 and SbCl5·SO2. The voltage rise of electric cells which include such an electrolyte takes negligible time.
International Nuclear Information System (INIS)
Magae, J.; Furukawa, C.; Kawakami, Y.; Hoshi, Y.; Ogata, H.
2003-01-01
Full text: Because biological responses to radiation are complex processes dependent on irradiation time as well as total dose, it is necessary to consider dose, dose-rate and irradiation time simultaneously to predict the risk of low dose-rate irradiation. In this study, we analyzed the quantitative relationship among dose, irradiation time and dose-rate, using chromosomal breakage and proliferation inhibition of human cells. For evaluation of chromosome breakage we assessed micronuclei induced by radiation. U2OS cells, a human osteosarcoma cell line, were exposed to gamma rays in an irradiation room bearing a 50,000 Ci 60Co source. After the irradiation, they were cultured for 24 h in the presence of cytochalasin B to block cytokinesis, the cytoplasm and nucleus were stained with DAPI and propidium iodide, and the number of binuclear cells bearing micronuclei was determined by fluorescent microscopy. For proliferation inhibition, cells were cultured for 48 h after the irradiation and [3H]thymidine was pulsed for 4 h before harvesting. Dose-rate in the irradiation room was measured with a photoluminescence dosimeter. While irradiation times of less than 24 h did not affect the dose-response curves of either biological response, the curves were remarkably attenuated as exposure time increased to more than 7 days. These biological responses were dependent on dose-rate rather than dose when cells were irradiated for 30 days. Moreover, the percentage of micronucleus-forming cells cultured continuously for more than 60 days at a constant dose-rate decreased gradually in spite of the total dose accumulation. These results suggest that biological responses at low dose-rate are remarkably affected by exposure time, that they are dependent on dose-rate rather than total dose in the case of long-term irradiation, and that cells become resistant to radiation after continuous irradiation for 2 months. It is necessary to include effect of irradiation time and dose-rate sufficiently to evaluate risk
A simple method to calculate first-passage time densities with arbitrary initial conditions
Nyberg, Markus; Ambjörnsson, Tobias; Lizana, Ludvig
2016-06-01
Numerous applications all the way from biology and physics to economics depend on the density of first crossings over a boundary. Motivated by the lack of general purpose analytical tools for computing first-passage time densities (FPTDs) for complex problems, we propose a new simple method based on the independent interval approximation (IIA). We generalise previous formulations of the IIA to include arbitrary initial conditions as well as to deal with discrete time and non-smooth continuous time processes. We derive a closed form expression for the FPTD in z and Laplace-transform space to a boundary in one dimension. Two classes of problems are analysed in detail: discrete time symmetric random walks (Markovian) and continuous time Gaussian stationary processes (Markovian and non-Markovian). Our results are in good agreement with Langevin dynamics simulations.
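The IIA closed form is not reproduced here, but the target quantity, the FPTD of a discrete-time symmetric random walk to a boundary, is easy to estimate by simulation and check against the exact first values P(T=1) = 1/2 and P(T=3) = 1/8 for a boundary at +1 starting from 0. This is an illustrative sanity check, not the paper's method.

```python
import random
random.seed(0)

def first_passage_time(boundary=1, max_steps=1_000):
    """Steps until a symmetric +/-1 walk from 0 first hits `boundary` (None if capped)."""
    x, n = 0, 0
    while x != boundary and n < max_steps:
        x += random.choice((-1, 1))
        n += 1
    return n if x == boundary else None

# empirical FPTD versus the exact values P(T=1) = 1/2, P(T=3) = 1/8
trials = 50_000
times = [first_passage_time() for _ in range(trials)]
p1 = sum(t == 1 for t in times) / trials
p3 = sum(t == 3 for t in times) / trials
```

The step cap is needed because the symmetric walk's first-passage distribution has a heavy tail (infinite mean); capped walks are excluded from the density estimate at small times, which they do not affect.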
Hatcher, Gerry; Okuda, Craig
2016-01-01
The effects of climate change on the near shore coastal environment including ocean acidification, accelerated erosion, destruction of coral reefs, and damage to marine habitat have highlighted the need for improved equipment to study, monitor, and evaluate these changes [1]. This is especially true where areas of study are remote, large, or beyond depths easily accessible to divers. To this end, we have developed three examples of low cost and easily deployable real-time ocean observation platforms. We followed a scalable design approach adding complexity and capability as familiarity and experience were gained with system components saving both time and money by reducing design mistakes. The purpose of this paper is to provide information for the researcher, technician, or engineer who finds themselves in need of creating or acquiring similar platforms.
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multi-components in traditional Chinese medicines requires a large number of reference substances to identify the chromatographic peaks accurately. But the reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, due to the error between measured retention time (tR) and predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: a two-point prediction procedure, a validation procedure by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times via a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, but more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories with a lower cost of reference substances.
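The two-point prediction step reduces to the line through the retention times of the two reference substances on the standard and target columns. A minimal sketch, with illustrative numbers not taken from the paper:

```python
def lctrs_predict(t_std, ref_std, ref_new):
    """Map a retention time from the standard column to a new column
    via the line through two reference substances' retention times."""
    (s1, s2), (n1, n2) = ref_std, ref_new
    slope = (n2 - n1) / (s2 - s1)
    return n1 + slope * (t_std - s1)

# references elute at 5.0 and 15.0 min on the standard column,
# and at 6.0 and 18.0 min on the user's column (illustrative values)
t_pred = lctrs_predict(10.0, (5.0, 15.0), (6.0, 18.0))
```

A compound eluting midway between the references on the standard column is predicted to elute midway between them on the new column as well, here at 12.0 min; the validation step of the paper would then check this linearity with additional reference points.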
A Comparison of Various Forecasting Methods for Autocorrelated Time Series
Directory of Open Access Journals (Sweden)
Karin Kandananond
2012-07-01
Full Text Available The accuracy of forecasts significantly affects the overall performance of a whole supply chain system. Sometimes, the nature of consumer products might cause difficulties in forecasting for the future demands because of its complicated structure. In this study, two machine learning methods, artificial neural network (ANN) and support vector machine (SVM), and a traditional approach, the autoregressive integrated moving average (ARIMA) model, were utilized to predict the demand for consumer products. The training data used were the actual demand of six different products from a consumer product company in Thailand. Initially, each set of data was analysed using Ljung‐Box‐Q statistics to test for autocorrelation. Afterwards, each method was applied to different sets of data. The results indicated that the SVM method had a better forecast quality (in terms of MAPE) than ANN and ARIMA in every category of products.
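MAPE, the comparison metric used above, is straightforward to compute. The demand series below is illustrative, not the Thai company's data, and the two forecasts merely stand in for a stronger and a weaker model:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

actual  = [100.0, 120.0, 90.0, 110.0]
model_a = [ 98.0, 125.0, 88.0, 113.0]   # e.g. an SVM-style forecast (closer)
model_b = [ 90.0, 130.0, 80.0, 120.0]   # e.g. a weaker baseline
```

A lower MAPE means a better forecast; note the metric is undefined when an actual value is zero, which matters for intermittent demand.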
Simplified scintigraphic methods for measuring gastrointestinal transit times
DEFF Research Database (Denmark)
Graff, J; Brinch, K; Madsen, Jan Lysgård
2000-01-01
To investigate whether simple transit measurements based on scintigraphy performed only 0, 2, 4 and 24 h after intake of a radiolabelled meal can be used to predict the mean transit time values for the stomach, the small intestine, and the colon, a study was conducted in 16 healthy volunteers....... After ingestion of a meal containing 111indium-labelled water and 99mtechnetium-labelled omelette, imaging was performed at intervals of 30 min until all radioactivity was located in the colon and henceforth at intervals of 24 h until all radioactivity had cleared from the colon. Gastric, small...... intestinal and colonic mean transit times were calculated for both markers and compared with fractional gastric emptying at 2 h, fractional colonic filling at 4 h, and geometric centre of colonic content at 24 h, respectively. Highly significant correlations were found between gastric mean transit time...
Simplified scintigraphic methods for measuring gastrointestinal transit times
DEFF Research Database (Denmark)
Graff, J; Brinch, K; Madsen, Jan Lysgård
2000-01-01
. After ingestion of a meal containing 111indium-labelled water and 99mtechnetium-labelled omelette, imaging was performed at intervals of 30 min until all radioactivity was located in the colon and henceforth at intervals of 24 h until all radioactivity had cleared from the colon. Gastric, small...... intestinal and colonic mean transit times were calculated for both markers and compared with fractional gastric emptying at 2 h, fractional colonic filling at 4 h, and geometric centre of colonic content at 24 h, respectively. Highly significant correlations were found between gastric mean transit time...... and fractional gastric emptying at 2 h (111In: r=0.95, P
Effect of seed collection times and pretreatment methods on ...
African Journals Online (AJOL)
STORAGESEVER
2008-08-18
Aug 18, 2008 ... Several basic methods are used to overcome seed-coat dormancy in ... The experiments on seed pretreatment were conducted at Forestry Research ..... applicability to rural areas where these trees are planted may be limited. .... Forestry Research News: Indicators and Tools for Restoration & Sustainable.
Pharyngeal transit time measured by scintigraphic and biomagnetic method
International Nuclear Information System (INIS)
Miquelin, C.A.; Braga, F.J.H.N.; Baffa, O.
1996-01-01
A comparative evaluation between the scintigraphic and biomagnetic methods to measure pharyngeal transit is presented. Three volunteers were studied. The aliment (yogurt) was labeled with 99mTc for the scintigraphic test and with ferrite for the biomagnetic one. The preliminary results indicate a difference between the values obtained, probably due to the biomagnetic detector resolution
Getting Over Method: Literacy Teaching as Work in "New Times."
Luke, Allan
1998-01-01
Shifts the terms of the "great debate" from technical questions about teaching method to questions about how various kinds of literacies work within communities--matters of government cutbacks and institutional downsizing, shrinking resource and taxation bases, and of students, communities, teachers, and schools trying to cope with rapid and…
Time Interval to Initiation of Contraceptive Methods Following ...
African Journals Online (AJOL)
Objectives: The objectives of the study were to determine factors affecting the interval between a woman's last childbirth and the initiation of contraception. Materials and Methods: This was a retrospective study. Family planning clinic records of the Barau Dikko Teaching Hospital Kaduna from January 2000 to March 2014 ...
Effect of seed collection times and pretreatment methods on ...
African Journals Online (AJOL)
STORAGESEVER
2008-08-18
Aug 18, 2008 ... Seeds were subjected to four treatment methods each at four ... were deep-green to brown while second collection was done when all .... discarded and the intact plump seeds were surface sterilized with .... Analysis of variance table for cumulative germination of Terminalia sericea for first seed collection.
Lung lesion doubling times: values and variability based on method of volume determination
International Nuclear Information System (INIS)
Eisenbud Quint, Leslie; Cheng, Joan; Schipper, Matthew; Chang, Andrew C.; Kalemkerian, Gregory
2008-01-01
Purpose: To determine doubling times (DTs) of lung lesions based on volumetric measurements from thin-section CT imaging. Methods: Previously untreated patients with two or more thin-section CT scans showing a focal lung lesion were identified. Lesion volumes were derived using direct volume measurements and volume calculations based on lesion area and diameter. Growth rates (GRs) were compared by tissue diagnosis and measurement technique. Results: 54 lesions were evaluated including 8 benign lesions, 10 metastases, 3 lymphomas, 15 adenocarcinomas, 11 squamous carcinomas, and 7 miscellaneous lung cancers. Using direct volume measurements, median DTs were 453, 111, 15, 181, 139 and 137 days, respectively. Lung cancer DTs ranged from 23-2239 days. There were no significant differences in GRs among the different lesion types. There was considerable variability among GRs using different volume determination methods. Conclusions: Lung cancer doubling times showed a substantial range, and different volume determination methods gave considerably different DTs
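Under the usual exponential-growth assumption, the doubling time follows from two volume measurements a known interval apart as DT = Δt · ln 2 / ln(V2/V1). A minimal sketch with illustrative volumes, not data from the study:

```python
import math

def doubling_time(v1, v2, days):
    """Volume doubling time in days, assuming exponential growth
    between two volume measurements taken `days` apart."""
    return days * math.log(2) / math.log(v2 / v1)

# a hypothetical nodule growing from 500 to 800 mm^3 over 90 days
dt = doubling_time(500.0, 800.0, 90.0)
```

The formula makes the study's point concrete: because DT depends on the ratio V2/V1, different volume determination methods that yield different volumes for the same lesion directly translate into different doubling times.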
Multi-time-step domain coupling method with energy control
DEFF Research Database (Denmark)
Mahjoubi, N.; Krenk, Steen
2010-01-01
the individual time step. It is demonstrated that displacement continuity between the subdomains leads to cancelation of the interface contributions to the energy balance equation, and thus stability and algorithmic damping properties of the original algorithms are retained. The various subdomains can...... by a numerical example using a refined mesh around concentrated forces. Copyright © 2010 John Wiley & Sons, Ltd....
A simple data fusion method for instantaneous travel time estimation
Do, Michael; Pueboobpaphan, R.; Miska, Marc; Kuwahara, Masao; van Arem, Bart; Viegas, J.M.; Macario, R.
2010-01-01
Travel time is one of the most understandable parameters for describing traffic conditions and an important input to many intelligent transportation systems applications. Direct measurement from an Electronic Toll Collection (ETC) system is promising, but the data arrives too late, only after the vehicles
An Optimization Method of Time Window Based on Travel Time and Reliability
Fu, Fengjie; Ma, Dongfang; Wang, Dianhai; Qian, Wei
2015-01-01
The dynamic change of urban road travel time was analyzed using video image detector data, and it showed cyclic variation, so the signal cycle length at the upstream intersection was adopted as the basic unit of the time window; there was some evidence of bimodality in the actual travel time distributions; therefore, the fitting parameters of the bimodal travel time distribution were estimated using the EM algorithm. Then the weighted average value of the two means was indicated as the travel t...
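The EM fitting step can be sketched for a two-component Gaussian mixture, a common model for bimodal travel times (e.g. vehicles that clear the signal versus vehicles stopped for a cycle). The synthetic data below are illustrative, not the study's observations, and the mixture form is an assumption.

```python
import numpy as np

def em_two_gaussians(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture; returns (means, sds, weight)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread the initial means apart
    sd = np.array([x.std(), x.std()])
    w = 0.5                                          # weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        p0 = (1 - w) * np.exp(-0.5 * ((x - mu[0]) / sd[0]) ** 2) / sd[0]
        p1 = w * np.exp(-0.5 * ((x - mu[1]) / sd[1]) ** 2) / sd[1]
        r = p1 / (p0 + p1)
        # M-step: update weight, means, and standard deviations
        w = r.mean()
        mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
        sd = np.array([np.sqrt(np.average((x - mu[0]) ** 2, weights=1 - r)),
                       np.sqrt(np.average((x - mu[1]) ** 2, weights=r))])
    return mu, sd, w

rng = np.random.default_rng(2)
# synthetic bimodal travel times (s): free-flow cycle vs stopped-at-signal cycle
x = np.concatenate([rng.normal(60, 5, 600), rng.normal(120, 10, 400)])
mu, sd, w = em_two_gaussians(x)
```

With the two modes recovered, a weighted average of the component means, as in the abstract, gives a single representative travel time for the window.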
Optimal control methods for rapidly time-varying Hamiltonians
International Nuclear Information System (INIS)
Motzoi, F.; Merkel, S. T.; Wilhelm, F. K.; Gambetta, J. M.
2011-01-01
In this article, we develop a numerical method to find optimal control pulses that accounts for the separation of timescales between the variation of the input control fields and the applied Hamiltonian. In traditional numerical optimization methods, these timescales are treated as being the same. While this approximation has had much success, in applications where the input controls are filtered substantially or mixed with a fast carrier, the resulting optimized pulses have little relation to the applied physical fields. Our technique remains numerically efficient in that the dimension of our search space is only dependent on the variation of the input control fields, while our simulation of the quantum evolution is accurate on the timescale of the fast variation in the applied Hamiltonian.
A negative-norm least-squares method for time-harmonic Maxwell equations
Copeland, Dylan M.
2012-04-01
This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.
A new quantum statistical evaluation method for time correlation functions
International Nuclear Information System (INIS)
Loss, D.; Schoeller, H.
1989-01-01
Considering a system of N identical interacting particles, which obey Fermi-Dirac or Bose-Einstein statistics, the authors derive new formulas for correlation functions of the type C(t) = <Σ_{i=1}^{N} A_i(t) Σ_{j=1}^{N} B_j> (where B_j is diagonal in the free-particle states) in the thermodynamic limit. Thereby they apply and extend a superoperator formalism, recently developed for the derivation of long-time tails in semiclassical systems. As an illustrative application, the Boltzmann-equation value of the time-integrated correlation function C(t) is derived in a straightforward manner. Due to exchange effects, the obtained t-matrix and the resulting scattering cross section, which occurs in the Boltzmann collision operator, are now functionals of the Fermi-Dirac or Bose-Einstein distribution
Nonlinear system identification NARMAX methods in the time, frequency, and spatio-temporal domains
Billings, Stephen A
2013-01-01
Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains describes a comprehensive framework for the identification and analysis of nonlinear dynamic systems in the time, frequency, and spatio-temporal domains. This book is written with an emphasis on making the algorithms accessible so that they can be applied and used in practice. Includes coverage of: The NARMAX (nonlinear autoregressive moving average with exogenous inputs) modelThe orthogonal least squares algorithm that allows models to be built term by
Electron-phonon thermalization in a scalable method for real-time quantum dynamics
Rizzi, Valerio; Todorov, Tchavdar N.; Kohanoff, Jorge J.; Correa, Alfredo A.
2016-01-01
We present a quantum simulation method that follows the dynamics of out-of-equilibrium many-body systems of electrons and oscillators in real time. Its cost is linear in the number of oscillators and it can probe time scales from attoseconds to hundreds of picoseconds. Contrary to Ehrenfest dynamics, it can thermalize starting from a variety of initial conditions, including electronic population inversion. While an electronic temperature can be defined in terms of a nonequilibrium entropy, a Fermi-Dirac distribution in general emerges only after thermalization. These results can be used to construct a kinetic model of electron-phonon equilibration based on the explicit quantum dynamics.
Time and data synchronization methods in competition monitoring systems
Kerys, Julijus
2005-01-01
Information synchronization problems are analyzed in this thesis. Two aspects are surveyed: clock synchronization, its algorithms and their use; and data synchronization together with maintaining software functionality at times when the connection with the database is broken. Existing products, their uses, pros and cons are reviewed. Models for solving these problems are suggested, which were implemented in the “Distributed basketball competition registration and analysis software system”,...
International Nuclear Information System (INIS)
Aoki, Takayuki; Kobayashi, Hiroyuki; Higuchi, Shinichi; Shimizu, Sadato
2005-01-01
A Ni-base alloy weld, including cracks due to stress corrosion cracking found in the reactor internals of the oldest BWR in Japan, Tsuruga unit 1, in 1999, was examined by three types of UT method. After this examination, the depth of each crack was confirmed by alternately carrying out a shallow excavation with a grinder and a PT examination until the crack disappeared. The depth measured by the former method was then compared with the one measured by the latter method. In this fashion, the performance of the UT methods was verified. As a result, a combination of the three types of UT method was found to meet the acceptance criteria given by ASME Sec. XI Appendix VIII, Performance Demonstration for Ultrasonic Examination Systems-Supplement 6. In this paper, the results of the UT examination described above and their evaluation are discussed. (author)
2008-08-01
ODOT's policy for Dynamic Message Sign utilization requires travel time(s) to be displayed as a default message. The current method of calculating travel time involves a workstation operator estimating the travel time based upon observati...
Method of modeling transmissions for real-time simulation
Hebbale, Kumaraswamy V.
2012-09-25
A transmission modeling system includes an in-gear module that determines an in-gear acceleration when a vehicle is in gear. A shift module determines a shift acceleration based on a clutch torque when the vehicle is shifting between gears. A shaft acceleration determination module determines a shaft acceleration based on at least one of the in-gear acceleration and the shift acceleration.
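The module structure the patent describes can be sketched in a few lines; the torque-balance formulas and every name below are illustrative assumptions, not the patented model itself.

```python
def in_gear_acceleration(engine_torque, gear_ratio, load_torque, inertia):
    """In-gear module: net torque through the engaged gear over inertia."""
    return (engine_torque * gear_ratio - load_torque) / inertia

def shift_acceleration(clutch_torque, load_torque, inertia):
    """Shift module: during a shift, the clutch torque drives the shaft."""
    return (clutch_torque - load_torque) / inertia

def shaft_acceleration(is_shifting, in_gear_acc, shift_acc):
    """Determination module: select the acceleration source by state."""
    return shift_acc if is_shifting else in_gear_acc
```

The selection in the last function mirrors the claim: the shaft-acceleration module picks between the in-gear and shift-module outputs depending on whether the vehicle is shifting.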
Energy Technology Data Exchange (ETDEWEB)
Zhang, Y., E-mail: thuzhangyu@foxmail.com; Huang, S. L., E-mail: huangsling@tsinghua.edu.cn; Wang, S.; Zhao, W. [State Key Laboratory of Power Systems, Department of Electrical Engineering, Tsinghua University, Beijing 100084 (China)
2016-05-15
The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert–Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
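The core of the procedure (STFT, read the energy density at the excitation center frequency over time, take the peak time as arrival) can be sketched as below. This is a simplification under stated assumptions: nearest-bin lookup stands in for the paper's linear interpolation to the exact center frequency, and a direct peak pick stands in for the least-squares fit of the energy-density curve.

```python
import numpy as np

def tof_energy_density(signal, fs, fc, win_len=128, hop=8):
    """Estimate arrival time of a narrowband burst at center frequency fc
    by tracking the short-time spectral energy density at fc (sketch)."""
    win = np.hanning(win_len)
    n_frames = (len(signal) - win_len) // hop + 1
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - fc)))      # nearest bin to fc
    energy = np.empty(n_frames)
    for i in range(n_frames):
        seg = signal[i * hop:i * hop + win_len] * win
        energy[i] = np.abs(np.fft.rfft(seg)[k]) ** 2
    peak = int(np.argmax(energy))
    return (peak * hop + win_len / 2) / fs      # frame-center time, seconds
```

Subtracting the corresponding peak time of the initial excitation pulse then yields the time-of-flight.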
International Nuclear Information System (INIS)
Zhang, Y.; Huang, S. L.; Wang, S.; Zhao, W.
2016-01-01
The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert–Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
Comparison of methods for determining the hydrologic recovery time after forest disturbance
Oda, T.; Green, M.; Ohte, N.; Urakawa, R.; Endo, I.; Scanlon, T. M.; Sebestyen, S. D.; McGuire, K. J.; Katsuyama, M.; Fukuzawa, K.; Tague, C.; Hiraoka, M.; Fukushima, K.; Giambelluca, T. W.
2013-12-01
Changes in forest hydrology after forest disturbance vary among catchments. Although studies have summarized the initial runoff changes following forest disturbance, estimates of long-term recovery time are less frequently reported. To understand the mechanisms of long-term recovery processes and to predict the long-term changes in streamflow after forest disturbance, it is important to compare recovery times after disturbance. However, there is no clear consensus regarding the best methodology for such research, especially for watershed studies that were not designed as paired watersheds. We compared methods of determining the hydrologic recovery time to determine if there is a common method for sites in any hydroclimatic setting. We defined the hydrologic recovery time as the time from disturbance to the point when hydrological factors first recovered to pre-disturbance levels. We acquired data on long-term rainfall and runoff at 16 sites in the northeastern USA and Japan that had at least 10 years (and up to 50 years) of post-disturbance data. The types of disturbance include harvesting, disease, and insect damage. We compared multiple indices of hydrological response, including annual runoff, annual runoff ratio (annual runoff/annual rainfall), annual loss (annual rainfall minus annual runoff), fiftieth-percentile annual flow, and seasonal water balance. The results showed that comparing annual runoff to a reference site was most robust at constraining the recovery time, followed by using pre-disturbance data as reference data and calculating the differences in annual runoff from pre-disturbance levels. However, in the case of small disturbances at sites without reference data or long-term pre-disturbance data, the inter-annual variation of rainfall makes the effect of disturbance unclear. We found that annual loss had smaller inter-annual variation, and defining recovery time with annual loss was best in terms of matching the results from paired watersheds. The
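The reference-catchment approach the comparison favours can be sketched as follows; the recovery criterion used here (the difference series returning within its pre-disturbance range) is one plausible reading of "recovered to pre-disturbance levels", and all indices and units are illustrative.

```python
import numpy as np

def recovery_time(disturbed, reference, pre_years=10):
    """Years after disturbance until the annual-runoff difference
    (disturbed minus reference catchment) first falls back within
    its pre-disturbance range; None if it never does (sketch)."""
    diff = np.asarray(disturbed, float) - np.asarray(reference, float)
    lo, hi = diff[:pre_years].min(), diff[:pre_years].max()
    for year, d in enumerate(diff[pre_years:]):
        if lo <= d <= hi:
            return year
    return None
```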
International Nuclear Information System (INIS)
Faerman, V A; Cheremnov, A G; Avramchuk, V V; Luneva, E E
2014-01-01
In the current work the relevance of nondestructive test method development for pipeline leak detection is considered. It is shown that acoustic emission testing is currently one of the most widely used leak detection methods. The main disadvantage of this method is that it cannot be applied to monitoring long pipeline sections, which in turn complicates and slows down the inspection of the line pipe sections of main pipelines. The prospects of developing alternative techniques and methods based on the spectral analysis of signals are considered, and their possible application to leak detection on the basis of the correlation method is outlined. As an alternative, the calculation of a time-frequency correlation function is proposed. This function represents the correlation between the spectral components of the analyzed signals. In this work, the technique of calculating the time-frequency correlation function is described. Experimental data are presented that demonstrate a clear advantage of the time-frequency correlation function over the simple correlation function: it is more effective at suppressing noise components outside the frequency range of the useful signal, which makes the maximum of the function more pronounced. The main drawback of applying time-frequency correlation function analysis to leak detection problems is the great number of calculations, which may result in a further increase in pipeline inspection time. However, this drawback can be partially mitigated by the development and implementation of efficient algorithms (including parallel ones) for computing the fast Fourier transform using the computer's central processing unit and graphics processing unit.
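One way to realize the idea is sketched below: cross-correlate the short-time spectral magnitudes of two sensor signals bin by bin and sum over frequency, so that broadband noise outside the source band contributes little to the peak. The exact formulation in the paper may differ; this is an illustrative construction.

```python
import numpy as np

def tf_correlation_delay(x, y, win_len=256, hop=64):
    """Estimated delay of y relative to x (in samples) from per-bin
    cross-correlation of the two spectrograms (sketch)."""
    win = np.hanning(win_len)

    def stft_mag(s):
        n = (len(s) - win_len) // hop + 1
        return np.array([np.abs(np.fft.rfft(s[i * hop:i * hop + win_len] * win))
                         for i in range(n)])            # (frames, bins)

    X, Y = stft_mag(x), stft_mag(y)
    n = X.shape[0]
    corr = np.zeros(2 * n - 1)
    for b in range(X.shape[1]):                          # sum over frequency
        corr += np.correlate(X[:, b] - X[:, b].mean(),
                             Y[:, b] - Y[:, b].mean(), mode="full")
    lag = np.arange(-(n - 1), n)[np.argmax(corr)]
    return -lag * hop   # np.correlate peaks at negative lag when y trails x
```

In the correlation leak-location method, this delay estimate (times the propagation speed) locates the leak between the two sensors.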
A robust anomaly based change detection method for time-series remote sensing images
Shoujing, Yin; Qiao, Wang; Chuanqing, Wu; Xiaoling, Chen; Wandong, Ma; Huiqin, Mao
2014-03-01
Time-series remote sensing images record changes happening on the earth's surface, which include not only abnormal changes like human activities and emergencies (e.g. fire, drought, insect pests), but also changes caused by vegetation phenology and climate change. Separating these signals is a challenge in analyzing global environmental changes and their driving forces. This paper proposes a robust Anomaly Based Change Detection method (ABCD) for time-series image analysis that detects abnormal points in data sets without requiring them to follow a normal distribution. With ABCD we can detect when and where changes occur, which is a prerequisite of global change studies. ABCD was tested initially with 10-day SPOT VGT NDVI (Normalized Difference Vegetation Index) time series tracking land cover type changes, seasonality, and noise, then validated on real data covering a large area in Jiangxi, in the south of China. Initial results show that ABCD can rapidly and precisely detect spatial and temporal changes in long time-series images.
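The "abnormal points without assuming a normal distribution" idea can be illustrated with a robust median/MAD scorer; ABCD itself is more elaborate (it handles seasonality explicitly), so treat this only as a minimal sketch of the distribution-free scoring step.

```python
import numpy as np

def detect_anomalies(series, k=3.0):
    """Flag points whose robust (median/MAD) distance exceeds k (sketch)."""
    s = np.asarray(series, dtype=float)
    med = np.median(s)
    mad = np.median(np.abs(s - med))
    scores = 0.6745 * np.abs(s - med) / (mad + 1e-12)  # robust z-analogue
    return np.where(scores > k)[0]                     # flagged indices
```

Because median and MAD are insensitive to the outliers themselves, a single abrupt change does not inflate the scale estimate the way a standard deviation would.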
Methods for determining unimpeded aircraft taxiing time and evaluating airport taxiing performance
Directory of Open Access Journals (Sweden)
Yu Zhang
2017-04-01
Full Text Available The objective of this study is to improve the methods of determining unimpeded (nominal) taxiing time, which is the reference time used for estimating taxiing delay, a widely accepted performance indicator of airport surface movement. After reviewing existing methods widely used by different air navigation service providers (ANSPs), new methods relying on computer software and statistical tools, and econometric regression models, are proposed. Regression models are highly recommended because they require less detailed data and can serve the needs of general performance analysis of airport surface operations. The proposed econometric model outperforms existing ones by introducing more explanatory variables, notably taking aircraft passing and over-passing into account in the queue length calculation, and including runway configuration, ground delay programs, and weather factors. The length of the aircraft queue in the taxiway system and the interaction between queues are major contributors to long taxi-out times. The proposed method provides a consistent and more accurate way of calculating taxiing delay, and it can be used for ATM-related performance analysis and international comparison.
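The regression approach can be illustrated on synthetic data: regress taxi-out time on congestion variables like those the study names (queue length, passing events, a weather flag), then read the unimpeded time off the fit with all congestion terms at zero. The data, variable names, and coefficients below are made up, not the paper's estimates.

```python
import numpy as np

def fit_taxi_model(X, y):
    """Ordinary least squares with an intercept column (sketch)."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta                                   # [intercept, coefs...]

n = np.arange(60)
X = np.column_stack([n % 7, n % 3, n % 2])        # queue, passing, weather
y = 8.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] + 2.0 * X[:, 2]   # synthetic minutes
beta = fit_taxi_model(X, y)
unimpeded = beta[0]   # intercept = taxi time with zero congestion (~8 here)
```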
Badiani, Anna; Montellato, Lara; Bochicchio, Davide; Anfossi, Paola; Zanardi, Emanuela; Maranesi, Magda
2004-08-11
Proximate composition and fatty acid profile, conjugated linoleic acid (CLA) isomers included, were determined in separable lean of raw and cooked lamb rib loins. The cooking methods compared, which were also investigated for cooking yields and true nutrient retention values, were dry heating of fat-on cuts and moist heating of fat-off cuts; the latter method was tested as a sort of dietetic approach against the more traditional former type. With significantly (P cooking losses, dry heating of fat-on rib-loins produced slightly (although only rarely significantly) higher retention values for all of the nutrients considered, including CLA isomers. On the basis of the retention values obtained, both techniques led to a minimum migration of lipids into the separable lean, which was higher (P cooking of the class of CLA isomers (including that of the nutritionally most important isomer cis-9,trans-11) was more similar to that of the monounsaturated than the polyunsaturated fatty acids.
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas the multilayer perceptron type neural network and the multiple imputation strategy adopted by Monte Carlo Markov Chain based on expectation-maximization (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performances. Based on detailed graphical and quantitative analyses, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they appear preferable for the imputation of meteorological time series across different missingness periods, for both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting any statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.
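Of the simple methods compared, the normal-ratio idea fits in a few lines: estimate a missing value at the target station from neighbor stations, each scaled by the ratio of long-term means. This follows the usual NR description, not necessarily the paper's exact implementation, and the tiny data set is synthetic.

```python
import numpy as np

def normal_ratio_impute(target, neighbors):
    """Fill NaNs in the target series with the mean of neighbor series,
    each scaled by the ratio of long-term means (normal-ratio sketch)."""
    t = np.array(target, dtype=float)           # copy; NaNs mark gaps
    N = np.asarray(neighbors, dtype=float)      # shape (stations, time)
    ratios = np.nanmean(t) / N.mean(axis=1)     # per-station scale factors
    estimate = (ratios[:, None] * N).mean(axis=0)
    mask = np.isnan(t)
    t[mask] = estimate[mask]
    return t
```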
Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris
2018-02-01
We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
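Of the benchmarks listed, the naïve method based on the monthly values of the last year is simple enough to state directly (the function name and interface are ours; the other models come from standard forecasting tooling and are not reproduced here):

```python
import numpy as np

def naive_seasonal(series, horizon, period=12):
    """Forecast each future month with the corresponding month of the
    last observed year, repeating that year as far as needed."""
    last_year = np.asarray(series, dtype=float)[-period:]
    return np.tile(last_year, horizon // period + 1)[:horizon]
```

Such a benchmark is useful precisely because, per the abstract, a good method must beat it to justify its complexity.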
Verhoeven, Ronald; Dalmau Codina, Ramon; Prats Menéndez, Xavier; de Gelder, Nico
2014-01-01
In this paper an initial implementation of a real-time aircraft trajectory optimization algorithm is presented. The aircraft trajectory for descent and approach is computed for minimum use of thrust and speed brake in support of a “green” continuous descent and approach flight operation, while complying with ATC time constraints for maintaining runway throughput and co...
Comparison of transfer entropy methods for financial time series
He, Jiayi; Shang, Pengjian
2017-09-01
There is a certain relationship between the global financial markets, which creates an interactive network of global finance. Transfer entropy, a measure of information transfer, offers a good way to analyse these relationships. In this paper, we analysed the relationships between 9 stock indices from the U.S., Europe and China (from 1995 to 2015) using transfer entropy (TE), effective transfer entropy (ETE), Rényi transfer entropy (RTE) and effective Rényi transfer entropy (ERTE). We compared the four methods in terms of their effectiveness in identifying the relationships between stock markets. Two kinds of information flows are examined. The results reveal that the U.S. takes the leading position in lagged-current cases, whereas for same-date cases China is the most influential, and that ERTE provides superior results.
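The plain transfer entropy underlying these comparisons can be sketched with a histogram estimator and a single lag: TE(X→Y) = H(y_t | y_{t-1}) − H(y_t | y_{t-1}, x_{t-1}). The binning, embedding depth, and names below are our simplifications; the effective and Rényi variants the paper also uses are not shown.

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Histogram estimate of one-lag transfer entropy from x to y, in bits."""
    x = np.digitize(x, np.histogram(x, bins)[1][1:-1])   # bin the series
    y = np.digitize(y, np.histogram(y, bins)[1][1:-1])
    yt, yp, xp = y[1:], y[:-1], x[:-1]                   # present / pasts

    def entropy(*cols):
        _, counts = np.unique(np.stack(cols, axis=1), axis=0,
                              return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    h_given_own_past = entropy(yt, yp) - entropy(yp)
    h_given_both = entropy(yt, yp, xp) - entropy(yp, xp)
    return h_given_own_past - h_given_both
```

A strongly driving series yields a large TE toward the driven one and a near-zero TE in the reverse direction.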
Method for Hot Real-Time Sampling of Pyrolysis Vapors
Energy Technology Data Exchange (ETDEWEB)
Pomeroy, Marc D [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-09-29
Biomass Pyrolysis has been an increasing topic of research, in particular as a replacement for crude oil. This process utilizes moderate temperatures to thermally deconstruct the biomass which is then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating or partial condensation or plugging of lines from condensed products. Residence times must be kept at a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines, and must be filtered out at temperature, even with the use of cyclonic separators. A practical approach for considerations and sampling system design, as well as lessons learned are integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Training plans; time of submission; where filed....3 Training plans; time of submission; where filed; information required; time for approval; method... training plan shall be filed with the District Manager for the area in which the mine is located. (c) Each...
Zhang, Y; Huang, S L; Wang, S; Zhao, W
2016-05-01
The time-of-flight of the Lamb wave provides an important basis for defect evaluation in metal plates and is the input signal for Lamb wave tomographic imaging. However, the time-of-flight can be difficult to acquire because of the Lamb wave dispersion characteristics. This work proposes a time-frequency energy density precipitation method to accurately extract the time-of-flight of narrowband Lamb wave detection signals in metal plates. In the proposed method, a discrete short-time Fourier transform is performed on the narrowband Lamb wave detection signals to obtain the corresponding discrete time-frequency energy density distribution. The energy density values at the center frequency for all discrete time points are then calculated by linear interpolation. Next, the time-domain energy density curve focused on that center frequency is precipitated by least squares fitting of the calculated energy density values. Finally, the peak times of the energy density curve obtained relative to the initial pulse signal are extracted as the time-of-flight for the narrowband Lamb wave detection signals. An experimental platform is established for time-of-flight extraction of narrowband Lamb wave detection signals, and sensitivity analysis of the proposed time-frequency energy density precipitation method is performed in terms of propagation distance, dispersion characteristics, center frequency, and plate thickness. For comparison, the widely used Hilbert-Huang transform method is also implemented for time-of-flight extraction. The results show that the time-frequency energy density precipitation method can accurately extract the time-of-flight with relative error of <1% and thus can act as a universal time-of-flight extraction method for narrowband Lamb wave detection signals.
Mielke, Steven L; Dinpajooh, Mohammadhasan; Siepmann, J Ilja; Truhlar, Donald G
2013-01-07
We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.
Qian, S.; Dunham, M.E.
1996-11-12
A system and method are disclosed for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos(2πφ(t)), and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ′(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series. The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ′(t) which best fits the multivalued function f(t). Integrating φ′(t) over t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template. 7 figs.
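The final two steps (integrate the fitted instantaneous frequency to get the phase, then correlate the received signal with the template) can be sketched as follows. To keep it short we simply assume the curve fit produced a linear instantaneous frequency (a chirp); the sample rate, chirp parameters, and amplitude window are all illustrative.

```python
import numpy as np

fs = 8000.0
t = np.arange(0, 0.1, 1 / fs)           # 0.1 s template support
f0, k = 500.0, 2000.0                   # assumed fit: phi'(t) = f0 + k*t
phi = f0 * t + 0.5 * k * t**2           # integrate phi'(t) to get phi(t)
A = np.hanning(len(t))                  # smooth amplitude A(t)
template = A * np.cos(2 * np.pi * phi)  # w(t) = A(t) cos(2*pi*phi(t))

def matched_filter(x, w):
    """Correlate the received signal against the time template."""
    return np.correlate(x, w, mode="valid")
```

The filter output peaks at the offset where the received signal best matches the template, which is the detection statistic the bank of filters thresholds.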
Energy Technology Data Exchange (ETDEWEB)
Beal, D.; McIlvaine , J.; Fonorow, K.; Martin, E.
2011-11-01
This document illustrates guidelines for the efficient installation of interior duct systems in new housing, including the fur-up chase method, the fur-down chase method, and interior ducts positioned in sealed attics or sealed crawl spaces. Interior ducts result from bringing the duct work inside a home's thermal and air barrier. Architects, designers, builders, and new home buyers should thoroughly investigate any opportunity for energy savings that is as easy to implement during construction as constructing interior duct work. In addition to enhanced energy efficiency, interior ductwork brings other important advantages, such as improved indoor air quality, increased system durability, and increased homeowner comfort. While the advantages of well-designed and constructed interior duct systems are recognized, the approach has not gained significant market acceptance. As communication of the intent of an interior duct system and collaboration on its construction are paramount to success, this guideline details the critical design, planning, construction, inspection, and verification steps that must be taken. Involved in this process are individuals from the design team; the sales/marketing team; and the mechanical, insulation, plumbing, electrical, framing, drywall, and solar contractors.
International Nuclear Information System (INIS)
Park, Yujin; Kazantzis, Nikolaos; Parlos, Alexander G.; Chong, Kil To
2013-01-01
Highlights: • Numerical solution of stiff differential equations using the matrix exponential method. • The approximation is based on a First-Order Hold assumption. • Various input examples applied to the point kinetics equations. • The method proves useful and effective. - Abstract: A system of nonlinear differential equations is derived to model the dynamics of neutron density and the delayed neutron precursors within a point kinetics equation modeling framework for a nuclear reactor. The point kinetics equations are mathematically characterized as stiff, occasionally nonlinear, ordinary differential equations, posing significant challenges when numerical solutions are sought and traditionally resulting in the need for smaller time step intervals within various computational schemes. In light of the above, the present paper proposes a new discretization method inspired by system-theoretic notions and technically based on a combination of the matrix exponential method (MEM) and the First-Order Hold (FOH) assumption. Under the proposed time discretization structure, the sampled-data representation of the nonlinear point kinetics system of equations is derived. The performance of the proposed time discretization procedure is evaluated using several case studies with sinusoidal reactivity profiles and multiple input examples (reactivity and neutron source function). It is shown that by applying the proposed method under a First-Order Hold for the neutron density and the precursor concentrations at each time step interval, the stiffness problem associated with the point kinetics equations can be adequately addressed and resolved. Finally, as evidenced by the aforementioned detailed simulation studies, the proposed method retains its validity and accuracy for a wide range of reactor operating conditions, including large sampling periods dictated by physical and/or technical limitations associated with the current state of sensor and
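The generic MEM/FOH construction for a linear system x' = Ax + Bu can be sketched as below: one matrix exponential of an augmented matrix yields an exact sampled-data model when the input is linear between samples, which is what lets the step size stay large despite stiffness. This is the textbook sampled-data form, not the paper's reactor-specific code, and the small expm is included only to keep the sketch dependency-free.

```python
import numpy as np

def expm(M, squarings=12, terms=20):
    """Minimal scaling-and-squaring matrix exponential (a library
    routine such as scipy.linalg.expm would normally be used)."""
    S = M / 2.0**squarings
    E, P = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        P = P @ S / k
        E = E + P
    for _ in range(squarings):
        E = E @ E
    return E

def foh_discretize(A, B, T):
    """Exact FOH discretization: x[k+1] = Ad x[k] + Bd0 u[k] + Bd1 u[k+1],
    with u(t) linear between samples."""
    n, m = B.shape
    C = np.zeros((n + 2 * m, n + 2 * m))
    C[:n, :n], C[:n, n:n + m] = A, B
    C[n:n + m, n + m:] = np.eye(m)
    F = expm(C * T)
    Ad = F[:n, :n]
    G1 = F[:n, n:n + m]          # integral of e^{A(T-s)} B ds
    G2 = F[:n, n + m:]           # integral of e^{A(T-s)} B s ds
    Bd1 = G2 / T
    return Ad, G1 - Bd1, Bd1
```

Because the discretization is exact for piecewise-linear inputs, a very stiff mode (e.g. a = -1000 with T = 0.01, far beyond the explicit-Euler stability limit) is handled without shrinking the step.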
Loustau, Marie-Therese; Verhoog, Roelof; Precigout, Claude
1996-09-24
A method of bonding a metal connection to an electrode including a core having a fiber or foam-type structure for an electrochemical cell, in which method at least one metal strip is pressed against one edge of the core and is welded thereto under compression, wherein, at least in line with the region in which said strip is welded to the core, which is referred to as the "main core", a retaining core of a type analogous to that of the main core is disposed prior to the welding.
Approximate k-NN delta test minimization method using genetic algorithms: Application to time series
Mateo, F; Gadea, Rafael; Sovilj, Dusan
2010-01-01
In many real world problems, the existence of irrelevant input variables (features) hinders the predictive quality of the models used to estimate the output variables. In particular, time series prediction often involves building large regressors of artificial variables that can contain irrelevant or misleading information. Many techniques have arisen to confront the problem of accurate variable selection, including both local and global search strategies. This paper presents a method based on genetic algorithms that intends to find a global optimum set of input variables that minimize the Delta Test criterion. The execution speed has been enhanced by substituting the exact nearest neighbor computation by its approximate version. The problems of scaling and projection of variables have been addressed. The developed method works in conjunction with MATLAB's Genetic Algorithm and Direct Search Toolbox. The goodness of the proposed methodology has been evaluated on several popular time series examples, and also ...
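For readers unfamiliar with the criterion: the Delta Test estimates the output noise variance from first-nearest-neighbour output differences, and variable selection seeks the input subset minimizing it. A minimal sketch using exact neighbours (via a k-d tree) on a synthetic problem with one irrelevant input, rather than the approximate k-NN and genetic search of the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def delta_test(X, y):
    """Delta Test: noise-variance estimate from first-nearest-neighbour differences."""
    _, idx = cKDTree(X).query(X, k=2)          # idx[:, 0] is each point itself
    return 0.5 * np.mean((y - y[idx[:, 1]]) ** 2)

# Synthetic regression problem: inputs 0 and 1 matter, input 2 is pure noise.
rng = np.random.default_rng(0)
N = 1000
X = rng.uniform(size=(N, 3))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.05, size=N)

d_all = delta_test(X, y)            # neighbours found in the full input space
d_rel = delta_test(X[:, :2], y)     # neighbours found using relevant inputs only
# d_rel < d_all: dropping the irrelevant variable tightens the estimate,
# which is exactly what Delta Test-driven selection exploits.
```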
Performing dynamic time history analyses by extension of the response spectrum method
International Nuclear Information System (INIS)
Hulbert, G.M.
1983-01-01
A method is presented to calculate the dynamic time history response of finite-element models using results from response spectrum analyses. The proposed modified time history method does not represent a new mathematical approach to dynamic analysis but suggests a more efficient ordering of the analytical equations and procedures. The modified time history method is considerably faster and less expensive to use than normal time history methods. This paper presents the theory and implementation of the modified time history approach along with comparisons of the modified and normal time history methods for a prototypic seismic piping design problem.
Dakos, Vasilis; Carpenter, Stephen R.; Brock, William A.; Ellison, Aaron M.; Guttal, Vishwesha; Ives, Anthony R.; Kéfi, Sonia; Livina, Valerie; Seekell, David A.; van Nes, Egbert H.; Scheffer, Marten
2012-01-01
Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called ‘early warning signals’, and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data. PMID:22815897
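As a concrete illustration of two of the generic indicators discussed (not code from the study): rolling-window variance and lag-1 autocorrelation both rise as a system slows down near a transition, a behaviour that an AR(1) process with slowly growing memory mimics:

```python
import numpy as np

def rolling_indicators(x, w):
    """Rolling-window variance and lag-1 autocorrelation, two classic early-warning signals."""
    var, ac1 = [], []
    for i in range(len(x) - w + 1):
        win = x[i:i + w] - x[i:i + w].mean()
        var.append(win.var())
        ac1.append(np.corrcoef(win[:-1], win[1:])[0, 1])
    return np.array(var), np.array(ac1)

# AR(1) noise whose memory slowly grows mimics critical slowing down.
rng = np.random.default_rng(1)
n = 4000
phi = np.linspace(0.1, 0.95, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

var, ac1 = rolling_indicators(x, w=400)
# Both indicators trend upward toward the end of the series, flagging the approach.
```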
Duchêne, Sebastián; Geoghegan, Jemma L; Holmes, Edward C; Ho, Simon Y W
2016-11-15
In rapidly evolving pathogens, including viruses and some bacteria, genetic change can accumulate over short time-frames. Accordingly, their sampling times can be used to calibrate molecular clocks, allowing estimation of evolutionary rates. Methods for estimating rates from time-structured data vary in how they treat phylogenetic uncertainty and rate variation among lineages. We compiled 81 virus data sets and estimated nucleotide substitution rates using root-to-tip regression, least-squares dating and Bayesian inference. Although estimates from these three methods were often congruent, this largely relied on the choice of clock model. In particular, relaxed-clock models tended to produce higher rate estimates than methods that assume constant rates. Discrepancies in rate estimates were also associated with high among-lineage rate variation, and phylogenetic and temporal clustering. These results provide insights into the factors that affect the reliability of rate estimates from time-structured sequence data, emphasizing the importance of clock-model testing. Supplementary data are available at Bioinformatics online.
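The simplest of the three approaches, root-to-tip regression, amounts to fitting a line to root-to-tip distance against sampling time: the slope estimates the substitution rate and the x-intercept the root age. A sketch on synthetic strict-clock data (all numbers illustrative):

```python
import numpy as np

# Synthetic time-structured data: sampling years and root-to-tip distances
# (substitutions/site) generated under a strict clock; all numbers illustrative.
rng = np.random.default_rng(2)
years = rng.uniform(2000.0, 2015.0, size=60)
true_rate = 1.5e-3                                # subs/site/year
dists = true_rate * (years - 1990.0) + rng.normal(scale=5e-4, size=60)

slope, intercept = np.polyfit(years, dists, 1)    # slope estimates the rate
root_date = -intercept / slope                    # x-intercept estimates the root age
```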
Time-Dependent Close-Coupling Methods for Electron-Atom/Molecule Scattering
International Nuclear Information System (INIS)
Colgan, James
2014-01-01
The time-dependent close-coupling (TDCC) method centers on an accurate representation of the interaction between two outgoing electrons moving in the presence of a Coulomb field. It has been extensively applied to many problems of electrons, photons, and ions scattering from light atomic targets. Theoretical Description: The TDCC method centers on a solution of the time-dependent Schrödinger equation for two interacting electrons. The advantages of a time-dependent approach are two-fold: the electron-electron interaction is treated essentially exactly (within numerical accuracy), and the difficult boundary condition encountered when two free electrons move in a Coulomb field (the classic three-body Coulomb problem) is avoided. The TDCC method has been applied to many fundamental atomic collision processes, including photon-, electron- and ion-impact ionization of light atoms. For application to electron-impact ionization of atomic systems, one decomposes the two-electron wavefunction in a partial wave expansion and represents the subsequent two-electron radial wavefunctions on a numerical lattice. The number of partial waves required to converge the ionization process depends on the energy of the incoming electron wavepacket and on the ionization threshold of the target atom or ion.
Comparative Evaluations of Four Specification Methods for Real-Time Systems
1989-12-01
December 1989. Comparative Evaluations of Four Specification Methods for Real-Time Systems. David P. Wood, William G. Wood. Specification and Design Methods... Abstract: A number of methods have been proposed in the last decade for the specification of system and software requirements... and software specification for real-time systems. Our process for the identification of methods that meet the above criteria is described in greater
Valls-Cantenys, Carme; Scheurer, Marco; Iglesias, Mònica; Sacher, Frank; Brauch, Heinz-Jürgen; Salvadó, Victoria
2016-09-01
A sensitive, multi-residue method using solid-phase extraction followed by liquid chromatography-tandem mass spectrometry (LC-MS/MS) was developed to determine a representative group of 35 analytes, including corrosion inhibitors, pesticides and pharmaceuticals such as analgesic and anti-inflammatory drugs, five iodinated contrast media, β-blockers and some of their metabolites and transformation products in water samples. Few other methods are capable of determining such a broad range of contrast media together with other analytes. We studied the parameters affecting the extraction of the target analytes, including sorbent selection and extraction conditions, their chromatographic separation (mobile phase composition and column) and detection conditions using two ionisation sources: electrospray ionisation (ESI) and atmospheric pressure chemical ionisation (APCI). In order to correct matrix effects, a total of 20 surrogate/internal standards were used. ESI was found to have better sensitivity than APCI. Recoveries ranging from 79 to 134 % for tap water and 66 to 144 % for surface water were obtained. Intra-day precision, calculated as relative standard deviation, was below 34 % for tap water and below 21 % for surface water, groundwater and effluent wastewater. Method quantification limits (MQL) were in the low ng L⁻¹ range, except for the contrast agents iomeprol, amidotrizoic acid and iohexol (22, 25.5 and 17.9 ng L⁻¹, respectively). Finally, the method was applied to the analysis of 56 real water samples as part of the validation procedure. All of the compounds were detected in at least some of the water samples analysed. Graphical Abstract Multi-residue method for the determination of micropollutants including pharmaceuticals, iodinated contrast media and pesticides in waters by LC-MS/MS.
Wu, Zhenkai; Ding, Jing; Zhao, Dahang; Zhao, Li; Li, Hai; Liu, Jianlin
2017-07-10
The multiplier method was introduced by Paley to calculate the timing for temporary hemiepiphysiodesis. However, this method has not been verified in terms of clinical outcome measures. We aimed to (1) predict the rate of angular correction per year (ACPY) at the various corresponding ages by means of the multiplier method and verify its reliability based on data from published studies and (2) screen out risk factors for deviation of prediction. A comprehensive search was performed in the following electronic databases: Cochrane, PubMed, and EMBASE™. A total of 22 studies met the inclusion criteria. If the actual value of ACPY from the collected data fell outside the range of the predicted value based on the multiplier method, it was considered a deviation of prediction (DOP). The associations of patient characteristics with DOP were assessed with the use of univariate logistic regression. Only one article was evaluated as moderate evidence; the remaining articles were evaluated as poor quality. The rate of DOP was 31.82%. In the detailed individual data of included studies, the rate of DOP was 55.44%. The multiplier method is not reliable in predicting the timing for temporary hemiepiphysiodesis, although it tends to be more reliable for younger patients with idiopathic genu coronal deformity.
Directory of Open Access Journals (Sweden)
Jun Bi
2018-04-01
Full Text Available Battery electric vehicles (BEVs) reduce energy consumption and air pollution as compared with conventional vehicles. However, the limited driving range and potential long charging time of BEVs create new problems. Accurate charging time prediction of BEVs helps drivers determine travel plans and alleviate their range anxiety during trips. This study proposed a combined model for charging time prediction based on regression and time-series methods according to the actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established by considering the charged amount for charging time prediction. Furthermore, a time-series method was adopted to calibrate the regression model, which significantly improved the fitting accuracy of the model. The parameters of the model were determined by using the actual data. Verification results confirmed the accuracy of the model and showed that the model errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
Ortleb, Sigrun; Seidel, Christian
2017-07-01
In this second symposium at the limits of experimental and numerical methods, recent research is presented on practically relevant problems. Presentations discuss experimental investigation as well as numerical methods with a strong focus on application. In addition, problems are identified which require a hybrid experimental-numerical approach. Topics include fast explicit diffusion applied to a geothermal energy storage tank, noise in experimental measurements of electrical quantities, thermal fluid structure interaction, tensegrity structures, experimental and numerical methods for Chladni figures, optimized construction of hydroelectric power stations, experimental and numerical limits in the investigation of rain-wind induced vibrations as well as the application of exponential integrators in a domain-based IMEX setting.
Sousa, Marcelo R; Jones, Jon P; Frind, Emil O; Rudolph, David L
2013-01-01
In contaminant travel from ground surface to groundwater receptors, the time taken in travelling through the unsaturated zone is known as the unsaturated zone time lag. Depending on the situation, this time lag may or may not be significant within the context of the overall problem. A method is presented for assessing the importance of the unsaturated zone in the travel time from source to receptor in terms of estimates of both the absolute and the relative advective times. A choice of different techniques for both unsaturated and saturated travel time estimation is provided. This method may be useful for practitioners to decide whether to incorporate unsaturated processes in conceptual and numerical models and can also be used to roughly estimate the total travel time between points near ground surface and a groundwater receptor. This method was applied to a field site located in a glacial aquifer system in Ontario, Canada. Advective travel times were estimated using techniques with different levels of sophistication. The application of the proposed method indicates that the time lag in the unsaturated zone is significant at this field site and should be taken into account. For this case, sophisticated and simplified techniques lead to similar assessments when the same knowledge of the hydraulic conductivity field is assumed. When there is significant uncertainty regarding the hydraulic conductivity, simplified calculations did not lead to a conclusive decision.
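A back-of-envelope version of such an assessment uses the advective travel time t = L·θ / (K·i) for each zone; the parameter values below are hypothetical, not those of the Ontario site:

```python
# Screening-level advective travel-time estimate t = L * theta / (K * i).
# All parameter values below are hypothetical, not those of the Ontario site.
def advective_time_days(length_m, theta, K_m_per_day, gradient):
    """Travel time = distance x volumetric water content / Darcy flux."""
    darcy_flux = K_m_per_day * gradient            # m/day
    return length_m * theta / darcy_flux

# Vadose zone: 5 m thick, unit gradient assumed (a common simplification).
t_unsat = advective_time_days(5.0, theta=0.15, K_m_per_day=0.05, gradient=1.0)
# Saturated path to the receptor: 200 m at a regional gradient of 0.01.
t_sat = advective_time_days(200.0, theta=0.30, K_m_per_day=5.0, gradient=0.01)

print(t_unsat, t_sat, t_unsat / (t_unsat + t_sat))  # 15.0 days, 1200.0 days, ~0.012
```

Comparing the two terms directly gives the relative importance of the unsaturated zone, which is the screening decision the paper formalizes.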
Inverse methods for estimating primary input signals from time-averaged isotope profiles
Passey, Benjamin H.; Cerling, Thure E.; Schuster, Gerard T.; Robinson, Todd F.; Roeder, Beverly L.; Krueger, Stephen K.
2005-08-01
Mammalian teeth are invaluable archives of ancient seasonality because they record along their growth axes an isotopic record of temporal change in environment, plant diet, and animal behavior. A major problem with the intra-tooth method is that intra-tooth isotope profiles can be extremely time-averaged compared to the actual pattern of isotopic variation experienced by the animal during tooth formation. This time-averaging is a result of the temporal and spatial characteristics of amelogenesis (tooth enamel formation), and also results from laboratory sampling. This paper develops and evaluates an inverse method for reconstructing original input signals from time-averaged intra-tooth isotope profiles. The method requires that the temporal and spatial patterns of amelogenesis are known for the specific tooth and uses a minimum length solution of the linear system Am = d, where d is the measured isotopic profile, A is a matrix describing temporal and spatial averaging during amelogenesis and sampling, and m is the input vector that is sought. Accuracy is dependent on several factors, including the total measurement error and the isotopic structure of the measured profile. The method is shown to accurately reconstruct known input signals for synthetic tooth enamel profiles and the known input signal for a rabbit that underwent controlled dietary changes. Application to carbon isotope profiles of modern hippopotamus canines reveals detailed dietary histories that are not apparent from the measured data alone. Inverse methods show promise as an effective means of dealing with the time-averaging problem in studies of intra-tooth isotopic variation.
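The linear-algebra core of the method can be sketched with a toy averaging kernel standing in for the amelogenesis matrix; `numpy.linalg.lstsq` returns exactly the minimum-length (minimum-norm) solution for the underdetermined system:

```python
import numpy as np

# Toy version of the inverse problem A m = d: each measured sample d_i is a
# moving average (time-averaging during enamel formation plus sampling) of the
# true input signal m. The uniform kernel is a stand-in for the real A.
n, w = 60, 9                        # signal length, averaging window (toy values)
A = np.zeros((n - w + 1, n))
for i in range(n - w + 1):
    A[i, i:i + w] = 1.0 / w         # uniform averaging kernel

t = np.linspace(0, 4 * np.pi, n)
m_true = np.sin(t)                  # seasonal input signal
d = A @ m_true                      # time-averaged "measured" profile

# Minimum-length least-squares solution, as in the paper's formulation
m_hat = np.linalg.lstsq(A, d, rcond=None)[0]
print(np.abs(A @ m_hat - d).max())  # forward model reproduced (tiny residual)
```

Note that the averaged profile d is damped relative to m_true, which is the time-averaging problem the inversion is designed to undo.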
Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods
International Nuclear Information System (INIS)
Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris
2016-01-01
Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates to both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case of a 2nd-order equivalent circuit model is studied which shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurement by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.
Geevers, Sjoerd; van der Vegt, J.J.W.
2017-01-01
We present sharp and sufficient bounds for the interior penalty term and time step size to ensure stability of the symmetric interior penalty discontinuous Galerkin (SIPDG) method combined with an explicit time-stepping scheme. These conditions hold for generic meshes, including unstructured
Haseli, Y.; Oijen, van J.A.; Goey, de L.P.H.
2012-01-01
The main idea of this paper is to establish a simple approach for prediction of the ignition time of a wood particle assuming that the thermo-physical properties remain constant and ignition takes place at a characteristic ignition temperature. Using a time and space integral method, explicit
Change Semantic Constrained Online Data Cleaning Method for Real-Time Observational Data Stream
Ding, Yulin; Lin, Hui; Li, Rongrong
2016-06-01
to large estimation error. In order to achieve the best generalization error, it is an important challenge for the data cleaning methodology to be able to characterize the behavior of data stream distributions and adaptively update a model to include new information and remove old information. However, the complicated data changing property invalidates traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and drives the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change semantics constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, this paper embeds a semantic description, which quantitatively depicts the change patterns in the data distribution, to self-adapt the filter parameter automatically. Real-time observational water level data streams of different precipitation scenarios are selected for testing. Experimental results prove that by means of this method, more accurate and reliable water level information can be made available, which is a prerequisite for prompt, scientifically sound flood assessment and decision-making.
Method and system for real-time analysis of biosensor data
Greenbaum, Elias; Rodriguez, Jr., Miguel
2014-08-19
A method of biosensor-based detection of toxins includes the steps of providing a fluid to be analyzed having a plurality of photosynthetic organisms therein, wherein chemical, biological or radiological agents alter a nominal photosynthetic activity of the photosynthetic organisms. At a first time a measured photosynthetic activity curve is obtained from the photosynthetic organisms. The measured curve is automatically compared to a reference photosynthetic activity curve to determine differences therebetween. The presence of the chemical, biological or radiological agents, or precursors thereof, are then identified if present in the fluid using the differences.
Directory of Open Access Journals (Sweden)
Mehrdad Gholami
2015-07-01
Full Text Available Introduction In radiography, dose and image quality depend on the radiographic parameters. Problems arise from incorrect use of radiography equipment and from exposing patients to much more radiation than required. Therefore, the aim of this study was to implement a quality-control program to detect changes in exposure parameters, which may affect diagnosis or patient radiation dose. Materials and Methods This cross-sectional study was performed on seven stationary X-ray units in six hospitals of Lorestan province. The measurements were performed using a factory-calibrated Barracuda dosimeter (model: SE-43137). Results According to the results, the highest output was obtained in A Hospital (M1 device), ranging from 107×10⁻³ to 147×10⁻³ mGy/mAs. The evaluation of tube voltage accuracy showed a deviation from the standard value, which ranged between 0.81% (M1 device) and 17.94% (M2 device) at A Hospital. The deviation ranges at other hospitals were as follows: 0.30-27.52% in B Hospital (the highest in this study), 8.11-20.34% in C Hospital, 1.68-2.58% in D Hospital, 0.90-2.42% in E Hospital and 0.10-1.63% in F Hospital. The evaluation of exposure time accuracy showed that E, C, D and A (M2 device) hospitals complied with the requirements (allowing a deviation of ±5%), whereas A (M1 device), F and B hospitals exceeded the permitted limit. Conclusion The results of this study showed that old X-ray equipment with poor or no maintenance is probably the main source of reduced radiographic image quality and increased patient radiation dose.
An analytical nodal method for time-dependent one-dimensional discrete ordinates problems
International Nuclear Information System (INIS)
Barros, R.C. de
1992-01-01
In recent years, relatively little work has been done in developing time-dependent discrete ordinates (S_N) computer codes. Therefore, the topic of time integration methods certainly deserves further attention. In this paper, we describe a new coarse-mesh method for time-dependent monoenergetic S_N transport problems in slab geometry. This numerical method preserves the analytic solution of the transverse-integrated S_N nodal equations by constants, so we call our method the analytical constant nodal (ACN) method. For time-independent S_N problems in finite slab geometry and for time-dependent infinite-medium S_N problems, the ACN method generates numerical solutions that are completely free of truncation errors. Based on this positive feature, we expect the ACN method to be more accurate than conventional numerical methods for S_N transport calculations on coarse space-time grids.
Energy Technology Data Exchange (ETDEWEB)
Cobb, J.W.
1995-02-01
There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
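As a reminder of what a third-order Runge-Kutta scheme looks like (Kutta's classical method, shown here generically rather than any of the paper's five derived examples), with a quick check of its third-order convergence:

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h * (k1 + 4 * k2 + k3) / 6

def integrate(f, y0, t_end, h):
    """Integrate y' = f(t, y) from t = 0 to t_end with fixed step h."""
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y = rk3_step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y                       # simple decay test problem
err = lambda h: abs(integrate(f, 1.0, 1.0, h) - np.exp(-1.0))
print(err(0.1) / err(0.05))               # ~8: halving h cuts the error ~2^3
```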
International Nuclear Information System (INIS)
Paraschiv, M.; Paraschiv, A.
1991-01-01
A method to rewrite Fick's second law for a region with a moving boundary, when the law of motion of this boundary in time is known, has been proposed. This method was applied to Booth's sphere model for radioactive and stable fission product diffusion from the oxide fuel grain in order to take the grain growth into account. The solution of this new equation was presented in the mathematical formulation for power histories from the ANS 5.4 model for the stable species. It is very simple to apply and very accurate. The results obtained with this solution for constant and transient temperatures show that the fission gas release (FGR) at the grain boundary is strongly dependent on the kinetics of grain growth. The utilization of two semiempirical grain growth laws from published information shows that the fuel microstructural properties need to be explicitly considered in the fission gas release for every manufacturer of fuel. (orig.)
International Nuclear Information System (INIS)
Ott, S.H.
1992-01-01
This dissertation uses the real options framework to study the valuation and optimal investment policies for R and D projects. The models developed integrate and extend the literature by taking into account the unique characteristics of such projects including uncertain investment in R and D, time-to-build, and multiple investment opportunities. The models were developed to examine the optimal R and D investment policy for the Lunar Helium-3 fusion project but have general applicability. Models are developed which treat R and D investment as an information gathering process where the R and D investment remaining changes as investment is expended. The value of the project increases as the variance of required investment increases. An extension of this model combines a stochastic benefit with stochastic investment. Both the value of the R and D project and the region prescribing continued investment increased. The policy implications are significant: When uncertainty of R and D investment is ignored, the value of the project is underestimated and a tendency toward underinvestment in R and D will result; the existence of uncertainty in R and D investment will cause R and D projects to experience larger declines in value before discontinuation of investment. The model combining stochastic investment with the stochastic benefit is applied to the Lunar Helium-3 fusion project. Investment in fusion should continue at the maximum level of $1 billion annually given current levels of costs of alternative fuels and the perceived uncertainty of R and D investment in the project. A model is developed to examine the valuation and optimal split of funding between R and D projects when there are two competing new technologies. Without interaction between research expenditures and benefits across technologies, the optimal investment strategy is to invest in one or the other technology or neither. The multiple technology model is applied to analyze competing R and D projects, namely
Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves
Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua
2017-09-01
In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique in bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
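The RCM preprocessing step mentioned above is available off the shelf in SciPy; the sketch below shows the bandwidth reduction on a chain graph whose nodes were labeled in random order (a stand-in for an unstructured mesh numbering):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Maximum |i - j| over the nonzero entries of a sparse matrix."""
    coo = A.tocoo()
    return int(np.abs(coo.row - coo.col).max())

# A chain (1D grid) graph whose nodes were labeled in random order:
# structurally banded, but the bandwidth is hidden by the labeling.
n = 50
rng = np.random.default_rng(3)
labels = rng.permutation(n)
A = np.zeros((n, n))
for i in range(n - 1):
    a, b = labels[i], labels[i + 1]
    A[a, b] = A[b, a] = 1.0
A = csr_matrix(A)

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm][:, perm]                    # apply the RCM permutation
print(bandwidth(A), bandwidth(B))       # bandwidth drops sharply after reordering
```

A narrow band keeps the fill-in of the one-time LU factorization small, which is why the reordering pays off in the scheme described above.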
Covariate measurement error correction methods in mediation analysis with failure time data.
Zhao, Shanshan; Prentice, Ross L
2014-12-01
Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk.
A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.
Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang
2015-11-13
Vehicle positioning technology has drawn more and more attention in vehicular wireless networks to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on the IEEE 802.11p standard, which includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each vehicle or road-side infrastructure, the position of neighboring vehicles can be estimated correctly. Simulation results were presented in the International Telecommunications Union (ITU) vehicular multipath channel, which show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks
Directory of Open Access Journals (Sweden)
Xuerong Cui
2015-11-01
Full Text Available Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a means to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective option for vehicle positioning or ranging. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on this standard. The method includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the ranges to neighboring vehicles or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results are presented for the International Telecommunications Union (ITU) vehicular multipath channel and show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
A standard curve based method for relative real time PCR data processing
Directory of Open Access Journals (Sweden)
Krause Andreas
2005-03-01
Full Text Available Abstract Background Currently, real-time PCR is the most precise method by which to measure gene expression. The method generates a large amount of raw numerical data, and processing may notably influence the final results. Data processing is based either on standard curves or on PCR efficiency assessment. At the moment, the PCR efficiency approach is preferred for relative PCR whilst the standard curve is often used for absolute PCR. However, there are no barriers to employing standard curves for relative PCR. This article provides an implementation of the standard curve method and discusses its advantages and limitations in relative real-time PCR. Results We designed a procedure for data processing in relative real-time PCR. The procedure completely avoids PCR efficiency assessment, minimizes operator involvement and provides a statistical assessment of intra-assay variation. The procedure includes the following steps. (I) Noise is filtered from raw fluorescence readings by smoothing, baseline subtraction and amplitude normalization. (II) The optimal threshold is selected automatically from regression parameters of the standard curve. (III) Crossing points (CPs) are derived directly from the coordinates of points where the threshold line crosses the fluorescence plots obtained after noise filtering. (IV) The means and their variances are calculated for CPs in PCR replicates. (V) The final results are derived from the CPs' means. The CPs' variances are traced to the results by the law of error propagation. A detailed description and analysis of this data processing is provided. The limitations associated with the use of parametric statistical methods and amplitude normalization are specifically analyzed and found fit for routine laboratory practice. Different options are discussed for aggregating data obtained from multiple reference genes. Conclusion A standard curve based procedure for PCR data processing has been compiled and validated. It illustrates that
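The core of steps (II)-(V), reading crossing points off the fluorescence plots and converting them through a standard-curve regression, can be sketched as follows. The amplification model, efficiency, threshold and dilution series here are synthetic stand-ins, and the noise-filtering and error-propagation steps of the published procedure are omitted.

```python
import numpy as np

cycles = np.arange(40)

def amplify(n0, eff=1.9, plateau=1.0):
    """Toy saturating amplification curve for n0 starting copies (not real data)."""
    raw = n0 * eff ** cycles
    return plateau * raw / (raw + 1e9)

def crossing_point(fluor, threshold):
    """Cycle at which the fluorescence plot crosses the threshold (linear interp.)."""
    i = int(np.argmax(fluor > threshold))
    f0, f1 = fluor[i - 1], fluor[i]
    return (i - 1) + (threshold - f0) / (f1 - f0)

# Standard curve: CPs of known ten-fold dilutions regressed on log-concentration
dilutions = np.array([1e6, 1e5, 1e4, 1e3])
cps = [crossing_point(amplify(n), 0.1) for n in dilutions]
slope, intercept = np.polyfit(np.log10(dilutions), cps, 1)

# Unknown sample: read its log-concentration off the standard curve
cp_unknown = crossing_point(amplify(5e4), 0.1)
log_n0 = (cp_unknown - intercept) / slope
```

Because sample and standards share the same amplification behaviour, no explicit PCR efficiency estimate is needed, which is the point the abstract makes.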
International Nuclear Information System (INIS)
Yamada, Kazuo; Hoshino, Seiichi; Hirao, Hiroshi; Yamashita, Hiroki
2008-01-01
The X-ray diffraction (XRD)/Rietveld method was applied to measure the phase composition of cement. Quantitative analysis of the progress of hydration was accomplished with a maximum error of about 2-3%, even for samples including amorphous materials such as blast furnace slag, fly ash, silica fume and C-S-H. The influence of limestone fine powder admixture on compressive strength was studied through hydration analysis by the Rietveld method. Two stages were observed in the strength development mechanism of cement: promotion of C3S hydration in the early stage, and the filling of cavities by carbonate hydrate over the longer term. The use of various admixture materials is beneficial for building a resource-recycling society and for improving the durability of concrete. (author)
DEFF Research Database (Denmark)
Bysted, Anette; Cold, S; Hølmer, Gunhild Kofoed
1999-01-01
Considering the need for a quick direct method for measurement of the fatty acid composition, including trans isomers, of human adipose tissue, we have developed a procedure using gas-liquid chromatography (GLC) alone, which is thus suitable for validation of fatty acid status in epidemiological studies...... for 25 min, and finally raised at 25 degrees C/min to 225 degrees C. The trans and cis isomers of 18:1 were well separated from each other, as shown by silver-ion thin-layer chromatography. Verification by standards showed that the trans 18:1 isomers with a double bond in position 12 or lower were...
A TWO-MOMENT RADIATION HYDRODYNAMICS MODULE IN ATHENA USING A TIME-EXPLICIT GODUNOV METHOD
Energy Technology Data Exchange (ETDEWEB)
Skinner, M. Aaron; Ostriker, Eve C., E-mail: askinner@astro.umd.edu, E-mail: eco@astro.princeton.edu [Department of Astronomy, University of Maryland, College Park, MD 20742-2421 (United States)
2013-06-01
We describe a module for the Athena code that solves the gray equations of radiation hydrodynamics (RHD), based on the first two moments of the radiative transfer equation. We use a combination of explicit Godunov methods to advance the gas and radiation variables including the non-stiff source terms, and a local implicit method to integrate the stiff source terms. We adopt the M1 closure relation and include all leading source terms to O(βτ). We employ the reduced speed of light approximation (RSLA) with subcycling of the radiation variables in order to reduce computational costs. Our code is dimensionally unsplit in one, two, and three space dimensions and is parallelized using MPI. The streaming and diffusion limits are well described by the M1 closure model, and our implementation shows excellent behavior for a problem with a concentrated radiation source containing both regimes simultaneously. Our operator-split method is ideally suited for problems with a slowly varying radiation field and dynamical gas flows, in which the effect of the RSLA is minimal. We present an analysis of the dispersion relation of RHD linear waves highlighting the conditions of applicability for the RSLA. To demonstrate the accuracy of our method, we utilize a suite of radiation and RHD tests covering a broad range of regimes, including RHD waves, shocks, and equilibria, which show second-order convergence in most cases. As an application, we investigate radiation-driven ejection of a dusty, optically thick shell in the ISM. Finally, we compare the timing of our method with other well-known iterative schemes for the RHD equations. Our code implementation, Hyperion, is suitable for a wide variety of astrophysical applications and will be made freely available on the Web.
Rico, H.; Hauksson, E.; Thomas, E.; Friberg, P.; Given, D.
2002-12-01
The California Integrated Seismic Network (CISN) Display is part of a Web-enabled earthquake notification system alerting users in near real-time of seismicity, and also of valuable geophysical information following a large earthquake. It will replace the Caltech/USGS Broadcast of Earthquakes (CUBE) and Rapid Earthquake Data Integration (REDI) Display as the principal means of delivering graphical earthquake information to users at emergency operations centers and other organizations. Features distinguishing the CISN Display from other GUI tools are a stateful client/server relationship, a scalable message format supporting automated hyperlink creation, and a configurable platform-independent client with a GIS mapping tool supporting the decision-making activities of critical users. The CISN Display is the front end of a client/server architecture known as the QuakeWatch system. It comprises the CISN Display (and other potential clients), message queues, a server, server "feeder" modules, and messaging middleware, schema and generators. It is written in Java, making it platform-independent and offering the latest in Internet technologies. QuakeWatch's object-oriented design allows components to be easily upgraded through a well-defined set of application programming interfaces (APIs). Central to the CISN Display's role as a gateway to other earthquake products is its comprehensive XML schema. The message model starts with the CUBE message format, but extends it by provisioning additional attributes for currently available products, and for those yet to be considered. The supporting metadata in the XML message provide the data necessary for the client to create a hyperlink and associate it with a unique event ID. Earthquake products deliverable to the CISN Display are ShakeMap, Ground Displacement, Focal Mechanisms, Rapid Notifications, OES Reports, and Earthquake Commentaries. Leveraging the power of the XML format, the CISN Display provides prompt access to
Directory of Open Access Journals (Sweden)
S. Stoll
2011-01-01
Full Text Available Climate-change-related modifications in the spatio-temporal distribution of precipitation and evapotranspiration will have an impact on groundwater resources. This study presents a modelling approach exploiting the advantages of integrated hydrological modelling and a broad climate model basis. We applied the integrated MIKE SHE model to a small perialpine catchment in northern Switzerland near Zurich. To examine the impact of climate change we forced the hydrological model with data from eight GCM-RCM combinations, whose systematic biases are corrected by three different statistical downscaling methods, not only for precipitation but also for the variables that govern potential evapotranspiration. The downscaling methods are evaluated in a split sample test and the sensitivity of the hydrological fluxes to the downscaling procedure is analyzed. The RCMs resulted in very different projections of potential evapotranspiration and, especially, precipitation. All three downscaling methods reduced the differences between the predictions of the RCMs, and all corrected predictions showed no future groundwater stress, which can be related to an expected increase in precipitation during winter. The timing of precipitation, and thus of recharge, turned out to be especially important for the future development of the groundwater levels. However, the simulation experiments revealed the weaknesses of the downscaling methods, which directly influence the predicted hydrological fluxes, and thus also the predicted groundwater levels. The downscaling process is identified as an important source of uncertainty in hydrological impact studies, which has to be accounted for. Therefore it is strongly recommended to test different downscaling methods by using verification data before applying them to climate model data.
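One widely used family of statistical bias corrections in this setting is empirical quantile mapping: each climate-model value is mapped to the observed value at the same quantile of the control-period distributions. The sketch below, with invented gamma-distributed "precipitation" and an invented bias, shows only the mechanics, not the three specific methods evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 2.0, 3000)                 # "observed" daily precipitation
rcm = 1.5 * rng.gamma(2.0, 2.0, 3000) + 1.0     # biased RCM output (control run)

# Empirical quantile mapping: pair up the quantiles of the two
# control-period distributions to form a correction function
q = np.linspace(0.01, 0.99, 99)
rcm_q, obs_q = np.quantile(rcm, q), np.quantile(obs, q)

# Applying the map to the control run should remove most of the mean bias
ctrl_corr = np.interp(rcm, rcm_q, obs_q)

# The same map is then applied to a (biased) future simulation
future = 1.5 * rng.gamma(2.2, 2.0, 3000) + 1.0
corrected = np.interp(future, rcm_q, obs_q)
```

A split-sample test, as in the study, would fit the map on one period and verify it on another; values beyond the fitted quantile range are clipped here, which is one of the known weaknesses of the simple empirical variant.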
Singh, Samiksha; Upadhyaya, Sanjeev; Deshmukh, Pradeep; Dongre, Amol; Dwivedi, Neha; Dey, Deepak; Kumar, Vijay
2018-04-02
In India, amidst the increasing number of health programmes, there are concerns about the performance of frontline health workers (FLHWs). We assessed the time utilisation of, and the factors affecting the work of, frontline health workers from South India. This is a mixed-methods study using time-and-motion (TAM) direct observations and qualitative enquiry among frontline/community health workers. These included 43 female and 6 male multipurpose health workers (namely, auxiliary nurse midwives (ANMs) and male-MPHWs), 12 nutrition and health workers (Anganwadi workers, AWWs) and 53 incentive-based community health workers (accredited social health activists, ASHAs). We conducted the study in two phases. In the formative phase, we conducted an in-depth inductive investigation to develop observation checklists and qualitative tools. The main study involved a deductive approach for TAM observations. This enabled us to observe a larger sample to capture variations across non-tribal and tribal regions and different health cadres. For the main study, we developed a GPRS-enabled Android-based application to precisely record time, multi-tasking and field movement. We conducted non-participatory direct observations (home to home) for 6 consecutive days for each participant. We conducted in-depth interviews with all the participants and with 33 of their supervisors and relevant officials. We conducted six focus group discussions (FGDs) with ASHAs and one FGD with ANMs to validate preliminary findings. We established a mechanism for quality assurance of data collection and analysis. We analysed the data separately for each cadre, stratified by non-tribal and tribal regions. On any working day, the ANMs spent a median of 7:04 h, male-MPHWs a median of 5:44 h and AWWs a median of 6:50 h on the job. The time spent on the job was less among the FLHWs from tribal areas as compared to those from non-tribal areas. ANMs and AWWs prioritised maternal and child health, while male-MPHWs were
Directory of Open Access Journals (Sweden)
Koivistoinen Teemu
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD)''. In this new method, we use statistical features of the time series as well as of the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
Directory of Open Access Journals (Sweden)
Alpo Värri
2007-01-01
Full Text Available As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD)''. In this new method, we use statistical features of the time series as well as of the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of the time series as well as of the frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
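The TFM-SVD idea, pack low-order statistical moments of the time series and of its spectrum into a fixed-structure matrix and take that matrix's singular values, can be sketched as follows. The exact matrix layout of the published method is not reproduced here, so treat the 2x4 arrangement below as an assumption.

```python
import numpy as np

def tfm_svd(signal):
    """Time-frequency moments SVD sketch: one row of normalized moments for
    the time series, one for its magnitude spectrum; SVs of the 2x4 matrix
    are the features. (The original paper's matrix layout is an assumption.)"""
    spec = np.abs(np.fft.rfft(signal))
    feats = []
    for series in (signal, spec):
        z = (series - series.mean()) / (series.std() + 1e-12)
        feats.append([z.mean(), z.std(), np.mean(z**3), np.mean(z**4)])
    return np.linalg.svd(np.array(feats), compute_uv=False)

t = np.linspace(0, 1, 512, endpoint=False)
clean = np.sin(2 * np.pi * 7 * t)
noisy = clean + 0.05 * np.random.default_rng(2).normal(size=512)

sv_clean, sv_noisy = tfm_svd(clean), tfm_svd(noisy)
```

A full signal is thereby summarized by a short, fixed-length SV vector, which is what makes the transform usable as a preprocessing stage before a classifier, and small additive noise barely moves the features.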
Combined Effects of Numerical Method Type and Time Step on Water Stressed Actual Crop ET
Directory of Open Access Journals (Sweden)
B. Ghahraman
2016-02-01
Full Text Available Introduction: Actual crop evapotranspiration (ETa) is important in hydrologic modeling and irrigation water management issues. Actual ET depends on the estimation of a water stress index and the average soil water in the crop root zone, and so depends on the chosen numerical method and the adopted time step. During periods with no rainfall and/or irrigation, actual ET can be computed analytically or by using different numerical methods. Overall, many factors influence actual evapotranspiration: crop potential evapotranspiration, available root zone water content, time step, crop sensitivity, and soil. In this paper, different numerical methods are compared for different soil textures and crop sensitivities. Materials and Methods: During a specific time step with no rainfall or irrigation, the change in soil water content equals evapotranspiration, ET. In this approach, however, deep percolation is generally ignored due to a deep water table and negligible unsaturated hydraulic conductivity below the rooting depth. This differential equation may be solved analytically or numerically using different algorithms. We adopted four numerical methods, namely the explicit Euler, implicit Euler, modified Euler (midpoint), and third-order Heun methods, to approximate the differential equation. Three general soil types (sand, silt, and clay) and three crop types (sensitive, moderate, and resistant) under the Nishaboor plain were used. The standard soil fraction depletion (corresponding to ETc = 5 mm d-1), pstd, below which the crop faces water stress, is adopted for crop sensitivity. Three values of pstd were considered in this study to cover the common crops in the area, including winter wheat, barley, cotton, alfalfa, sugar beet, and saffron, among others. Based on this parameter, three classes of crop sensitivity were considered: sensitive crops with pstd=0.2, moderate crops with pstd=0.5, and resistant crops with pstd=0
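For the no-rain interval, a linear-stress sketch of the depletion equation makes the scheme comparison concrete: with dS/dt = -kS the analytic solution is exponential, so the truncation error of each method is directly visible. The parameter values below are invented, and the paper's actual model couples the stress index to pstd and soil texture.

```python
import numpy as np

# Root-zone depletion during a no-rain interval: once available water S drops
# below the stress threshold, actual ET is taken proportional to S (a sketch):
#   dS/dt = -(ETc / S_thr) * S
etc, s_thr, s0 = 5.0, 40.0, 30.0     # mm/day, mm, mm (assumed values)
k = etc / s_thr

def f(s):
    return -k * s

def euler(s, dt):                    # first-order explicit Euler step
    return s + dt * f(s)

def heun(s, dt):                     # second-order predictor-corrector step
    pred = s + dt * f(s)
    return s + dt * 0.5 * (f(s) + f(pred))

dt, days = 1.0, 10
se = sh = s0
for _ in range(int(days / dt)):
    se, sh = euler(se, dt), heun(sh, dt)

exact = s0 * np.exp(-k * days)       # analytic solution for comparison
```

With a daily time step the higher-order scheme tracks the analytic depletion far more closely, which mirrors the paper's motivation for comparing scheme type and time step jointly.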
Directory of Open Access Journals (Sweden)
Peter Celec
2004-01-01
Full Text Available Cyclic variations of variables are ubiquitous in biomedical science. A number of methods for detecting rhythms have been developed, but they are often difficult to interpret. A simple procedure for detecting cyclic variations in biological time series and quantifying their probability is presented here. Analysis of rhythmic variance (ANORVA) is based on the premise that the variance in groups of data from rhythmic variables is low when a time distance of one period exists between the data entries. A detailed stepwise calculation is presented, including data entry and preparation, variance calculation, and difference testing. An example of the application of the procedure is provided, and a real dataset of the number of papers published per day in January 2003 using selected keywords is compared to randomized datasets. Randomized datasets show no cyclic variations. The number of papers published daily, however, shows a clear and significant (p < 0.03) circaseptan (period of 7 days) rhythm, probably of social origin.
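The premise, that within-group variance is low when entries one period apart share a group, can be checked in a few lines. The synthetic weekly series and the permutation baseline below are illustrative only and do not reproduce the paper's stepwise procedure.

```python
import numpy as np

def anorva_stat(series, period):
    """Mean within-group variance when entries a whole period apart share a
    group; low values indicate a rhythm of that period."""
    groups = [series[i::period] for i in range(period)]
    return float(np.mean([g.var() for g in groups]))

rng = np.random.default_rng(3)
days = np.arange(84)
# Synthetic circaseptan series: 7-day cycle plus noise (invented amplitudes)
rhythmic = 10 + 3 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 0.5, 84)

stat7 = anorva_stat(rhythmic, 7)
# Randomized baseline: shuffling destroys the rhythm, as in the abstract
stat_rand = np.mean([anorva_stat(rng.permutation(rhythmic), 7)
                     for _ in range(200)])
```

Comparing the statistic against its permutation distribution is one simple way to attach a probability to the detected rhythm.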
The inverse method parametric verification of real-time embedded systems
André , Etienne
2013-01-01
This book introduces state-of-the-art verification techniques for real-time embedded systems, based on the inverse method for parametric timed automata. It reviews popular formalisms for the specification and verification of timed concurrent systems and, in particular, timed automata as well as several extensions such as timed automata equipped with stopwatches, linear hybrid automata and affine hybrid automata.The inverse method is introduced, and its benefits for guaranteeing robustness in real-time systems are shown. Then, it is shown how an iteration of the inverse method can solv
Energy Technology Data Exchange (ETDEWEB)
Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp [Department of Chemistry, Graduate School of Science, Nagoya University, Chikusa, Nagoya 464-8602 (Japan); Institute of Transformative Bio-Molecules (WPI-ITbM), Nagoya University, Chikusa, Nagoya 464-8602 (Japan)
2016-09-07
Theoretical approaches to designing bright bio-imaging molecules are among the most rapidly progressing ones. However, because of demands on system size and computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including the spatial electron density distribution, combined with time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states, in gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.
METHODS FOR CLUSTERING TIME SERIES DATA ACQUIRED FROM MOBILE HEALTH APPS.
Tignor, Nicole; Wang, Pei; Genes, Nicholas; Rogers, Linda; Hershman, Steven G; Scott, Erick R; Zweig, Micol; Yvonne Chan, Yu-Feng; Schadt, Eric E
2017-01-01
In our recent Asthma Mobile Health Study (AMHS), thousands of asthma patients across the country contributed medical data through the iPhone Asthma Health App on a daily basis for an extended period of time. The collected data included daily self-reported asthma symptoms, symptom triggers, and real time geographic location information. The AMHS is just one of many studies occurring in the context of now many thousands of mobile health apps aimed at improving wellness and better managing chronic disease conditions, leveraging the passive and active collection of data from mobile, handheld smart devices. The ability to identify patient groups or patterns of symptoms that might predict adverse outcomes such as asthma exacerbations or hospitalizations from these types of large, prospectively collected data sets, would be of significant general interest. However, conventional clustering methods cannot be applied to these types of longitudinally collected data, especially survey data actively collected from app users, given heterogeneous patterns of missing values due to: 1) varying survey response rates among different users, 2) varying survey response rates over time of each user, and 3) non-overlapping periods of enrollment among different users. To handle such complicated missing data structure, we proposed a probability imputation model to infer missing data. We also employed a consensus clustering strategy in tandem with the multiple imputation procedure. Through simulation studies under a range of scenarios reflecting real data conditions, we identified favorable performance of the proposed method over other strategies that impute the missing value through low-rank matrix completion. When applying the proposed new method to study asthma triggers and symptoms collected as part of the AMHS, we identified several patient groups with distinct phenotype patterns. Further validation of the methods described in this paper might be used to identify clinically important
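The multiple-imputation-plus-consensus idea can be sketched with numpy: impute the missing entries several times, cluster each imputed dataset, and accumulate how often each pair of subjects lands in the same cluster. The random-sampling imputation and the minimal 2-means below are stand-ins, not the paper's probability imputation model or its clustering strategy; the toy data are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy symptom-survey matrix: 2 latent groups of 10 subjects, ~30% missing
base = np.vstack([rng.normal(0, 0.5, (10, 6)), rng.normal(3, 0.5, (10, 6))])
data = np.where(rng.random(base.shape) < 0.3, np.nan, base)

def kmeans2(x, iters=20):
    """Minimal 2-means on rows (stand-in for the paper's clustering step)."""
    c = x[rng.choice(len(x), 2, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((x[:, None] - c[None]) ** 2).sum(-1), axis=1)
        for k in (0, 1):
            if np.any(lab == k):
                c[k] = x[lab == k].mean(0)
    return lab

# Multiple imputation + consensus: draw imputations from per-column
# statistics, cluster, and accumulate the co-clustering frequencies
consensus = np.zeros((20, 20))
col_mu, col_sd = np.nanmean(data, 0), np.nanstd(data, 0)
for _ in range(30):
    imp = np.where(np.isnan(data),
                   rng.normal(col_mu, col_sd, data.shape), data)
    lab = kmeans2(imp)
    consensus += lab[:, None] == lab[None, :]
consensus /= 30
```

Pairs that co-cluster across most imputations form the stable patient groups; comparing within-group and between-group consensus gives a quick check that the structure survives the missingness.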
Comparative Analysis of Neural Network Training Methods in Real-time Radiotherapy
Directory of Open Access Journals (Sweden)
Nouri S.
2017-03-01
Full Text Available Background: The motion of the body and tumor in some regions, such as the chest, during radiotherapy treatment is one of the major concerns in protecting normal tissues against high doses. By using the real-time radiotherapy technique, it is possible to increase the accuracy of the dose delivered to the tumor region by tracing markers on the patient's body. Objective: This study evaluates the accuracy of some artificial intelligence methods, including a neural network and its combinations with a genetic algorithm and particle swarm optimization (PSO), in estimating tumor positions in real-time radiotherapy. Method: One hundred recorded signals of three external markers were used as input data. The signals from the 3 markers through 10 breathing cycles of a patient treated with a CyberKnife for a lung tumor were used as data input. Then, the neural network method and its combinations with genetic or PSO algorithms were applied to determine the tumor locations using MATLAB software. Results: The accuracies obtained were 0.8%, 12% and 14% for the neural network, genetic and particle swarm optimization algorithms, respectively. Conclusion: The internal target volume (ITV) should be determined based on the applied neural network algorithm in the training steps.
Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X
2017-01-01
Real-time path planning for an autonomous underwater vehicle (AUV) is a very difficult and challenging task. Bioinspired neural networks (BINNs) have been used to deal with this problem for their distinct advantages: no learning process is needed and realization is easy. However, there are some shortcomings when a BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including heavy computation when the environment is very large and repeated paths when the size of obstacles exceeds the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors. The BINN then moves with the AUV, and the computation can be reduced. A virtual target is proposed in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of the neural activities. Finally, some experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for an AUV efficiently.
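The shunting-equation activity landscape behind BINN-style planners can be sketched on a small static 2-D grid: the target excites the landscape, obstacles inhibit it, and the vehicle climbs the activity gradient. The paper's actual contributions, the dynamic window that moves with the AUV, the virtual target, and the target attractor, are not reproduced here; the constants A, B, D, the grid, and the inputs are all invented.

```python
import numpy as np

A, B, D = 10.0, 1.0, 1.0               # decay and upper/lower activity bounds
n = 10
ext = np.zeros((n, n))
ext[8, 8] = 100.0                      # target: strong excitatory input
ext[4, 3:7] = -100.0                   # wall of obstacles: inhibitory input

x = np.zeros((n, n))
dt = 0.01
for _ in range(600):
    # excitatory lateral input: sum of positive activities of the 8 neighbors
    pad = np.pad(np.maximum(x, 0.0), 1)
    lateral = sum(pad[1 + di:n + 1 + di, 1 + dj:n + 1 + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0))
    e = np.maximum(ext, 0.0) + lateral
    i = np.maximum(-ext, 0.0)
    x = np.clip(x + dt * (-A * x + (B - x) * e - (D + x) * i), -D, B)

# Steepest-ascent path from (0, 0): always step to the most active neighbor
pos, path = (0, 0), [(0, 0)]
for _ in range(99):
    r, c = pos
    nbrs = [(r + di, c + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if 0 <= r + di < n and 0 <= c + dj < n and (di, dj) != (0, 0)]
    pos = max(nbrs, key=lambda rc: x[rc])
    path.append(pos)
    if pos == (8, 8):
        break
```

Because obstacle cells are driven strongly negative while free cells stay non-negative, the climbed path detours around the wall without any explicit search, which is the property the dynamic-window variant preserves while keeping the network small.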
Predicting hepatitis B monthly incidence rates using weighted Markov chains and time series methods.
Shahdoust, Maryam; Sadeghifar, Majid; Poorolajal, Jalal; Javanrooh, Niloofar; Amini, Payam
2015-01-01
Hepatitis B (HB) is a major cause of mortality worldwide. Accurately predicting the trend of the disease can provide an appropriate basis for making health policy for disease prevention. This paper aimed to apply three different methods to predict monthly incidence rates of HB. This historical cohort study was conducted on the HB incidence data of Hamadan Province, in the west of Iran, from 2004 to 2012. The weighted Markov chain (WMC) method, based on Markov chain theory, and two time series models, Holt exponential smoothing (HES) and SARIMA, were applied to the data. The results of the different methods were compared in terms of the percentages of correctly predicted incidence rates. The monthly incidence rates were clustered into two clusters as the states of the Markov chain. The correctly predicted percentages for the first and second clusters were (100, 0), (84, 67) and (79, 47) for the WMC, HES and SARIMA methods, respectively. The overall incidence rate of HBV is estimated to decrease over time. The comparison of the results of the three models indicated that, with respect to the existing seasonality and non-stationarity, HES gave the most accurate prediction of the incidence rates.
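Of the three methods, Holt exponential smoothing is the easiest to sketch: a level and a trend are updated each month, and forecasts extrapolate the trend. The smoothing constants and the toy declining incidence series below are invented, not the Hamadan data or the paper's fitted parameters.

```python
def holt_forecast(series, alpha=0.5, beta=0.3, steps=1):
    """Holt's linear (double) exponential smoothing; alpha smooths the level,
    beta smooths the trend. Parameter values here are illustrative."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return [level + (i + 1) * trend for i in range(steps)]

# Toy declining monthly incidence rates (per 100,000); not real data
rates = [9.0, 8.6, 8.8, 8.1, 7.9, 7.5, 7.6, 7.0, 6.8, 6.4]
forecast = holt_forecast(rates, steps=3)
```

On a series with a persistent downward trend, the extrapolated forecasts keep declining, consistent with the abstract's estimate that the overall incidence rate decreases over time.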
Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean
2018-05-01
A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6,144 channels was performed in less than 15 min and showed a 47% improvement in the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).
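The calibration logic, estimate each channel's fixed delay from probe coincidences against the absolute time reference and subtract it from subsequent event times, can be sketched numerically. The channel count, skew range, and jitter below are invented, not LabPET II values.

```python
import numpy as np

rng = np.random.default_rng(4)
n_ch, hits = 256, 400
true_skew = rng.uniform(-3.0, 3.0, n_ch)      # ns, fixed per-channel delay

# Each probe-channel coincidence measures the channel delay plus timing jitter
meas = true_skew[:, None] + rng.normal(0.0, 1.5, (n_ch, hits))

# Calibration: per-channel correction = mean probe-coincidence time difference
correction = meas.mean(axis=1)

# Apply the corrections to a fresh acquisition; the channel-to-channel
# dispersion collapses toward the intrinsic jitter alone
event = true_skew + rng.normal(0.0, 1.5, n_ch)
fwhm_before = 2.355 * event.std()
fwhm_after = 2.355 * (event - correction).std()
```

As in the abstract, removing the fixed per-channel dispersion leaves only the intrinsic jitter, which is what narrows the overall coincidence time spectrum.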
8-channel system for neutron-nuclear investigations by time-of-flight method
International Nuclear Information System (INIS)
Shvetsov, V.N.; Enik, T.L.; Mitsyna, L.V.; Popov, A.B.; Salamatin, I.M.; Sedyshev, P.V.; Sirotin, A.P.; Astakhova, N.V.; Salamatin, K.M.
2011-01-01
In connection with the commissioning of the IREN pulsed resonance neutron source, new electronics and appropriate software were developed for registration of time-of-flight spectra with a small channel width (10 ns). The hardware-software system is intended for research on the IREN neutron beam characteristics and the properties of new detectors, and also for the performance of precision experiments under conditions of low intensity or registration of rare events. The time encoder is the key element of the system hardware. It was developed on the basis of Cypress technologies. The unit can measure time intervals for signal intensities up to 10^5 for each of the eight inputs. The use of a USB interface provides system mobility. The TOF system software includes the control program, a driver software layer, a data sorting program, data processing utilities and other units, implemented as executable applications. Interprocess communication between units is provided over the network and/or by a specially designed interface based on the mechanism of named files mapped into memory. This method provides the fastest possible communication between processes. The developed methods of integrating the executable components into a system yield a distributed system, improve the reuse of the software and provide the ability to assemble the system by the user
Directory of Open Access Journals (Sweden)
Jian Zhang
2016-12-01
Full Text Available In the environment of intelligent transportation systems, traffic condition data will have higher resolution in time and space, which is especially valuable for managing interrupted traffic at signalized intersections. Many algorithms exist for offset tuning, but few of them take advantage of modern traffic detection methods such as probe vehicle data. This study proposes a method that uses probe trajectory data to optimize and adjust offsets in real time. The critical point, representing changing vehicle dynamics, is first defined as the basis of this approach. Using the critical points related to different traffic states, such as free flow, queue formation, and dissipation, various traffic status parameters can be estimated, including actual travel speed, queue dissipation rate, and standing queue length. The offset can then be adjusted on a cycle-by-cycle basis. The performance of this approach is evaluated using a simulation network. The results show that the trajectory-based approach can reduce the travel time of the coordinated traffic flow compared with a well-defined offline offset.
openPSTD: The open source pseudospectral time-domain method for acoustic propagation
Hornikx, Maarten; Krijnen, Thomas; van Harten, Louis
2016-06-01
An open source implementation of the Fourier pseudospectral time-domain (PSTD) method for computing the propagation of sound is presented, geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory usage because it allows spatial sampling close to the Nyquist criterion, thus keeping both the required spatial and temporal resolution coarse. The implementation models the physical geometry as a composition of rectangular two-dimensional subdomains, hence initially restricting it to orthogonal and two-dimensional situations. The strategy of using subdomains divides the problem domain into local subsets, which enables the simulation software to be built according to object-oriented programming best practices and leaves room for further computational parallelization. The software is built using the open source components Blender, NumPy and Python, and has itself been published under an open source license. An option has been included to accelerate the calculations through a partial implementation of the code on the graphics processing unit (GPU), which increases the throughput by up to fifteen times. The details of the implementation are reported, as well as the accuracy of the code.
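The core operation of a Fourier PSTD scheme, the spectral spatial derivative that permits sampling close to the Nyquist limit, can be illustrated in one dimension. This is a generic sketch, not code from openPSTD; names are illustrative.

```python
import numpy as np

def pstd_derivative(u, dx):
    """Spatial derivative via the Fourier pseudospectral method.

    Differentiation is exact (to rounding) for band-limited periodic
    signals, which is what lets PSTD keep the spatial grid coarse,
    close to the Nyquist criterion.
    """
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Coarse grid: only 16 points per 2*pi period still differentiates
# a band-limited signal to machine precision.
n = 16
dx = 2 * np.pi / n
x = np.arange(n) * dx
u = np.sin(3 * x)                                  # band-limited test signal
du = pstd_derivative(u, dx)
err = np.max(np.abs(du - 3 * np.cos(3 * x)))       # error vs. exact derivative
```

A finite-difference stencil at this sampling density would be off by several percent; the spectral derivative is exact up to rounding, which is the efficiency argument made in the abstract.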
An Accurate Method to Determine the Muzzle Leaving Time of Guns
Directory of Open Access Journals (Sweden)
H. X. Chao
2014-11-01
Full Text Available This paper states the importance of determining the muzzle leaving time of guns with a high degree of accuracy. Two commonly used methods are introduced, the high-speed photography method and the photoelectric transducer method, and the advantages and disadvantages of each are analyzed. Furthermore, a new method to determine the muzzle leaving time of guns based on the combination of high-speed photography and synchronized trigger technology is presented, and its principle and uncertainty of measurement are evaluated. Firing experiments show that the presented method has distinct advantages in accuracy and reliability over other methods.
International Nuclear Information System (INIS)
Tonoike, Kotaro; Yamamoto, Toshihiro; Watanabe, Shoichi; Miyoshi, Yoshinori
2003-01-01
As a part of the development of a subcriticality monitoring system, a system which has a time series data acquisition function of detector signals and a real time evaluation function of alpha value with the Feynman-alpha method was established, with which the kinetic parameter (alpha value) was measured at the STACY heterogeneous core. The Hashimoto's difference filter was implemented in the system, which enables the measurement at a critical condition. The measurement result of the new system agreed with the pulsed neutron method. (author)
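The Feynman-alpha statistic underlying such a system is the excess variance-to-mean ratio of neutron counts collected in equal time gates, Y(T) = Var/Mean − 1, which vanishes for an uncorrelated (Poisson) source and follows Y(T) = Y∞(1 − (1 − e^(−αT))/(αT)) for correlated fission chains, from which the kinetic parameter α is fitted. A minimal sketch of the statistic itself (the gating, the Hashimoto difference filter, and the α fit are omitted):

```python
import statistics

def feynman_y(counts):
    """Feynman-Y statistic: excess variance-to-mean ratio of gated counts.

    counts: list of neutron counts in equal time gates of width T.
    A Poisson (uncorrelated) source gives Y = 0; correlated fission
    chains give Y > 0, with Y(T) = Y_inf * (1 - (1 - exp(-a*T))/(a*T)).
    """
    m = statistics.mean(counts)
    v = statistics.pvariance(counts)
    return v / m - 1.0

# Bursty counts (as produced by fission chains) are over-dispersed:
clustered = [0, 10, 0, 10]        # Var = 25, Mean = 5  ->  Y = 4.0
y_clustered = feynman_y(clustered)

# Perfectly constant counts are maximally under-dispersed: Y = -1
y_constant = feynman_y([5, 5, 5, 5])
```

In practice Y is computed for a range of gate widths T and the curve Y(T) is fitted to extract α; a deadtime correction is usually applied first.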
Real-time biscuit tile image segmentation method based on edge detection.
Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter
2018-05-01
In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements; an important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods, and is in use in the biscuit tile production line.
Innovative methods for calculation of freeway travel time using limited data : final report.
2008-01-01
Description: Travel time estimates created by processing simulated freeway loop detector data with the proposed method were compared with travel times reported by a VISSIM model. An improved methodology was proposed to estimate freeway corrido...
Nonlinear Time Reversal Acoustic Method of Friction Stir Weld Assessment, Phase I
National Aeronautics and Space Administration — The goal of the project is demonstration of the feasibility of Friction Stir Weld (FSW) assessment by novel Nonlinear Time Reversal Acoustic (TRA) method. Time...
Directional spectrum of ocean waves from array measurements using phase/time/path difference methods
Digital Repository Service at National Institute of Oceanography (India)
Fernandes, A.A.; Sarma, Y.V.B.; Menon, H.B.
Wave direction has for the first time been consistently, accurately and unambiguously evaluated from array measurements using the phase/time/path difference (PTPD) methods of Esteva in case of polygonal arrays and Borgman in case of linear arrays...
The Effect of Temperature and Drying Method on Drying Time and Color Quality of Mint
Directory of Open Access Journals (Sweden)
H Bahmanpour
2017-10-01
Full Text Available Introduction Mint (Mentha spicata L.) belongs to the Lamiaceae family; it is an herbaceous, perennial, aromatic and medicinal plant cultivated for its essential oils and spices. Since the essential oil is extracted from the dried plant, choosing an appropriate drying method is essential for obtaining high-quality essential oil. Vacuum drying technology is an alternative to conventional drying methods and is reported by many authors as an efficient method for improving drying quality, especially color characteristics. On the other hand, solar dryers are useful for saving time and energy. In this study the effect of two drying methods (vacuum-infrared versus solar) at three conventional temperatures (30, 40 and 50°C) on mint is evaluated using a factorial experiment in a randomized complete block design. Drying time as well as color characteristics are considered in the evaluation of each drying method. Materials and Methods A factorial experiment in a randomized complete block design was applied to evaluate the effect of drying method (vacuum-infrared versus solar) and temperature (30, 40 and 50°C) on the drying time and color characteristics of mint. The initial moisture content of mint leaves was measured according to standard ASABE S358.2, over 24 hours in an oven at 104°C. Drying of the samples continued until the moisture content (measured in real time) reached 10% wet basis. The vacuum dryer consisted of a cylindrical vacuum chamber (0.335 m³) and a piston vacuum pump. The temperature of the chamber was controlled with three infrared bulbs and an on-off controller. The temperature and weight of the products were registered in real time by a data acquisition system. The solar dryer consisted of a solar collector and a temperature control system that turned the exhaust fan on and off to maintain the set temperature. A data acquisition system was
International Nuclear Information System (INIS)
Langner, Ulrich W.; Keall, Paul J.
2010-01-01
Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.
Real-Time Detection Methods to Monitor TRU Compositions in UREX+ Process Streams
Energy Technology Data Exchange (ETDEWEB)
McDeavitt, Sean; Charlton, William; Indacochea, J Ernesto; taleyarkhan, Rusi; Pereira, Candido
2013-03-01
The U.S. Department of Energy has developed advanced methods for reprocessing spent nuclear fuel. The majority of this development was accomplished under the Advanced Fuel Cycle Initiative (AFCI), building on a strong legacy of process development R&D over the past 50 years. The most prominent processing method under development is UREX+, a family of processing methods that begin with the Uranium Extraction (UREX) process and incorporate a variety of other methods to separate uranium, selected fission products, and the transuranic (TRU) isotopes from dissolved spent nuclear fuel. Safeguards strategies and materials control and accountability methods are therefore important considerations. The importance of the nuclear fuel cycle continues to rise on national and international agendas, and the U.S. Department of Energy is evaluating and developing advanced methods for safeguarding nuclear materials, along with instrumentation, at various stages of the fuel cycle, especially in material balance areas (MBAs) and during reprocessing of used nuclear fuel. One of the challenges in implementing any type of MBA and/or reprocessing technology (e.g., PUREX or UREX) is real-time quantification and control of the transuranic (TRU) isotopes as they move through the process. Monitoring of higher actinides from their neutron emission (including multiplicity) and alpha signatures during transit in MBAs and in aqueous separations is a critical research area: by providing on-line, real-time materials accountability, covert diversion of the material streams becomes much more difficult. The objective of this consortium was to develop real-time detection methods to monitor the efficacy of the UREX+ process and to safeguard the separated
Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation
Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan; Østergaard, Jacob
2012-01-01
In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation, and of their real-time performance, is carried out. Two approaches, based on Lyapunov's method and the equal area criterion, are analyzed. The results allow the runtime of each method to be determined with respect to the number of inputs, and identify which method is preferable in case of changes in the power system such as the integration of distributed ...
Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation
Abuasad, Salah; Hashim, Ishak
2018-04-01
In this paper, we present the homotopy decomposition method with a modified definition of the beta fractional derivative, for the first time, to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained by using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
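For reference, in the standard Caputo setting (which differs in detail from the modified beta derivative used in the paper) the convergent series such methods produce for the one-dimensional time-fractional diffusion equation with initial data u(x,0) = sin x sums to a Mittag-Leffler function:

```latex
D_t^{\alpha} u = u_{xx}, \qquad u(x,0) = \sin x, \qquad 0 < \alpha \le 1,
\qquad
u(x,t) = \sin x \sum_{k=0}^{\infty} \frac{(-t^{\alpha})^{k}}{\Gamma(\alpha k + 1)}
       = \sin x \, E_{\alpha}\!\left(-t^{\alpha}\right),
```

which reduces to the classical heat-equation solution u = e^{-t} sin x when α = 1; each series term arises from applying u → u_xx to the previous term and integrating fractionally in time.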
Trend analysis using non-stationary time series clustering based on the finite element method
Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.
2014-01-01
In order to analyze low-frequency variability of climate, it is useful to model the climatic time series with multiple linear trends and locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering in a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods ...
A new integrated dual time-point amyloid PET/MRI data analysis method
Energy Technology Data Exchange (ETDEWEB)
Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco [University Hospital of Padua, Nuclear Medicine Unit, Department of Medicine - DIMED, Padua (Italy); Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama [Leipzig University, Department of Nuclear Medicine, Leipzig (Germany); Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo [University Hospital of Padua, Neurology, Department of Neurosciences (DNS), Padua (Italy); Frigo, Anna Chiara [University Hospital of Padua, Biostatistics, Epidemiology and Public Health Unit, Department of Cardiac, Thoracic and Vascular Sciences, Padua (Italy)
2017-11-15
In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (¹⁸F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between
A new integrated dual time-point amyloid PET/MRI data analysis method
International Nuclear Information System (INIS)
Cecchin, Diego; Zucchetta, Pietro; Turco, Paolo; Bui, Franco; Barthel, Henryk; Tiepolt, Solveig; Sabri, Osama; Poggiali, Davide; Cagnin, Annachiara; Gallo, Paolo; Frigo, Anna Chiara
2017-01-01
In the initial evaluation of patients with suspected dementia and Alzheimer's disease, there is no consensus on how to perform semiquantification of amyloid in such a way that it: (1) facilitates visual qualitative interpretation, (2) takes the kinetic behaviour of the tracer into consideration particularly with regard to at least partially correcting for blood flow dependence, (3) analyses the amyloid load based on accurate parcellation of cortical and subcortical areas, (4) includes partial volume effect correction (PVEC), (5) includes MRI-derived topographical indexes, (6) enables application to PET/MRI images and PET/CT images with separately acquired MR images, and (7) allows automation. A method with all of these characteristics was retrospectively tested in 86 subjects who underwent amyloid (¹⁸F-florbetaben) PET/MRI in a clinical setting (using images acquired 90-110 min after injection, 53 were classified visually as amyloid-negative and 33 as amyloid-positive). Early images after tracer administration were acquired between 0 and 10 min after injection, and later images were acquired between 90 and 110 min after injection. PVEC of the PET data was carried out using the geometric transfer matrix method. Parametric images and some regional output parameters, including two innovative "dual time-point" indexes, were obtained. Subjects classified visually as amyloid-positive showed a sparse tracer uptake in the primary sensory, motor and visual areas in accordance with the isocortical stage of the topographic distribution of the amyloid plaque (Braak stages V/VI). In patients classified visually as amyloid-negative, the method revealed detectable levels of tracer uptake in the basal portions of the frontal and temporal lobes, areas that are known to be sites of early deposition of amyloid plaques that probably represented early accumulation (Braak stage A) that is typical of normal ageing. There was a strong correlation between age
DEFF Research Database (Denmark)
Bourdakis, Eleftherios; Olesen, Bjarne W.; Grossule, Fabio
Night sky radiative cooling technology using PhotoVoltaic/Thermal panels (PVT) and night time ventilation have been studied both by means of simulations and experiments to evaluate their potential and to validate the created simulation model used to describe it. An experimental setup has been...... depending on the sky clearness. This cooling power was enough to remove the stored heat and regenerate the ceiling panels. The validation simulation model results related to PCM were close to the corresponding results extracted from the experiment, while the results related to the production of cold water...... through the night sky radiative cooling differed significantly. The possibility of night time ventilation was studied through simulations for three different latitudes. It was concluded that for Danish climatic conditions night time ventilation would also be able to regenerate the panels while its...
2007-11-01
[Figure residue: Sagnac delay, NICT to PTB (ns), vs. days from MJD; MJD 54270.0 to 54277.0 (June 2007) and MJD 53767.0 to 53773.0 (Feb 2006).] [1] "...standards in Europe and the US at the 10⁻¹⁵ uncertainty level," Metrologia 43, 109-120. [2] D. Piester, A. Bauch, L. Breakiron, D. Matsakis, B. Blanzano, and O. Koudelka, 2008, "Time transfer with nanosecond accuracy for the realization of International Atomic Time," submitted to Metrologia.
Spruce, Joseph; Hargrove, William W.; Gasser, Gerald; Norman, Steve
2013-01-01
U.S. forests occupy approx. 1/3 of the total land area (approx. 304 million ha). Since 2000, a growing number of regionally evident forest disturbances have occurred due to abiotic and biotic agents. Regional forest disturbances can threaten human life and property, biodiversity and water supplies. Timely regional forest disturbance monitoring products are needed to aid forest health management work. Near Real Time (NRT) twice-daily MODIS NDVI data provide a means to monitor U.S. regional forest disturbances every 8 days. Since 2010, these NRT forest change products have been produced and posted on the US Forest Service ForWarn Early Warning System for Forest Threats.
International Nuclear Information System (INIS)
Caffrey, M.; Hing, F.S.
1987-01-01
A method that enables temperature-composition phase diagram construction at unprecedented rates is described and evaluated. The method involves establishing a known temperature gradient along the length of a metal rod. Samples of different compositions contained in long, thin-walled capillaries are positioned lengthwise on the rod and equilibrated such that the temperature gradient is communicated into the sample. The sample is then moved through a focused, monochromatic synchrotron-derived x-ray beam and the image-intensified diffraction pattern from the sample is recorded on videotape continuously in live time as a function of position and, thus, temperature. The temperature at which the diffraction pattern changes corresponds to a phase boundary, and the phase(s) existing (coexisting) on either side of the boundary can be identified on the basis of the diffraction pattern. Repeating the measurement on samples covering the entire composition range completes the phase diagram. These additional samples can be conveniently placed at different locations around the perimeter of the cylindrical rod and rotated into position for diffraction measurement. Temperature-composition phase diagrams for the fully hydrated binary mixtures, dimyristoylphosphatidylcholine (DMPC)/dipalmitoylphosphatidylcholine (DPPC) and dipalmitoylphosphatidylethanolamine (DPPE)/DPPC, have been constructed using the new temperature gradient method. They agree well with and extend the results obtained by other techniques. In the DPPE/DPPC system structural parameters as a function of temperature in the various phases including the subgel phase are reported. The potential limitations of this steady-state method are discussed.
Hansen, J V; Nelson, R D
1997-01-01
Ever since the initial planning for the 1997 Utah legislative session, neural-network forecasting techniques have provided valuable insights for analysts forecasting tax revenues. These revenue estimates are critically important since agency budgets, support for education, and improvements to infrastructure all depend on their accuracy. Underforecasting generates windfalls that concern taxpayers, whereas overforecasting produces budget shortfalls that cause inadequately funded commitments. The pattern finding ability of neural networks gives insightful and alternative views of the seasonal and cyclical components commonly found in economic time series data. Two applications of neural networks to revenue forecasting clearly demonstrate how these models complement traditional time series techniques. In the first, preoccupation with a potential downturn in the economy distracts analysis based on traditional time series methods so that it overlooks an emerging new phenomenon in the data. In this case, neural networks identify the new pattern that then allows modification of the time series models and finally gives more accurate forecasts. In the second application, data structure found by traditional statistical tools allows analysts to provide neural networks with important information that the networks then use to create more accurate models. In summary, for the Utah revenue outlook, the insights that result from a portfolio of forecasts that includes neural networks exceeds the understanding generated from strictly statistical forecasting techniques. In this case, the synergy clearly results in the whole of the portfolio of forecasts being more accurate than the sum of the individual parts.
Energy Technology Data Exchange (ETDEWEB)
Ersland, B.G.
1996-05-01
This mathematical doctoral thesis contains the theory, algorithms and numerical simulations for a heterogeneous oil reservoir. It presents the equations which apply to immiscible and incompressible two-phase fluid flow in the reservoir, including the effect of capillary pressure forces, and emphasises in particular the interior boundary conditions at the interface between two sediments. Two different approaches are discussed. The first approach is to decompose the computational domain along the interior boundary and iterate between the subdomains until mass balance is achieved. The second approach accounts for the interior boundary conditions in the basis in which the solution is expanded, the basis being discontinuous over the interior boundaries. An overview of the construction of iterative solvers for partial differential equations by means of Schwarz methods is given, and the algorithm for local refinement with Schwarz iterations as the iterative solver is described. The theory is then applied to a core plug problem in one and two space dimensions and the results of different methods are compared. A general description is given of the computer simulation model, which is implemented in C++. 64 refs., 49 figs., 7 tabs.
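The first approach, decomposing the domain and iterating between subdomains, is the classical alternating Schwarz method. A minimal sketch for a 1-D model problem −u″ = 1 on (0,1) with homogeneous Dirichlet data (not the thesis's two-phase reservoir code; grid size and overlap are illustrative): two overlapping subdomains are solved in turn, each taking its interface value from the current global iterate, until the pieces agree.

```python
def solve_segment(f_seg, h, left, right):
    """Direct tridiagonal (Thomas) solve of -u'' = f on one subdomain,
    with Dirichlet values `left`/`right` taken from the global iterate."""
    n = len(f_seg)
    rhs = [h * h * v for v in f_seg]
    rhs[0] += left
    rhs[-1] += right
    cp, dp = [0.0] * n, [0.0] * n           # forward elimination
    cp[0], dp[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        denom = 2.0 + cp[i - 1]
        cp[i] = -1.0 / denom
        dp[i] = (rhs[i] + dp[i - 1]) / denom
    u = [0.0] * n                           # back substitution
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Alternating Schwarz on -u'' = 1, u(0) = u(1) = 0 (exact u = x(1-x)/2)
n, h = 19, 1.0 / 20.0                       # 19 interior nodes
f = [1.0] * n
u = [0.0] * n                               # global iterate, interior nodes
for _ in range(30):
    u[0:13] = solve_segment(f[0:13], h, 0.0, u[13])   # left subdomain
    u[5:19] = solve_segment(f[5:19], h, u[4], 0.0)    # right subdomain
err = max(abs(u[i] - (i + 1) * h * (1 - (i + 1) * h) / 2.0) for i in range(n))
```

The overlap (interior nodes 6-13 here) is what makes the iteration converge geometrically; with no overlap the plain alternating scheme stalls, which is why overlap width is a key design choice in Schwarz solvers.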
International Nuclear Information System (INIS)
Tsujita, K.; Endo, T.; Yamamoto, A.
2013-01-01
An efficient numerical method for the time-dependent transport equation, the multigrid amplitude function (MAF) method, is proposed. The method of characteristics (MOC) is widely used for reactor analysis thanks to advances in numerical algorithms and computer hardware. However, an efficient kinetic calculation method for MOC is still desirable since such calculations require significant computation time. Various efficient numerical methods for solving the space-dependent kinetic equation, e.g., the improved quasi-static (IQS) and the frequency transform methods, have been developed so far, mainly for diffusion calculations. These methods are known to be effective and offer a way toward faster computation. However, to the authors' knowledge they have not been applied to kinetic calculations using MOC. Thus, the MAF method is applied to the kinetic calculation using MOC with the aim of reducing computation time. The MAF method is a unified numerical framework for conventional kinetic calculation methods, e.g., the IQS, frequency transform, and theta methods. Although the MAF method was originally developed for space-dependent kinetic calculations based on diffusion theory, it is extended to transport theory in the present study. The accuracy and computation time are evaluated through the TWIGL benchmark problem. The calculation results show the effectiveness of the MAF method. (authors)
International Nuclear Information System (INIS)
Adib, M.; Salama, M.; Abd-Kawi, A.; Sadek, S.; Hamouda, I.
1975-01-01
A new method has been developed to measure the dead time of a detector channel for a neutron time-of-flight spectrometer. The method is based on the simultaneous use of two identical BF₃ detectors with two different efficiencies, due to their different enrichment in ¹⁰B. The measurements were performed using the T.O.F. spectrometer installed at channel No. 6 of the ET-RR-1 reactor. The main contribution to the dead time was found to be due to the time analyser and the neutron detector used. The analyser dead time was determined using a square-wave pulse generator with a frequency of 1 MHz. For channel widths of 24.4 µs, 48.8 µs and 97.6 µs, the weighted dead times for a statistical pulse distribution were found to be 3.25 µs and 1.87 µs, respectively. The dead time of the detector contributes most to the counting losses and its value was found to be (33±3) µs.
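Once a channel's dead time τ is known, counting losses can be corrected with the standard non-paralyzable model m = n/(1 − nτ), where n is the measured rate and m the recovered true rate. A sketch (this particular correction model is a common assumption, not something stated in the abstract):

```python
def true_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction.

    Recovers the true event rate m from the measured rate n and the
    per-event dead time tau via m = n / (1 - n * tau).
    Rates in counts/s, dead_time in seconds.
    """
    loss = measured_rate * dead_time      # fraction of time the channel is dead
    if loss >= 1.0:
        raise ValueError("measured rate saturates the counter")
    return measured_rate / (1.0 - loss)

# A 33 us dead time (the detector value found above) at a measured
# 1000 counts/s implies ~3.3% counting loss:
m = true_rate(1000.0, 33e-6)
```

For a paralyzable detector the relation n = m·exp(−mτ) applies instead and must be inverted numerically; which model fits a given channel is itself an experimental question.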
Directory of Open Access Journals (Sweden)
Frederic D Sigoillot
Full Text Available Automated time-lapse microscopy can visualize proliferation of large numbers of individual cells, enabling accurate measurement of the frequency of cell division and the duration of interphase and mitosis. However, extraction of quantitative information by manual inspection of time-lapse movies is too time-consuming to be useful for analysis of large experiments. Here we present an automated time-series approach that can measure changes in the duration of mitosis and interphase in individual cells expressing fluorescent histone 2B. The approach requires analysis of only two features, nuclear area and average intensity. Compared to supervised learning approaches, this method reduces processing time and does not require generation of training data sets. We demonstrate that this method is as sensitive as manual analysis in identifying small changes in interphase or mitotic duration induced by drug or siRNA treatment. This approach should facilitate automated analysis of high-throughput time-lapse data sets to identify small molecules or gene products that influence the timing of cell division.
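The two-feature idea can be sketched as a simple per-frame rule: condensed mitotic chromatin makes the nucleus small and bright, so a frame can be called mitotic when nuclear area is low and average H2B intensity is high, and mitotic duration is then the length of each consecutive mitotic run. The thresholds and data below are purely illustrative, not the paper's calibrated values.

```python
def mitotic_durations(areas, intensities, area_thresh, intens_thresh):
    """Durations (in frames) of consecutive mitotic runs in one cell track.

    A frame is called mitotic when nuclear area is below `area_thresh`
    and average H2B intensity is above `intens_thresh` (condensed
    chromatin is small and bright). Thresholds are illustrative.
    """
    durations, run = [], 0
    for a, s in zip(areas, intensities):
        if a < area_thresh and s > intens_thresh:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return durations

# Synthetic track: interphase frames, a 3-frame mitosis, then interphase
areas       = [100, 98, 102, 40, 38, 42, 99, 101]
intensities = [10, 11, 10, 30, 32, 31, 11, 10]
d = mitotic_durations(areas, intensities, area_thresh=60, intens_thresh=20)
```

Multiplying frame counts by the acquisition interval converts these run lengths into mitotic durations in minutes, which is the quantity compared across drug or siRNA treatments.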
Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods
Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.
2016-12-01
Observation and understanding of the physics of the 11-year solar activity cycle and the 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuations in the near-Earth environment that have been found increasingly important in affecting human life in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the widespread use of CCD cameras in the 1990s, 35-mm films were the major media for storing images. The research group at NJIT recently finished the digitization of film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientifically valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all of the 14 million images, which would be almost impossible to do manually. We implemented three different methods to recognize the time stamps automatically: Optical Character Recognition (OCR), classification trees and TensorFlow. The latter two are machine learning approaches that are now widely used in pattern recognition. We will present some sample images and the results of clock recognition from all three methods.
Directory of Open Access Journals (Sweden)
C Hauman
2014-06-01
Full Text Available The vehicle routing problem with time windows is a widely studied problem with many real-world applications. The problem considered here entails the construction of routes that a number of identical vehicles travel to service different nodes within a certain time window. New benchmark problems with multi-objective features were recently suggested in the literature and the multi-objective optimisation cross-entropy method is applied to these problems to investigate the feasibility of the method and to determine and propose reference solutions for the benchmark problems. The application of the cross-entropy method to the multi-objective vehicle routing problem with soft time windows is investigated. The objectives that are evaluated include the minimisation of the total distance travelled, the number of vehicles and/or routes, the total waiting time and delay time of the vehicles and the makespan of a route.
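The core cross-entropy loop is compact. Below is a hedged single-objective sketch (the multi-objective variant adds Pareto ranking of the elite set, which is omitted): sample candidates from a Gaussian, keep an elite fraction, and refit the sampling distribution. The toy quadratic cost merely stands in for a routing objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_minimize(f, dim, n_samples=100, n_elite=10, iters=50):
    """Minimal single-objective cross-entropy method: sample from a
    Gaussian, keep the elite fraction, refit mean/std, repeat."""
    mu, sigma = np.zeros(dim), np.ones(dim) * 5.0
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]
        mu = elite.mean(axis=0)
        sigma = elite.std(axis=0) + 1e-8   # avoid premature collapse
    return mu

# Toy "routing cost": squared distance to a known optimum.
opt = np.array([1.0, -2.0, 3.0])
sol = cross_entropy_minimize(lambda x: np.sum((x - opt) ** 2), dim=3)
print(np.round(sol, 2))
```

For the vehicle routing problem the continuous sample would instead parameterize node-assignment probabilities, but the sample/select/refit cycle is identical.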
Bonants, P.J.M.; Gent-Pelzer, van M.P.E.; Hooftman, R.; Cooke, D.E.L.; Guy, D.C.; Duncan, J.M.
2004-01-01
Phytophthora fragariae, the cause of strawberry red stele disease, is a quarantine pathogen in Europe. Detecting low levels of infection requires sensitive and specific methods. In the past, Dutch and English inspection services have used bait plants to test strawberry propagation stocks destined
Cipolloni, Marco; Kaleta, Jiří; Mašát, Milan; Dron, Paul I; Shen, Yongqiang; Zhao, Ke; Rogers, Charles T; Shoemaker, Richard K; Michl, Josef
2015-04-23
We examine the fluorescence anisotropy of rod-shaped guests held inside the channels of tris(o-phenylenedioxy)cyclotriphosphazene (TPP) host nanocrystals, characterized by powder X-ray diffraction and solid-state NMR spectroscopy. We address two issues: (i) are light polarization measurements on an aqueous colloidal solution of TPP nanocrystals meaningful, or is depolarization by scattering excessive? (ii) Can measurements of the rotational mobility of the included guests be performed at low enough loading levels to suppress depolarization by intercrystallite energy transfer? We find that meaningful measurements are possible and demonstrate that the long axis of molecular rods included in TPP channels performs negligible vibrational motion.
Research on Monte Carlo improved quasi-static method for reactor space-time dynamics
International Nuclear Information System (INIS)
Xu Qi; Wang Kan; Li Shirui; Yu Ganglin
2013-01-01
With large time steps, the improved quasi-static (IQS) method can increase the calculation speed of reactor dynamic simulations. The Monte Carlo IQS method is proposed in this paper, combining the advantages of both the IQS and Monte Carlo methods; it is therefore beneficial for solving space-time dynamics problems of new-concept reactors. Based on the theory of IQS, Monte Carlo algorithms for calculating the adjoint neutron flux, reactor kinetic parameters and shape function were designed and realized. A simple Monte Carlo IQS code and a corresponding diffusion IQS code were developed and used for verification of the Monte Carlo IQS method. (authors)
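For intuition, the amplitude stage that the IQS flux factorization solves on the fine time scale reduces, in the simplest case, to the point-kinetics equations. The sketch below integrates one delayed-neutron group with illustrative parameters (an assumption for demonstration, not the paper's reactor model or its Monte Carlo shape calculation).

```python
import numpy as np

# One-delayed-group point kinetics: the fast "amplitude" stage of IQS.
# Parameters are illustrative, not taken from the paper.
beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, decay const, gen. time
rho = 0.001                              # step reactivity, below prompt critical

def amplitude(t_end=10.0, dt=1e-4):
    """Explicit-Euler integration of n' and c' after a reactivity step,
    starting from equilibrium precursor concentration."""
    n, c = 1.0, beta / (lam * Lambda)
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n, c = n + dt * dn, c + dt * dc
    return n

# Prompt jump beta/(beta-rho) ~ 1.18, then slow growth on the stable period.
print(round(amplitude(), 2))
```

In the full IQS scheme this amplitude solve alternates with infrequent shape-function updates, here supplied by Monte Carlo transport.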
Increased efficacy for in-house validation of real-time PCR GMO detection methods.
Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H
2010-03-01
To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 were validated on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that, for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
Perfectly Matched Layer for the Wave Equation Finite Difference Time Domain Method
Miyazaki, Yutaka; Tsuchiya, Takao
2012-07-01
The perfectly matched layer (PML) is introduced into the wave equation finite difference time domain (WE-FDTD) method. The WE-FDTD method is a finite difference method in which the wave equation is directly discretized on the basis of the central differences. The required memory of the WE-FDTD method is less than that of the standard FDTD method because no particle velocity is stored in the memory. In this study, the WE-FDTD method is first combined with the standard FDTD method. Then, Berenger's PML is combined with the WE-FDTD method. Some numerical demonstrations are given for the two- and three-dimensional sound fields.
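A one-dimensional sketch of the WE-FDTD idea follows: the wave equation is discretized directly with central differences in time and space, so only pressure arrays are stored and no particle-velocity array is needed (the memory saving noted above). Grid, medium, and source values are illustrative assumptions, and the PML is omitted in favor of simple rigid ends.

```python
import numpy as np

# 1-D wave-equation FDTD: u_tt = c^2 u_xx discretized with central
# differences. Only pressure arrays are stored; no particle velocity.
nx, c, dx = 200, 343.0, 0.01
dt = 0.5 * dx / c                       # CFL number 0.5 for stability
r2 = (c * dt / dx) ** 2

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                        # initial pressure pulse
u_prev[:] = u                           # zero initial velocity

for _ in range(100):
    u_next = np.zeros(nx)               # rigid (u = 0) boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(round(float(np.abs(u).max()), 3))
```

Berenger's PML would replace the rigid ends with an absorbing layer whose damping grades smoothly into the domain.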
Energy Technology Data Exchange (ETDEWEB)
Lynch, Ryan S.; Kaspi, Victoria M.; Archibald, Anne M.; Karako-Argaman, Chen [Department of Physics, McGill University, 3600 University Street, Montreal, QC H3A 2T8 (Canada); Boyles, Jason; Lorimer, Duncan R.; McLaughlin, Maura A.; Cardoso, Rogerio F. [Department of Physics, West Virginia University, 111 White Hall, Morgantown, WV 26506 (United States); Ransom, Scott M. [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Stairs, Ingrid H.; Berndsen, Aaron; Cherry, Angus; McPhee, Christie A. [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Hessels, Jason W. T.; Kondratiev, Vladislav I.; Van Leeuwen, Joeri [ASTRON, The Netherlands Institute for Radio Astronomy, Postbus 2, 7990-AA Dwingeloo (Netherlands); Epstein, Courtney R. [Department of Astronomy, Ohio State University, 140 West 18th Avenue, Columbus, OH 43210 (United States); Pennucci, Tim [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States); Roberts, Mallory S. E. [Eureka Scientific Inc., 2452 Delmer Street, Suite 100, Oakland, CA 94602 (United States); Stovall, Kevin, E-mail: rlynch@physics.mcgill.ca [Center for Advanced Radio Astronomy and Department of Physics and Astronomy, University of Texas at Brownsville, Brownsville, TX 78520 (United States)
2013-02-15
We have completed a 350 MHz Drift-scan Survey using the Robert C. Byrd Green Bank Telescope with the goal of finding new radio pulsars, especially millisecond pulsars that can be timed to high precision. This survey covered ≈10,300 deg² and all of the data have now been fully processed. We have discovered a total of 31 new pulsars, 7 of which are recycled pulsars. A companion paper by Boyles et al. describes the survey strategy, sky coverage, and instrumental setup, and presents timing solutions for the first 13 pulsars. Here we describe the data analysis pipeline, survey sensitivity, and follow-up observations of new pulsars, and present timing solutions for 10 other pulsars. We highlight several sources: two interesting nulling pulsars, an isolated millisecond pulsar with a measurement of proper motion, and a partially recycled pulsar, PSR J0348+0432, which has a white dwarf companion in a relativistic orbit. PSR J0348+0432 will enable unprecedented tests of theories of gravity.
Directory of Open Access Journals (Sweden)
Gökçen Uysal
2018-03-01
Full Text Available Optimal control of reservoirs is a challenging task due to conflicting objectives, complex system structure, and uncertainties in the system. Real-time control decisions suffer from streamflow forecast uncertainty. This study aims to use Probabilistic Streamflow Forecasts (PSFs) with lead times up to 48 h as input for the recurrent reservoir operation problem. A related technique for decision making is multi-stage stochastic optimization using scenario trees, referred to as Tree-Based Model Predictive Control (TB-MPC). Deterministic Streamflow Forecasts (DSFs) are provided by applying random perturbations to perfect data. PSFs are synthetically generated from DSFs by a new approach which explicitly represents dynamic uncertainty evolution. We assessed different variables in the generation of stochasticity and compared the results using different scenarios. The developed real-time hourly flood control was applied to a test case with limited reservoir storage and a restricted downstream condition. According to hindcasting closed-loop experiment results, TB-MPC outperforms its deterministic counterpart in terms of decreased downstream flood risk across different independent forecast scenarios. TB-MPC was also tested with different numbers of tree branches, forecast horizons, and inflow conditions. We conclude that using synthetic PSFs in TB-MPC can provide solutions that are more robust against forecast uncertainty through the resolution of uncertainty in trees.
Weigel, A. M.; Griffin, R.; Gallagher, D.
2015-12-01
Storm surge has enough destructive power to damage buildings and infrastructure, erode beaches, and threaten human life across large geographic areas, hence posing the greatest threat of all the hurricane hazards. The United States Gulf of Mexico has proven vulnerable to hurricanes as it has been hit by some of the most destructive hurricanes on record. With projected rises in sea level and increases in hurricane activity, there is a need to better understand the associated risks for disaster mitigation, preparedness, and response. GIS has become a critical tool in enhancing disaster planning, risk assessment, and emergency response by communicating spatial information through a multi-layer approach. However, there is a need for a near real-time method of identifying areas with a high risk of being impacted by storm surge. Research was conducted alongside Baron, a private industry weather enterprise, to facilitate automated modeling and visualization of storm surge inundation and vulnerability on a near real-time basis. This research successfully automated current flood hazard mapping techniques using a GIS framework written in a Python programming environment, and displayed resulting data through an Application Program Interface (API). Data used for this methodology included high resolution topography, NOAA Probabilistic Surge model outputs parsed from Rich Site Summary (RSS) feeds, and the NOAA Census tract level Social Vulnerability Index (SoVI). The development process required extensive data processing and management to provide high resolution visualizations of potential flooding and population vulnerability in a timely manner. The accuracy of the developed methodology was assessed using Hurricane Isaac as a case study, which through a USGS and NOAA partnership, contained ample data for statistical analysis. This research successfully created a fully automated, near real-time method for mapping high resolution storm surge inundation and vulnerability for the
International Nuclear Information System (INIS)
Park, Jong Woon; Choi, Hyun Gyung
2014-01-01
A turbulent fluid flow over staggered tube bundles is of great interest in many engineering fields including nuclear fuel rods, heat exchangers and especially a gas cooled reactor lower plenum. Computational methods have evolved for the simulation of such flows for decades, and the lattice Boltzmann method (LBM) is one of the attractive methods due to its sound physical basis and ease of computerization, including parallelization. In this study, to assess the computational performance of the LBM for turbulent flows over staggered tubes, a fluid flow analysis code employing the multi-relaxation-time lattice Boltzmann method (MRT-LBM) is developed based on a 2-dimensional D2Q9 lattice model and the classical sub-grid eddy viscosity model of Smagorinsky. As a first step, the fundamental performance of the MRT-LBM is investigated against a standard problem of a flow past a cylinder at low Reynolds number in terms of drag forces. As a major step, benchmarking of the MRT-LBM is performed over a turbulent flow through staggered tube bundles at a Reynolds number of 18,000. For a flow past a single cylinder, the accuracy is validated against existing experimental data and previous computations in terms of drag forces on the cylinder. Mainly, the MRT-LBM computation for a flow through staggered tube bundles is performed and compared with experimental data and general-purpose computational fluid dynamics (CFD) analyses with the standard k-ω turbulence model and large eddy simulation (LES) equipped with the turbulence closures of Smagorinsky-Lilly and the wall-adapting local eddy-viscosity (WALE) model. The agreement between the experimental and computational results from the present MRT-LBM is found to be reasonably acceptable and even comparable to the LES, whereas the computational efficiency is superior. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Park, Jong Woon; Choi, Hyun Gyung [Dongguk Univ., Gyeongju (Korea, Republic of). Nuclear and Energy Engineering Dept.
2014-02-15
A turbulent fluid flow over staggered tube bundles is of great interest in many engineering fields including nuclear fuel rods, heat exchangers and especially a gas cooled reactor lower plenum. Computational methods have evolved for the simulation of such flows for decades, and the lattice Boltzmann method (LBM) is one of the attractive methods due to its sound physical basis and ease of computerization, including parallelization. In this study, to assess the computational performance of the LBM for turbulent flows over staggered tubes, a fluid flow analysis code employing the multi-relaxation-time lattice Boltzmann method (MRT-LBM) is developed based on a 2-dimensional D2Q9 lattice model and the classical sub-grid eddy viscosity model of Smagorinsky. As a first step, the fundamental performance of the MRT-LBM is investigated against a standard problem of a flow past a cylinder at low Reynolds number in terms of drag forces. As a major step, benchmarking of the MRT-LBM is performed over a turbulent flow through staggered tube bundles at a Reynolds number of 18,000. For a flow past a single cylinder, the accuracy is validated against existing experimental data and previous computations in terms of drag forces on the cylinder. Mainly, the MRT-LBM computation for a flow through staggered tube bundles is performed and compared with experimental data and general-purpose computational fluid dynamics (CFD) analyses with the standard k-ω turbulence model and large eddy simulation (LES) equipped with the turbulence closures of Smagorinsky-Lilly and the wall-adapting local eddy-viscosity (WALE) model. The agreement between the experimental and computational results from the present MRT-LBM is found to be reasonably acceptable and even comparable to the LES, whereas the computational efficiency is superior. (orig.)
Comparison of LMFBR piping response obtained using response spectrum and time history methods
International Nuclear Information System (INIS)
Hulbert, G.M.
1981-04-01
The dynamic response to a seismic event is calculated for a piping system using a response spectrum analysis method and two time history analysis methods. The results from the analytical methods are compared to identify causes for the differences between the sets of analytical results. Comparative methods are also presented which help to gain confidence in the accuracy of the analytical methods in predicting piping system structure response during seismic events
A Summary of the Space-Time Conservation Element and Solution Element (CESE) Method
Wang, Xiao-Yen J.
2015-01-01
The space-time Conservation Element and Solution Element (CESE) method for solving conservation laws is examined for its development motivation and design requirements. The characteristics of the resulting scheme are discussed. The discretization of the Euler equations is presented to show readers how to construct a scheme based on the CESE method. The differences and similarities between the CESE method and other traditional methods are discussed. The strengths and weaknesses of the method are also addressed.
A New Method for Calibrating the Time Delay of a Piezoelectric Probe
DEFF Research Database (Denmark)
Hansen, Bengt Hurup
1974-01-01
A simple method for calibrating the time delay of a piezoelectric probe of the type often used in plasma physics is described.
OpenPSTD : The open source implementation of the pseudospectral time-domain method
Krijnen, T.; Hornikx, M.C.J.; Borkowski, B.
2014-01-01
An open source implementation of the pseudospectral time-domain method for the propagation of sound is presented, which is geared towards applications in the built environment. Being a wavebased method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory
THE PSTD ALGORITHM: A TIME-DOMAIN METHOD REQUIRING ONLY TWO CELLS PER WAVELENGTH. (R825225)
A pseudospectral time-domain (PSTD) method is developed for solutions of Maxwell's equations. It uses the fast Fourier transform (FFT), instead of finite differences on conventional finite-difference-time-domain (FDTD) methods, to represent spatial derivatives. Because the Fourie...
Time-dependent density-functional theory in the projector augmented-wave method
DEFF Research Database (Denmark)
Walter, Michael; Häkkinen, Hannu; Lehtovaara, Lauri
2008-01-01
We present the implementation of the time-dependent density-functional theory both in linear-response and in time-propagation formalisms using the projector augmented-wave method in real-space grids. The two technically very different methods are compared in the linear-response regime where we...
Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation
DEFF Research Database (Denmark)
Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan
2012-01-01
In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation and their real-time performance is carried out. Two approaches based on Lyapunov’s method and the equal area criterion are analyzed. The results allow to determine...
A new method for real-time monitoring of grout spread through fractured rocks
International Nuclear Information System (INIS)
Henderson, A. E.; Robertson, I. A.; Whitfield, J. M.; Garrard, G. F. G.; Swannell, N. G.; Fisch, H.
2008-01-01
Reducing water ingress into the Shaft at Dounreay is essential for the success of future intermediate level waste (ILW) recovery using the dry retrieval method. The reduction is being realised by forming an engineered barrier of ultrafine cementitious grout injected into the fractured rock surrounding the Shaft. Grout penetration of 6 m in <50μm fractures is being reliably achieved, with a pattern of repeated injections ultimately reducing rock mass permeability by up to three orders of magnitude. An extensive field trials period, involving over 200 grout mix designs and the construction of a full scale demonstration barrier, has yielded several new field techniques that improve the quality and reliability of cementitious grout injection for engineered barriers. In particular, a new method has been developed for tracking in real-time the spread of ultrafine cementitious grout through fractured rock and relating the injection characteristics to barrier design. Fieldwork by the multi-disciplinary international team included developing the injection and real-time monitoring techniques, pre- and post injection hydro-geological testing to quantify the magnitude and extent of changes in rock mass permeability, and correlation of grout spread with injection parameters to inform the main works grouting programme. (authors)
Methods and tools to support real time risk-based flood forecasting - a UK pilot application
Directory of Open Access Journals (Sweden)
Brown Emma
2016-01-01
Full Text Available Flood managers have traditionally used probabilistic models to assess potential flood risk for strategic planning and non-operational applications. Computational restrictions on data volumes and simulation times have meant that information on the risk of flooding has not been available for operational flood forecasting purposes. In practice, however, the operational flood manager has probabilistic questions to answer, which are not completely supported by the outputs of traditional, deterministic flood forecasting systems. In a collaborative approach, HR Wallingford and Deltares have developed methods, tools and techniques to extend existing flood forecasting systems with elements of strategic flood risk analysis, including probabilistic failure analysis, two dimensional flood spreading simulation and the analysis of flood impacts and consequences. This paper presents the results of the application of these new operational flood risk management tools to a pilot catchment in the UK. It discusses the problems of performing probabilistic flood risk assessment in real time and how these have been addressed in this study. It also describes the challenges of the communication of risk to operational flood managers and to the general public, and how these new methods and tools can provide risk-based supporting evidence to assist with this process.
A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method
Zhan, Lei; Xiong, Juntao; Liu, Feng
2016-05-01
The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite-difference in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needed to be determined by using a combined Fourier analysis and gradient-based search algorithm.
DEFF Research Database (Denmark)
Perez, Angel; Jóhannsson, Hjörtur; Østergaard, Jacob
2015-01-01
This article experimentally characterizes the relation between phase and magnitude error from Phasor Measurement Units (PMUs) in steady state and studies its effect on real-time stability assessment methods. This is achieved by a set of laboratory tests applied to four different devices, where... a bivariate Gaussian mixture distribution was used to represent the error, obtained experimentally, and later included in the synthesized PMU measurement using the Monte Carlo method. Two models for including uncertainty are compared and the results show that taking into account the correlation between...
Solving the Schroedinger equation using the finite difference time domain method
International Nuclear Information System (INIS)
Sudiarta, I Wayan; Geldart, D J Wallace
2007-01-01
In this paper, we solve the Schroedinger equation using the finite difference time domain (FDTD) method to determine energies and eigenfunctions. In order to apply the FDTD method, the Schroedinger equation is first transformed into a diffusion equation by the imaginary time transformation. The resulting time-domain diffusion equation is then solved numerically by the FDTD method. The theory and an algorithm are provided for the procedure. Numerical results are given for illustrative examples in one, two and three dimensions. It is shown that the FDTD method accurately determines eigenfunctions and energies of these systems
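A minimal one-dimensional version of this procedure is sketched below for the harmonic oscillator (units with ħ = m = ω = 1, so the exact ground-state energy is 0.5). Grid and step sizes are illustrative assumptions; the imaginary-time transformation turns the Schroedinger equation into a diffusion equation, and repeated renormalization projects out the ground state.

```python
import numpy as np

# Imaginary-time FDTD for the 1-D harmonic oscillator (hbar = m = omega = 1).
# Evolving psi_tau = 0.5*psi_xx - V*psi and renormalizing each step damps
# excited states faster than the ground state.
nx, L = 201, 10.0
x = np.linspace(-L / 2, L / 2, nx)
dx = x[1] - x[0]
V = 0.5 * x**2
dtau = 0.5 * dx**2                       # stable explicit diffusion step

psi = np.exp(-((x - 1.0) ** 2))          # arbitrary start, not the answer
for _ in range(20000):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
    psi += dtau * (0.5 * lap - V * psi)
    psi /= np.sqrt(np.sum(psi**2) * dx)  # renormalize

lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
E = np.sum(psi * (-0.5 * lap + V * psi)) * dx
print(round(float(E), 3))                # ground-state energy, exact value 0.5
```

Higher eigenstates can be obtained the same way by orthogonalizing against the states already found after each step.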
Directory of Open Access Journals (Sweden)
I. Fatorova
2014-01-01
Full Text Available Hematopoietic stem cells (HSCs), which still represent a certain mystery in biology, have the unique property of dividing into equal cells and repopulating the hematopoietic tissue. This potential enables their use in transplantation treatments. The quality of HSC grafts for transplantation is evaluated by flow cytometric determination of CD34+ cells, which enables optimal timing of the first apheresis and the acquisition of a maximal yield of peripheral blood stem cells (PBSCs). To identify a more efficient method for evaluating CD34+ cells, we compared the following alternative methods with the reference method: hematopoietic progenitor cell (HPC) enumeration (using the Sysmex XE-2100 analyser), detection of CD133+ cells, and quantification of aldehyde dehydrogenase activity in the PBSCs. 266 aphereses (84 patients) were evaluated. In the preapheretic blood, the new methods produced data that were in agreement with the reference method. The ROC curves showed that, for the first-day apheresis target, the optimal predictive cut-off value was 0.032 cells/mL for the HPC method (sensitivity 73.4%, specificity 69.3%). The HPC method exhibited a definite practical superiority compared to the other methods tested. HPC enumeration could serve as a supplementary method for the optimal timing of the first apheresis; it is simple, rapid, and cheap.
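ROC-derived cut-offs like the 0.032 cells/mL above are typically chosen by maximizing Youden's J statistic (sensitivity + specificity - 1). The sketch below shows that rule on synthetic scores; the data are assumptions for illustration, not the study's apheresis measurements.

```python
import numpy as np

def best_cutoff(scores_pos, scores_neg):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1,
    scanning every observed score as a candidate cut-off."""
    thresholds = np.unique(np.concatenate([scores_pos, scores_neg]))
    best_t, best_j = None, -1.0
    for t in thresholds:
        sens = np.mean(scores_pos >= t)   # true-positive rate at t
        spec = np.mean(scores_neg < t)    # true-negative rate at t
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

rng = np.random.default_rng(1)
pos = rng.normal(0.05, 0.02, 200)   # synthetic "adequate yield" HPC counts
neg = rng.normal(0.02, 0.01, 200)   # synthetic "poor yield" HPC counts
t, j = best_cutoff(pos, neg)
print(round(float(t), 3), round(float(j), 2))
```

Reporting sensitivity and specificity at the chosen threshold, as the abstract does, then follows directly from the same two rates.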
Determination of the response time of pressure transducers using the direct method
International Nuclear Information System (INIS)
Perillo, S.R.P.
1994-01-01
The available methods to determine the response time of nuclear-safety-related pressure transducers are discussed, with emphasis on the direct method. In order to perform the experiments, a Hydraulic Ramp Generator was built. The equipment applies ramp pressure transients simultaneously to a reference transducer and to the transducer under test. The time lag between the outputs of the two transducers, when they reach a predetermined setpoint, is measured as the time delay of the transducer under test. Some results using the direct method to determine the time delay of pressure transducers (Class 1E, conventional) are presented. (author). 18 refs, 35 figs, 12 tabs
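The direct method described above reduces to timing two setpoint crossings on a shared pressure ramp. A hedged numerical sketch follows; all values (ramp rate, setpoint, the simulated lag) are illustrative assumptions.

```python
import numpy as np

# Direct method sketch: ramp both transducers, record when each output
# crosses the setpoint, and take the time difference as the response
# time of the device under test.
fs = 10_000.0                         # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
ramp = 50.0 * t                       # reference pressure ramp, kPa (50 kPa/s)
delay = 0.012                         # simulated lag of the test transducer, s

ref_out = ramp
test_out = np.interp(t - delay, t, ramp, left=0.0)  # delayed copy of the ramp

setpoint = 40.0                       # kPa trip level
t_ref = t[np.argmax(ref_out >= setpoint)]
t_test = t[np.argmax(test_out >= setpoint)]
print(round(t_test - t_ref, 4))       # measured response time, s
```

In the real rig the lag is a property of the transducer itself; the measurement resolution is set by the sampling rate and the ramp slope.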
Study on APD real time compensation methods of laser Detection system
International Nuclear Information System (INIS)
Feng Ying; Zhang He; Zhang Xiangjin; Liu Kun
2011-01-01
their operating principles. The constant false alarm rate compensation cannot detect a pulse signal that arrives randomly, so real-time performance cannot be realized. The noise compensation can meet the real-time performance requirement and works better in environments where background light is intense or changes sharply. The temperature compensation can also achieve real-time performance and works better in environments where temperature changes sharply. To address these problems, this paper proposes that different APD real-time compensation methods be adopted for different environments. The existing temperature compensation adjusts the output voltage by using a variable resistance to regulate the input voltage; its structure is complex and its real-time performance is poor. To remedy these defects, a real-time temperature compensation based on the switching on-off time of a switching power supply is designed. Its feasibility and operating stability are confirmed by board fabrication and experiment. Finally, comparison experiments between the real-time noise compensation and the real-time temperature compensation were carried out in environments where temperature is almost invariant and background light changes sharply from 5 lux to 150 lux. The results show that the real-time noise compensation performs better here; the noise is reduced to a sixth of its original level. Comparison experiments between the two compensations were also carried out in a darkroom where background light is 5 lux and temperature changes rapidly from -20°C to 80°C. The results show that the real-time temperature compensation performs better here; the noise is reduced to a seventh of its original level. Moreover, these methods can be applied to other detection systems for weak photoelectric signals and have high practical application value.
Study on APD real time compensation methods of laser Detection system
Energy Technology Data Exchange (ETDEWEB)
Feng Ying; Zhang He; Zhang Xiangjin; Liu Kun, E-mail: fy_caimi@163.com [ZNDY of Ministerial Key Laboratory, Nanjing University of Science and Technology, Nanjing 210094 (China)
2011-02-01
by analyzing their operating principles. The constant false alarm rate compensation cannot detect a pulse signal that arrives randomly, so real-time performance cannot be realized. The noise compensation can meet the real-time performance requirement and works better in environments where background light is intense or changes sharply. The temperature compensation can also achieve real-time performance and works better in environments where temperature changes sharply. To address these problems, this paper proposes that different APD real-time compensation methods be adopted for different environments. The existing temperature compensation adjusts the output voltage by using a variable resistance to regulate the input voltage; its structure is complex and its real-time performance is poor. To remedy these defects, a real-time temperature compensation based on the switching on-off time of a switching power supply is designed. Its feasibility and operating stability are confirmed by board fabrication and experiment. Finally, comparison experiments between the real-time noise compensation and the real-time temperature compensation were carried out in environments where temperature is almost invariant and background light changes sharply from 5 lux to 150 lux. The results show that the real-time noise compensation performs better here; the noise is reduced to a sixth of its original level. Comparison experiments between the two compensations were also carried out in a darkroom where background light is 5 lux and temperature changes rapidly from -20°C to 80°C. The results show that the real-time temperature compensation performs better here; the noise is reduced to a seventh of its original level. Moreover, these methods can be applied to other detection systems for weak photoelectric signals and have high practical application value.
Study on APD real-time compensation methods of laser detection system
Ying, Feng; He, Zhang; Xiangjin, Zhang; Kun, Liu
2011-02-01
their operating principles. Constant-false-alarm-rate compensation cannot detect randomly arriving pulse signals, so real-time performance cannot be realized. Noise compensation meets the real-time requirement and works better in environments where the background light is intense or changes rapidly. Temperature compensation also meets the real-time requirement and works better in environments where the temperature changes rapidly. To address these problems, this paper proposes that different APD real-time compensation schemes be adopted for different environments. The existing temperature compensation adjusts the output voltage by using a variable resistance to regulate the input voltage; its structure is complex and its real-time performance is poor. To remedy these defects, a real-time temperature compensation based on the switch on-off time of a switching power supply is designed, and its feasibility and operating stability are confirmed by board fabrication and experiment. Finally, comparison experiments between the real-time noise compensation and the real-time temperature compensation were carried out in an environment where the temperature is nearly constant and the background light changes rapidly from 5 lux to 150 lux. The results show that the real-time noise compensation performs better there, reducing the noise to one sixth of its original level. The same comparison was carried out in a darkroom where the background light is 5 lux and the temperature changes rapidly from -20°C to 80°C. The results show that the real-time temperature compensation performs better there, reducing the noise to one seventh of its original level. Moreover, these methods can be applied to other detection systems for weak photoelectric signals; they have high practical application value.
Development of efficient time-evolution method based on three-term recurrence relation
International Nuclear Information System (INIS)
Akama, Tomoko; Kobayashi, Osamu; Nanbu, Shinkoh
2015-01-01
The advantage of the real-time (RT) propagation method is that it directly solves the time-dependent Schrödinger equation, which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) is proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was adapted to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
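The paper's arcsine-transformed recurrence is not reproduced here; as a loosely related illustration of why recurrence-based propagators are cheap per step, the classic second-order-difference (SOD) scheme is itself a three-term recurrence in time (a sketch under simplified assumptions; `H` is any small Hermitian matrix in atomic units, and the function name is illustrative):

```python
import numpy as np

def sod_propagate(H, psi0, t_final, dt=1e-3):
    """Propagate i dpsi/dt = H psi with the second-order-difference scheme,
    a classic three-term recurrence in time:
        psi_{n+1} = psi_{n-1} - 2i*dt*(H @ psi_n)
    Each step costs one matrix-vector product; stability needs dt*||H|| < 1."""
    n_steps = int(round(t_final / dt))
    psi_prev = psi0.astype(complex)
    # start the recurrence with a second-order Taylor step to psi(dt)
    psi_curr = psi_prev - 1j * dt * (H @ psi_prev) \
        - 0.5 * dt**2 * (H @ (H @ psi_prev))
    for _ in range(n_steps - 1):
        psi_next = psi_prev - 2j * dt * (H @ psi_curr)
        psi_prev, psi_curr = psi_curr, psi_next
    return psi_curr
```

Like the 3TRR approach, the per-step work is dominated by a single application of the Hamiltonian, which is what makes such recurrences attractive against Runge-Kutta integration.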
A systematic method for characterizing the time-range performance of ground penetrating radar
International Nuclear Information System (INIS)
Strange, A D
2013-01-01
The fundamental performance of ground penetrating radar (GPR) is linked to the ability to measure the signal time-of-flight in order to provide an accurate radar-to-target range estimate. Having knowledge of the actual time range and timing nonlinearities of a trace is therefore important when seeking to make quantitative range estimates. However, very few practical methods have been formally reported in the literature to characterize GPR time-range performance. This paper describes a method to accurately measure the true time range of a GPR to provide a quantitative assessment of the timing system performance and detect and quantify the effects of timing nonlinearity due to timing jitter. The effect of varying the number of samples per trace on the true time range has also been investigated and recommendations on how to minimize the effects of timing errors are described. The approach has been practically applied to characterize the timing performance of two commercial GPR systems. The importance of the method is that it provides the GPR community with a practical method to readily characterize the underlying accuracy of GPR systems. This in turn leads to enhanced target depth estimation as well as facilitating the accuracy of more sophisticated GPR signal processing methods. (paper)
El-Deftar, Moteaa M; Robertson, James; Foster, Simon; Lennard, Chris
2015-06-01
Laser-induced breakdown spectroscopy (LIBS) is an emerging atomic-emission-based solid-sampling technique that has many potential forensic applications. In this study, the analytical performance of LIBS, as well as that of inductively coupled plasma mass spectrometry (ICP-MS), laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and X-ray microfluorescence (μXRF), was evaluated for the ability to conduct elemental analyses on Cannabis plant material, with a specific investigation of the possible links between hydroponic nutrients and elemental profiles from associated plant material. No such study has been previously published in the literature. Good correlation among the four techniques was observed when the concentrations or peak areas of the elements of interest were monitored. For Cannabis samples collected at the same growth time, the elemental profiles could be related to the use of particular commercial nutrients. In addition, the study demonstrated that ICP-MS, LA-ICP-MS and LIBS are suitable techniques for the comparison of Cannabis samples from different sources, with high discriminating powers being achieved. On the other hand, the μXRF method was not suitable for discriminating Cannabis samples originating from different growth nutrients. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Bagherinejad, Jafar; Niknam, Azar
2018-03-01
In this paper, a leader-follower competitive facility location problem considering the reactions of the competitors is studied. A model is proposed for locating new facilities and determining quality levels for the facilities of the leader firm. Moreover, changes in the location and quality of existing facilities in a competitive market, where a competitor offers the same goods or services, are taken into account. The competitor can react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share captured by each facility depends on its distance to the customer and on its quality, calculated based on the probabilistic Huff model. Each firm aims to maximize its profit subject to constraints on quality levels and on the budget for setting up new facilities. The problem is formulated as a bi-level mixed-integer non-linear model and solved using a combination of Tabu Search and an exact method. The performance of the proposed algorithm is compared with an upper bound obtained by applying the Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds solutions near the upper bound in reasonable time.
A method for real-time implementation of HOG feature extraction
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the arctangent and square-root calculations were simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be implemented in a pixel period by these computing units.
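As a software illustration of the gradient-and-histogram step that the proposed FPGA pipeline parallelizes, a minimal sketch follows (cell size and bin count are illustrative defaults, not the paper's hardware parameters; block normalization is omitted):

```python
import numpy as np

def hog_cell_histograms(img, cell=8, n_bins=9):
    """Minimal HOG sketch: per-pixel gradient magnitude and orientation,
    then an orientation histogram per cell x cell block of pixels.
    Uses unsigned gradients over 0-180 degrees, no block normalization."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    cells = np.zeros((h // cell, w // cell, n_bins))
    bin_idx = (ang / (180.0 / n_bins)).astype(int) % n_bins
    for i in range(h // cell * cell):              # accumulate per cell
        for j in range(w // cell * cell):
            cells[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return cells
```

The arctangent and square root in `arctan2`/`hypot` are exactly the operations the paper replaces with simplified hardware-friendly approximations.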
Integral transform method for solving time fractional systems and fractional heat equation
Directory of Open Access Journals (Sweden)
Arman Aghili
2014-01-01
Full Text Available In the present paper, a time fractional partial differential equation is considered, where the fractional derivative is defined in the Caputo sense. The Laplace transform method has been applied to obtain an exact solution. The authors solved certain homogeneous and nonhomogeneous time fractional heat equations using integral transforms. The transform method is a powerful tool for solving fractional singular integro-differential equations and PDEs. The results reveal that the transform method is very convenient and effective.
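For reference, the identity underpinning this approach is the Laplace transform of the Caputo derivative; applied to the time-fractional heat equation with 0 < α ≤ 1, it gives (standard textbook results, not equations specific to this paper):

```latex
% Laplace transform of the Caputo derivative, 0 < \alpha \le 1:
\mathcal{L}\left\{{}^{C}D_t^{\alpha} u(x,t)\right\}(s)
  = s^{\alpha} U(x,s) - s^{\alpha-1} u(x,0)
% Applied to the time-fractional heat equation D_t^{\alpha} u = \kappa\, u_{xx}:
s^{\alpha} U(x,s) - s^{\alpha-1} u(x,0) = \kappa\, U_{xx}(x,s)
% For u(x,0) = \sin(\lambda x) on a suitable domain, inversion yields the
% Mittag-Leffler solution:
u(x,t) = \sin(\lambda x)\, E_{\alpha}\!\left(-\kappa \lambda^{2} t^{\alpha}\right)
```

For α = 1 the Mittag-Leffler function reduces to the exponential and the classical heat-equation solution is recovered.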
Testing the multi-configuration time-dependent Hartree-Fock method
International Nuclear Information System (INIS)
Zanghellini, Juergen; Kitzler, Markus; Brabec, Thomas; Scrinzi, Armin
2004-01-01
We test the multi-configuration time-dependent Hartree-Fock method as a new approach towards the numerical calculation of dynamical processes in multi-electron systems, using the harmonic quantum dot and one-dimensional helium in strong laser pulses as models. We find rapid convergence towards the exact results for quantities such as ground-state population, correlation coefficient and single ionization. The method converges where the time-dependent Hartree-Fock method fails qualitatively.
Liu, Meilin
2011-07-01
A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
Yan, Zhifeng; Yang, Xiaofan; Li, Siliang; Hilpert, Markus
2017-11-01
The lattice Boltzmann method (LBM) based on single-relaxation-time (SRT) or multiple-relaxation-time (MRT) collision operators is widely used in simulating flow and transport phenomena. The LBM based on two-relaxation-time (TRT) collision operators possesses strengths from the SRT and MRT LBMs, such as its simple implementation and good numerical stability, although tedious mathematical derivations and presentations of the TRT LBM hinder its application to a broad range of flow and transport phenomena. This paper describes the TRT LBM clearly and provides a pseudocode for easy implementation. Various transport phenomena were simulated using the TRT LBM to illustrate its applications in subsurface environments. These phenomena include advection-diffusion in uniform flow, Taylor dispersion in a pipe, solute transport in a packed column, reactive transport in uniform flow, and bacterial chemotaxis in porous media. The TRT LBM demonstrated good numerical performance in terms of accuracy and stability in predicting these transport phenomena. Therefore, the TRT LBM is a powerful tool to simulate various geophysical and biogeochemical processes in subsurface environments.
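A minimal sketch of a TRT collision-streaming loop for pure 1D diffusion on a periodic D1Q3 lattice follows (parameter names and the "magic" value are illustrative choices; the paper's pseudocode covers the general advection-diffusion case):

```python
import numpy as np

def trt_diffusion_1d(c0, n_steps, D=0.1, magic=0.25):
    """Two-relaxation-time (TRT) lattice Boltzmann sketch for 1D diffusion
    on a periodic D1Q3 lattice (velocities 0, +1, -1; cs^2 = 1/3).
    The odd relaxation rate sets the diffusivity D; the even rate follows
    from the magic parameter Lambda = (1/w_p - 1/2)(1/w_m - 1/2)."""
    w = np.array([2 / 3, 1 / 6, 1 / 6])   # lattice weights
    cs2 = 1 / 3
    tau_m = D / cs2 + 0.5                 # odd (antisymmetric) relaxation time
    w_m = 1.0 / tau_m
    w_p = 1.0 / (magic / (tau_m - 0.5) + 0.5)
    f = w[:, None] * c0[None, :]          # initialize at equilibrium
    for _ in range(n_steps):
        rho = f.sum(axis=0)
        feq = w[:, None] * rho[None, :]
        # split populations over the opposite-velocity pair (1 <-> 2)
        fs = np.empty_like(f)
        fa = np.empty_like(f)
        fs[0], fa[0] = f[0], 0.0
        fs[1] = fs[2] = 0.5 * (f[1] + f[2])
        fa[1] = 0.5 * (f[1] - f[2])
        fa[2] = -fa[1]
        feqs = np.empty_like(f)
        feqs[0] = feq[0]
        feqs[1] = feqs[2] = 0.5 * (feq[1] + feq[2])
        # zero-advection equilibrium is symmetric, so its antisym part is 0
        f = f - w_p * (fs - feqs) - w_m * fa   # TRT collision
        f[1] = np.roll(f[1], 1)                # stream +1
        f[2] = np.roll(f[2], -1)               # stream -1
    return f.sum(axis=0)
```

The collision conserves mass exactly, and a diffusing Gaussian should spread with variance growing as 2Dt, which makes the sketch easy to check against theory.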
Introduction to the Finite-Difference Time-Domain (FDTD) Method for Electromagnetics
Gedney, Stephen
2011-01-01
Introduction to the Finite-Difference Time-Domain (FDTD) Method for Electromagnetics provides a comprehensive tutorial of the most widely used method for solving Maxwell's equations -- the Finite Difference Time-Domain Method. This book is an essential guide for students, researchers, and professional engineers who want to gain a fundamental knowledge of the FDTD method. It can accompany an undergraduate or entry-level graduate course or be used for self-study. The book provides all the background required to either research or apply the FDTD method for the solution of Maxwell's equations to p
Directory of Open Access Journals (Sweden)
Mathieu Lepot
2017-10-01
Full Text Available A thorough review has been performed of interpolation methods to fill gaps in time series, of efficiency criteria, and of uncertainty quantification. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when they are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
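The simplest entry in such a taxonomy, linear interpolation across gaps, can be sketched as follows (function name is illustrative; it carries no uncertainty estimate, which is exactly the review's point):

```python
import numpy as np

def fill_gaps_linear(t, y):
    """Fill NaN gaps in a time series by linear interpolation between
    the nearest valid neighbours, one of the simplest gap-filling methods."""
    y = np.asarray(y, dtype=float)
    mask = np.isnan(y)                      # locate the gaps
    filled = y.copy()
    filled[mask] = np.interp(t[mask], t[~mask], y[~mask])
    return filled
```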
Directory of Open Access Journals (Sweden)
Gabriel Felipe Aguilera
2014-07-01
Full Text Available The hydrocyclone is one of the most widely used classification devices in industry, particularly in mineral processing. Its main characteristic is perhaps that it is a hydrodynamic separation device, which gives it high production capacity, with efficiency depending on the geometrical configuration, the operational parameters, and the type of material processed. Nevertheless, there are few successful studies on modelling and simulating its hydrodynamic principles, because the flow behavior inside is quite complex. Most current models are empirical and are not applicable to all cases and types of minerals. One of the most important problems to be solved, besides the cut size and the effect of the physical properties of the particles, is the distribution of the flow inside the hydrocyclone: when the equipment operates at low slurry densities, as is typical for small hydrocyclones, its mechanical behavior is governed by the liquid used as the continuous phase, water being the most common. This work presents the modelling and simulation of the hydrodynamic behavior of a suspension inside a hydrocyclone, including the air core effect, using the finite difference method. For the development of the model, the Reynolds Stress Model (RSM) was used to evaluate turbulence, and the Volume of Fluid (VOF) method to study the interaction between water and air. Finally, the model proves significant against experimental data and for different conditions of an industrial plant.
An Energy Conservative Ray-Tracing Method With a Time Interpolation of the Force Field
Energy Technology Data Exchange (ETDEWEB)
Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-02-10
A new algorithm that constructs a continuous force field interpolated in time is proposed to resolve existing difficulties in numerical methods for ray tracing. The new method has improved accuracy with the same degree of algebraic complexity as Kaiser's method.
Lepot, M.J.; Aubin, Jean Baptiste; Clemens, F.H.L.R.
2017-01-01
A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated.
Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations
B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)
2017-01-01
In this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly
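For the scalar linear test equation y' = λy, a BDF2 step can be sketched as follows (a minimal illustration of the scheme itself, not the paper's two-fluid solver; the startup step and function name are illustrative choices):

```python
import numpy as np

def bdf2_linear(lam, y0, h, n_steps):
    """BDF2 time integration sketch for the linear test ODE y' = lam * y.
    BDF2: y_{n+1} = (4*y_n - y_{n-1})/3 + (2*h/3)*f(t_{n+1}, y_{n+1});
    for f = lam*y the implicit step has the closed form
        y_{n+1} = (4*y_n - y_{n-1}) / (3 - 2*h*lam).
    The two-step scheme is started with one backward Euler step."""
    ys = [y0]
    ys.append(ys[0] / (1.0 - h * lam))            # backward Euler start
    for _ in range(n_steps - 1):
        ys.append((4 * ys[-1] - ys[-2]) / (3.0 - 2.0 * h * lam))
    return np.array(ys)
```

Unlike backward Euler, BDF2 is second-order accurate while remaining A-stable, which is why it is attractive for stiff transient pipeline simulations.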
A maintenance time prediction method considering ergonomics through virtual reality simulation.
Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan
2016-01-01
Maintenance time is a critical quantitative index in maintainability prediction, and an efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, the traditional way of measuring maintenance time ignores the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by summing the corrective MOD times of each maintenance therblig. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time with the proposed method, and the result verifies its effectiveness and accuracy.
A comparison of moving object detection methods for real-time moving object detection
Roshan, Aditya; Zhang, Yun
2014-06-01
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. There are several moving object detection methods noted in the literature, but few of them are suitable for real-time moving object detection, and most of those are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, wavelet-based, and optical-flow-based methods. The work evaluates these four methods using two different sets of cameras and two different scenes. The methods have been implemented in MATLAB, and results are compared based on completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
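The first of these four methods, background subtraction with a running-average background model, can be sketched as follows (threshold and learning rate are illustrative values, not those used in the paper's evaluation):

```python
import numpy as np

def detect_moving(frame, background, thresh=25.0, alpha=0.05):
    """Background-subtraction sketch: threshold |frame - background| to get a
    foreground mask, then update a running-average background model so the
    detector slowly adapts to gradual lighting changes."""
    diff = np.abs(frame.astype(float) - background)
    mask = diff > thresh                               # foreground pixels
    background = (1 - alpha) * background + alpha * frame  # adapt slowly
    return mask, background
```

The slow adaptation is exactly what makes the method cheap but sensitive to sudden light changes, one of the comparison criteria above.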
Trend analysis using non-stationary time series clustering based on the finite element method
Gorji Sefidmazgi, M.; Sayemuzzaman, M.; Homaifar, A.; Jha, M. K.; Liess, S.
2014-05-01
In order to analyze the low-frequency variability of climate, it is useful to model climatic time series with multiple linear trends and to locate the times of significant changes. In this paper, we have used non-stationary time series clustering to find change points in the trends. Clustering a multi-dimensional non-stationary time series is challenging, since the problem is mathematically ill-posed. Clustering based on the finite element method (FEM) is one of the methods that can analyze multidimensional time series. One important attribute of this method is that it does not depend on any statistical assumption and does not need local stationarity in the time series. In this paper, it is shown how the FEM clustering method can be used to locate change points in the trend of temperature time series from in situ observations. This method is applied to the temperature time series of North Carolina (NC), and the results represent region-specific climate variability despite higher-frequency harmonics in the climatic time series. Next, we investigated the relationship between the climatic indices and the clusters/trends detected by this clustering method. It appears that the natural variability of climate change in NC during 1950-2009 can be explained mostly by the AMO and solar activity.
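Although the FEM clustering used in the paper is far more general, the goal of locating a single trend change point can be illustrated with a brute-force least-squares search (a sketch under simplified assumptions; one break, no noise model, illustrative names):

```python
import numpy as np

def best_single_breakpoint(t, y, min_seg=5):
    """Brute-force sketch of trend change-point location: fit separate
    least-squares lines to each side of every candidate break and keep
    the split with the smallest total squared residual."""
    def sse(tt, yy):
        A = np.vstack([tt, np.ones_like(tt)]).T   # design matrix [t, 1]
        res = np.linalg.lstsq(A, yy, rcond=None)[1]
        return res[0] if res.size else 0.0
    best_k, best_cost = None, np.inf
    for k in range(min_seg, len(t) - min_seg):
        cost = sse(t[:k], y[:k]) + sse(t[k:], y[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k
```

The FEM approach generalizes this idea to multiple regimes and multidimensional series without assuming where, or how many, breaks occur.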
International Nuclear Information System (INIS)
2002-01-01
The past years have brought significant changes to the world energy market in which nuclear power plants and utilities operate. Some NPPs are now privatized; electricity markets have been liberalized and are becoming increasingly international. Due to increased competition, power production costs are now monitored more closely than before. The opening of electricity markets has placed nuclear power plants under serious economic pressure, with a demand for continuous cost reduction. All of this requires NPPs to make their personnel training more cost-effective. In addition, a great number of new training tools, aids and technologies based on modern technology have been introduced during the last 2-3 years; these new opportunities can be quite useful for training cost optimization. On the basis of experience gained worldwide in the application of the systematic approach to training (SAT), SAT-based training is now a broad integrated approach emphasizing not only technical knowledge and skills but also human-factor-related knowledge, skills and attitudes. In this way, all competency requirements for attaining and maintaining personnel competence and qualification can be met, thus promoting and strengthening quality culture and safety culture, which should be fostered throughout the initial and continuing training programmes. The subject of the present technical meeting was suggested by the members of the Technical Working Group on Training and Qualification of NPP Personnel (TWG-T and Q) and supported by a number of IAEA meetings on NPP personnel training. The Technical Meeting on 'Lessons Learned with Respect to SAT Implementation, Including Development of Trainers and Use of Cost Effective Training Methods' was organized by the IAEA in co-operation with Tecnatom A.S. and was held from 21 to 24 October 2002 in San Sebastian de los Reyes/Madrid, Spain. The main objective of the meeting was to provide an international forum for
Energy Technology Data Exchange (ETDEWEB)
Jonsson, Pontus [Poeyry SwedPower AB, Stockholm (Sweden); Cervantes, Michel [Luleaa Univ. of Technology, Luleaa (Sweden)
2013-02-15
The pressure-time method is an absolute method commonly used for flow measurements in power plants. The method determines the flow rate by measuring the pressure and estimating the losses between two sections of the penstock during a closure of the guide vanes. The method has limitations according to the IEC 41 standard, which makes it difficult to use at Swedish plants, where the head is generally low. This means that there is limited experience with and knowledge of this method in Sweden, where the Winter-Kennedy method is usually used. For several years, Luleaa University of Technology has worked actively on developing the pressure-time method for low-head hydraulic machines, with encouraging results. The focus has been on decreasing the distance between the two measuring sections and on the evaluation of the viscous losses. Measurements were performed on a pipe test rig (D=0.3 m) in a laboratory under well-controlled conditions with 7
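Neglecting the loss term for clarity, the core integration behind the pressure-time (Gibson) method can be sketched as follows (variable names are illustrative; a real IEC 41 evaluation must include the friction-loss term inside the integrand):

```python
import numpy as np

def pressure_time_flow(dp, dt, area, length, rho=1000.0, q_leak=0.0):
    """Pressure-time method sketch: integrate the measured pressure
    difference dp(t) [Pa] between two penstock sections over the guide-vane
    closure to recover the pre-closure discharge,
        Q0 = A / (rho * L) * integral(dp dt) + q_leak,
    where A is the pipe cross-section [m^2] and L the section spacing [m].
    Friction losses are neglected here for clarity."""
    integral = np.sum(0.5 * (dp[1:] + dp[:-1])) * dt   # trapezoidal rule
    return area / (rho * length) * integral + q_leak
```

The shorter the distance L between the measuring sections, the smaller the integrated pressure signal, which is why reducing L while keeping accuracy is the research challenge described above.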
International Nuclear Information System (INIS)
Park, Moon Kyu; Kim, Yong Hee; Cha, Kune Ho; Kim, Myung Ki
1998-01-01
An H∞ filtering method is described for the dynamic compensation of self-powered neutron detectors normally used as fixed in-core instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion in the worst-case estimation error sense. Filter modeling is performed for a discrete-time model. The filter gains are optimized in the sense of the noise attenuation level of the H∞ setting. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into Linear Matrix Inequalities (LMIs). Finally, the filter design problem is solved via a convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in terms of the filter response time and the filter design efficiency.
Brenner, Hermann; Jansen, Lina
2016-02-01
Monitoring cancer survival is a key task of cancer registries, but timely disclosure of progress in long-term survival remains a challenge. We introduce and evaluate a novel method, denoted "boomerang method," for deriving more up-to-date estimates of long-term survival. We applied three established methods (cohort, complete, and period analysis) and the boomerang method to derive up-to-date 10-year relative survival of patients diagnosed with common solid cancers and hematological malignancies in the United States. Using the Surveillance, Epidemiology and End Results 9 database, we compared the most up-to-date age-specific estimates that might have been obtained with the database including patients diagnosed up to 2001 with 10-year survival later observed for patients diagnosed in 1997-2001. For cancers with little or no increase in survival over time, the various estimates of 10-year relative survival potentially available by the end of 2001 were generally rather similar. For malignancies with strongly increasing survival over time, including breast and prostate cancer and all hematological malignancies, the boomerang method provided estimates that were closest to later observed 10-year relative survival in 23 of the 34 groups assessed. The boomerang method can substantially improve up-to-dateness of long-term cancer survival estimates in times of ongoing improvement in prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.
Taniguchi, H
1998-01-01
This article describes the US and Japan's "Common Agenda for Cooperation in Global Perspective." This agenda was launched in July 1993. The aim was to use a bilateral partnership to address critical global challenges in 1) Promotion of Health and Human Development; 2) Protection of the Environment; 3) Responses to Challenges to Global Stability; and 4) Advancement of Science and Technology. The bilateral effort has resulted in 18 initiatives worldwide. Six major accomplishments have occurred in coping with natural disasters in Kobe, Japan, and Los Angeles, US; coral reefs; assistance for women in developing countries; AIDS, children's health; and population problems. The bilateral effort has been successful due to the active involvement of the private sector, including businesses and nongovernmental organizations (NGOs). Many initiatives are developed and implemented in cooperation with local NGOs. The government needs the private sector's technical and managerial fields of expertise. Early investment in NGO efforts ensures the development of self-sustaining programs and public support. An Open Forum was held in March 12-13, 1998, as a commemoration of the 5-year cooperative bilateral effort. Over 300 people attended the Forum. Plenary sessions were devoted to the partnership between public and private sectors under the US-Japan Agenda. Working sessions focused on health and conservation. Participants suggested improved legal systems and social structures for facilitating activities of NGOs, further development by NGOs of their capacities, and support to NGOs from corporations.
Energy Technology Data Exchange (ETDEWEB)
Choi, Young Chul; Park, Tae Jin [KAERI, Daejeon (Korea, Republic of)
2016-05-15
Source localization in a dispersive medium has been carried out based on the time-of-arrival-differences (TOADs) method: a triangulation method and a circle intersection technique. Recent signal processing advances have led to calculating the TOAD using a joint time-frequency analysis of the signal, with the short-time Fourier transform (STFT) and the wavelet transform among the popular algorithms. Compared with previous methods, time-frequency analysis can provide more varied information and more reliable results, such as seismic-attenuation estimation, dispersive characteristics, wave mode analysis, and the temporal energy distribution of signals. These algorithms, however, have their own limitations for signal processing. In this paper, the effective use of the proposed algorithm in detecting the crack wave arrival time and localizing the source in rock masses suggests that evaluation and real-time monitoring of the intensity of damage to tunnels or other underground facilities is possible. Calculating variances from moving windows as a function of their size differentiates the signature of noise from that of the crack signal, which allows us to determine the crack wave arrival time. The source location is then determined as the point where the variance of the crack wave velocities between the real and virtual crack locations becomes a minimum. To validate our algorithm, we performed experiments in a tunnel, which resulted in successful determination of the wave arrival time and crack localization.
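The TOAD triangulation idea itself can be illustrated with a brute-force grid search over candidate source locations (a sketch with illustrative names and a known, non-dispersive wave speed; the paper's variance-based algorithm is not reproduced here):

```python
import numpy as np

def locate_tdoa(sensors, toads, v, grid):
    """Brute-force TOAD localization sketch: for each candidate source point,
    predict the arrival-time differences relative to sensor 0 at wave speed v
    and keep the point minimizing the squared misfit against the measured
    differences `toads` (one value per sensor 1..n-1)."""
    sensors = np.asarray(sensors, float)
    best, best_err = None, np.inf
    for p in grid:
        d = np.linalg.norm(sensors - p, axis=1)   # source-sensor distances
        pred = (d - d[0]) / v                     # differences vs. sensor 0
        err = np.sum((pred[1:] - toads) ** 2)
        if err < best_err:
            best, best_err = p, err
    return np.asarray(best)
```

In practice the grid search would be replaced by a nonlinear least-squares solver, and the arrival times themselves come from the moving-window variance criterion described in the abstract.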