WorldWideScience

Sample records for accurate timing improves

  1. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihoods, both for routes traveled and for sub-routes thereof. InTraTime allows the specification of temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
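As a rough illustration of the kind of trace mining described above, the sketch below keeps historical travel times keyed by route and time-of-day bucket and answers queries from the recorded distribution, falling back to all hours when a bucket is empty. All names and the fallback policy are invented for illustration; the actual InTraTime method additionally handles sub-routes and likelihoods.

```python
from collections import defaultdict
from statistics import median

class TravelTimeEstimator:
    """Toy estimator mining (origin, destination, hour) travel-time traces."""

    def __init__(self):
        # (origin, destination, hour_bucket) -> observed travel times (seconds)
        self.observations = defaultdict(list)

    def record(self, origin, destination, hour, travel_time_s):
        self.observations[(origin, destination, hour)].append(travel_time_s)

    def estimate(self, origin, destination, hour):
        times = self.observations.get((origin, destination, hour))
        if not times:  # fall back to all hours for this route
            times = [t for (o, d, _), ts in self.observations.items()
                     if (o, d) == (origin, destination) for t in ts]
        return median(times) if times else None

est = TravelTimeEstimator()
est.record("ward-A", "lab", 9, 120)
est.record("ward-A", "lab", 9, 140)
est.record("ward-A", "lab", 14, 90)
print(est.estimate("ward-A", "lab", 9))  # 130.0
```

A real system would also weight recent traces more heavily and attach likelihoods to alternative routes, as the abstract describes.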

  2. Can We Accurately Time the Administration of Antenatal Corticosteroids for Preterm Labor?

    Directory of Open Access Journals (Sweden)

    Paola Aghajanian

    2016-01-01

Full Text Available Background. Accurate timing of antenatal corticosteroids (ACS) has resulted in improved neonatal outcomes. Objectives. Our primary objective was to determine predictors for optimal timing of ACS in women presenting with spontaneous preterm labor. Study Design. A retrospective cohort study of women receiving ACS for spontaneous preterm birth was conducted. Women were included if they presented with preterm labor or preterm premature rupture of membranes. Accurate timing of ACS was defined as administration within 7 days of delivery. Maternal demographic and obstetric characteristics were compared between the groups receiving ACS ≤7 days and >7 days from delivery. Statistical analyses were performed using parametric and nonparametric tests. P<0.05 was considered significant. Results. The study included 215 subjects. Median latency from ACS administration to delivery was 6 days (IQR 32). Accurate timing of ACS occurred in 113 (53%) women and was associated with rupture of membranes (OR 13.8, 95% CI 5.9–32.6), cervical change (OR 7.1, 95% CI 3.0–17.1), and cervical dilation ≥2 cm (OR 3.9, 95% CI 1.5–10.3). Conclusions. Rupture of membranes, cervical change, and cervical dilation ≥2 cm were strong predictors of optimal timing. Only 53% of women with preterm labor received ACS optimally.

  3. Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells.

    Science.gov (United States)

    Voelker, Aaron R; Eliasmith, Chris

    2018-03-01

    Researchers building spiking neural networks face the challenge of improving the biological plausibility of their model networks while maintaining the ability to quantitatively characterize network behavior. In this work, we extend the theory behind the neural engineering framework (NEF), a method of building spiking dynamical networks, to permit the use of a broad class of synapse models while maintaining prescribed dynamics up to a given order. This theory improves our understanding of how low-level synaptic properties alter the accuracy of high-level computations in spiking dynamical networks. For completeness, we provide characterizations for both continuous-time (i.e., analog) and discrete-time (i.e., digital) simulations. We demonstrate the utility of these extensions by mapping an optimal delay line onto various spiking dynamical networks using higher-order models of the synapse. We show that these networks nonlinearly encode rolling windows of input history, using a scale invariant representation, with accuracy depending on the frequency content of the input signal. Finally, we reveal that these methods provide a novel explanation of time cell responses during a delay task, which have been observed throughout hippocampus, striatum, and cortex.

  4. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    Science.gov (United States)

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need to acquire many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to accurately determine fluorescence lifetimes in the subnanosecond range on thick multilayer samples, provided that offline processing is allowed.
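The rapid lifetime determination technique mentioned above has a particularly simple two-gate form for a mono-exponential decay: with two equal-width gates separated by a delay dt, the lifetime follows from the ratio of the integrated counts as tau = dt / ln(D0/D1). A minimal sketch with synthetic numbers and no IRF deconvolution:

```python
import math

def rld_lifetime(d0, d1, dt):
    """Two-gate RLD: d0, d1 are integrated counts in the two gates,
    dt is the gate separation; valid for a mono-exponential decay."""
    return dt / math.log(d0 / d1)

# Synthetic check: point samples of a tau = 2.5 ns decay stand in for the
# gate integrals (for equal-width gates the ratio is the same either way).
tau_true, dt = 2.5, 1.0
d0 = math.exp(-0.0 / tau_true)
d1 = math.exp(-dt / tau_true)
print(round(rld_lifetime(d0, d1, dt), 3))  # 2.5
```

In practice d0 and d1 are noisy photon counts, which is why the paper's combination with HiLo sectioning and IRF deconvolution matters for accuracy.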

  5. High accurate time system of the Low Latitude Meridian Circle.

    Science.gov (United States)

    Yang, Jing; Wang, Feng; Li, Zhiming

In order to obtain the high accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed, which includes GPS, a 1 MC frequency source and a self-made clock system. The second signal of GPS is used synchronously in the clock system, and the information can be collected by a computer automatically. The difficulty of the cancellation of the time keeper can be overcome by using this system.

  6. Multigrid time-accurate integration of Navier-Stokes equations

    Science.gov (United States)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  7. Accurate Lithium-ion battery parameter estimation with continuous-time system identification methods

    International Nuclear Information System (INIS)

    Xia, Bing; Zhao, Xin; Callafon, Raymond de; Garnier, Hugues; Nguyen, Truong; Mi, Chris

    2016-01-01

Highlights: • Continuous-time system identification is applied in Lithium-ion battery modeling. • Continuous-time and discrete-time identification methods are compared in detail. • The instrumental variable method is employed to further improve the estimation. • Simulations and experiments validate the advantages of continuous-time methods. - Abstract: The modeling of Lithium-ion batteries usually utilizes discrete-time system identification methods to estimate parameters of discrete models. However, in real applications, there is a fundamental limitation of the discrete-time methods in dealing with sensitivity when the system is stiff and the storage resolutions are limited. To overcome this problem, this paper adopts direct continuous-time system identification methods to estimate the parameters of equivalent circuit models for Lithium-ion batteries. Compared with discrete-time system identification methods, the continuous-time system identification methods provide more accurate estimates of both fast and slow dynamics in battery systems and are less sensitive to disturbances. A case study of a 2nd-order equivalent circuit model shows that the continuous-time estimates are more robust to high sampling rates, measurement noises and rounding errors. In addition, the estimation by the conventional continuous-time least squares method is further improved in the case of noisy output measurements by introducing the instrumental variable method. Simulation and experiment results validate the analysis and demonstrate the advantages of the continuous-time system identification methods in battery applications.

  8. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

Full Text Available In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high-quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts, introduced by fabrication inaccuracies, temperature changes and wear-out effects, on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy-saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 μs, while run-time savings of up to 42% are achieved, compared to single-sample acquisition.
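A minimal sketch of the core idea, drift-independent timestamp reconstruction from FIFO batch reads: instead of trusting the nominal output data rate, the per-sample period is measured from the sensor's internal timer between consecutive batch reads. The interface and numbers below are hypothetical, not the paper's actual algorithm.

```python
def reconstruct_timestamps(prev_timer, batch_timer, n_samples):
    """prev_timer: sensor timer at the last sample of the previous batch;
    batch_timer: sensor timer at the last sample of this batch;
    n_samples: samples in this batch. Returns per-sample timestamps."""
    # Period is measured from the sensor's own clock, so clock drift
    # relative to the nominal data rate cancels out.
    period = (batch_timer - prev_timer) / n_samples
    return [prev_timer + period * (i + 1) for i in range(n_samples)]

# Sensor nominally at 100 Hz (10 ms) but actually running ~1% fast (9.9 ms):
ts = reconstruct_timestamps(prev_timer=0.0, batch_timer=0.0495, n_samples=5)
print([round(t, 4) for t in ts])  # [0.0099, 0.0198, 0.0297, 0.0396, 0.0495]
```

The forward-only character matters for the paper's energy argument: each batch is finalized as soon as the next timer reading arrives, with no buffering of past batches.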

  9. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  10. Implicit time accurate simulation of unsteady flow

    Science.gov (United States)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. A reference solution, computed with an explicit second-order Runge-Kutta scheme, was used for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time-accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to the highly complex structure of the basins of attraction of the iterative method.

  11. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    Science.gov (United States)

Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L.; Glass, Christopher E.; Schuster, David M.

    2015-01-01

Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-Averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating the frequency content of the unsteady pressures and oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.

  12. Towards cycle-accurate performance predictions for real-time embedded systems

    NARCIS (Netherlands)

    Triantafyllidis, K.; Bondarev, E.; With, de P.H.N.; Arabnia, H.R.; Deligiannidis, L.; Jandieri, G.

    2013-01-01

    In this paper we present a model-based performance analysis method for component-based real-time systems, featuring cycle-accurate predictions of latencies and enhanced system robustness. The method incorporates the following phases: (a) instruction-level profiling of SW components, (b) modeling the

  13. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    Science.gov (United States)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.

  14. Improving Efficiency Using Time-Driven Activity-Based Costing Methodology.

    Science.gov (United States)

    Tibor, Laura C; Schultz, Stacy R; Menaker, Ronald; Weber, Bradley D; Ness, Jay; Smith, Paula; Young, Phillip M

    2017-03-01

    The aim of this study was to increase efficiency in MR enterography using a time-driven activity-based costing methodology. In February 2015, a multidisciplinary team was formed to identify the personnel, equipment, space, and supply costs of providing outpatient MR enterography. The team mapped the current state, completed observations, performed timings, and calculated costs associated with each element of the process. The team used Pareto charts to understand the highest cost and most time-consuming activities, brainstormed opportunities, and assessed impact. Plan-do-study-act cycles were developed to test the changes, and run charts were used to monitor progress. The process changes consisted of revising the workflow associated with the preparation and administration of glucagon, with completed implementation in November 2015. The time-driven activity-based costing methodology allowed the radiology department to develop a process to more accurately identify the costs of providing MR enterography. The primary process modification was reassigning responsibility for the administration of glucagon from nurses to technologists. After implementation, the improvements demonstrated success by reducing non-value-added steps and cost by 13%, staff time by 16%, and patient process time by 17%. The saved process time was used to augment existing examination time slots to more accurately accommodate the entire enterographic examination. Anecdotal comments were captured to validate improved staff satisfaction within the multidisciplinary team. This process provided a successful outcome to address daily workflow frustrations that could not previously be improved. A multidisciplinary team was necessary to achieve success, in addition to the use of a structured problem-solving approach. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
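The costing arithmetic behind time-driven activity-based costing is compact: each resource gets a capacity cost rate (total cost divided by practical capacity), and a process costs the sum of activity time multiplied by rate. The sketch below uses invented figures, not the study's data, to show how reassigning a task from a higher-rate to a lower-rate resource lowers process cost.

```python
def capacity_cost_rate(total_cost, practical_capacity_min):
    """TDABC capacity cost rate: cost per minute of practical capacity."""
    return total_cost / practical_capacity_min

def process_cost(activities, rates):
    """activities: list of (resource, minutes); rates: resource -> cost/min."""
    return sum(minutes * rates[resource] for resource, minutes in activities)

# Hypothetical annual costs and practical capacities (minutes):
rates = {
    "technologist": capacity_cost_rate(96_000, 120_000),  # 0.80 per minute
    "nurse": capacity_cost_rate(120_000, 100_000),        # 1.20 per minute
}
# Reassigning glucagon administration from nurse to technologist,
# and trimming handoff time, as in the workflow change described above:
before = process_cost([("nurse", 15), ("technologist", 45)], rates)
after = process_cost([("technologist", 55)], rates)
print(before, after)  # 54.0 44.0
```

The same bookkeeping exposes which activities dominate cost, which is what the team's Pareto charts were used for.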

  15. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
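The L²/D heuristic can be checked numerically: solving the 1D transient diffusion equation with explicit finite differences and timing how long the solution takes to come within a tolerance of its steady state shows the response time quadrupling when the domain length doubles. Grid size and tolerance below are illustrative choices, not the paper's method.

```python
def response_time(L, D, nx=51, tol=0.01):
    """Time for 1D diffusion with fixed-value boundaries u(0)=0, u(L)=1
    to come within tol of its (linear) steady state, by explicit FD."""
    dx = L / (nx - 1)
    dt = 0.4 * dx * dx / D            # stable explicit time step
    u = [0.0] * nx
    u[-1] = 1.0
    steady = [i * dx / L for i in range(nx)]
    t = 0.0
    while max(abs(a - b) for a, b in zip(u, steady)) > tol:
        u = ([0.0]
             + [u[i] + D * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
                for i in range(1, nx - 1)]
             + [1.0])
        t += dt
    return t

t1 = response_time(L=1.0, D=1.0)
t2 = response_time(L=2.0, D=1.0)
print(round(t2 / t1, 2))  # doubling L quadruples the response time
```

This brute-force timing is exactly the expensive computation that the paper's moment-based estimate replaces at a fraction of the cost.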

  16. Technical Note: Using experimentally determined proton spot scanning timing parameters to accurately model beam delivery time.

    Science.gov (United States)

    Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin

    2017-10-01

To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing times for layer switch, spot switch, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): Δt = T_Log - T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), the average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in the x-direction and 19.3 m/s in the y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of the 602 beam deliveries (Δt = -0.49 ± 1.44 s), significantly more accurately than BDTs calculated using nominal timing parameters (Δt = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may
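A hedged sketch of such a BDT model, using the measured averages quoted above (1.91 s layer switch, 1.93 ms magnet preparation, 5.9 and 19.3 m/s scanning speeds, 8.7 MU/s spill rate) but ignoring the maximum-charge-per-spill constraint; the spot layout and function names are invented:

```python
LAYER_SWITCH_S = 1.91          # average layer switch time (s)
MAGNET_PREP_S = 0.00193        # magnet preparation/verification per spot (s)
SPEED_X, SPEED_Y = 5.9, 19.3   # scanning speeds (m/s)
SPILL_RATE_MU_S = 8.7          # proton spill rate (MU/s)

def beam_delivery_time(layers):
    """layers: list of spot lists; each spot is (x_m, y_m, mu).
    BDT = sum of layer-switch, spot-switch, and spot-delivery times."""
    t = 0.0
    for layer in layers:
        t += LAYER_SWITCH_S
        prev = None
        for x, y, mu in layer:
            if prev is not None:
                # Spot switch: magnet prep plus travel time of the slower axis.
                t += MAGNET_PREP_S + max(abs(x - prev[0]) / SPEED_X,
                                         abs(y - prev[1]) / SPEED_Y)
            t += mu / SPILL_RATE_MU_S     # spot delivery at the spill rate
            prev = (x, y)
    return t

# Tiny invented plan: two layers, three spots in total.
plan = [[(0.0, 0.0, 1.0), (0.01, 0.0, 1.0)], [(0.0, 0.0, 2.0)]]
print(round(beam_delivery_time(plan), 3))  # 4.283
```

With the measured figures, layer switches dominate short deliveries, which is consistent with the paper's emphasis on determining that parameter accurately.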

  17. Towards an accurate real-time locator of infrasonic sources

    Science.gov (United States)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

Infrasonic signals propagate from an atmospheric source via media with stochastic and fast space-varying conditions. Hence, their travel time, their amplitude at sensor recordings and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of an a priori probability distribution function (APDF) of the propagation model parameters with a likelihood function (LF) of observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a lower computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate, provided that an adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, but this leads us to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). The other is the outcome of previous findings of linear mean travel times for the four first infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability

  18. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast, but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC/D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation
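A toy illustration of the ratio method using random matrices as stand-ins for the PB and SC dose engines: cheap PB-based gradient iterations run inside an outer loop that periodically refreshes the voxel-by-voxel correction ratio R = D_SC/D_PB from the "accurate" model. Everything here (matrices, step size, iteration counts) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beams = 40, 6
A_pb = rng.random((n_vox, n_beams))        # fast but approximate dose model
A_sc = A_pb * (1 + 0.1 * rng.standard_normal((n_vox, n_beams)))  # "accurate"
target = np.full(n_vox, 1.0)               # prescribed dose per voxel

w = np.full(n_beams, 0.1)                  # beam intensities
R = np.ones(n_vox)                         # dose correction ratio matrix
for outer in range(5):                     # each outer pass = one SC calc
    for _ in range(200):                   # cheap PB-based inner iterations
        dose = R * (A_pb @ w)              # corrected PB dose, D_PB * R
        grad = A_pb.T @ (R * (dose - target))
        w = np.clip(w - 0.01 * grad, 0, None)   # keep intensities nonnegative
    d_pb, d_sc = A_pb @ w, A_sc @ w
    R = np.where(d_pb > 0, d_sc / d_pb, 1.0)    # refresh R from accurate calc
print(round(float(np.abs(A_sc @ w - target).mean()), 3))
```

The point of the construction is visible in the loop structure: the expensive ("SC") dose calculation runs only once per outer pass, while the inner optimization uses the cheap model throughout.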

  19. Precise and accurate train run data: Approximation of actual arrival and departure times

    DEFF Research Database (Denmark)

    Richter, Troels; Landex, Alex; Andersen, Jonas Lohmann Elkjær

… with the approximated actual arrival and departure times. As a result, all future statistics can now either be based on track circuit data with high precision or on approximated actual arrival times with high accuracy. Consequently, performance analysis will be more accurate, punctuality statistics more correct, KPI...

  20. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  1. Capsule-odometer: a concept to improve accurate lesion localisation.

    Science.gov (United States)

    Karargyris, Alexandros; Koulaouzidis, Anastasios

    2013-09-21

In order to improve lesion localisation in small-bowel capsule endoscopy, a modified capsule design has been proposed, incorporating localisation and, in theory, stabilisation capabilities. The proposed design consists of a capsule fitted with protruding wheels attached to a spring mechanism. This would act as a miniature odometer, leading to more accurate lesion localisation information in relation to the onset of the investigation (spring expansion, e.g., pyloric opening). Furthermore, this capsule could allow stabilisation of the recorded video, as any erratic, non-forward movement through the gut is minimised. Three-dimensional (3-D) printing technology was used to build a capsule prototype. Thereafter, miniature wheels were also 3-D printed and mounted on a spring which was attached to conventional capsule endoscopes for the purpose of this proof-of-concept experiment. In vitro and ex vivo experiments with porcine small-bowel are presented herein. Further experiments have been scheduled.

  2. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    Science.gov (United States)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  3. Time-Accurate Simulations of Synthetic Jet-Based Flow Control for An Axisymmetric Spinning Body

    National Research Council Canada - National Science Library

    Sahu, Jubaraj

    2004-01-01

    .... A time-accurate Navier-Stokes computational technique has been used to obtain numerical solutions for the unsteady jet-interaction flow field for a spinning projectile at a subsonic speed, Mach...

  4. Improved management of radiotherapy departments through accurate cost data

    International Nuclear Information System (INIS)

    Kesteloot, K.; Lievens, Y.; Schueren, E. van der

    2000-01-01

    Escalating health care expenses urge governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discuss the different cost components. The costs of a radiotherapy department were estimated based on the accreditation norms for radiotherapy departments set forth in Belgian legislation. The major cost components of radiotherapy are the costs of buildings and facilities, equipment, medical and non-medical staff, materials, and overhead. They represent around 3, 30, 50, 4, and 13% of total costs, respectively, irrespective of department size. The average cost per patient decreases with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations in patient load do not affect the cost picture significantly, owing to the small impact of variable costs. With larger increases in patient load, however, additional equipment and/or staff become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that assuming personnel availability at 20% below full time yields a 12-13% decrease in total costs; that, owing to evolving seniority levels, the annual increase in wage costs is estimated at more than 1%; and that changing the assumed clinical lifetime of buildings and equipment, with the interest rate unchanged, yields a 5% reduction in total costs and cost per patient. More sophisticated equipment will not have a very large impact on cost (±4000 BEF/patient), provided the additional equipment is adapted to the size of the department. That the recommendations we used, based on Belgian legislation, are not outrageous is shown by replacing them with the USA Blue Book recommendations. Depending on the department size, costs in
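
    The stepwise, semi-fixed cost behaviour described above can be illustrated with a small sketch. All figures below are invented for illustration, not the paper's Belgian cost data: equipment is added in discrete steps as patient load grows, so average cost per patient falls within a capacity step and total cost jumps when an additional machine becomes necessary.

```python
import math

def department_cost(patients, capacity_per_machine=450, fixed=1_000_000,
                    cost_per_machine=2_500_000, variable_per_patient=200):
    """Stepwise cost model: semi-fixed equipment costs are added in
    discrete steps as patient load grows (all figures illustrative)."""
    machines = max(1, math.ceil(patients / capacity_per_machine))
    return fixed + machines * cost_per_machine + patients * variable_per_patient

def cost_per_patient(patients, **kw):
    return department_cost(patients, **kw) / patients

# Within one capacity step, average cost per patient falls with load...
assert cost_per_patient(450) < cost_per_patient(300)
# ...but the 451st patient triggers a new machine: a semi-fixed jump.
assert department_cost(451) - department_cost(450) == 2_500_200
```

The jump of 2,500,200 is one machine's semi-fixed cost plus one patient's variable cost, matching the abstract's point that small load changes barely move costs while larger ones add whole semi-fixed blocks.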

  5. Improved quantification accuracy for duplex real-time PCR detection of genetically modified soybean and maize in heat processed foods

    Directory of Open Access Journals (Sweden)

    CHENG Fang

    2013-04-01

    Real-time PCR has been widely used for quantitative GMO detection in recent years. Accurate quantification of GMOs based on real-time PCR remains difficult, however, especially for highly processed samples. To develop a suitable and accurate real-time PCR system for highly processed GM samples, we ameliorated several real-time PCR parameters, including a re-designed, shorter target DNA fragment; similar lengths of the amplified endogenous and exogenous gene targets; and similar GC contents and melting temperatures of the PCR primers and TaqMan probes. A Heat-Treatment Processing Model (HTPM) was also established using soybean flour samples containing GM soybean GTS 40-3-2 to validate the effectiveness of the improved real-time PCR system. Test results showed that the quantitative bias of GM content in heat-processed samples was lowered using the new PCR system. The improved duplex real-time PCR was further validated using processed foods derived from GM soybean, and more accurate GM content values in these foods were also achieved. These results demonstrate that the improved duplex real-time PCR is well suited to quantitative detection in highly processed food products.
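
    As background for how GM content is typically derived from duplex real-time PCR Ct values, here is an efficiency-corrected relative-quantification sketch in the style of the Pfaffl method, relative to a certified reference material. This is a generic formula for illustration, not necessarily the exact calculation used in this study:

```python
def gmo_percent(ct_trans, ct_endo, ct_trans_ref, ct_endo_ref,
                eff_trans=2.0, eff_endo=2.0, ref_percent=100.0):
    """GM content (%) of a sample relative to a reference material of
    known GM percentage, from transgene and endogenous-gene Ct values.
    eff = amplification efficiency per cycle (2.0 = perfect doubling)."""
    ratio_trans = eff_trans ** (ct_trans_ref - ct_trans)
    ratio_endo = eff_endo ** (ct_endo_ref - ct_endo)
    return ref_percent * ratio_trans / ratio_endo

# Identical Cts to the 100% reference -> 100% GM content
print(gmo_percent(30.0, 25.0, 30.0, 25.0))  # 100.0
# Transgene appearing one cycle later -> half the transgene copies
print(gmo_percent(31.0, 25.0, 30.0, 25.0))  # 50.0
```

The abstract's design choices (similar amplicon lengths, GC contents, and melting temperatures) serve precisely to keep the two efficiencies comparable so that this kind of ratio remains valid after heat processing degrades the DNA.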

  6. Time scale controversy: Accurate orbital calibration of the early Paleogene

    Science.gov (United States)

    Roehl, U.; Westerhold, T.; Laskar, J.

    2012-12-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to 54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene ash -17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  7. Approaches for the accurate definition of geological time boundaries

    Science.gov (United States)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa; boundaries between these units therefore correspond to dramatic faunal and/or floral turnovers and are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. 
As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age
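
    For context on the arithmetic behind such dates: once a coherent cluster of zircon analyses interpreted as closest to deposition has been selected, a depositional age is commonly computed as an inverse-variance weighted mean of the individual dates. A generic sketch of that step (not the authors' specific workflow):

```python
def weighted_mean(dates, errors):
    """Inverse-variance weighted mean of U-Pb dates (Ma) and its
    1-sigma uncertainty; `errors` are the 1-sigma analytical errors
    of the individual zircon dates."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * d for w, d in zip(weights, dates)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# Two hypothetical zircon dates near the Permo-Triassic boundary
mean, sigma = weighted_mean([252.0, 252.2], [0.1, 0.1])
print(mean, sigma)  # 252.1, ~0.071
```

The complications listed in the abstract (Pb loss pulling dates younger, antecrystic residence pulling them older) are exactly what makes choosing which dates enter this mean the hard part.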

  8. A discontinuous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin; Bagci, Hakan

    2011-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results

  9. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    International Nuclear Information System (INIS)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-01-01

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. 
Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
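
    The asymmetric Gaussian used above to fit the measured intrinsic detector response can be sketched as a Gaussian with different widths on either side of the peak. The exact parameterization in the paper is not reproduced here, so treat this form as an assumption:

```python
import math

def asymmetric_gaussian(x, mu=0.0, sigma_left=1.0, sigma_right=2.0,
                        amplitude=1.0):
    """Asymmetric Gaussian: separate widths left and right of the peak,
    a common form for fitting skewed detector point-response profiles."""
    sigma = sigma_left if x < mu else sigma_right
    return amplitude * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# The profile drops to exp(-0.5) of the peak at one (side-specific)
# sigma on each side, even though the two sigmas differ.
print(asymmetric_gaussian(0.0))   # 1.0 (peak)
print(asymmetric_gaussian(-1.0))  # ~0.607, one sigma_left below mu
print(asymmetric_gaussian(2.0))   # ~0.607, one sigma_right above mu
```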

  10. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. 
Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  11. Producing accurate wave propagation time histories using the global matrix method

    International Nuclear Information System (INIS)

    Obenchain, Matthew B; Cesnik, Carlos E S

    2013-01-01

    This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)

  12. Decision tree for accurate infection timing in individuals newly diagnosed with HIV-1 infection.

    Science.gov (United States)

    Verhofstede, Chris; Fransen, Katrien; Van Den Heuvel, Annelies; Van Laethem, Kristel; Ruelle, Jean; Vancutsem, Ellen; Stoffels, Karolien; Van den Wijngaert, Sigi; Delforge, Marie-Luce; Vaira, Dolores; Hebberecht, Laura; Schauvliege, Marlies; Mortier, Virginie; Dauwe, Kenny; Callens, Steven

    2017-11-29

    There is today no gold-standard method to accurately define the time passed since infection at HIV diagnosis. Infection timing and incidence measurement are however essential to better monitor the dynamics of local epidemics and the effect of prevention initiatives. Three methods for infection timing were evaluated using 237 serial samples from documented seroconversions and 566 cross-sectional samples from newly diagnosed patients: identification of antibodies against the HIV p31 protein in INNO-LIA, Sedia™ BED CEIA, and Sedia™ LAg-Avidity EIA. A multi-assay decision tree for infection timing was developed. Clear differences in recency window between BED CEIA, LAg-Avidity EIA, and p31 antibody presence were observed, with a switch from recent to long-term infection a median of 169.5, 108.0, and 64.5 days, respectively, after collection of the pre-seroconversion sample. BED showed high reliability for identification of long-term infections, while LAg-Avidity is highly accurate for identification of recent infections. Using BED as the initial assay to identify long-term infections, and LAg-Avidity as a confirmatory assay for those classified as recent by BED, exploits the strengths of both while reducing the workload. The short recency window of p31 antibodies allows very early infections to be discriminated from early infections based on this marker. BED recent-infection results not confirmed by LAg-Avidity are considered to reflect a period more distant from the infection time. False recency predictions in this group can be minimized by excluding patients with a CD4 count of less than 100 cells/mm3 or with p31 antibodies. For the 566 cross-sectional samples, the outcome of the decision tree confirmed the infection timing based on the results of all three markers but reduced the overall cost from 13.2 USD to 5.2 USD per sample. A step-wise multi-assay decision tree allows accurate timing of the HIV infection at diagnosis at affordable effort and cost and can be an important
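
    The stepwise logic described above can be sketched as a small classifier. The branch order, guard conditions, and output labels here are simplifying assumptions for illustration, not the authors' validated algorithm:

```python
def classify_infection(bed_recent, lag_recent, p31_present):
    """Illustrative multi-assay decision tree for HIV infection timing:
    BED screens for long-term infection, LAg-Avidity confirms recency,
    and the short-window p31 marker splits very early from early."""
    if not bed_recent:
        return "long-term"        # BED reliably flags long-term infection
    if not lag_recent:
        return "less recent"      # recency not confirmed by LAg-Avidity
    # p31 appears late in seroconversion; its absence suggests very early
    return "very early" if not p31_present else "early"

print(classify_infection(bed_recent=True, lag_recent=True,
                         p31_present=False))  # very early
```

Because the confirmatory assays run only on the subset flagged recent by BED, a tree like this needs fewer tests per sample than running all three markers on everyone, which is how the study cuts per-sample cost.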

  13. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  14. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
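
    The comparison described above can be sketched on synthetic data: a fixed-ratio prediction TPT ≈ 1.33 × eSCT versus an ordinary least-squares fit on eSCT plus a case-mix covariate. The coefficients, variables, and data below are invented for illustration, not the Dutch benchmarking dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
esct = rng.uniform(30, 240, n)          # estimated surgeon-controlled time (min)
asa = rng.integers(1, 4, n)             # toy ASA class, 1..3
tpt = 1.2 * esct + 15 * asa + rng.normal(0, 10, n)  # synthetic "true" TPT

# Fixed-ratio baseline: TPT ~= 1.33 * eSCT (the van Veen-Berkx style model)
pred_fixed = 1.33 * esct

# Linear regression: intercept + eSCT + ASA via ordinary least squares
X = np.column_stack([np.ones(n), esct, asa])
beta, *_ = np.linalg.lstsq(X, tpt, rcond=None)
pred_lr = X @ beta

def rmse(pred):
    return float(np.sqrt(np.mean((pred - tpt) ** 2)))

print(rmse(pred_fixed), rmse(pred_lr))  # regression error is clearly lower
```

Whenever TPT depends on case-mix variables beyond surgeon-controlled time, a single fixed multiplier cannot absorb that variation, which is the intuition behind the study's finding.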

  15. A discontinuous Galerkin finite element method with an efficient time integration scheme for accurate simulations

    KAUST Repository

    Liu, Meilin

    2011-07-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly-accurate time integration scheme is presented. The scheme achieves its high accuracy using numerically constructed predictor-corrector integration coefficients. Numerical results show that this new time integration scheme uses considerably larger time steps than the fourth-order Runge-Kutta method when combined with a DG-FEM using higher-order spatial discretization/basis functions for high accuracy. © 2011 IEEE.
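
    The paper's integration coefficients are constructed numerically, so as a generic illustration of the predictor-corrector idea only, here is a classical second-order Adams-Bashforth predictor with a trapezoidal (Adams-Moulton) corrector. This is an assumed textbook scheme, not the CESE/DG-FEM scheme itself:

```python
import math

def abm2_step(f, t, y, f_prev, h):
    """One predictor-corrector step for y' = f(t, y): 2nd-order
    Adams-Bashforth predictor, then a trapezoidal corrector."""
    fn = f(t, y)
    y_pred = y + h * (1.5 * fn - 0.5 * f_prev)      # AB2 predictor
    y_corr = y + 0.5 * h * (f(t + h, y_pred) + fn)  # AM2 corrector
    return y_corr, fn                               # fn is next step's f_prev

# Integrate y' = -y from y(0) = 1 to t = 1; the exact answer is exp(-1).
f = lambda t, y: -y
h, y = 0.01, 1.0
f_prev = -math.exp(h)  # exact history value at t = -h to start the method
for i in range(100):
    y, f_prev = abm2_step(f, i * h, y, f_prev, h)
print(y)  # close to exp(-1) ~ 0.3679
```

Predictor-corrector pairs like this trade one extra function evaluation per step for a larger stable time step, the same trade-off the abstract reports against fourth-order Runge-Kutta.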

  16. A Real-Time Accurate Model and Its Predictive Fuzzy PID Controller for Pumped Storage Unit via Error Compensation

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2017-12-01

    Model simulation and control of a pumped storage unit (PSU) are essential to improving the dynamic quality of a power station. Novel control methods can be properly applied in engineering only if the PSU models reflect the actual transient process. The contributions of this paper are that (1) a real-time accurate equivalent circuit model (RAECM) of a PSU via error compensation is proposed to reconcile the conflict between real-time online simulation and accuracy under various operating conditions, and (2) an adaptive predictive fuzzy PID controller (APFPID) based on the RAECM is put forward to overcome the instability of conventional control under no-load conditions with a low water head. All hydraulic factors in the pipeline system are fully considered based on the equivalent lumped-circuits theorem. The pretreatment, which consists of an improved Suter transformation and a BP neural network, and an online simulation method featuring two iterative loops are jointly proposed to improve the solving accuracy for the pump-turbine. Moreover, modified formulas for compensating error are derived with variable-spatial discretization to further improve the accuracy of the real-time simulation. The implicit RadauIIA method is verified to be more suitable for the PSUGS owing to its wider stability domain. The APFPID controller is then constructed by integrating fuzzy PID with model predictive control. Rolling prediction by the RAECM is proposed to replace rolling optimization, with its computational speed guaranteed. Finally, simulations and on-site measurements are compared to establish the trustworthiness of the RAECM under various running conditions. Comparative experiments also indicate that the APFPID controller outperforms other controllers in most cases, especially under low-water-head conditions. Satisfying results of the RAECM have been achieved in engineering, and it provides a novel model reference for PSUGS.

  17. Time-driven Activity-based Costing More Accurately Reflects Costs in Arthroplasty Surgery.

    Science.gov (United States)

    Akhavan, Sina; Ward, Lorrayne; Bozic, Kevin J

    2016-01-01

    Cost estimates derived from traditional hospital cost accounting systems have inherent limitations that restrict their usefulness for measuring process and quality improvement. Newer approaches such as time-driven activity-based costing (TDABC) may offer more precise estimates of true cost, but to our knowledge, the differences between TDABC and more traditional approaches have not been explored systematically in arthroplasty surgery. The purposes of this study were to compare the costs associated with (1) primary total hip arthroplasty (THA); (2) primary total knee arthroplasty (TKA); and (3) three surgeons performing these total joint arthroplasties (TJAs), as measured using TDABC versus traditional hospital accounting (TA). Process maps were developed for each phase of care (preoperative, intraoperative, and postoperative) for patients undergoing primary TJA performed by one of three surgeons at a tertiary care medical center. Personnel costs for each phase of care were measured using TDABC based on fully loaded labor rates, including physician compensation. Costs associated with consumables (including implants) were calculated based on direct purchase price. Total costs for 677 primary TJAs were aggregated over 17 months (January 2012 to May 2013) and organized into cost categories (room and board, implant, operating room services, drugs, supplies, other services). Costs derived using TDABC, based on actual time and intensity of resources used, were compared with costs derived using TA techniques based on activity-based costing and indirect costs calculated as a percentage of direct costs from the hospital decision support system. Substantial differences between cost estimates using TDABC and TA were found for primary THA (USD 12,982 TDABC versus USD 23,915 TA), primary TKA (USD 13,661 TDABC versus USD 24,796 TA), and individually across all three surgeons for both (THA: TDABC = 49%-55% of TA total cost; TKA: TDABC = 53%-55% of TA total cost). Cost
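
    The core TDABC calculation is simple enough to state directly: each resource consumed in a phase of care is charged as time used multiplied by that resource's capacity cost rate. A toy sketch, with times and rates invented for illustration rather than taken from the study:

```python
def tdabc_cost(activities):
    """Time-driven activity-based costing: sum over phases of care of
    (minutes of resource time used) x (capacity cost rate per minute)."""
    return sum(minutes * rate_per_min for minutes, rate_per_min in activities)

# Hypothetical episode of care for one arthroplasty patient
episode = [
    (30, 4.0),    # preoperative workup: 30 min at $4/min loaded rate
    (120, 25.0),  # intraoperative: OR time at a fully loaded staff rate
    (2880, 1.5),  # postoperative: two days of ward time
]
print(tdabc_cost(episode))  # 7440.0
```

Because every dollar is tied to measured time and a rate, TDABC avoids the TA practice of spreading indirect costs as a blanket percentage of direct costs, which is why the two methods diverge so sharply in the abstract's THA/TKA figures.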

  18. Renal contrast-enhanced MR angiography: timing errors and accurate depiction of renal artery origins.

    Science.gov (United States)

    Schmidt, Maria A; Morgan, Robert

    2008-10-01

    To investigate bolus timing artifacts that impair depiction of renal arteries at contrast material-enhanced magnetic resonance (MR) angiography and to determine the effect of contrast agent infusion rates on artifact generation. Renal contrast-enhanced MR angiography was simulated for a variety of infusion schemes, assuming both correct and incorrect timing between data acquisition and contrast agent injection. In addition, the ethics committee approved the retrospective evaluation of clinical breath-hold renal contrast-enhanced MR angiographic studies obtained with automated detection of contrast agent arrival. Twenty-two studies were evaluated for their ability to depict the origin of renal arteries in patent vessels and for any signs of timing errors. Simulations showed that a completely artifactual stenosis or an artifactual overestimation of an existing stenosis at the renal artery origin can be caused by timing errors of the order of 5 seconds in examinations performed with contrast agent infusion rates compatible with or higher than those of hand injections. Lower infusion rates make the studies more likely to accurately depict the origin of the renal arteries. In approximately one-third of all clinical examinations, different contrast agent uptake rates were detected on the left and right sides of the body, and thus allowed us to confirm that it is often impossible to optimize depiction of both renal arteries. In three renal arteries, a signal void was found at the origin in a patent vessel, and delayed contrast agent arrival was confirmed. Computer simulations and clinical examinations showed that timing errors impair the accurate depiction of renal artery origins. (c) RSNA, 2008.

  19. Development of an Improved Time Varying Loudness Model with the Inclusion of Binaural Loudness Summation

    Science.gov (United States)

    Charbonneau, Jeremy

    As the perceived quality of a product is becoming more important in the manufacturing industry, more emphasis is being placed on accurately predicting the sound quality of everyday objects. This study was undertaken to improve upon current prediction techniques with regard to the psychoacoustic descriptor of loudness and an improved binaural summation technique. The feasibility of this project was first investigated through a loudness matching experiment involving thirty-one subjects and pure tones of constant sound pressure level. A dependence of binaural summation on frequency was observed which had previously not been a subject of investigation in the reviewed literature. A follow-up investigation was carried out with forty-eight volunteers and pure tones of constant sensation level. Contrary to existing theories in literature the resulting loudness matches revealed an amplitude versus frequency relationship which confirmed the perceived increase in loudness when a signal was presented to both ears simultaneously as opposed to one ear alone. The resulting trend strongly indicated that the higher the frequency of the presented signal, the greater the increase in observed binaural summation. The results from each investigation were summarized into a single binaural summation algorithm and inserted into an improved time-varying loudness model. Using experimental techniques, it was demonstrated that the updated binaural summation algorithm was a considerable improvement over the state of the art approach for predicting the perceived binaural loudness. The improved function retained the ease of use from the original model while additionally providing accurate estimates of diotic listening conditions from monaural WAV files. It was clearly demonstrated using a validation jury test that the revised time-varying loudness model was a significant improvement over the previously standardized approach.

  20. Improved clinical documentation leads to superior reportable outcomes: An accurate representation of patient's clinical status.

    Science.gov (United States)

    Elkbuli, Adel; Godelman, Steven; Miller, Ashley; Boneva, Dessy; Bernal, Eileen; Hai, Shaikh; McKenney, Mark

    2018-05-01

    Clinical documentation can be underappreciated. Trauma centers (TCs) are now routinely evaluated for quality performance. TCs with poor documentation may not accurately reflect actual injury burden or comorbidities, which can impact the accuracy of mortality measures. Markers exist to adjust crude death rates for injury severity: observed-over-expected deaths (O/E) adjust for injury; the Case Mix Index (CMI) reflects disease burden; and Severity of Illness (SOI) measures organ dysfunction. We aim to evaluate the impact of implementing a Clinical Documentation Improvement Program (CDIP) on reported outcomes. We reviewed 2 years of prospectively collected data for trauma patients during the implementation of the CDIP. A two-group prospective observational study design was used to evaluate the pre-implementation and post-implementation phases of improved clinical documentation. T-test and chi-squared tests were used, with significance defined as p < 0.05. The pre-implementation period had 49 deaths out of 1419 patients (3.45%), while the post-implementation period had 38 deaths out of 1454 (2.61%) (non-significant). There was, however, a significant difference between O/E ratios: 1.36 in the pre-phase and 0.70 in the post-phase (p < 0.001). The two groups also differed on CMI, with a pre-group mean of 2.48 and a post-group mean of 2.87 (p < 0.001), indicating higher injury burden in the post-group. SOI started at 2.12 and significantly increased to 2.91, signifying more organ system dysfunction (p < 0.018). Improved clinical documentation results in improved accuracy of measures of mortality, injury severity, and comorbidities, and a more accurate reflection in O/E mortality ratios, CMI, and SOI. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  1. Joint accurate time and stable frequency distribution infrastructure sharing fiber footprint with research network

    Czech Academy of Sciences Publication Activity Database

    Vojtěch, J.; Šlapák, M.; Škoda, P.; Radil, J.; Havliš, O.; Altmann, M.; Münster, P.; Velč, R.; Kundrát, J.; Altmannová, L.; Vohnout, R.; Horváth, T.; Hůla, M.; Smotlacha, V.; Čížek, Martin; Pravdová, Lenka; Řeřucha, Šimon; Hrabina, Jan; Číp, Ondřej

    2017-01-01

    Roč. 56, č. 2 (2017), s. 1-7, č. článku 027101. ISSN 0091-3286 R&D Projects: GA ČR GB14-36681G Institutional support: RVO:68081731 Keywords : accurate time * stable frequency * wavelength division multiplexing * bidirectional reciprocal path * Sagnac effect Subject RIV: BH - Optics, Masers, Lasers OBOR OECD: Optics (including laser optics and quantum optics) Impact factor: 1.082, year: 2016

  2. Finite-time adaptive sliding mode force control for electro-hydraulic load simulator based on improved GMS friction model

    Science.gov (United States)

    Kang, Shuo; Yan, Hao; Dong, Lijing; Li, Changchun

    2018-03-01

This paper addresses the force tracking problem of an electro-hydraulic load simulator under the influence of nonlinear friction and uncertain disturbance. A nonlinear system model combined with the improved generalized Maxwell-slip (GMS) friction model is first derived to describe the characteristics of the load simulator system more accurately. Then, by using a particle swarm optimization (PSO) algorithm combined with analysis of the system's hysteresis characteristics, the GMS friction parameters are identified. To compensate for nonlinear friction and uncertain disturbance, a finite-time adaptive sliding mode control method is proposed based on the accurate system model. This controller ensures that the system state moves along the nonlinear sliding surface to steady state in a short time, with good dynamic properties under the influence of parametric uncertainties and disturbance, which further improves the force loading accuracy and rapidity. Finally, simulation and experimental results demonstrate the effectiveness of the proposed sliding mode control strategy.

  3. A Time--Independent Born--Oppenheimer Approximation with Exponentially Accurate Error Estimates

    CERN Document Server

    Hagedorn, G A

    2004-01-01

We consider a simple molecular--type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \[ H(\epsilon)\ =\ -\,\frac{\epsilon^4}2\, \frac{\partial^2\phantom{i}}{\partial y^2}\ +\ h(y), \] where $h(y)$ is a $2\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born--Oppenheimer parameter $\epsilon$ by optimal truncation of the Rayleigh--Schr\"odinger series. That is, we construct $E_\epsilon$ and $\Psi_\epsilon$, such that $\|\Psi_\epsilon\|\,=\,O(1)$ and $\|\,(H(\epsilon)\,-\,E_\epsilon)\,\Psi_\epsilon\,\|$ is exponentially small in $1/\epsilon^2$ as $\epsilon \to 0$.

  4. LocARNA-P: Accurate boundary prediction and improved detection of structural RNAs

    DEFF Research Database (Denmark)

    Will, Sebastian; Joshi, Tejal; Hofacker, Ivo L.

    2012-01-01

    Current genomic screens for noncoding RNAs (ncRNAs) predict a large number of genomic regions containing potential structural ncRNAs. The analysis of these data requires highly accurate prediction of ncRNA boundaries and discrimination of promising candidate ncRNAs from weak predictions. Existing...... methods struggle with these goals because they rely on sequence-based multiple sequence alignments, which regularly misalign RNA structure and therefore do not support identification of structural similarities. To overcome this limitation, we compute columnwise and global reliabilities of alignments based...... on sequence and structure similarity; we refer to these structure-based alignment reliabilities as STARs. The columnwise STARs of alignments, or STAR profiles, provide a versatile tool for the manual and automatic analysis of ncRNAs. In particular, we improve the boundary prediction of the widely used nc...

  5. Improvement of Galilean refractive beam shaping system for accurately generating near-diffraction-limited flattop beam with arbitrary beam size.

    Science.gov (United States)

    Ma, Haotong; Liu, Zejin; Jiang, Pengzhi; Xu, Xiaojun; Du, Shaojun

    2011-07-04

We propose and demonstrate an improvement of the conventional Galilean refractive beam shaping system for accurately generating a near-diffraction-limited flattop beam with arbitrary beam size. A detailed study of the refractive beam shaping system shows that the conventional Galilean beam shaper works well only for magnifying beam shaping. Taking the transformation of an input beam with Gaussian irradiance distribution into a target beam with a high-order Fermi-Dirac flattop profile as an example, the shaper works well only when the sizes of the input and target beams satisfy R(0) ≥ 1.3 w(0). For the improvement, the shaper is regarded as the combination of a magnifying and a demagnifying beam shaping system. The surface and phase distributions of the improved Galilean beam shaping system are derived based on geometric and Fourier optics. Using the improved Galilean beam shaper, the accurate transformation of an input beam with Gaussian irradiance distribution into a target beam with flattop irradiance distribution is realized. The irradiance distribution of the output beam coincides with that of the target beam, and the corresponding phase distribution is maintained. The propagation performance of the output beam is greatly improved. Studies of the influence of beam size and beam order on the improved Galilean beam shaping system show that the restriction on beam size has been greatly reduced. This improvement can also be used to redistribute an input beam with a complicated irradiance distribution into an output beam with another complicated irradiance distribution.

  6. Accurate Point-of-Care Detection of Ruptured Fetal Membranes: Improved Diagnostic Performance Characteristics with a Monoclonal/Polyclonal Immunoassay

    Directory of Open Access Journals (Sweden)

    Linda C. Rogers

    2016-01-01

Full Text Available. Objective: Accurate and timely diagnosis of rupture of membranes (ROM) is imperative to allow for gestational age-specific interventions. This study compared the diagnostic performance characteristics between two methods used for the detection of ROM as measured in the same patient. Methods: Vaginal secretions were evaluated using the conventional fern test as well as a point-of-care monoclonal/polyclonal immunoassay test (ROM Plus®) in 75 pregnant patients who presented to labor and delivery with complaints of leaking amniotic fluid. Both tests were compared to analytical confirmation of ROM using three external laboratory tests. Diagnostic performance characteristics were calculated, including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy. Results: Diagnostic performance characteristics uniformly favored ROM detection using the immunoassay test compared to the fern test: sensitivity (100% vs. 77.8%), specificity (94.8% vs. 79.3%), PPV (75% vs. 36.8%), NPV (100% vs. 95.8%), and accuracy (95.5% vs. 79.1%). Conclusions: The point-of-care immunoassay test provides improved diagnostic accuracy for the detection of ROM compared to fern testing. It has the potential of improving patient management decisions, thereby minimizing serious complications and perinatal morbidity.
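All five performance characteristics reported here derive from one 2×2 confusion matrix; a small sketch (the TP/FP/TN/FN counts below are one matrix consistent with the reported percentages, inferred for illustration rather than quoted from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# One confusion matrix consistent with the immunoassay's reported figures
# (100% sensitivity, 94.8% specificity, 75% PPV, 100% NPV, 95.5% accuracy);
# the per-patient counts are not given in the abstract.
m = diagnostic_metrics(tp=9, fp=3, tn=55, fn=0)
```

With zero false negatives, sensitivity and NPV are both exactly 100%, matching the reported immunoassay column.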

  7. PET optimization for improved assessment and accurate quantification of {sup 90}Y-microsphere biodistribution after radioembolization

    Energy Technology Data Exchange (ETDEWEB)

    Martí-Climent, Josep M., E-mail: jmmartic@unav.es; Prieto, Elena; Elosúa, César; Rodríguez-Fraile, Macarena; Domínguez-Prado, Inés; Vigil, Carmen; García-Velloso, María J.; Arbizu, Javier; Peñuelas, Iván; Richter, José A. [Nuclear Medicine Department, Clínica Universidad de Navarra, 36, Pío XII Avenue, 31008 Pamplona (Spain)

    2014-09-15

Purpose: {sup 90}Y-microspheres are widely used for the radioembolization of metastatic liver cancer or hepatocellular carcinoma, and there is growing interest in imaging {sup 90}Y-microspheres with PET. The aim of this study is to evaluate the performance of a current-generation PET/CT scanner for {sup 90}Y imaging and to optimize the PET protocol to improve the assessment and quantification of {sup 90}Y-microsphere biodistribution after radioembolization. Methods: Data were acquired on a Biograph mCT-TrueV scanner with time of flight (TOF) and point spread function (PSF) modeling. Spatial resolution was measured with a {sup 90}Y point source. Sensitivity was evaluated using the NEMA 70 cm line source filled with {sup 90}Y. To evaluate the count rate performance, {sup 90}Y vials with activity ranging from 3.64 to 0.035 GBq were measured in the center of the field of view (CFOV). The energy spectrum was evaluated. Image quality with different reconstructions was studied using the Jaszczak phantom containing six hollow spheres (diameters: 31.3, 28.1, 21.8, 16.1, 13.3, and 10.5 mm), filled with a 207 kBq/ml {sup 90}Y concentration and a 5:1 sphere-to-background ratio. Acquisition time was adjusted to simulate the quality of a realistic clinical PET acquisition of a patient treated with SIR-Spheres{sup ®}. The developed methodology was applied to ten patients after SIR-Spheres{sup ®} treatment, acquiring a 10 min per bed PET. Results: The energy spectrum showed the {sup 90}Y bremsstrahlung radiation. The {sup 90}Y transverse resolution, with filtered backprojection reconstruction, was 4.5 mm in the CFOV and degraded to 5.0 mm at 10 cm off-axis. The {sup 90}Y absolute sensitivity was 0.40 kcps/MBq in the center of the field of view. The trends of the true and random count rates as a function of {sup 90}Y activity could be accurately described using linear and quadratic models, respectively. Phantom studies demonstrated that, due to low count statistics in {sup 90}Y PET

  8. Mobile, real-time, and point-of-care augmented reality is robust, accurate, and feasible: a prospective pilot study.

    Science.gov (United States)

    Kenngott, Hannes Götz; Preukschas, Anas Amin; Wagner, Martin; Nickel, Felix; Müller, Michael; Bellemann, Nadine; Stock, Christian; Fangerau, Markus; Radeleff, Boris; Kauczor, Hans-Ulrich; Meinzer, Hans-Peter; Maier-Hein, Lena; Müller-Stich, Beat Peter

    2018-06-01

Augmented reality (AR) systems are currently being explored by a broad spectrum of industries, mainly for improving point-of-care access to data and images. Especially in surgery, and especially for timely decisions in emergency cases, fast and comprehensive access to images at the patient bedside is mandatory. Currently, imaging data are accessed at a distance from the patient both in time and space, i.e., at a specific workstation. Mobile technology and 3-dimensional (3D) visualization of radiological imaging data promise to overcome these restrictions by making bedside AR feasible. In this project, AR was realized in a surgical setting by fusing a 3D representation of structures of interest with live camera images on a tablet computer using marker-based registration. The intent of this study was to focus on a thorough evaluation of AR. Feasibility, robustness, and accuracy were thus evaluated consecutively in a phantom model and a porcine model. Additionally, feasibility was evaluated in one male volunteer. In the phantom model (n = 10), AR visualization was feasible in 84% of the visualization space with high accuracy (mean reprojection error ± standard deviation (SD): 2.8 ± 2.7 mm; 95th percentile = 6.7 mm). In a porcine model (n = 5), AR visualization was feasible in 79% with high accuracy (mean reprojection error ± SD: 3.5 ± 3.0 mm; 95th percentile = 9.5 mm). Furthermore, AR was successfully used and proved feasible in a male volunteer. Mobile, real-time, and point-of-care AR for clinical purposes proved feasible, robust, and accurate in the phantom, animal, and single-trial human model shown in this study. Consequently, AR following similar implementation proved robust and accurate enough to be evaluated in clinical trials assessing accuracy, robustness in clinical reality, as well as integration into the clinical workflow. If these further studies prove successful, AR might revolutionize data access at patient

  9. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients, Phase I

    Data.gov (United States)

National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  10. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    Science.gov (United States)

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  11. Final priority; technical assistance to improve state data capacity--National Technical Assistance Center to improve state capacity to accurately collect and report IDEA data. Final priority.

    Science.gov (United States)

    2013-05-20

    The Assistant Secretary for Special Education and Rehabilitative Services announces a priority under the Technical Assistance to Improve State Data Capacity program. The Assistant Secretary may use this priority for competitions in fiscal year (FY) 2013 and later years. We take this action to focus attention on an identified national need to provide technical assistance (TA) to States to improve their capacity to meet the data collection and reporting requirements of the Individuals with Disabilities Education Act (IDEA). We intend this priority to establish a TA center to improve State capacity to accurately collect and report IDEA data (Data Center).

  12. Improving productivity through more effective time management.

    Science.gov (United States)

    Arnold, Edwin; Pulich, Marcia

    2004-01-01

    Effective time management has become increasingly important for managers as they seek to accomplish objectives in today's organizations, which have been restructured for efficiency while employing fewer people. Managers can improve their ability to manage time effectively by examining their attitudes toward time, analyzing time-wasting behaviors, and developing better time management skills. Managers can improve their performance and promotion potential with more effective time utilization. Strategies for improving time management skills are presented.

  13. A time-driven activity-based costing model to improve health-care resource use in Mirebalais, Haiti.

    Science.gov (United States)

    Mandigo, Morgan; O'Neill, Kathleen; Mistry, Bipin; Mundy, Bryan; Millien, Christophe; Nazaire, Yolande; Damuse, Ruth; Pierre, Claire; Mugunga, Jean Claude; Gillies, Rowan; Lucien, Franciscka; Bertrand, Karla; Luo, Eva; Costas, Ainhoa; Greenberg, Sarah L M; Meara, John G; Kaplan, Robert

    2015-04-27

In resource-limited settings, efficiency is crucial to maximise resources available for patient care. Time-driven activity-based costing (TDABC) estimates costs directly from clinical and administrative processes used in patient care, thereby providing valuable information for process improvements. TDABC is more accurate and simpler than traditional activity-based costing because it assigns resource costs to patients based on the amount of time clinical and staff resources are used in patient encounters. Other costing approaches use somewhat arbitrary allocations that provide little transparency into the actual clinical processes used to treat medical conditions. TDABC has been successfully applied in European and US health-care settings to facilitate process improvements and new reimbursement approaches, but it has not been used in resource-limited settings. We aimed to optimise TDABC for use in a resource-limited setting to provide accurate procedure and service costs, reliably predict financing needs, inform quality improvement initiatives, and maximise efficiency. A multidisciplinary team used TDABC to map clinical processes for obstetric care (vaginal and caesarean deliveries, from triage to post-partum discharge) and breast cancer care (diagnosis, chemotherapy, surgery, and support services, such as pharmacy, radiology, laboratory, and counselling) at Hôpital Universitaire de Mirebalais (HUM) in Haiti. The team estimated the direct costs of personnel, equipment, and facilities used in patient care based on the amount of time each of these resources was used. We calculated inpatient personnel costs by allocating provider costs per staffed bed, and assigned indirect costs (administration, facility maintenance and operations, education, procurement and warehouse, blood bank, and morgue) to various subgroups of the patient population. This study was approved by the Partners in Health/Zanmi Lasante Research Committee. The direct cost of an uncomplicated vaginal
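The core TDABC calculation is just time-in-process multiplied by each resource's capacity cost rate; a minimal sketch with hypothetical figures (not HUM's actual process maps or costs):

```python
def tdabc_cost(minutes_by_resource, cost_per_minute):
    """Time-driven ABC: cost of a care episode = sum over resources of
    (minutes the resource is used) x (its capacity cost rate per minute)."""
    return sum(minutes_by_resource[r] * cost_per_minute[r]
               for r in minutes_by_resource)

# Hypothetical process map for one delivery episode (illustrative only).
minutes = {"midwife": 240, "nurse": 120, "delivery_room": 300}
rates = {"midwife": 0.25, "nurse": 0.15, "delivery_room": 0.10}  # $/min
episode_cost = tdabc_cost(minutes, rates)  # 60 + 18 + 30 = 108
```

The transparency the abstract describes comes from the process map itself: every term in the sum corresponds to an observable step in patient care, so targets for process improvement are visible in the cost breakdown.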

  14. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation to find an approximate optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.
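The abstract does not reproduce the approximation itself; as background, the classic infinite-horizon age-replacement cost-rate criterion that sequential finite-horizon policies build on can be sketched numerically (the Weibull lifetime and all parameter values are illustrative):

```python
import math

def cost_rate(T, cp, cf, beta, eta, n=500):
    """Long-run cost per unit time of replacing at age T for a
    Weibull(beta, eta) lifetime:
        c(T) = (cp*R(T) + cf*F(T)) / integral_0^T R(t) dt,
    where R is the survival function, cp the preventive-replacement
    cost, and cf > cp the failure-replacement cost."""
    R = lambda t: math.exp(-((t / eta) ** beta))
    h = T / n
    # E[min(lifetime, T)] = integral of R over [0, T], trapezoidal rule.
    mean_cycle = h * (0.5 * R(0.0) + sum(R(i * h) for i in range(1, n)) + 0.5 * R(T))
    return (cp * R(T) + cf * (1.0 - R(T))) / mean_cycle

def optimal_age(cp, cf, beta, eta):
    """Grid search for the replacement age minimising the cost rate."""
    grid = [eta * k / 200.0 for k in range(1, 401)]
    return min(grid, key=lambda T: cost_rate(T, cp, cf, beta, eta))

# Demo: preventive cost 1, failure cost 10, Weibull(beta=2, eta=100).
T_star = optimal_age(1.0, 10.0, 2.0, 100.0)
```

For increasing failure rate (beta > 1) and cf well above cp, the optimal age is finite and well below the characteristic life eta, which is what makes preventive replacement worthwhile at all.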

  15. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    Science.gov (United States)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To identify accurately the location of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify the P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method shows good results regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have the minimum delay. Here we describe an improved MER (IMER) method, whereby we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method contains calculations of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within five sample errors for the synthetic datasets. Likewise, in the case of real datasets, 94.56% of the P-wave picking results obtained by the IMER method had a deviation of less than 0.5 ms (corresponding to 2 samples) from the manual picks.
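The basic single-window MER pick that IMER extends can be sketched in a few lines; this is the standard energy-ratio form (the window length and synthetic trace are illustrative), not the authors' multi-window IMER implementation:

```python
def mer_pick(trace, window):
    """Modified energy ratio first-break picker (single-window variant):
    er(i) = post-window energy / pre-window energy, MER(i) = (er * |x_i|)**3,
    and the pick is the index maximising MER."""
    n = len(trace)
    best_i, best_v = None, -1.0
    for i in range(window, n - window):
        pre = sum(x * x for x in trace[i - window:i]) + 1e-12  # avoid /0
        post = sum(x * x for x in trace[i:i + window])
        v = (post / pre * abs(trace[i])) ** 3
        if v > best_v:
            best_i, best_v = i, v
    return best_i

# Demo: 100 samples of weak noise followed by a strong arrival at index 100.
pick = mer_pick([0.001] * 100 + [1.0] * 50, window=10)
```

Weighting the energy ratio by the instantaneous amplitude and cubing it is what sharpens the peak at the true onset relative to a plain STA/LTA ratio.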

  16. AN ACCURATE ORBITAL INTEGRATOR FOR THE RESTRICTED THREE-BODY PROBLEM AS A SPECIAL CASE OF THE DISCRETE-TIME GENERAL THREE-BODY PROBLEM

    International Nuclear Information System (INIS)

    Minesaki, Yukitaka

    2013-01-01

For the restricted three-body problem, we propose an accurate orbital integration scheme that retains all conserved quantities of the two-body problem with two primaries and approximately preserves the Jacobi integral. The scheme is obtained by taking the limit as the mass of the third body approaches zero in the discrete-time general three-body problem. Over a long time interval, the proposed scheme precisely reproduces various periodic orbits that cannot be accurately computed by other generic integrators

  17. Multipurpose discriminator with accurate time coupling

    International Nuclear Information System (INIS)

    Baldin, B.Yu.; Krumshtejn, Z.V.; Ronzhin, A.I.

    1977-01-01

The circuit diagram of a multipurpose discriminator, designed on the basis of a wide-band differential amplifier, is described. The discriminator has three independent channels: the timing channel, the lower-level discriminator, and the control channel. The timing channel and the lower-level discriminator are connected to a coincidence circuit. Three methods of timing are used: single threshold, double threshold with timing on the pulse front, and constant fraction timing. The lower-level discriminator is a wide-band amplifier with an adjustable threshold. The investigation of the compensation characteristics of the discriminator has shown that the time shift of the discriminator output in the constant fraction timing regime does not exceed ±75 ns for an input signal range of 1:85. The time resolution was found to be 20 ns in the 20% energy range near the photo-peak maximum of a ⁶⁰Co γ source
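Constant fraction timing, the third method listed, derives a trigger time that is nearly independent of pulse amplitude; a digital sketch of the idea (the device described is analog, and the sampled pulses and parameter values here are illustrative):

```python
def cfd_crossing(pulse, delay, fraction):
    """Constant-fraction discriminator: form s(i) = f*x(i) - x(i - delay)
    and return the linearly interpolated zero-crossing index, which is
    nearly independent of the pulse amplitude."""
    n = len(pulse)
    s = [fraction * pulse[i] - (pulse[i - delay] if i >= delay else 0.0)
         for i in range(n)]
    for i in range(1, n):
        if s[i - 1] > 0.0 >= s[i]:
            # Linear interpolation between samples i-1 and i.
            return (i - 1) + s[i - 1] / (s[i - 1] - s[i])
    return None

# Two ramp pulses differing 7x in amplitude trigger at the same point.
ramp = lambda a: [a * min(max((i - 5) / 10.0, 0.0), 1.0) for i in range(30)]
t_small = cfd_crossing(ramp(1.0), delay=3, fraction=0.5)
t_large = cfd_crossing(ramp(7.0), delay=3, fraction=0.5)
```

This amplitude independence is why constant fraction timing outperforms a fixed single threshold, whose trigger point "walks" earlier as pulses grow taller.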

  18. Using space-time features to improve detection of forest disturbances from Landsat time series

    NARCIS (Netherlands)

    Hamunyela, E.; Reiche, J.; Verbesselt, J.; Herold, M.

    2017-01-01

    Current research on forest change monitoring using medium spatial resolution Landsat satellite data aims for accurate and timely detection of forest disturbances. However, producing forest disturbance maps that have both high spatial and temporal accuracy is still challenging because of the

  19. Towards more accurate wind and solar power prediction by improving NWP model physics

    Science.gov (United States)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists, and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of the remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics, and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  20. Improved fingercode alignment for accurate and compact fingerprint recognition

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

Full Text Available. Improved FingerCode Alignment for Accurate and Compact Fingerprint Recognition. Dane Brown and Karen Bradshaw (Department of Computer Science, Rhodes University, Grahamstown, South Africa; Council for Scientific and Industrial Research, Modelling and Digital Sciences, Pretoria). ... The experimental analysis and results are discussed in Section IV. Section V concludes the paper. II. RELATED STUDIES: FingerCode [1] uses circular tessellation of filtered fingerprint images centered at the reference point, which results in a circular ROI...

  1. Incorporating geostrophic wind information for improved space–time short-term wind speed forecasting

    KAUST Repository

    Zhu, Xinxin

    2014-09-01

Accurate short-term wind speed forecasting is needed for the rapid development and efficient operation of wind energy resources. This is, however, a very challenging problem. Although on the large scale the wind speed is related to atmospheric pressure, temperature, and other meteorological variables, no improvement in forecasting accuracy was found by incorporating air pressure and temperature directly into an advanced space-time statistical forecasting model, the trigonometric direction diurnal (TDD) model. This paper proposes to incorporate the geostrophic wind as a new predictor in the TDD model. The geostrophic wind captures the physical relationship between wind and pressure through the observed approximate balance between the pressure gradient force and the Coriolis acceleration due to the Earth's rotation. Based on our numerical experiments with data from West Texas, our new method produces more accurate forecasts than does the TDD model using air pressure and temperature for 1- to 6-hour-ahead forecasts based on three different evaluation criteria. Furthermore, forecasting errors can be further reduced by using moving-average hourly wind speeds to fit the diurnal pattern. For example, our new method obtains between 13.9% and 22.4% overall mean absolute error reduction relative to persistence in 2-hour-ahead forecasts, and between 5.3% and 8.2% reduction relative to the best previous space-time methods in this setting.
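The geostrophic wind predictor follows from the standard balance between the pressure-gradient force and the Coriolis acceleration; a minimal sketch (the gradient and latitude values are illustrative, not taken from the West Texas data):

```python
import math

def geostrophic_wind(dp_dx, dp_dy, lat_deg, rho=1.225):
    """Geostrophic wind components (m/s) from horizontal pressure
    gradients (Pa/m):
        u_g = -(1/(rho*f)) * dp/dy,  v_g = +(1/(rho*f)) * dp/dx,
    where f = 2*Omega*sin(latitude) is the Coriolis parameter."""
    omega = 7.2921e-5  # Earth's rotation rate (rad/s)
    f = 2.0 * omega * math.sin(math.radians(lat_deg))
    return -dp_dy / (rho * f), dp_dx / (rho * f)

# Roughly 2 hPa of pressure drop per 100 km northward, at 33 deg N.
u_g, v_g = geostrophic_wind(0.0, -2e-3, 33.0)
```

Because it is computed from the pressure field rather than inserted as a raw pressure value, the geostrophic wind encodes the pressure-wind relationship in the units and direction of the quantity being forecast.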

  2. Improved method for rapid and accurate isolation and identification of Streptococcus mutans and Streptococcus sobrinus from human plaque samples.

    Science.gov (United States)

    Villhauer, Alissa L; Lynch, David J; Drake, David R

    2017-08-01

Mutans streptococci (MS), specifically Streptococcus mutans (SM) and Streptococcus sobrinus (SS), are bacterial species frequently targeted for investigation due to their role in the etiology of dental caries. Differentiation of S. mutans and S. sobrinus is an essential part of exploring the role of these organisms in disease progression and the impact of the presence of either or both on a subject's caries experience. Of vital importance to the study of these organisms is an identification protocol that allows us to distinguish between the two species in an easy, accurate, and timely manner. During a 5-year birth cohort study in a Northern Plains American Indian tribe, we recognized the need for a more rapid procedure for isolating and identifying high volumes of MS. We report here on the development of an accurate and rapid method for MS identification. Accuracy, ease of use, and material and time requirements were compared for morphological differentiation on selective agar, biochemical tests, and various combinations of PCR primers. The final protocol included preliminary identification based on colony morphology followed by PCR confirmation of species identification using primers targeting regions of the glucosyltransferase (gtf) genes of SM and SS. This method of isolation and identification was found to be highly accurate, more rapid than the previous methodology, and easily learned. It resulted in more efficient use of both time and material resources. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Wireless Time Tracking Improves Productivity at CSU Long Beach.

    Science.gov (United States)

    Charmack, Scott; Walsh, Randy

    2002-01-01

    Describes California State University Long Beach's implementation of new maintenance management software, which integrated maintenance, inventory control, and key control and allows technicians to enter and receive information through handheld wireless devices for more accurate time accounting. The school estimates a 10 percent increase in…

  4. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    Science.gov (United States)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443, and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd(λ) from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This
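Kd(λ) is defined through the exponential attenuation of downwelling irradiance with depth; a two-depth sketch of the definition (the depths and irradiances are synthetic, and the actual SeaUV/SeaUVc retrieval maps Rrs(λ) to Kd statistically rather than from profiles):

```python
import math

def kd_from_profile(z1, e1, z2, e2):
    """Diffuse attenuation coefficient Kd (1/m) from downwelling
    irradiance Ed measured at two depths (m), assuming
    Ed(z) = Ed(0) * exp(-Kd * z)."""
    return math.log(e1 / e2) / (z2 - z1)

def penetration_depth(kd, fraction=0.01):
    """Depth (m) at which Ed falls to the given fraction of its
    surface value."""
    return -math.log(fraction) / kd

# Synthetic two-depth profile constructed with Kd = 0.7 1/m.
kd = kd_from_profile(1.0, 100.0, 3.0, 100.0 * math.exp(-2.0 * 0.7))
```

Underestimating Kd(λ), as the unmodified algorithms do inshore, directly overestimates the penetration depth and therefore the photochemically active volume, which is why the re-parameterization matters for turbid waters.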

  5. Improvement of the exponential experiment system for the automatical and accurate measurement of the exponential decay constant

    International Nuclear Information System (INIS)

    Shin, Hee Sung; Jang, Ji Woon; Lee, Yoon Hee; Hwang, Yong Hwa; Kim, Ho Dong

    2004-01-01

    The previous exponential experiment system has been improved for automatic and accurate axial movement of the neutron source and detector by attaching an automatic control system consisting of a Programmable Logic Controller (PLC) and a stepping motor set. An automatic control program that consistently controls the MCA and PLC has also been developed on the basis of the GENIE 2000 library. Exponential experiments have been carried out for Kori unit 1 spent fuel assemblies C14, J14 and G23, and Kori unit 2 spent fuel assembly J44, using the improved measurement system. As a result, the average exponential decay constants for the 4 assemblies are determined to be 0.1302, 0.1267, 0.1247, and 0.1210, respectively, with the application of Poisson regression.
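
    The measurement above reduces to fitting an exponential decay to axial count profiles. The following sketch (not the authors' code; the function name and synthetic data are assumptions) recovers the decay constant γ in E[counts] = N0·exp(−γz) with a log-linear least-squares fit, a simple stand-in for the Poisson regression used in the record:

    ```python
    import math

    def fit_decay_constant(z, counts):
        # Fit ln(counts) = ln(N0) - gamma * z by ordinary least squares;
        # the negative of the slope is the exponential decay constant gamma.
        logs = [math.log(c) for c in counts]
        n = len(z)
        zbar = sum(z) / n
        lbar = sum(logs) / n
        num = sum((zi - zbar) * (li - lbar) for zi, li in zip(z, logs))
        den = sum((zi - zbar) ** 2 for zi in z)
        return -num / den

    # Synthetic axial scan: counts fall off exponentially with distance z (cm)
    gamma_true = 0.13
    z = [float(i) for i in range(0, 40, 2)]
    counts = [1.0e4 * math.exp(-gamma_true * zi) for zi in z]
    gamma_hat = fit_decay_constant(z, counts)  # recovers ~0.13
    ```

    On noisy count data, a Poisson regression (log-link GLM) weights the points more appropriately than this plain least-squares fit, which is why the authors used it.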

  6. Anti-Mullerian hormone is a more accurate predictor of individual time to menopause than mother's age at menopause

    NARCIS (Netherlands)

    Dolleman, M.; Depmann, M.; Eijkemans, M.J.; Heimensem, J.; Broer, S.L.; Stroom, E.M. van der; Laven, J.S.E.; Rooij, I.A.L.M. van; Scheffer, G.J.; Peeters, P.H.M.; Schouw, Y.T. van der; Lambalk, C.B.; Broekmans, F.J.

    2014-01-01

    STUDY QUESTION: In the prediction of time to menopause (TTM), what is the added value of anti-Mullerian hormone (AMH) when mother's age at natural menopause (ANM) is also known? SUMMARY ANSWER: AMH is a more accurate predictor of individual TTM than mother's age at menopause. WHAT IS KNOWN ALREADY:

  7. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. The bounded rationality is helpful to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
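
    A minimal sketch of the boundedly rational choice rule described above (the function name and simulation setup are illustrative assumptions, not the authors' code):

    ```python
    import random

    def choose_route(t1, t2, br, rng=random):
        """Pick route 0 or 1 from reported travel times t1, t2.
        Within the boundedly rational threshold br the choice is random;
        otherwise the traveler takes the (reportedly) faster route."""
        if abs(t1 - t2) < br:
            return rng.choice((0, 1))
        return 0 if t1 < t2 else 1

    # Outside the threshold the faster route always wins; inside it,
    # either route may be chosen.
    picks = {choose_route(10.0, 10.5, br=2.0) for _ in range(200)}
    ```

    In the paper's setting the interesting dynamics come from feeding delayed travel times back into this rule over many iterations; the threshold damps the overshoot that accurate-but-stale information causes.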

  8. Demonstration of the improved PID method for the accurate temperature control of ADRs

    International Nuclear Information System (INIS)

    Shinozaki, K.; Hoshino, A.; Ishisaki, Y.; Mihara, T.

    2006-01-01

    Microcalorimeters require extreme stability (∼10 μK) of the thermal bath at low temperature (∼100 mK). We have developed a portable adiabatic demagnetization refrigerator (ADR) system for ground experiments with TES microcalorimeters, in which we observed a residual temperature difference between target and measured values when the magnet current was controlled with the standard Proportional, Integral, and Derivative (PID) control method. The difference increases in time as the magnet current decreases. This phenomenon can be explained by the theory of magnetic cooling, and we have introduced a new functional parameter to improve the PID method. With this improvement, long-term stability of the ADR temperature of about 10 μK rms is obtained for periods up to ∼15 ks, down to almost zero magnet current. We briefly describe our ADR system and the principle of the improved PID method, showing the temperature control results. It is demonstrated that the time over which the target temperature can be held is about 30% longer than with the standard PID method in our system. The improved PID method is considered to be of great advantage especially in the range of small magnet current
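
    For reference, the baseline being improved upon is a textbook discrete PID loop. The sketch below shows only that baseline driving a toy plant; the record's additional functional parameter is not specified in the abstract, so it is deliberately omitted, and all names and gains here are assumptions:

    ```python
    class PID:
        """Textbook discrete PID controller."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = None

        def step(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    # Toy plant: the controlled quantity simply integrates the control output.
    pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=0.01)
    temp, target = 0.0, 0.1  # arbitrary illustrative units and setpoint
    for _ in range(5000):
        temp += pid.step(target, temp) * pid.dt
    ```

    The paper's point is that fixed PID gains become mismatched as the magnet current shrinks, which a current-dependent (functional) gain term corrects.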

  9. Accurate procedure for deriving UTI at a submilliarcsecond accuracy from Greenwich Sidereal Time or from the stellar angle

    Science.gov (United States)

    Capitaine, N.; Gontier, A.-M.

    1993-08-01

    Present observations using modern astrometric techniques are expected to provide the Earth orientation parameters, and therefore UT1, with an accuracy better than ±1 mas. In practice, UT1 is determined through the intermediary of Greenwich Sidereal Time (GST), using both the conventional relationship between Greenwich Mean Sidereal Time (GMST) and UT1 (Aoki et al. 1982) and the so-called "equation of the equinoxes" limited to the first-order terms with respect to the nutation quantities. This highly complex relation between sidereal time and UT1 is not accurate at the milliarcsecond level, which gives rise to spurious terms of milliarcsecond amplitude in the derived UT1. A more complete relationship between GST and UT1 has been recommended by Aoki & Kinoshita (1983) and Aoki (1991), taking into account the second-order terms in the difference between GST and GMST, the largest one having an amplitude of 2.64 mas and an 18.6-yr period. This paper explains how this complete expansion of GST implicitly uses the concept of the "nonrotating origin" (NRO) as proposed by Guinot in 1979 and would, therefore, provide a more accurate value of UT1 and consequently of the Earth's angular velocity. This paper shows, moreover, that such a procedure would be simplified and conceptually clarified by the explicit use of the NRO as previously proposed (Guinot 1979; Capitaine et al. 1986). The two corresponding options (implicit or explicit use of the NRO) are shown to be equivalent for defining the specific Earth's angle of rotation and then UT1. The implications of using such an accurate procedure, which has been proposed in the new IERS standards (McCarthy 1992a) instead of the usual one, are estimated for the practical derivation of UT1.

  10. Picture Memory Improves with Longer On Time and Off Time

    Science.gov (United States)

    Tversky, Barbara; Sherman, Tracy

    1975-01-01

    Both recognition and recall of pictures improve as picture presentation time increases and as the time between pictures increases. This experiment was compared with an earlier one by Shaffer and Shiffrin (1972). (Editor/RK)

  11. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    Science.gov (United States)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
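
    The symmetric rank-one (SR1) update mentioned above builds a Hessian approximation from gradient differences rather than second derivatives. A minimal sketch (names and the example matrix are mine, not the paper's):

    ```python
    def sr1_update(B, s, y):
        """SR1 Hessian update: B+ = B + r r^T / (r^T s) with r = y - B s,
        where s is the step taken and y the change in gradient. The update
        is skipped when the denominator is near zero (standard safeguard)."""
        n = len(B)
        Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
        r = [y[i] - Bs[i] for i in range(n)]
        denom = sum(ri * si for ri, si in zip(r, s))
        if abs(denom) < 1e-12:
            return [row[:] for row in B]
        return [[B[i][j] + r[i] * r[j] / denom for j in range(n)] for i in range(n)]

    # For a quadratic with Hessian H we have y = H s, and one SR1 update
    # enforces the secant condition B+ s = y.
    H = [[2.0, 0.0], [0.0, 4.0]]
    s = [1.0, 1.0]
    y = [2.0, 4.0]
    B1 = sr1_update([[1.0, 0.0], [0.0, 1.0]], s, y)
    ```

    SR1 keeps the approximation symmetric and needs only one gradient difference per iteration, which is why it is attractive when repeated exact Hessians are too expensive.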

  12. An Improved Clutter Suppression Method for Weather Radars Using Multiple Pulse Repetition Time Technique

    Directory of Open Access Journals (Sweden)

    Yingjie Yu

    2017-01-01

    This paper describes the implementation of an improved clutter suppression method for the multiple pulse repetition time (PRT) technique based on simulated radar data. The suppression method is constructed using maximum likelihood methodology in the time domain and is called the parametric time domain method (PTDM). The procedure relies on the assumption that precipitation and clutter signal spectra follow a Gaussian functional form. The multiple interleaved pulse repetition frequencies (PRFs) used in this work are set to four PRFs (952, 833, 667, and 513 Hz). Based on radar simulation, it is shown that the new method can provide accurate retrieval of Doppler velocity even in the case of strong clutter contamination. The obtained velocity is nearly unbiased over the entire Nyquist velocity interval. Also, the performance of the method is illustrated on simulated radar data for a plan position indicator (PPI) scan. Compared with staggered 2-PRT transmission schemes with PTDM, the proposed method presents better estimation accuracy under certain clutter situations.

  13. Controlled Substance Reconciliation Accuracy Improvement Using Near Real-Time Drug Transaction Capture from Automated Dispensing Cabinets.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin; Gratch, David M; Perino, Michael; Magrann, Jerry

    2016-06-01

    Accurate accounting of controlled drug transactions by inpatient hospital pharmacies is a requirement in the United States under the Controlled Substances Act. At many hospitals, manual distribution of controlled substances from pharmacies is being replaced by automated dispensing cabinets (ADCs) at the point of care. Despite the promise of improved accountability, a high prevalence (15%) of controlled substance discrepancies between ADC records and anesthesia information management systems (AIMS) has been published, with a similar incidence (15.8%; 95% confidence interval [CI], 15.3% to 16.2%) noted at our institution. Most reconciliation errors are clerical. In this study, we describe a method to capture drug transactions in near real-time from our ADCs, compare them with documentation in our AIMS, and evaluate the subsequent improvement in reconciliation accuracy. ADC controlled substance transactions are transmitted to a hospital interface server, parsed, reformatted, and sent to a software script written in Perl. The script extracts the data and writes them to a SQL Server database. Concurrently, controlled drug totals for each patient under care are documented in the AIMS and compared with the balance of the ADC transactions (i.e., vending, transferring, wasting, and returning drug). Every minute, a reconciliation report is available to anesthesia providers over the hospital Intranet from AIMS workstations. The report lists all patients, the current provider, the balance of ADC transactions, the totals from the AIMS, the difference, and whether the case is still ongoing or has concluded. Accuracy and latency of the ADC transaction capture process were assessed via simulation and by comparison with pharmacy database records, maintained by the vendor on a central server located remotely from the hospital network.
    For assessment of reconciliation accuracy over time, data were collected from our AIMS from January 2012 to June 2013 (Baseline), July 2013 to April 2014
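
    The core of the reconciliation report described above is a per-patient comparison of the net ADC transaction balance against the AIMS-documented total. A sketch of that comparison (event names, sign conventions, and field names are illustrative assumptions, not the hospital's schema):

    ```python
    def adc_balance(transactions):
        # Net controlled-drug balance from ADC events: vends and transfers
        # add to the amount a provider must account for; wastes and returns
        # subtract from it.
        sign = {"vend": 1, "transfer": 1, "waste": -1, "return": -1}
        return sum(sign[t["type"]] * t["qty"] for t in transactions)

    def reconcile(adc_by_patient, aims_totals):
        # Per-patient difference between the ADC balance and the total
        # documented in the AIMS; a nonzero entry flags a discrepancy.
        return {pid: adc_balance(txns) - aims_totals.get(pid, 0.0)
                for pid, txns in adc_by_patient.items()}

    adc = {"patient1": [{"type": "vend", "qty": 10.0},
                        {"type": "waste", "qty": 2.0}],
           "patient2": [{"type": "vend", "qty": 4.0}]}
    aims = {"patient1": 8.0, "patient2": 2.0}
    report = reconcile(adc, aims)  # patient2 shows a 2.0-unit discrepancy
    ```

    Running such a comparison every minute, while the case is still open, is what lets providers correct clerical errors before the record is finalized.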

  14. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    Science.gov (United States)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement in frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed, including an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices for FUN3D simulations of SLS launch vehicle analysis has been established to date. To the author's knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.

  15. Simultaneous transmission of accurate time, stable frequency, data, and sensor system over one fiber with ITU 100 GHz grid

    Science.gov (United States)

    Horvath, Tomas; Munster, Petr; Vojtech, Josef; Velc, Radek; Oujezsky, Vaclav

    2018-01-01

    Optical fiber is the most used medium for current telecommunication networks. Besides data transmission, special advanced applications such as accurate time or stable frequency transfer are becoming more common, especially in research and education networks. On the other hand, new applications such as distributed sensing are of interest to ISPs because such sensing enables a new service: protection of the fiber infrastructure. Transmitting all applications over a single fiber can be very cost efficient, but it is necessary to evaluate possible interactions before real application and deployment of the service, especially if the standard 100 GHz grid is considered. We performed laboratory measurements of the simultaneous transmission of 100G data based on the DP-QPSK modulation format, accurate time, stable frequency, and a sensing system based on phase-sensitive OTDR through two types of optical fiber, G.655 and G.653. These fibers are less common than G.652 fiber, but thanks to their slightly more nonlinear character, they are suitable for simulating the worst case that can arise in a real network.

  16. STELLAR LOCUS REGRESSION: ACCURATE COLOR CALIBRATION AND THE REAL-TIME DETERMINATION OF GALAXY CLUSTER PHOTOMETRIC REDSHIFTS

    International Nuclear Information System (INIS)

    High, F. William; Stubbs, Christopher W.; Rest, Armin; Stalder, Brian; Challis, Peter

    2009-01-01

    We present stellar locus regression (SLR), a method of directly adjusting the instrumental broadband optical colors of stars to bring them into accord with a universal stellar color-color locus, producing accurately calibrated colors for both stars and galaxies. This is achieved without first establishing individual zero points for each passband, and can be performed in real-time at the telescope. We demonstrate how SLR naturally makes one wholesale correction for differences in instrumental response, for atmospheric transparency, for atmospheric extinction, and for Galactic extinction. We perform an example SLR treatment of Sloan Digital Sky Survey data over a wide range of Galactic dust values and independently recover the direction and magnitude of the canonical Galactic reddening vector with 14-18 mmag rms uncertainties. We then isolate the effect of atmospheric extinction, showing that SLR accounts for this and returns precise colors over a wide range of air mass, with 5-14 mmag rms residuals. We demonstrate that SLR-corrected colors are sufficiently accurate to allow photometric redshift estimates for galaxy clusters (using red sequence galaxies) with an uncertainty σ(z)/(1 + z) = 0.6% per cluster for redshifts 0.09 < z < 0.25. Finally, we identify our objects in the 2MASS all-sky catalog, and produce i-band zero points typically accurate to 18 mmag using only SLR. We offer open-source access to our IDL routines, validated and verified for the implementation of this technique, at http://stellar-locus-regression.googlecode.com.

  17. A Comprehensive Strategy for Accurate Reactive Power Distribution, Stability Improvement, and Harmonic Suppression of Multi-Inverter-Based Micro-Grid

    Directory of Open Access Journals (Sweden)

    Henan Dong

    2018-03-01

    Accurate power distribution, stability improvement, and harmonic suppression in a micro-grid have each been well studied individually, and most strategies for these issues target a micro-grid based on a single inverter; hence there is a need for a model that achieves these functions as a whole for a multi-inverter-based micro-grid. This paper proposes a comprehensive strategy that achieves this goal. Since the output voltage and frequency of the micro-grid consist of fundamental and harmonic components, the strategy contains two corresponding parts. On one hand, a fundamental control strategy is proposed upon the conventional droop control. Virtual impedance is introduced to solve the problem of accurate allocation of reactive power between inverters. Meanwhile, a secondary power balance controller is added to improve the stability of voltage and frequency, considering that introducing virtual impedance aggravates the stability problem. On the other hand, a fractional frequency harmonic control strategy is proposed, focused on eliminating specific harmonics caused by nonlinear loads, micro-grid inverters, and the distribution network, so that the power quality of the micro-grid can be improved effectively. Finally, small-signal analysis is used to analyze the stability of the multi-inverter parallel system after introducing the whole control strategy. The simulation results show that the proposed strategy performs well in distributing reactive power, regulating and stabilizing inverter output voltage and frequency, eliminating harmonic components, and improving the power quality of a multi-inverter-based micro-grid.

  18. Development of improved enzyme-based and lateral flow immunoassays for rapid and accurate serodiagnosis of canine brucellosis.

    Science.gov (United States)

    Cortina, María E; Novak, Analía; Melli, Luciano J; Elena, Sebastián; Corbera, Natalia; Romero, Juan E; Nicola, Ana M; Ugalde, Juan E; Comerci, Diego J; Ciocchini, Andrés E

    2017-09-01

    Brucellosis is a widespread zoonotic disease caused by Brucella spp. Brucella canis is the etiological agent of canine brucellosis, a disease that can lead to sterility in bitches and dogs, causing important economic losses in breeding kennels. Early and accurate diagnosis of canine brucellosis is central to controlling the disease and lowering the risk of transmission to humans. Here, we develop and validate enzyme and lateral flow immunoassays for improved serodiagnosis of canine brucellosis, using as antigen the B. canis rough lipopolysaccharide (rLPS). The method used to obtain the rLPS allowed us to produce more homogeneous batches of the antigen, which facilitated standardization of the assays. To validate the assays, 284 serum samples obtained from naturally infected dogs and healthy animals were analyzed. For the B. canis-iELISA and B. canis-LFIA the diagnostic sensitivity was 98.6%, and the specificity 99.5% and 100%, respectively. We propose the implementation of the B. canis-LFIA as a screening test in combination with the highly accurate laboratory g-iELISA. The B. canis-LFIA is a rapid, accurate, and easy-to-use test, characteristics that make it ideal for the serological surveillance of canine brucellosis in the field or veterinary laboratories. Finally, a blind study including 1040 serum samples obtained from urban dogs showed a prevalence higher than 5%, highlighting the need for new diagnostic tools for a more effective control of the disease in dogs and therefore to reduce the risk of transmission of this zoonotic pathogen to humans.

  19. Single breath-hold real-time cine MR imaging: improved temporal resolution using generalized autocalibrating partially parallel acquisition (GRAPPA) algorithm

    International Nuclear Information System (INIS)

    Wintersperger, Bernd J.; Nikolaou, Konstantin; Dietrich, Olaf; Reiser, Maximilian F.; Schoenberg, Stefan O.; Rieber, Johannes; Nittka, Matthias

    2003-01-01

    The purpose of this study was to test parallel imaging techniques for improvement of temporal resolution in multislice single breath-hold real-time cine steady-state free precession (SSFP) in comparison with standard segmented single-slice SSFP techniques. Eighteen subjects were examined on a 1.5-T scanner using a multislice real-time cine SSFP technique using the GRAPPA algorithm. Global left ventricular parameters (EDV, ESV, SV, EF) were evaluated and results compared with a standard segmented single-slice SSFP technique. Results for EDV (r=0.93), ESV (r=0.99), SV (r=0.83), and EF (r=0.99) of real-time multislice SSFP imaging showed a high correlation with results of segmented SSFP acquisitions. Systematic differences between both techniques were statistically non-significant. Single breath-hold multislice techniques using GRAPPA allow for improvement of temporal resolution and for accurate assessment of global left ventricular functional parameters. (orig.)

  20. Improving the description of sunglint for accurate prediction of remotely sensed radiances

    Energy Technology Data Exchange (ETDEWEB)

    Ottaviani, Matteo [Light and Life Laboratory, Department of Physics and Engineering Physics, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States)], E-mail: mottavia@stevens.edu; Spurr, Robert [RT Solutions Inc., 9 Channing Street, Cambridge, MA 02138 (United States); Stamnes, Knut; Li Wei [Light and Life Laboratory, Department of Physics and Engineering Physics, Stevens Institute of Technology, Castle Point on Hudson, Hoboken, NJ 07030 (United States); Su Wenying [Science Systems and Applications Inc., 1 Enterprise Parkway, Hampton, VA 23666 (United States); Wiscombe, Warren [NASA GSFC, Greenbelt, MD 20771 (United States)

    2008-09-15

    The bidirectional reflection distribution function (BRDF) of the ocean is a critical boundary condition for radiative transfer calculations in the coupled atmosphere-ocean system. Existing models express the extent of the glint-contaminated region and its contribution to the radiance essentially as a function of the wind speed. An accurate treatment of the glint contribution and its propagation in the atmosphere would improve current correction schemes and hence rescue a significant portion of data presently discarded as 'glint contaminated'. In current satellite imagery, a correction to the sensor-measured radiances is limited to the region at the edge of the glint, where the contribution is below a certain threshold. This correction assumes the sunglint radiance to be directly transmitted through the atmosphere. To quantify the error introduced by this approximation we employ a radiative transfer code that allows for a user-specified BRDF at the atmosphere-ocean interface and rigorously accounts for multiple scattering. We show that the errors incurred by ignoring multiple scattering are very significant and typically lie in the range 10-90%. Multiple reflections and shadowing at the surface can also be accounted for, and we illustrate the importance of such processes at grazing geometries.
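
    Glint models of the kind discussed above typically tie the sea-surface slope statistics to wind speed via the Cox-Munk (1954) distribution; the sketch below shows the isotropic form (the specific BRDF used by the authors may differ, and the function name is mine):

    ```python
    import math

    def cox_munk_slope_pdf(zx, zy, wind_speed):
        # Isotropic Cox-Munk probability density of sea-surface slopes
        # (zx, zy); the slope variance grows linearly with wind speed (m/s).
        sigma2 = 0.003 + 5.12e-3 * wind_speed
        return math.exp(-(zx ** 2 + zy ** 2) / sigma2) / (math.pi * sigma2)

    calm = cox_munk_slope_pdf(0.0, 0.0, 0.0)    # sharply peaked: narrow glint
    windy = cox_munk_slope_pdf(0.0, 0.0, 10.0)  # flatter: wider glint region
    ```

    This wind-speed dependence is why existing models express the extent of the glint region as a function of wind speed: higher winds spread specular reflection over a wider range of viewing geometries.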

  1. DC-dynamic biasing for >50× switching time improvement in severely underdamped fringing-field electrostatic MEMS actuators

    International Nuclear Information System (INIS)

    Small, J; Liu, X; Fruehling, A; Garg, A; Peroulis, D

    2012-01-01

    This paper presents the design and experimental validation of dc-dynamic biasing for > 50× switching time improvement in severely underdamped fringing-field electrostatic MEMS actuators. The electrostatic fringing-field actuator is used to demonstrate the concept due to its robust device design and inherently low damping conditions. In order to accurately quantify the gap height versus voltage characteristics, a heuristic model is developed. The difference between the heuristic model and numerical simulation is less than 5.6% for typical MEMS geometries. MEMS fixed–fixed beams are fabricated and measured for experimental validation. Good agreement is observed between the calculated and measured results. For a given voltage, the measured and calculated displacements are typically within 10%. Lastly, the derived model is used to design a dc-dynamic bias waveform to improve the switching time of the underdamped MEMS actuators. With dynamic biasing, the measured up-to-down and down-to-up switching time of the actuator is ∼35 μs. On the other hand, conventional step biasing results in a switching time of ∼2 ms for both up-to-down and down-to-up states. (paper)

  2. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

    2017-07-15

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based and geometry-based methods were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
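
    An attenuation-based effective diameter can be sketched as follows: each ray of a projection radiograph gives a water-equivalent path length, and summing those paths approximates the water-equivalent cross-sectional area. This is a simplified illustration of the idea, not the authors' implementation; the attenuation coefficient and geometry are assumptions.

    ```python
    import math

    MU_WATER = 0.02  # assumed effective linear attenuation coefficient, mm^-1

    def effective_diameter(i0, intensities, pixel_width):
        # ln(I0/I)/mu is the water-equivalent path length of one ray;
        # path * pixel_width summed over the detector approximates the
        # cross-sectional area A, and D_eff = 2*sqrt(A/pi).
        area = sum(math.log(i0 / i) / MU_WATER * pixel_width
                   for i in intensities if i < i0)
        return 2.0 * math.sqrt(area / math.pi)

    # Synthetic projection through a 200 mm diameter water cylinder
    radius, dx, i0 = 100.0, 0.5, 1000.0
    xs = [-radius + dx * (k + 0.5) for k in range(int(2 * radius / dx))]
    chords = [2.0 * math.sqrt(max(radius ** 2 - x ** 2, 0.0)) for x in xs]
    profile = [i0 * math.exp(-MU_WATER * c) for c in chords]
    d_eff = effective_diameter(i0, profile, dx)  # close to 200 mm
    ```

    Because it needs only the pre-scan projection, such an estimate can be computed before the CT acquisition, which is the practical advantage the paper emphasizes.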

  3. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    Science.gov (United States)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  4. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    KAUST Repository

    Pan, Bing

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost. © 2014 Elsevier Ltd.

  5. Influential Factors for Accurate Load Prediction in a Demand Response Context

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Kjærgaard, Mikkel Baun; Jørgensen, Bo Nørregaard

    2016-01-01

    Accurate prediction of a building's electricity load is crucial to respond to Demand Response events with an assessable load change. However, previous work on load prediction lacks consideration of a wider set of possible data sources. In this paper we study different data scenarios to map their influence on prediction. Next, the time of day being predicted greatly influences the prediction, which is related to the weather pattern. By presenting these results we hope to improve the modeling of building loads and algorithms for Demand Response planning.

  6. Quality Improvement Cycles that Reduced Waiting Times at ...

    African Journals Online (AJOL)

    It was decided to undertake quality improvement (QI) cycles to analyse and improve the situation, using waiting time as a measure of improvement. Methods: A QI team was chosen to conduct two QI cycles. The allocated time for QI cycle 1 was from May to August 2006 and for QI cycle 2 from September to December 2006.

  7. A new approach for accurate mass assignment on a multi-turn time-of-flight mass spectrometer.

    Science.gov (United States)

    Hondo, Toshinobu; Jensen, Kirk R; Aoki, Jun; Toyoda, Michisato

    2017-12-01

    A simple, effective, accurate mass assignment procedure for a time-of-flight mass spectrometer is desirable. External mass calibration using a mass calibration standard together with an internal mass reference (lock mass) is a common technique for mass assignment; however, polynomial fitting can result in mass-dependent errors. Using the multi-turn time-of-flight mass spectrometer infiTOF-UHV, we were able to obtain multiple time-of-flight data from an ion monitored under several different numbers of laps, which were then used to calculate a mass calibration equation. We have developed a data acquisition system that simultaneously monitors spectra at several different lap conditions with on-the-fly centroid determination and scan-law estimation, the scan law being a function of acceleration voltage, flight path, and instrumental time delay. Mass errors of less than 0.9 mDa were observed for assigned mass-to-charge ratios (m/z) ranging between 4 and 134 using only ⁴⁰Ar⁺ as a reference. It was also observed that estimating the scan law on the fly provides excellent mass drift compensation.
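
    The lap-dependent part of the flight time scales with sqrt(m/z), which is what makes single-reference calibration possible. The sketch below assumes a simple scan-law form t = t0 + (p0 + p1·n)·sqrt(m/z) (an illustrative model, not infiTOF-UHV's exact calibration equation): the slope of flight time versus lap count n for the ⁴⁰Ar⁺ reference fixes the mass scale without knowing the instrumental delay.

```python
import numpy as np

def laps_slope(laps, times):
    """Least-squares slope of time-of-flight versus lap count."""
    return np.polyfit(laps, times, 1)[0]

def assign_mass(slope_unknown, slope_ref, mz_ref):
    # slope = k * sqrt(mz)  =>  mz = mz_ref * (slope_unknown / slope_ref)**2
    return mz_ref * (slope_unknown / slope_ref) ** 2

# Synthetic flight times under the assumed scan law (units arbitrary).
t0, p0, p1 = 0.35e-6, 1.1e-6, 0.55e-6
tof = lambda mz, n: t0 + (p0 + p1 * n) * np.sqrt(mz)

laps = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
s_ref = laps_slope(laps, tof(39.962, laps))    # 40Ar+ reference ion
s_unk = laps_slope(laps, tof(134.0, laps))     # some unknown species
mz_est = assign_mass(s_unk, s_ref, 39.962)     # recovers ~134.0
```

Because only slopes enter, the unknown instrumental offset t0 and the non-looped path term p0 drop out entirely.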

  8. The description of a method for accurately estimating creatinine clearance in acute kidney injury.

    Science.gov (United States)

    Mellas, John

    2016-05-01

    The method provides the practitioner with a new tool to estimate real-time K in AKI with enough precision to predict the severity of the renal injury, including progression, stabilization, or improvement in azotemia. It is the author's belief that this simple method improves on RIFLE, AKIN, and KDIGO for estimating the degree of renal impairment in AKI and allows a more accurate estimate of K in AKI. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Improvements in Low-cost Ultrasonic Measurements of Blood Flow in "by-passes" Using Narrow & Broad Band Transit-time Procedures

    Science.gov (United States)

    Ramos, A.; Calas, H.; Diez, L.; Moreno, E.; Prohías, J.; Villar, A.; Carrillo, E.; Jiménez, A.; Pereira, W. C. A.; Von Krüger, M. A.

    Cardiac pathology due to ischemia is an important cause of death, but the re-vascularization of coronary arteries (by-pass operation) is a useful solution to reduce the associated morbidity, improving quality of life in patients. During these surgeries, the flow in coronary vessels must be measured using non-invasive ultrasonic methods, known as transit-time flow measurement (TTFM), which is the most accurate option available today. TTFM is a common intra-operative tool, in conjunction with classic Doppler velocimetry, to check the quality of these surgical procedures for implanting grafts in parallel with the coronary arteries. This work shows important improvements achieved in flow metering, obtained in our research laboratories (CSIC, ICIMAF, COPPE) and tested under real surgical conditions at Cardiocentro-HHA, for both narrowband (NB) and broadband (BB) regimes, by applying results of a CYTED multinational project (Ultrasonic & computational systems for cardiovascular diagnostics). Mathematical models and phantoms were created to evaluate flow measurements accurately, in laboratory conditions, prior to our new electronic designs and low-cost implementations, which improve on previous TTFM systems and include analog detection, acquisition and post-processing, and a portable PC. Both regimes (NB and BB), with complementary performances under different conditions, were considered. Finally, specific software was developed to offer facilities to surgeons during their interventions.
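
    The transit-time principle the record relies on reduces to simple arithmetic: sound travels faster with the flow than against it, and the upstream/downstream transit times yield the flow velocity with the sound speed cancelling out. A minimal sketch with illustrative geometry and values, not a calibrated graft probe:

```python
import numpy as np

def flow_velocity(t_up, t_down, L, theta):
    """Flow speed from upstream/downstream transit times; the sound speed
    cancels out of the difference of reciprocals:
      t_down = L/(c + v*cos(theta)),  t_up = L/(c - v*cos(theta))
      =>  v = L * (1/t_down - 1/t_up) / (2*cos(theta))"""
    return L * (1.0 / t_down - 1.0 / t_up) / (2.0 * np.cos(theta))

# Synthetic check: forward model, then exact inversion.
c, v_true = 1540.0, 0.25            # m/s sound speed, m/s flow (illustrative)
L, theta = 0.004, np.radians(60.0)  # 4 mm path, 60 degree beam angle
t_down = L / (c + v_true * np.cos(theta))
t_up = L / (c - v_true * np.cos(theta))
v_est = flow_velocity(t_up, t_down, L, theta)
```

In practice the transit-time difference is tiny (nanoseconds), which is why the NB/BB detection electronics discussed in the record matter so much.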

  10. Incorporation of exact boundary conditions into a discontinuous galerkin finite element method for accurately solving 2d time-dependent maxwell equations

    KAUST Repository

    Sirenko, Kostyantyn; Liu, Meilin; Bagci, Hakan

    2013-01-01

    A scheme that discretizes exact absorbing boundary conditions (EACs) to incorporate them into a time-domain discontinuous Galerkin finite element method (TD-DG-FEM) is described. The proposed TD-DG-FEM with EACs is used for accurately characterizing

  11. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    Science.gov (United States)

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. EpHLA software: a timesaving and accurate tool for improving identification of acceptable mismatches for clinical purposes.

    Science.gov (United States)

    Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad

    2012-06-01

    The HLAMatchmaker algorithm, which allows the identification of “safe” acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used, in part due to the difficulty of using it in its current Excel format. Automation of this algorithm may universalize its use to the benefit of allograft allocation. Recently, we developed new software called EpHLA, the first computer program to automate the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing its time efficiency and quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients waiting for kidney transplants, were analyzed either by the conventional HLAMatchmaker method or by the automated EpHLA method. Users testing these two methods were asked to record: (i) the time required for completion of the analysis (in minutes); (ii) the number of eplets obtained for class I and class II HLA molecules; (iii) the categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) the determination of AMMs based on eplet reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster than the conventional HLAMatchmaker method. The EpHLA software was faster and more reliable than, but just as accurate as, the conventional method in defining AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs and thus may be a very useful tool in the decision-making process of organ allocation for highly sensitized patients, as well as in many other applications.

  13. Accurate LC peak boundary detection for ¹⁶O/¹⁸O labeled LC-MS data.

    Science.gov (United States)

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang S J; Zhang, Jianqiu Michelle

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by co-eluting peptides, which increases quantification variance. In this paper, we propose applying accurate LC peak boundary detection to remove the corrupted parts of LC peaks. Accurate boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignments through a model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements.
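
    The consistency check at the heart of the boundary detection can be sketched as follows: within the true elution range every scan shows the same isotope-intensity pattern up to scale, so the window is grown outward from the apex while each scan stays correlated with the apex pattern, and stops where co-elution corrupts it. The threshold and data below are illustrative, not the paper's tuned values.

```python
import numpy as np

def peak_boundaries(scans, min_corr=0.98):
    """Grow the elution window from the apex while each scan's isotope
    pattern stays correlated with the apex pattern."""
    apex = int(np.argmax(scans.sum(axis=1)))
    ok = lambda i: np.corrcoef(scans[i], scans[apex])[0, 1] >= min_corr
    lo = apex
    while lo > 0 and ok(lo - 1):
        lo -= 1
    hi = apex
    while hi < len(scans) - 1 and ok(hi + 1):
        hi += 1
    return lo, hi

pattern = np.array([1.0, 0.62, 0.25, 0.08])        # isotope envelope
scale = np.array([0.2, 0.7, 1.0, 0.6, 0.3, 0.2])   # elution profile
scans = np.outer(scale, pattern)                   # clean scans: scaled copies
scans[5] += np.array([0.0, 0.5, 0.0, 0.0])         # co-eluting interference
lo, hi = peak_boundaries(scans)                    # excludes the corrupted scan
```

The corrupted final scan breaks the pattern correlation, so the detected window ends one scan earlier and the quantification uses only the clean region.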

  14. Can an interprofessional tracheostomy team improve weaning to decannulation times? A quality improvement evaluation

    Science.gov (United States)

    Morrison, Melissa; Catalig, Marifel; Chris, Juliana; Pataki, Janos

    2016-01-01

    BACKGROUND: Percutaneous tracheostomy is a common procedure in the intensive care unit and, on patient transfer to the wards, there is a gap in ongoing tracheostomy management. There is some evidence that tracheostomy teams can shorten weaning to decannulation times. In response to lengthy weaning to decannulation times at Trillium Health Partners – Credit Valley Hospital site (Mississauga, Ontario), an interprofessional tracheostomy team, led by respiratory therapists and consisting of speech-language pathologists and intensive care physicians, was implemented. OBJECTIVE: To evaluate the interprofessional tracheostomy team and its impact on time from weaning off mechanical ventilation to decannulation; and time from weaning to speech-language pathology referral. METHODS: Performance metrics were collected retrospectively through chart review pre- and post-team implementation. The primary metrics evaluated were the time from weaning off mechanical ventilation to decannulation, and time to referral to speech-language pathology. RESULTS: Following implementation of the interprofessional tracheostomy team, there was no improvement in decannulation times or time from weaning to speech-language pathology referral. A significant improvement was noted in the average time to first tracheostomy tube change (36.2 days to 22.9 days; P=0.01) and average time to speech-language pathology referral following initial tracheostomy insertion (51.8 days to 26.3 days; P=0.01). CONCLUSION: An interprofessional tracheostomy team can improve the quality of tracheostomy care through earlier tracheostomy tube changes and swallowing assessment referrals. The lack of improved weaning to decannulation time was potentially due to poor adherence with established protocols as well as a change in mechanical ventilation practices. To validate the findings from this particular institution, a more rigorous quality improvement methodology should be considered in addition to strategies to improve

  15. KFM: a homemade yet accurate and dependable fallout meter

    International Nuclear Information System (INIS)

    Kearny, C.H.; Barnes, P.R.; Chester, C.V.; Cortner, M.W.

    1978-01-01

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Owing to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient 'dry-bucket' in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. Step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM.

  16. Improving Patient Satisfaction with Waiting Time

    Science.gov (United States)

    Eilers, Gayleen M.

    2004-01-01

    Waiting times are a significant component of patient satisfaction. A patient satisfaction survey performed in the author's health center showed that students rated waiting time lowest of the listed categories, with "A" ratings of 58% overall, 63% for scheduled appointments, and 41% for the walk-in clinic. The center used a quality improvement process and…

  17. Enhanced Time Out: An Improved Communication Process.

    Science.gov (United States)

    Nelson, Patricia E

    2017-06-01

    An enhanced time out is an improved communication process initiated to prevent such surgical errors as wrong-site, wrong-procedure, or wrong-patient surgery. The enhanced time out at my facility mandates participation from all members of the surgical team and requires designated members to respond to specified time out elements on the surgical safety checklist. The enhanced time out incorporated at my facility expands upon the safety measures from the World Health Organization's surgical safety checklist and ensures that all personnel involved in a surgical intervention perform a final check of relevant information. Initiating the enhanced time out at my facility was intended to improve communication and teamwork among surgical team members and provide a highly reliable safety process to prevent wrong-site, wrong-procedure, and wrong-patient surgery. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  18. Fast and accurate three-dimensional point spread function computation for fluorescence microscopy.

    Science.gov (United States)

    Li, Jizhou; Xue, Feng; Blu, Thierry

    2017-06-01

    The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve performance in 3D deconvolution microscopy as well as the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way to perform the calculation. The explicit approximation error in terms of the parameters is given numerically. Experiments demonstrate that the proposed approach achieves the same accuracy in significantly less computational time than current state-of-the-art techniques. This approach can also be extended to other microscopy PSF models.
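
    The record's key trick, approximating a radial profile as a linear combination of rescaled Bessel functions fitted by linear least squares, can be sketched with synthetic stand-ins for the Gibson-Lanni kernel (the scaling factors and target profile below are assumptions, not the paper's values):

```python
import numpy as np

def j0(x):
    """Bessel J0 via its integral representation,
    J0(x) = (1/pi) * int_0^pi cos(x*sin(theta)) d(theta),
    evaluated with a trapezoidal rule (avoids a scipy dependency)."""
    theta = np.linspace(0.0, np.pi, 2001)
    w = np.full(theta.size, theta[1] - theta[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return np.cos(np.outer(np.atleast_1d(x), np.sin(theta))) @ w / np.pi

r = np.linspace(0.0, 5.0, 200)                    # radial coordinate
scales = [1.0, 2.0, 4.0, 8.0]                     # rescaling factors s_k
basis = np.stack([j0(s * r) for s in scales], axis=1)

# A synthetic "PSF profile" that happens to lie in the span of the basis.
target = 0.7 * j0(2.0 * r) + 0.3 * j0(8.0 * r)
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ coef                             # integral-free evaluation
```

Once the coefficients are fitted, evaluating the profile anywhere costs only a few Bessel evaluations instead of a numerical integral per point, which is where the speed-up comes from.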

  19. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry

    International Nuclear Information System (INIS)

    Li Xiuqin; Zhang Feng; Sun Yanyan; Yong Wei; Chu Xiaogang; Fang Yanyan; Zweigenbaum, Jerry

    2008-01-01

    In this study, liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to qualitation and quantitation of 18 synthetic preservatives in beverage. The identification by HPLC/TOF-MS is accomplished with the accurate mass (the subsequent generated empirical formula) of the protonated molecules [M + H]+ or the deprotonated molecules [M - H]-, along with the accurate mass of their main fragment ions. In order to obtain sufficient sensitivity for quantitation purposes (using the protonated or deprotonated molecule) and additional qualitative mass spectrum information provided by the fragments ions, segment program of fragmentor voltages is designed in positive and negative ion mode, respectively. Accurate mass measurements are highly useful in the complex sample analyses since they allow us to achieve a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg.kg -1 concentration range, with correlation coefficient >0.996. The recoveries at the tested concentrations of 1.0 mg.kg -1 -100 mg.kg -1 are 81-106%, with coefficients of variation -1 , which are far below the required maximum residue level (MRL) for these preservatives in foodstuff. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuff

  20. An improved Hough transform-based fingerprint alignment approach

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-11-01

    An improved Hough Transform-based fingerprint alignment approach is presented, which improves computing time and memory usage while producing accurate alignment parameters (rotation and translation). This is achieved by studying the strengths...
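
    The Hough-style alignment named above can be sketched as a voting scheme: every pairing of a minutia from one print with a minutia from the other votes for a quantized (rotation, translation) hypothesis, and the bin with the most votes wins. The quantization steps and synthetic minutiae below are illustrative assumptions, not the record's improved variant.

```python
import numpy as np
from collections import Counter

def hough_align(A, B, angles_deg=range(0, 360, 5), step=1.0):
    """Vote over quantized rigid transforms mapping point set A onto B."""
    votes = Counter()
    for ang in angles_deg:
        t = np.radians(ang)
        R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        RA = A @ R.T                       # rotate all of A at once
        for p in RA:
            for q in B:
                dx, dy = np.round((q - p) / step) * step
                votes[(ang, dx, dy)] += 1  # one vote per pairing
    (ang, dx, dy), _ = votes.most_common(1)[0]
    return ang, dx, dy

rng = np.random.default_rng(0)
A = rng.uniform(0.0, 50.0, size=(8, 2))    # synthetic "minutiae"
t = np.radians(30.0)
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
B = A @ R.T + np.array([5.0, -3.0])        # rotated and translated copy
ang, tx, ty = hough_align(A, B)
```

Memory and time grow with the number of minutia pairs times the number of angle bins, which is exactly the cost that accumulator-layout improvements such as the record's target.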

  1. How Accurately can we Calculate Thermal Systems?

    International Nuclear Information System (INIS)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-01-01

    I would like to determine how accurately a variety of neutron transport code packages (codes and cross-section libraries) can calculate simple integral parameters, such as k-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore, rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully this will eventually lead to improvements in both our codes and the thermal scattering models that they use. To accomplish this I propose a number of extremely simple systems involving thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.

  2. ERP application of real-time vdc-enabled last planner system for planning reliability improvement

    DEFF Research Database (Denmark)

    Cho, S.; Sørensen, Kristian Birch; Fischer, M.

    2009-01-01

    The Last Planner System (LPS) has, since its introduction in 1994, become a widely used method among AEC practitioners for improving planning reliability and for tracking and monitoring project progress. However, the observations presented in this paper indicate that last planners and coordinators need a new system that integrates the existing LPS with Virtual Design and Construction (VDC), Enterprise Resource Planning (ERP) systems, and automatic object identification by means of Radio Frequency Identification (RFID) technology. This is because current practice of LPS implementation is guesswork-driven, textual-report-generated, hand-updated, and even interpersonal-trust-oriented, resulting in less accurate and reliable plans. This research introduces a prototype development of the VREL (VDC + RFID + ERP + LPS) integration to generate a real-time updated cost + physical...

  3. A stable higher order space time Galerkin marching-on-in-time scheme

    KAUST Repository

    Pray, Andrew J.

    2013-07-01

    We present a method for the stable solution of time-domain integral equations. The method uses a technique developed in [1] to accurately evaluate matrix elements. As opposed to existing stabilization schemes, the method presented uses higher order basis functions in time to improve the accuracy of the solver. The method is validated by showing convergence in temporal basis function order, time step size, and geometric discretization order. © 2013 IEEE.

  4. Utilizing a GPS-enabled fleet management system to improve safety through real-time personnel monitoring and asset management

    Energy Technology Data Exchange (ETDEWEB)

    Mavreas, M. [Bell Canada, Montreal, PQ (Canada)

    2005-07-01

    The telepod is a real-time dispatch, tracking, and vehicle management system developed by Bell, which also allows remote access to company data. Advantages of the system were discussed in this PowerPoint presentation. It was suggested that the system offers increased efficiency, asset tracking, and more accurate maintenance. Productivity improvements are made possible through real-time dispatching of orders, which results in improved customer service. Additional benefits of the system include fuel savings; trip reports to track vehicle start and stop times; and improved route changes through trip analysis. The system also enables the tracking of vehicles driven after work hours and on weekends. The generator tracking capability provides information on when generators are being moved, as well as uptime for improved maintenance, in addition to registering fuel levels to ensure business keeps running during a blackout. The vehicle management system is also capable of identifying under-utilized vehicles and can assist in reducing the number of inactive vehicles as well as in reducing fuel consumption and harmful emissions by controlling idling time. Other advantages include maintenance with eliminated mileage errors; an improved inspection program; remote diagnosis and prognostics; a reduction in downtime and costs associated with unnecessary vehicle breakdown; and reduced vehicle wear and tear. Among the safety features is a trigger for the dispatch of emergency vehicles. It was suggested that the lone-worker device provides technicians with a sense of security, as well as ensuring greater consumer safety. It was concluded that Bell supports industry cooperation for safe-driving awareness through advertising campaigns, and communicates safety messages to customers, employees, and the public at large. tabs, figs.

  5. Average chewing pattern improvements following Disclusion Time reduction.

    Science.gov (United States)

    Kerstein, Robert B; Radke, John

    2017-05-01

    Studies involving electrognathographic (EGN) recordings of chewing improvements obtained following occlusal adjustment therapy are rare, as most studies lack 'chewing' within the research. The objectives of this study were to determine whether reducing a long Disclusion Time to a short Disclusion Time with the immediate complete anterior guidance development (ICAGD) coronoplasty in symptomatic subjects altered their average chewing pattern (ACP) and their muscle function. Twenty-nine muscularly symptomatic subjects underwent simultaneous EMG and EGN recordings of right and left gum chewing, before and after the ICAGD coronoplasty. Statistical differences in the mean Disclusion Time, the mean muscle contraction cycle, and the mean ACP resulting from ICAGD underwent the Student's paired t-test (α = 0.05). Disclusion Time reductions from ICAGD were significant (2.11 to 0.45 s, p = 0.0000). Post-ICAGD muscle changes were significant in the mean area (p = 0.000001), the peak amplitude (p = 0.00005), and the time to peak contraction; the chewing position became closer to centric occlusion and chewing velocities increased (both statistically significant). Average chewing pattern (ACP) shape, speed, consistency, muscular coordination, and vertical opening can be significantly improved in muscularly dysfunctional TMD patients within one week of undergoing the ICAGD enameloplasty. Computer-measured and guided occlusal adjustments quickly and physiologically improved chewing, without requiring the patients to wear pre- or post-treatment appliances.

  6. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    Science.gov (United States)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing, which offers limited measurement performance and larger errors, has been unable to meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement of timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold earlier by repeatedly amplifying the received signal. Then, the timing information is sampled, and the timing points are fitted with algorithms in MATLAB. Finally, the minimum timing error is calculated from the fitting function. In this way, the timing error of the signal received by the lidar is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by the multiple amplification of the received signal and the parameter-fitting algorithm, and a timing accuracy of 4.63 ps is achieved.
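
    The amplify-and-fit idea can be illustrated with a toy model: for a Gaussian leading edge, the fixed-threshold crossing time is linear in sqrt(ln g) as the gain g grows, so fitting the crossing times against that variable recovers the gain-independent reference time and suppresses the leading-edge walk. The pulse model and parameters are assumptions, not the paper's experimental setup.

```python
import numpy as np

def crossing_time(t, y, thr):
    """First upward threshold crossing, linearly interpolated."""
    i = int(np.argmax(y >= thr))          # first sample at/above threshold
    f = (thr - y[i - 1]) / (y[i] - y[i - 1])
    return t[i - 1] + f * (t[i] - t[i - 1])

t = np.linspace(0.0, 60.0, 60001)         # time axis (ns), 1 ps sampling
t0, sigma, thr = 30.0, 4.0, 0.2           # Gaussian peak, width, threshold
pulse = np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))

gains = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
u = np.sqrt(np.log(gains / thr))
tc = np.array([crossing_time(t, g * pulse, thr) for g in gains])

# For this model t_cross = t0 - sigma*sqrt(2)*u, so a straight-line fit in u
# returns the gain-independent reference time t0 as its intercept.
slope, intercept = np.polyfit(u, tc, 1)
```

Each extra amplification stage moves the crossing earlier in a predictable way; the fit extrapolates that family of crossings back to a single walk-free timing point.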

  7. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    International Nuclear Information System (INIS)

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-01-01

    Highlights: → Mitochondrial dysfunction is central to many diseases of oxidative stress. → 95% of the mitochondrial genome is duplicated in the nuclear genome. → Dilution of untreated genomic DNA leads to dilution bias. → Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real-time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved, and we have identified that current methods have at least one of the following three problems: (1) as much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference between the mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions of the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region of the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
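
    The downstream arithmetic such qPCR data feed into is the standard relative-quantification formula: with near-100% efficiency the template doubles every cycle, so the Mt/N copy ratio follows from the Ct difference (the method's actual contribution, unique primers and template pretreatment, happens upstream of this step). A minimal sketch with illustrative Ct values:

```python
def mt_n_ratio(ct_mito, ct_nuclear, copies_nuclear=2, efficiency=1.0):
    """MtDNA copies per cell relative to a single-copy nuclear locus,
    assuming the nuclear target is present at two copies per diploid cell
    and both assays run at the given amplification efficiency."""
    return copies_nuclear * (1.0 + efficiency) ** (ct_nuclear - ct_mito)

# A mitochondrial amplicon crossing threshold 7 cycles before the nuclear
# one implies 2 * 2**7 = 256 MtDNA copies per diploid genome.
ratio = mt_n_ratio(18.0, 25.0)
```

The paper's three pitfalls all corrupt the Ct values entering this formula, which is why primer uniqueness and dilution handling matter more than the arithmetic itself.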

  8. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    Science.gov (United States)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, whose accuracy is of the order of several arcsec. This raw attitude is the starting point for further attitude reconstruction. The OGA will use as inputs the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimate than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least-squares algorithm to investigate how to obtain a more accurate OGA estimate. Finally, a comparison between these different attitude determination techniques in terms of accuracy, robustness, speed, and memory required will be carried out in order to choose the best attitude algorithm for the OGA. The expected accuracy of the OGA determination will be of the order of milli-arcsec.
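
    The on-board filtering idea can be illustrated with a single-axis toy Kalman filter that blends gyro-propagated attitude with noisy star-tracker angles; all dynamics and noise levels below are invented for illustration (Gaia's filter is multi-axis and far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.1, 500
true_rate = 0.02                                  # rad/s, constant slew
truth = true_rate * dt * np.arange(1, n + 1)      # angle after each step

gyro = true_rate + rng.normal(0.0, 1e-3, n)       # rate measurements
star = truth + rng.normal(0.0, 5e-3, n)           # angle measurements

q = (1e-3 * dt) ** 2                              # process noise variance
r = 5e-3 ** 2                                     # measurement noise variance
x, p = 0.0, 1.0                                   # state estimate, covariance
est = np.empty(n)
for k in range(n):
    x, p = x + gyro[k] * dt, p + q                # predict with the gyro
    g = p / (p + r)                               # Kalman gain
    x, p = x + g * (star[k] - x), (1.0 - g) * p   # correct with the tracker
    est[k] = x

rms_kf = np.sqrt(np.mean((est - truth) ** 2))
rms_raw = np.sqrt(np.mean((star - truth) ** 2))   # raw tracker error
```

The filtered attitude error comes out well below the raw star-tracker noise, which is the same effect the OGA then refines further on the ground with batch least squares.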

  9. Accuracy of the improved quasistatic space-time method checked with experiment

    International Nuclear Information System (INIS)

    Kugler, G.; Dastur, A.R.

    1976-10-01

    Recent experiments performed at the Savannah River Laboratory have made it possible to check the accuracy of numerical methods developed to simulate space-dependent neutron transients. The experiments were specifically designed to emphasize delayed neutron holdback. The CERBERUS code using the IQS (Improved Quasistatic) method has been developed to provide a practical yet accurate tool for spatial kinetics calculations of CANDU reactors. The code was tested on the Savannah River experiments and excellent agreement was obtained. (author)

  10. Improved timing of the millisecond pulsar PSR 1937+21 using real-time coherent dedispersion

    International Nuclear Information System (INIS)

    Hankins, T.H.; Stinebring, D.R.; Rawley, L.A.; Princeton Univ., NJ)

    1987-01-01

    Profiles of the millisecond pulsar PSR 1937+21 have been obtained with 6-microsecond resolution using a real-time hardware dispersion removal device. This dedisperser has a potential resolution of better than 0.5 microsec and is immune to time-of-arrival jitter caused by scintillation-induced spectral gradients across the receiver passband. It significantly reduces the time-of-arrival residuals when compared with the timing technique currently in use. This increased timing accuracy, when utilized in a long-term timing program of millisecond pulsars, will improve the solar system ephemeris and will substantially improve the detection limit for a gravitational wave background. 27 references
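
    The dedispersion operation itself is a deterministic phase rotation and can be sketched in a few lines: interstellar dispersion multiplies the voltage spectrum by a unit-modulus quadratic-phase "chirp", so multiplying by its conjugate restores the pulse exactly. The chirp constant below is an arbitrary stand-in for the true kernel set by the pulsar's dispersion measure and the observing band.

```python
import numpy as np

n = 4096
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - n // 2) / 8.0) ** 2)   # sharp voltage pulse

f = np.fft.fftfreq(n)
chirp = np.exp(2j * np.pi * 4000.0 * f ** 2)       # quadratic-phase "ISM" filter

dispersed = np.fft.ifft(np.fft.fft(pulse) * chirp)                    # smeared
recovered = np.fft.ifft(np.fft.fft(dispersed) * np.conj(chirp)).real  # restored
```

Because the correction is applied to the pre-detection voltage rather than to detected intensities, the recovery is exact in principle, which is what makes coherent (hardware) dedispersion superior to filterbank methods for sub-microsecond timing.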

  11. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and HDF5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  12. Efficient and accurate nearest neighbor and closest pair search in high-dimensional space

    KAUST Repository

    Tao, Yufei

    2010-07-01

    Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but dramatically improves the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial
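    The LSH principle underlying the LSB-tree can be sketched with the simplest hash family, sign random projections: nearby vectors agree on most hash bits, distant ones do not. (The LSB-tree itself uses a different, p-stable hash family; this is only an illustration with invented data.)

```python
import random

random.seed(7)
DIM, NBITS = 20, 12
planes = [[random.gauss(0.0, 1.0) for _ in range(DIM)] for _ in range(NBITS)]

def lsh_key(vec):
    """One hash bit per random hyperplane: which side the vector lies on."""
    return tuple(1 if sum(p * v for p, v in zip(plane, vec)) >= 0.0 else 0
                 for plane in planes)

base = [random.gauss(0.0, 1.0) for _ in range(DIM)]
near = [v + random.gauss(0.0, 0.01) for v in base]    # tiny perturbation
far = [random.gauss(0.0, 1.0) for _ in range(DIM)]    # unrelated vector

same_bits = sum(a == b for a, b in zip(lsh_key(base), lsh_key(near)))
far_bits = sum(a == b for a, b in zip(lsh_key(base), lsh_key(far)))
```

    A nearly identical vector collides on (almost) every bit, so bucketing by `lsh_key` lets candidate neighbors be found without scanning the whole dataset; an unrelated vector agrees on only about half the bits in expectation.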

  13. Integrating GPS, GYRO, vehicle speed sensor, and digital map to provide accurate and real-time position in an intelligent navigation system

    Science.gov (United States)

    Li, Qingquan; Fang, Zhixiang; Li, Hanwu; Xiao, Hui

    2005-10-01

    The global positioning system (GPS) has become the most extensively used positioning and navigation tool in the world. Applications of GPS abound in surveying, mapping, transportation, agriculture, military planning, GIS, and the geosciences. However, the positional and elevation accuracy of any given GPS location is prone to error, due to a number of factors. GPS-based positioning applications are increasingly popular; in particular, intelligent navigation systems that rely on GPS and Dead Reckoning (DR) technology are developing quickly for a large future market in China. In this paper a practical combined positioning model, GPS/DR/MM, is put forward, which integrates GPS, gyro, a Vehicle Speed Sensor (VSS) and digital navigation maps to provide accurate and real-time position for an intelligent navigation system. The model is designed for automotive navigation systems and uses a Kalman filter to improve position and map-matching accuracy by filtering raw GPS and DR signals; map-matching technology then provides map coordinates for display. Several experiments with integrated GPS/DR positioning in an intelligent navigation system are presented to illustrate the validity of the model and to show that the Kalman-filter-based GPS/DR integrated positioning approach is necessary, feasible, and efficient for intelligent navigation applications. Like other models, this combined positioning model cannot resolve every situation. Finally, some suggestions are given for further improving the integrated GPS/DR/MM application.

  14. Accurate Energy Consumption Modeling of IEEE 802.15.4e TSCH Using Dual-Band OpenMote Hardware.

    Science.gov (United States)

    Daneels, Glenn; Municio, Esteban; Van de Velde, Bruno; Ergeerts, Glenn; Weyn, Maarten; Latré, Steven; Famaey, Jeroen

    2018-02-02

    The Time-Slotted Channel Hopping (TSCH) mode of the IEEE 802.15.4e amendment aims to improve reliability and energy efficiency in industrial and other challenging Internet-of-Things (IoT) environments. This paper presents an accurate and up-to-date energy consumption model for devices using this IEEE 802.15.4e TSCH mode. The model identifies all network-related CPU and radio state changes, thus providing a precise representation of the device behavior and an accurate prediction of its energy consumption. Moreover, energy measurements were performed with a dual-band OpenMote device, running the OpenWSN firmware. This allows the model to be used for devices using 2.4 GHz, as well as 868 MHz. Using these measurements, several network simulations were conducted to observe the TSCH energy consumption effects in end-to-end communication for both frequency bands. Experimental verification of the model shows that it accurately models the consumption for all possible packet sizes and that the calculated consumption on average differs less than 3% from the measured consumption. This deviation includes measurement inaccuracies and the variations of the guard time. As such, the proposed model is very suitable for accurate energy consumption modeling of TSCH networks.
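    The heart of such a model is a per-slot charge budget: sum each CPU/radio state's duration multiplied by its current draw. The figures below are invented placeholders, not the measured OpenMote values from the paper.

```python
# Invented per-state (name, duration_ms, current_mA) figures for one TSCH
# slot; the paper's model uses measured dual-band OpenMote values instead.
TX_SLOT = [("cpu_active", 0.5, 2.0), ("radio_tx", 4.0, 24.0),
           ("radio_rx_ack", 1.0, 20.0), ("sleep", 9.5, 0.003)]
SLEEP_SLOT = [("sleep", 15.0, 0.003)]

def slot_charge_uc(states):
    """Charge per slot in microcoulombs: sum of duration(ms) * current(mA)."""
    return sum(dur_ms * cur_ma for _name, dur_ms, cur_ma in states)

tx_charge = slot_charge_uc(TX_SLOT)
sleep_charge = slot_charge_uc(SLEEP_SLOT)
```

    Summing these slot charges over a schedule (and multiplying by the supply voltage) gives the energy prediction that the paper validates against measurements; note how strongly a transmit slot dominates a sleep slot.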

  15. Time-accurate CFD conjugate analysis of transient measurements of the heat-transfer coefficient in a channel with pin fins

    Directory of Open Access Journals (Sweden)

    Tom I-P. Shih

    2013-03-01

    Heat-transfer coefficients (HTC) on surfaces exposed to convection environments are often measured by transient techniques such as thermochromic liquid crystal (TLC) or infrared thermography. In these techniques, the surface temperature is measured as a function of time, and that measurement is used with the exact solution for unsteady, zero-dimensional (0-D) or one-dimensional (1-D) heat conduction into a solid to calculate the local HTC. When using the 0-D or 1-D exact solutions, the transient techniques assume the HTC and the free-stream or bulk temperature characterizing the convection environment to be constants, in addition to assuming the conduction into the solid to be 0-D or 1-D. In this study, computational fluid dynamics (CFD) conjugate analyses were performed to examine the errors that might be invoked by these assumptions for a problem where the free-stream/bulk temperature and the heat-transfer coefficient vary appreciably along the surface and where conduction into the solid may not be 0-D or 1-D. The problem selected to assess these errors is flow and heat transfer in a channel lined with a staggered array of pin fins. This conjugate study uses three-dimensional (3-D) unsteady Reynolds-averaged Navier–Stokes (RANS) closed by the shear-stress transport (SST) turbulence model for the gas phase (wall functions not used) and the Fourier law for the solid phase. The errors in the transient techniques are assessed by comparing the HTC predicted by the time-accurate conjugate CFD with those predicted by the 0-D and 1-D exact solutions, where the surface temperatures needed by the exact solutions are taken from the time-accurate conjugate CFD solution. Results obtained show that the use of the 1-D exact solution for a semi-infinite wall gives reasonably accurate "transient" HTC (less than 5% relative error). Transient techniques that use the 0-D exact solution for the pin fins were found to produce large errors (up to 160% relative error).
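    The inversion at the core of the transient technique can be sketched directly: the 1-D exact solution for a semi-infinite solid suddenly exposed to convection gives the surface temperature rise as a function of the HTC, and bisection inverts it. Material properties and times below are illustrative (perspex-like), not the paper's values.

```python
import math

def surface_theta(h, t, k=0.19, alpha=9.7e-8):
    """Dimensionless surface temperature rise of a semi-infinite solid
    suddenly exposed to convection with coefficient h (1-D exact
    solution); k (W/m-K) and alpha (m^2/s) are perspex-like values."""
    beta = h * math.sqrt(alpha * t) / k
    return 1.0 - math.exp(beta * beta) * math.erfc(beta)

def htc_from_theta(theta, t, lo=1.0, hi=2000.0):
    """Invert the exact solution for h by bisection (theta grows with h)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if surface_theta(mid, t) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

h_true = 450.0          # W/(m^2 K), a pin-fin-channel-like magnitude
t_meas = 20.0           # s, time of the TLC surface-temperature reading
theta = surface_theta(h_true, t_meas)
h_recovered = htc_from_theta(theta, t_meas)
```

    Here the inversion recovers the assumed HTC exactly because the synthetic "measurement" obeys the 1-D model; the paper's point is precisely that real pin-fin data violate the model's constant-HTC, 1-D assumptions, which is what the conjugate CFD quantifies.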

  16. Improving work control systems: The core team concept

    International Nuclear Information System (INIS)

    Jorgensen, M.D.; Simpson, W.W.

    1996-01-01

    The improved work control system at the Idaho Chemical Processing Plant minimizes review and approval time, maximizes field work time, and maintains full compliance with applicable requirements. The core team method gives ownership and accountability to knowledgeable individuals, and the teams use sophisticated scheduling techniques to improve information sharing and cost control and to establish accurate roll-up master schedules

  17. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
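    Synergy extrapolation reduces to fitting an excluded muscle's excitation as a combination of the time-varying synergy excitations and scoring the fit by VAF. The sketch below assumes the synergies were already extracted (e.g., by NMF of the included channels) and uses invented signals throughout.

```python
import math

T = 50
# Two time-varying "synergy excitations" over a cycle (invented bumps);
# in the study these come from NMF of the eight included EMG channels.
W = [(math.sin(math.pi * t / T) ** 2,
      math.sin(math.pi * t / T + 1.0) ** 2) for t in range(T)]

# A held-out muscle excitation built mostly from the two synergies,
# plus a little activity the synergies cannot explain.
v = [0.4 * w1 + 0.7 * w2 + 0.02 * math.sin(7.0 * t / T)
     for t, (w1, w2) in enumerate(W)]

def lstsq_2col(W, v):
    """Closed-form least squares for two predictors (normal equations)."""
    a11 = sum(w1 * w1 for w1, _ in W)
    a12 = sum(w1 * w2 for w1, w2 in W)
    a22 = sum(w2 * w2 for _, w2 in W)
    b1 = sum(w1 * y for (w1, _), y in zip(W, v))
    b2 = sum(w2 * y for (_, w2), y in zip(W, v))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

c1, c2 = lstsq_2col(W, v)
recon = [c1 * w1 + c2 * w2 for w1, w2 in W]
vaf = 1.0 - (sum((y - r) ** 2 for y, r in zip(v, recon))
             / sum(y * y for y in v))
```

    The extrapolation VAF here is high because the held-out signal really is (almost) a combination of the synergies; the study's question is how often that holds for real excluded EMG channels and which included-muscle subsets make it hold.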

  18. QUESP and QUEST revisited - fast and accurate quantitative CEST experiments.

    Science.gov (United States)

    Zaiss, Moritz; Angelovski, Goran; Demetriou, Eleni; McMahon, Michael T; Golay, Xavier; Scheffler, Klaus

    2018-03-01

    Chemical exchange saturation transfer (CEST) NMR or MRI experiments allow detection of low-concentration molecules with enhanced sensitivity via their proton exchange with the abundant water pool. Be it endogenous metabolites or exogenous contrast agents, an exact quantification of the actual exchange rate is required to design optimal pulse sequences and/or specific sensitive agents. Refined analytical expressions allow deeper insight and improved accuracy for common quantification techniques. The accuracy of standard quantification methodologies, such as quantification of the exchange rate using varying saturation power or varying saturation time, is improved especially for the case of nonequilibrium initial conditions and weak labeling conditions, meaning the saturation amplitude is smaller than the exchange rate (γB₁ < k). The refined 'quantification of exchange rate using varying saturation power/time' (QUESP/QUEST) equations allow for more accurate exchange rate determination, and provide clear insights on the general principles to execute the experiments and to perform numerical evaluation. The proposed methodology was evaluated on the large-shift regime of paramagnetic chemical-exchange-saturation-transfer agents using simulated data and data of the paramagnetic Eu(III) complex of DOTA-tetraglycineamide. The refined formulas yield improved exchange rate estimation. General convergence intervals of the methods that would apply for smaller-shift agents are also discussed. Magn Reson Med 79:1708-1721, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
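    A QUESP-style fit can be sketched with the simple weak-saturation labeling-efficiency model, alpha = omega1^2 / (omega1^2 + k^2), rather than the paper's refined expressions; every parameter value below is invented, and the fit is a coarse grid search for clarity.

```python
import math

R1_WATER = 1.0 / 3.0      # 1/s, invented water longitudinal relaxation rate
FB = 0.001                # proton fraction of the exchanging pool (invented)
K_TRUE = 2000.0           # exchange rate to be recovered, 1/s

def ptr(omega1, k, tsat=10.0):
    """Simplified proton-transfer ratio: labeling efficiency
    alpha = omega1^2 / (omega1^2 + k^2), saturation applied for tsat."""
    alpha = omega1 ** 2 / (omega1 ** 2 + k ** 2)
    return (FB * k * alpha / R1_WATER) * (1.0 - math.exp(-R1_WATER * tsat))

powers = [200.0 * i for i in range(1, 11)]       # saturation amplitudes, rad/s
data = [ptr(w1, K_TRUE) for w1 in powers]        # synthetic "measurements"

# QUESP-style estimation: choose the exchange rate whose predicted
# power dependence best matches the data.
candidates = [50.0 * j for j in range(1, 101)]   # 50 ... 5000 1/s
k_fit = min(candidates,
            key=lambda k: sum((ptr(w1, k) - d) ** 2
                              for w1, d in zip(powers, data)))
```

    The power dependence of the CEST effect pins down the exchange rate because alpha saturates once omega1 exceeds k; the paper's refinements matter exactly when this simple model breaks down (weak labeling, nonequilibrium start).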

  19. Accurate light-time correction due to a gravitating mass

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Neil [Department of Physics, University of Colorado, Boulder, CO (United States); Bertotti, Bruno, E-mail: ashby@boulder.nist.go [Dipartimento di Fisica Nucleare e Teorica, Universita di Pavia (Italy)

    2010-07-21

    This technical paper of mathematical physics arose as an aftermath of the 2002 Cassini experiment (Bertotti et al 2003 Nature 425 374-6), in which the PPN parameter γ was measured with an accuracy σ_γ = 2.3 × 10⁻⁵ and found consistent with the prediction γ = 1 of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression (8) for the gravitational delay Δt that differs from the standard formula (2); this difference is of second order in powers of m, the gravitational radius of the Sun, but in Cassini's case it was much larger than the expected order of magnitude m²/b, where b is the distance of the closest approach of the ray. Since the ODP does not take into account any other second-order terms, it is necessary, also in view of future more accurate experiments, to revisit the whole problem, to systematically evaluate higher order corrections and to determine which terms, and why, are larger than the expected value. We note that light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat's action functional at its minimum is just the light-time between the two end points A and B. A new and powerful formulation is thus obtained. This method is closely connected with the much more general approach of Le Poncin-Lafitte et al (2004 Class. Quantum Grav. 21 4463-83), which is based on Synge's world function. Asymptotic power series are necessary to provide a safe and automatic way of selecting which terms to keep at each order. Higher order approximations to the required quantities, in particular the delay and the deflection, are easily obtained. We also show that in a close superior conjunction, when b is much smaller than the distances of A and B from the Sun, say of order R, the second-order correction has an enhanced part of order m²R/b², which
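    The magnitudes discussed above are easy to reproduce numerically: the standard first-order delay for a close superior conjunction, and the enhanced second-order term of order m²R/b² against the naive m²/b estimate. The Cassini-like geometry and rounded constants below are illustrative assumptions, not the paper's exact figures.

```python
import math

G = 6.674e-11             # m^3 kg^-1 s^-2
C = 2.998e8               # m/s
M_SUN = 1.989e30          # kg
AU = 1.496e11             # m

m = G * M_SUN / C ** 2    # gravitational radius of the Sun, ~1.48 km
r_a, r_b = 1.0 * AU, 8.4 * AU   # Earth and (roughly) Cassini distances
b = 7.0e8                 # ray's closest approach, near the solar limb, m

# Standard first-order gravitational (Shapiro) delay, close conjunction.
delay1 = (2.0 * G * M_SUN / C ** 3) * math.log(4.0 * r_a * r_b / b ** 2)

naive2 = m ** 2 / (b * C)                  # expected second-order size, m^2/b
enhanced2 = m ** 2 * r_a / (b ** 2 * C)    # enhanced part, order m^2 R/b^2
```

    For this geometry the first-order delay is on the order of a hundred microseconds, while the enhanced second-order part exceeds the naive m²/b estimate by a factor of order R/b (a few hundred), which is the paper's central point about why the ODP's extra term mattered for Cassini.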

  20. Assessing reference genes for accurate transcript normalization using quantitative real-time PCR in pearl millet [Pennisetum glaucum (L.) R. Br.].

    Directory of Open Access Journals (Sweden)

    Prasenjit Saha

    Pearl millet [Pennisetum glaucum (L.) R. Br.], a close relative of Panicoideae food crops and bioenergy grasses, offers an ideal system to perform functional genomics studies related to C4 photosynthesis and abiotic stress tolerance. Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) provides a sensitive platform to conduct such gene expression analyses. However, the lack of suitable internal control reference genes for accurate transcript normalization during qRT-PCR analysis in pearl millet is the major limitation. Here, we conducted a comprehensive assessment of 18 reference genes on 234 samples, which included an array of different developmental tissues, hormone treatments and abiotic stress conditions from three genotypes, to determine appropriate reference genes for accurate normalization of qRT-PCR data. Analyses of Ct values using the Stability Index, BestKeeper, ΔCt, NormFinder, geNorm and RefFinder programs ranked PP2A, TIP41, UBC2, UBQ5 and ACT as the most reliable reference genes for accurate transcript normalization under different experimental conditions. Furthermore, we validated the specificity of these genes for precise quantification of relative gene expression and provided evidence that a combination of the best reference genes is required to obtain optimal expression patterns for both endogenous genes and transgenes in pearl millet.
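    A geNorm-style stability measure can be sketched from pairwise delta-Ct standard deviations: since Ct is already a log2 quantity, the spread of Ct differences between two genes across samples reflects how stable their expression ratio is. Gene names below echo the study's candidates, but every Ct value is invented.

```python
import statistics

# Invented Ct values for three candidate reference genes over six samples.
ct = {
    "PP2A":  [22.1, 22.3, 22.0, 22.2, 22.1, 22.4],
    "TIP41": [24.5, 24.8, 24.4, 24.6, 24.5, 24.9],
    "ACT":   [18.0, 19.5, 17.2, 20.1, 18.8, 16.9],  # unstable on purpose
}

def stability_m(gene, table):
    """geNorm-style M value: mean standard deviation of pairwise
    delta-Ct with every other candidate gene; lower M means a more
    stable reference gene."""
    sds = [statistics.stdev([a - b for a, b in zip(table[gene], table[g])])
           for g in table if g != gene]
    return sum(sds) / len(sds)

m_values = {g: stability_m(g, ct) for g in ct}
```

    Tools like geNorm iterate this ranking, discarding the worst gene and recomputing, which is how combinations of stable references are selected for normalization.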

  1. Accurate Region-of-Interest Recovery Improves the Measurement of the Cell Migration Rate in the In Vitro Wound Healing Assay.

    Science.gov (United States)

    Bedoya, Cesar; Cardona, Andrés; Galeano, July; Cortés-Mancera, Fabián; Sandoz, Patrick; Zarzycki, Artur

    2017-12-01

    The wound healing assay is widely used for the quantitative analysis of highly regulated cellular events. In this assay, a wound is deliberately produced on a confluent cell monolayer, and the rate of wound reduction (WR) is then characterized by processing images of the same regions of interest (ROIs) recorded at different time intervals. In this method, sharp-image ROI recovery is indispensable to compensate for displacements of the cell cultures due either to the exploration of multiple sites of the same culture or to transfers from the microscope stage to a cell incubator. ROI recovery is usually done manually and, although a low-magnification microscope objective (10x) is generally used, repositioning imperfections constitute a major source of errors detrimental to WR measurement accuracy. We address this ROI recovery issue by using pseudoperiodic patterns fixed onto the cell culture dishes, allowing easy localization of ROIs and accurate quantification of positioning errors. The method is applied to a tumor-derived cell line, and the WR rates are measured by means of two different image-processing software packages. Sharp ROI recovery based on the proposed method is found to significantly improve the accuracy of the WR measurement and of the positioning under the microscope.

  2. Dual-view inverted selective plane illumination microscopy (diSPIM) with improved background rejection for accurate 3D digital pathology

    Science.gov (United States)

    Hu, Bihe; Bolus, Daniel; Brown, J. Quincy

    2018-02-01

    Current gold-standard histopathology for cancerous biopsies is destructive, time consuming, and limited to 2D slices, which do not faithfully represent true 3D tumor micro-morphology. Light sheet microscopy has emerged as a powerful tool for 3D imaging of cancer biospecimens. Here, we utilize the versatile dual-view inverted selective plane illumination microscopy (diSPIM) to render digital histological images of cancer biopsies. The dual-view architecture enables more isotropic resolution in X, Y, and Z, and different imaging modes, such as adding electronic confocal slit detection (eCSD) or structured illumination (SI), can be used to improve image quality degraded by the background signal of large, scattering samples. To obtain traditional H&E-like images, we used DRAQ5 and eosin (D&E) staining, with 488 nm and 647 nm laser illumination and multi-band filter sets. Here, phantom beads and a D&E-stained buccal cell sample have been used to verify our dual-view method. We also show that, via dual-view imaging and deconvolution, more isotropic resolution has been achieved for an optically cleared human prostate sample, providing more accurate quantitation of 3D tumor architecture than was possible with single-view SPIM methods. We demonstrate that the optimized diSPIM delivers more precise analysis of 3D cancer microarchitecture in human prostate biopsy than simpler light sheet microscopy arrangements.

  3. Using electronic health records and Internet search information for accurate influenza forecasting.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Brownstein, John S; Gray, Josh; Richardson, Stewart; Kou, S C

    2017-05-08

    Accurate influenza activity forecasting helps public health officials prepare and allocate resources for unusual influenza activity. Traditional flu surveillance systems, such as the Centers for Disease Control and Prevention's (CDC) influenza-like illness reports, lag behind real time by one to two weeks, whereas information contained in cloud-based electronic health records (EHR) and in Internet users' search activity is typically available in near real time. We present a method that combines the information from these two data sources with historical flu activity to produce national flu forecasts for the United States up to 4 weeks ahead of the publication of CDC's flu reports. We extend a method originally designed to track flu using Google searches, named ARGO, to combine information from EHR and Internet searches with historical flu activity. Our regularized multivariate regression model dynamically selects the most appropriate variables for flu prediction every week. The model is assessed for the flu seasons within the time period 2013-2016 using multiple metrics, including root mean squared error (RMSE). Our method reduces the RMSE of the publicly available alternative (Healthmap flutrends) by 33, 20, 17 and 21% for the four time horizons (real-time and one, two, and three weeks ahead, respectively). Such accuracy improvements are statistically significant at the 5% level. Our real-time estimates correctly identified the peak timing and magnitude of the studied flu seasons. Our method significantly reduces the prediction error when compared to historical publicly available Internet-based prediction systems, demonstrating that: (1) the method used to combine data sources is as important as data quality; and (2) effectively extracting information from cloud-based EHR and Internet search activity leads to accurate forecasts of flu activity.

  4. An accurate, flexible and small optical fiber sensor: a novel technological breakthrough for real-time analysis of dynamic blood flow data in vivo.

    Directory of Open Access Journals (Sweden)

    Qiao-ying Yuan

    Because of the limitations of existing methods and techniques for directly obtaining real-time blood data, no accurate method exists for real-time in vivo analysis of microflow. To establish a novel technical platform for real-time in vivo detection and to analyze average blood pressure and other blood flow parameters, a small, accurate, flexible, and nontoxic Fabry-Perot fiber sensor was designed. The carotid sheath was implanted through intubation of the rabbit carotid artery (n = 8), and the blood pressure and other detection data were determined directly through the veins. The fiber detection results were compared with test results obtained using color Doppler ultrasound and a physiological pressure sensor recorder. Pairwise comparisons among the blood pressure results obtained using the three methods indicated that real-time blood pressure information obtained through the fiber sensor technique exhibited better correlation than the data obtained with the other techniques. The highest correlation (correlation coefficient of 0.86) was obtained between the fiber sensor and the pressure sensor. The blood pressure values were positively related to the total cholesterol level, low-density lipoprotein level, number of red blood cells, and hemoglobin level, with correlation coefficients of 0.033, 0.129, 0.358, and 0.373, respectively. The blood pressure values had no obvious relationship with the number of white blood cells or the high-density lipoprotein level, and had a negative relationship with triglyceride levels, with a correlation coefficient of -0.031. The average ambulatory blood pressure measured by the fiber sensor exhibited a negative correlation with the quantity of blood platelets (correlation coefficient of -0.839, P<0.05). The novel fiber sensor can thus obtain in vivo blood pressure data accurately, stably, and in real time; the sensor can also determine the content and status of the blood flow to some extent. Therefore, the fiber sensor can obtain

  5. Reclaiming Spare Capacity and Improving Aperiodic Response Times in Real-Time Environments

    Directory of Open Access Journals (Sweden)

    Liu Xue

    2011-01-01

    Scheduling recurring task sets that allow some instances of the tasks to be skipped produces holes in the schedule which are nonuniformly distributed. Similarly, when the recurring tasks are not strictly periodic but are sporadic, there is extra processor bandwidth arising from irregular job arrivals. The additional computation capacity that results from skips or sporadic tasks can be reclaimed to service aperiodic task requests efficiently and quickly. We present techniques for improving the response times of aperiodic tasks by identifying nonuniformly distributed spare capacity (due to skips or sporadic tasks) in the schedule and adding such extra capacity to the capacity queue of a BASH server. These gaps can account for a significant portion of aperiodic capacity, and their reclamation results in considerable improvement to aperiodic response times. We present two schemes: NCLB-CBS, which performs well in periodic real-time environments with firm tasks, and NCLB-CUS, which can be deployed when the basic task set to schedule is sporadic. Evaluation via simulations and implementation suggests that performance improvements for aperiodic tasks can be obtained with limited additional overhead.
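    The capacity freed by skipped jobs can be sketched with a toy accounting of per-period slack; all parameters are invented and the bookkeeping is far simpler than the paper's NCLB-CBS/NCLB-CUS server schemes.

```python
# Toy model: a periodic task (period 10, budget 6) that skips every
# third instance; the skipped budget becomes reclaimable spare capacity.
PERIOD, BUDGET, SKIP = 10, 6, 3

def spare_in(t0, t1):
    """Spare processor time in [t0, t1): per-period slack plus the
    whole budget of every skipped instance."""
    spare = 0
    for k in range(t0 // PERIOD, (t1 + PERIOD - 1) // PERIOD):
        skipped = (k + 1) % SKIP == 0      # every SKIP-th job is skipped
        spare += PERIOD - (0 if skipped else BUDGET)
    return spare

# An aperiodic request of 9 units fits within three periods only
# because the skipped instance's budget is reclaimed.
plain_slack = 2 * (PERIOD - BUDGET)    # slack of two ordinary periods: 8 < 9
reclaimed = spare_in(0, 30)
```

    Feeding such reclaimed amounts into a server's capacity queue is what lets aperiodic requests complete earlier than the plain background slack would allow.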

  6. Accurate alpha sticking fractions from improved calculations relevant for muon catalyzed fusion

    International Nuclear Information System (INIS)

    Szalewicz, K.

    1990-05-01

    Recent experiments have shown that under proper conditions a single muon may catalyze almost two hundred fusions in its lifetime. This process proceeds through the formation of muonic molecular ions, whose properties are central to the understanding of the phenomenon. Our work included the most accurate calculations of the energy levels and Coulombic sticking fractions for tdμ and other muonic molecular ions, calculations of Auger transition rates, calculations of corrections to the energy levels due to interactions with the host molecule, and calculation of the reactivation of muons from α particles. The majority of our effort has been devoted to the theory and computation of the influence of the strong nuclear forces on fusion rates and sticking fractions. We have calculated fusion rates for tdμ including the effects of nuclear forces on the molecular wave functions. We have also shown that these results can be reproduced to almost four-digit accuracy by using a very simple quasifactorizable expression which does not require modifications of the molecular wave functions. Our sticking fractions are more accurate than any other theoretical values: we have used a more sophisticated theory than any other work, and our numerical calculations have converged to at least three significant digits.

  7. Smart and accurate state-of-charge indication in portable applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, P.P.L.

    2005-01-01

    Accurate state-of-charge (SoC) and remaining run-time indication for portable devices is important for the user-convenience and to prolong the lifetime of batteries. However, the known methods of SoC indication in portable applications are not accurate enough under all practical conditions. The

  8. Smart and accurate State-of-Charge indication in Portable Applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Notten, P.H.L.; Regtien, Paulus P.L.

    2006-01-01

    Accurate state-of-charge (SoC) and remaining run-time indication for portable devices is important for the user-convenience and to prolong the lifetime of batteries. However, the known methods of SoC indication in portable applications are not accurate enough under all practical conditions. The

  9. Efficient preloading of the ventricles by a properly timed atrial contraction underlies stroke work improvement in the acute response to cardiac resynchronization therapy

    Science.gov (United States)

    Hu, Yuxuan; Gurev, Viatcheslav; Constantino, Jason; Trayanova, Natalia

    2013-01-01

    Background The acute response to cardiac resynchronization therapy (CRT) has been shown to be due to three mechanisms: resynchronization of ventricular contraction, efficient preloading of the ventricles by a properly timed atrial contraction, and mitral regurgitation reduction. However, the contribution of each of the three mechanisms to the acute response of CRT, specifically stroke work improvement, has not been quantified. Objective The goal of this study was to use an MRI-based anatomically accurate 3D model of failing canine ventricular electromechanics to quantify the contribution of each of the three mechanisms to stroke work improvement and identify the predominant mechanisms. Methods An MRI-based electromechanical model of the failing canine ventricles assembled previously by our group was further developed and modified. Three different protocols were used to dissect the contribution of each of the three mechanisms to stroke work improvement. Results Resynchronization of ventricular contraction did not lead to significant stroke work improvement. Efficient preloading of the ventricles by a properly timed atrial contraction was the predominant mechanism underlying stroke work improvement. Stroke work improvement peaked at an intermediate AV delay, as it allowed ventricular filling by atrial contraction to occur at a low diastolic LV pressure but also provided adequate time for ventricular filling before ventricular contraction. Diminution of mitral regurgitation by CRT led to stroke work worsening instead of improvement. Conclusion Efficient preloading of the ventricles by a properly timed atrial contraction is responsible for significant stroke work improvement in the acute CRT response. PMID:23928177

  10. Atomistic simulations of materials: Methods for accurate potentials and realistic time scales

    Science.gov (United States)

    Tiwary, Pratyush

    This thesis deals with achieving more realistic atomistic simulations of materials, by developing accurate and robust force-fields, and algorithms for practical time scales. I develop a formalism for generating interatomic potentials for simulating atomistic phenomena occurring at energy scales ranging from lattice vibrations to crystal defects to high-energy collisions. This is done by fitting against an extensive database of ab initio results, as well as to experimental measurements for mixed oxide nuclear fuels. The applicability of these interactions to a variety of mixed environments beyond the fitting domain is also assessed. The employed formalism makes these potentials applicable across all interatomic distances without the need for any ambiguous splining to the well-established short-range Ziegler-Biersack-Littmark universal pair potential. We expect these to be reliable potentials for carrying out damage simulations (and molecular dynamics simulations in general) in nuclear fuels of varying compositions for all relevant atomic collision energies. A hybrid stochastic and deterministic algorithm is proposed that while maintaining fully atomistic resolution, allows one to achieve milliseconds and longer time scales for several thousands of atoms. The method exploits the rare event nature of the dynamics like other such methods, but goes beyond them by (i) not having to pick a scheme for biasing the energy landscape, (ii) providing control on the accuracy of the boosted time scale, (iii) not assuming any harmonic transition state theory (HTST), and (iv) not having to identify collective coordinates or interesting degrees of freedom. The method is validated by calculating diffusion constants for vacancy-mediated diffusion in iron metal at low temperatures, and comparing against brute-force high temperature molecular dynamics. We also calculate diffusion constants for vacancy diffusion in tantalum metal, where we compare against low-temperature HTST as well

  11. Accurately bi-orthogonal direct and adjoint lambda modes via two-sided Eigen-solvers

    International Nuclear Information System (INIS)

    Roman, J.E.; Vidal, V.; Verdu, G.

    2005-01-01

    This work is concerned with the accurate computation of the dominant lambda modes (l-modes) of the reactor core in order to approximate the solution of the neutron diffusion equation in different situations, such as transient modal analysis. In a previous work, the problem was addressed by implementing a parallel program based on SLEPc (Scalable Library for Eigenvalue Problem Computations), public domain software for the solution of eigenvalue problems. Now, the proposed solution is extended by also incorporating the computation of the adjoint l-modes in such a way that the bi-orthogonality condition is enforced very accurately. This feature is very desirable in some types of analyses, and in the proposed scheme it is achieved by making use of two-sided eigenvalue solving software. Current implementations of some of this software, while still open to improvement, show that they can be competitive in terms of response time and accuracy with other types of eigenvalue solving software. The code developed by the authors has parallel capabilities in order to be able to analyze reactors at a great level of detail in a short time. (authors)
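    The bi-orthogonality condition the authors enforce, Yᴴ X = I between the adjoint (left) and direct (right) modes, can be illustrated on a small dense matrix. The NumPy sketch below is only a toy illustration of the condition itself, not the SLEPc-based parallel two-sided solver the paper describes; the matrix size and tolerances are arbitrary.

```python
import numpy as np

def biorthogonal_modes(A):
    """Right (direct) and left (adjoint) eigenvectors of A, rescaled
    so that Y^H X = I (the bi-orthogonality condition)."""
    lam, X = np.linalg.eig(A)
    mu, Y = np.linalg.eig(A.conj().T)           # A^H y = conj(lambda) y
    # Pair each left eigenvector with the right one sharing its eigenvalue.
    order = [int(np.argmin(np.abs(mu.conj() - l))) for l in lam]
    Y = Y[:, order]
    # Rescale the columns of Y so the diagonal of Y^H X becomes exactly 1;
    # the off-diagonal entries vanish by the bi-orthogonality theorem.
    Y = Y / np.diag(Y.conj().T @ X).conj()
    return lam, X, Y

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
lam, X, Y = biorthogonal_modes(A)
residual = np.abs(Y.conj().T @ X - np.eye(6)).max()
```

For a generic non-symmetric matrix the left and right eigenvectors differ, which is exactly why a two-sided solver is needed rather than running a one-sided solver twice and orthogonalizing afterwards.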

  12. Accurately bi-orthogonal direct and adjoint lambda modes via two-sided Eigen-solvers

    Energy Technology Data Exchange (ETDEWEB)

    Roman, J.E.; Vidal, V. [Valencia Univ. Politecnica, D. Sistemas Informaticos y Computacion (Spain); Verdu, G. [Valencia Univ. Politecnica, D. Ingenieria Quimica y Nuclear (Spain)

    2005-07-01

    This work is concerned with the accurate computation of the dominant lambda modes (l-modes) of the reactor core in order to approximate the solution of the neutron diffusion equation in different situations, such as transient modal analysis. In a previous work, the problem was addressed by implementing a parallel program based on SLEPc (Scalable Library for Eigenvalue Problem Computations), public domain software for the solution of eigenvalue problems. Now, the proposed solution is extended by also incorporating the computation of the adjoint l-modes in such a way that the bi-orthogonality condition is enforced very accurately. This feature is very desirable in some types of analyses, and in the proposed scheme it is achieved by making use of two-sided eigenvalue solving software. Current implementations of some of this software, while still open to improvement, show that they can be competitive in terms of response time and accuracy with other types of eigenvalue solving software. The code developed by the authors has parallel capabilities in order to be able to analyze reactors at a great level of detail in a short time. (authors)

  13. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-01-01

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  14. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Science.gov (United States)

    Kou, Baoquan; Xing, Feng; Zhang, Lu; Zhou, Yiheng; Liu, Jiaqi

    2017-05-01

    To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of a Halbach magnet array is taken into account, the integration method can be applied to the two segments according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.
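    The compound (composite) Simpson rule the authors apply to the corner segment is the standard quadrature scheme sketched below; the integrand here is a stand-in sinusoid rather than the actual Lorentz force density, since the motor model itself is not given in the abstract.

```python
import math

def compound_simpson(f, a, b, n):
    """Composite (compound) Simpson rule over [a, b] with n subintervals.

    n must be even; interior points alternate weights 4, 2, 4, ...
    """
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Stand-in integrand: the first harmonic of a sinusoidal flux distribution
# integrated over a quarter period; the exact value of this integral is 1.
approx = compound_simpson(math.sin, 0.0, math.pi / 2.0, 32)
```

The scheme converges as O(h^4), which is why a modest number of subintervals already gives the sub-microsecond-budget accuracy a real-time model needs.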

  15. Improvement of a picking algorithm: real-time P-wave detection by kurtosis

    Science.gov (United States)

    Ishida, H.; Yamada, M.

    2016-12-01

    Earthquake early warning (EEW) requires fast and accurate P-wave detection. The current EEW system in Japan uses the STA/LTA algorithm (Allen, 1978) to detect P-wave arrival. However, some stations did not trigger during the 2011 Great Tohoku Earthquake due to the emergent onset. In addition, accuracy of the P-wave detection is very important: on August 1, 2016, the EEW issued a false alarm with M9 in the Tokyo region due to thunder noise. To solve these problems, we use a P-wave detection method based on kurtosis statistics, which detects the change in the statistical distribution of the waveform amplitude. This method was recently developed (Saragiotis et al., 2002) and used for off-line analysis such as making seismic catalogs. To apply this method to EEW, we need to remove an acausal calculation and enable real-time processing. Here, we propose a real-time P-wave detection method using kurtosis statistics with a noise filter. To avoid false triggering by noise, we incorporated a simple filter to classify seismic signal and noise. Following Kong et al. (2016), we used the interquartile range and zero-cross rate for the classification. The interquartile range is an amplitude measure equal to the middle 50% of amplitudes in a certain time window. The zero-cross rate is a simple frequency measure that counts the number of times the signal crosses baseline zero. A discriminant function including these measures was constructed by linear discriminant analysis. To test this kurtosis method, we used strong-motion records for 62 earthquakes between April 2005 and July 2015 that recorded a seismic intensity greater than or equal to 6-lower on the JMA intensity scale. The records with hypocentral distance picks. It shows that the median error is 0.13 sec and 0.035 sec for the STA/LTA and kurtosis methods, respectively. The kurtosis method tends to be more sensitive to small changes in amplitude. Our approach will contribute to improving the accuracy of source location determination of
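    A minimal causal version of such a kurtosis picker can be sketched as follows; the window length, threshold, and synthetic trace are illustrative choices, not the values used by the authors. The idea is that an impulsive P onset makes the trailing amplitude distribution heavy-tailed, which kurtosis detects.

```python
import math
import statistics

def window_kurtosis(window):
    """Kurtosis E[(x - mu)^4] / sigma^4 of one window (3 for a Gaussian)."""
    mu = statistics.fmean(window)
    var = statistics.fmean([(x - mu) ** 2 for x in window])
    if var == 0.0:
        return 0.0
    return statistics.fmean([(x - mu) ** 4 for x in window]) / var ** 2

def pick_p_arrival(trace, win=20, threshold=10.0):
    """Causal picker: flag the first sample whose trailing window turns
    heavy-tailed (kurtosis above threshold). Returns the sample index."""
    for i in range(win, len(trace) + 1):
        if window_kurtosis(trace[i - win:i]) > threshold:
            return i - 1          # newest sample in the triggering window
    return None

# Synthetic trace: smooth background with an impulsive onset at sample 100.
trace = [math.sin(0.3 * k) for k in range(100)] + [50.0] \
      + [math.sin(0.3 * k) for k in range(50)]
pick = pick_p_arrival(trace)
```

Because only samples up to the current index are used, the computation is causal, unlike the off-line formulation of Saragiotis et al. A production picker would add the IQR/zero-cross-rate noise filter described above before declaring a trigger.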

  16. Real time traffic models, decision support for traffic management

    NARCIS (Netherlands)

    Wismans, Luc Johannes Josephus; de Romph, E.; Friso, K.; Zantema, K.

    2014-01-01

    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various

  17. Real Time Traffic Models, Decision Support for Traffic Management

    NARCIS (Netherlands)

    Wismans, L.; De Romph, E.; Friso, K.; Zantema, K.

    2014-01-01

    Reliable and accurate short-term traffic state prediction can improve the performance of real-time traffic management systems significantly. Using this short-time prediction based on current measurements delivered by advanced surveillance systems will support decision-making processes on various

  18. How Accurate Are Our Processed ENDF Cross Sections?

    International Nuclear Information System (INIS)

    Cullen, Dermott E.

    2014-05-01

    Let me start by reassuring you that currently our nuclear data processing codes are very accurate in the calculations that they perform INSIDE COMPUTERS. However, most of them drop the ball in what should be a trivial final step: outputting their results into the ENDF format. This step is obviously very important, because without accurately outputting their results we would not be able to confidently use them in our applications; unfortunately, it is not given the attention it deserves, and hence we come to the purpose of this paper. Here I document first the state of a number of nuclear data processing codes as of February 2012, when this comparison began, and then the current state, November 2013, of the same codes. I have delayed publishing results until now to give participants time to distribute updated codes and data. The codes compared include, in alphabetical order: AMPX, NJOY, PREPRO, and SAMMY/SAMRML. During this time we have seen considerable improvement in output results, and as a direct result of this study we now have four codes that produce high-precision results, but this is still a long way from ensuring that all codes that handle nuclear data maintain the accuracy that we require today. In the first part of this report I consider the precision of our tabulated energies; here we see obvious flaws when less-precise output is used. In the second part I consider the precision of our cross sections; here we see more subtle flaws. The important point to stress is that once these flaws are recognized it is relatively easy to eliminate them and produce high-precision energies and cross sections.
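    The precision issue is easy to reproduce: ENDF-style fields write each number as a mantissa and exponent in a fixed-width column, so fewer mantissa digits mean larger round-trip error in the tabulated energies. The sketch below uses a simplified field layout for illustration, not the full ENDF-6 format rules.

```python
def write_field(x, decimals=6):
    """Write x ENDF-style: mantissa plus exponent with no 'E' character,
    e.g. 2.123457+6. More decimals means a wider, higher-precision field."""
    mant, exp = f"{x:.{decimals}E}".split("E")
    return f"{mant}{int(exp):+d}"

def read_field(field):
    """Parse a field like '2.123457+6' by re-inserting the 'E'."""
    i = max(field.rfind("+"), field.rfind("-"))   # exponent sign is last
    return float(field[:i] + "E" + field[i:])

energy = 2.123456789e6                 # eV, a resonance-region grid point
coarse = read_field(write_field(energy, decimals=3))   # ~4 significant digits
fine = read_field(write_field(energy, decimals=6))     # ~7 significant digits
err_coarse = abs(coarse - energy) / energy
err_fine = abs(fine - energy) / energy
```

In a dense resonance region, a relative energy error of a few parts in 10^4 from a coarse field can shift a cross-section point onto the wrong side of a resonance, which is why the precision of the output step matters as much as the internal computation.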

  19. Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum

    Science.gov (United States)

    Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.

    2013-02-01

    Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made using a structured-light scanner consisting of two machine vision cameras for the determination of the geometry of the object, a high-resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and tiring procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the provision of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software, made for the automation of various steps of the procedure, was used. The results derived from the above procedure were especially satisfactory in terms of accuracy and quality of the model. However, the procedure proved to be time-consuming, while the use of various software packages presumes the services of a specialist.

  20. Using fuzzy logic to improve the project time and cost estimation based on the Project Evaluation and Review Technique (PERT)

    Directory of Open Access Journals (Sweden)

    Farhad Habibi

    2018-09-01

    Full Text Available Among different factors, correct scheduling is one of the vital elements for project management success. There are several ways to schedule projects, including the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT). Due to problems in estimating durations of activities, these methods cannot accurately and completely model actual projects. The use of fuzzy theory is a basic way to improve scheduling and deal with such problems. Fuzzy theory brings project scheduling models closer to reality by taking into account uncertainties in decision parameters as well as expert experience and mental models. This paper provides a step-by-step approach for accurate estimation of the time and cost of projects using the Project Evaluation and Review Technique (PERT) and expert views as fuzzy numbers. The proposed method includes several steps. In the first step, the necessary information for project time and cost is estimated using the Critical Path Method (CPM) and the Project Evaluation and Review Technique (PERT). The second step considers the durations and costs of the project activities as trapezoidal fuzzy numbers, and then the time and cost of the project are recalculated. The durations and costs of activities are estimated using questionnaires as well as weighting of expert opinions, averaging and defuzzification based on a step-by-step algorithm. The calculation procedures for evaluating these methods are applied in a real project, and the obtained results are explained.
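    The core fuzzy-PERT steps, aggregating expert opinions given as trapezoidal fuzzy numbers and then defuzzifying to a crisp duration, can be sketched as follows. The weights, estimates, and the centroid defuzzifier are illustrative choices, not necessarily those of the paper's algorithm.

```python
def aggregate(opinions, weights):
    """Weighted average of trapezoidal fuzzy numbers (a, b, c, d),
    combining several expert estimates into one fuzzy duration."""
    total = float(sum(weights))
    return tuple(
        sum(w * op[k] for op, w in zip(opinions, weights)) / total
        for k in range(4)
    )

def defuzzify(trap):
    """Centroid of a trapezoidal membership function with a <= b <= c <= d."""
    a, b, c, d = trap
    if a == d:                      # degenerate (crisp) number
        return float(a)
    return ((d * d + c * c + c * d) - (a * a + b * b + a * b)) / \
           (3.0 * ((c + d) - (a + b)))

# Two experts estimate one activity's duration (days); expert 1 weighted double.
combined = aggregate([(2, 3, 4, 6), (3, 4, 5, 7)], weights=[2, 1])
crisp = defuzzify(combined)
```

Repeating this per activity yields crisp durations that feed back into the ordinary CPM/PERT network calculations of the first step.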

  1. A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time

    Science.gov (United States)

    Wang, Zhihua; Yang, Zhaochu; Dong, Tao

    2017-01-01

    Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems compared to other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses, falls, and so on can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies are categorized into three types: indoor positioning, activity recognition and real-time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) will raise alarms but also functions as a feasible way to guide people’s activities so that they avoid dangerous behaviors. Since most elderly people suffer from age-related problems, some vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends with regard to the construction of a “smart clothing” system. PMID:28208620

  2. Screening of 485 Pesticide Residues in Fruits and Vegetables by Liquid Chromatography-Quadrupole-Time-of-Flight Mass Spectrometry Based on TOF Accurate Mass Database and QTOF Spectrum Library.

    Science.gov (United States)

    Pang, Guo-Fang; Fan, Chun-Lin; Chang, Qiao-Ying; Li, Jian-Xun; Kang, Jian; Lu, Mei-Ling

    2018-03-22

    This paper uses the LC-quadrupole-time-of-flight MS technique to evaluate the behavioral characteristics of the MS of 485 pesticides under different conditions and has developed an accurate mass database and spectrum library. A high-throughput screening and confirmation method has been developed for the 485 pesticides in fruits and vegetables. Through the optimization of parameters such as accurate mass number, retention-time window, ionization forms, etc., the method has improved the accuracy of pesticide screening, thus avoiding the occurrence of false-positive and false-negative results. The method features a full scan of fragments, with 80% of pesticides having more than 10 qualitative points, which helps increase pesticide qualitative accuracy. The abundant differences in fragment categories help realize the effective separation and qualitative identification of isomeric pesticides. Four different fruits and vegetables (apples, grapes, celery, and tomatoes) were chosen to evaluate the efficiency of the method at three fortification levels of 5, 10, and 20 μg/kg, and satisfactory results were obtained. With this method, a national survey of pesticide residues was conducted between 2012 and 2015 on 12 551 samples of 146 different fruits and vegetables collected from 638 sampling points in 284 counties across 31 provincial capitals/cities directly under the central government, which provided scientific data backup for ensuring the pesticide residue safety of the fruits and vegetables consumed daily by the public. Meanwhile, big data statistical analysis of the new technique further proves it to be of high speed, high throughput, high accuracy, high reliability, and high informatization.
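    The screening step, matching an observed accurate mass against the database within a ppm tolerance and a retention-time window, can be sketched as below. The database entries, tolerances, and retention times are hypothetical placeholders, not values from the actual 485-pesticide library.

```python
# Hypothetical database rows: compound -> (theoretical [M+H]+ m/z, expected RT/min).
DATABASE = {
    "carbendazim": (192.0768, 6.2),
    "atrazine": (216.1010, 9.8),
}

def screen(obs_mz, obs_rt, tol_ppm=5.0, rt_tol=0.5):
    """Return compounds whose theoretical accurate mass lies within tol_ppm
    of the observed m/z and whose retention time falls inside the window."""
    hits = []
    for name, (mz, rt) in DATABASE.items():
        ppm = abs(obs_mz - mz) / mz * 1e6
        if ppm <= tol_ppm and abs(obs_rt - rt) <= rt_tol:
            hits.append(name)
    return hits
```

Tightening the mass window (and requiring fragment qualitative points on top of the precursor match, as the paper does) is what suppresses both false positives and false negatives.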

  3. Accurate 3D Mapping Algorithm for Flexible Antennas

    Directory of Open Access Journals (Sweden)

    Saed Asaly

    2018-01-01

    Full Text Available This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain satellite flexible antenna; even a submillimeter change in the antenna surface may lead to a considerable loss in the antenna gain. Using a robotic subreflector, such changes can be compensated for. Yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by the novel technology for nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires a small-sized monocamera and known patterns on the antenna surface. The experimental result shows that the presented mapping method can detect changes up to 0.1-millimeter accuracy, while the camera is located 1 meter away from the dish, allowing an RF antenna optimization for Ka and Ku frequencies. Such optimization process can improve the gain of the flexible antennas and allow an adaptive beam shaping. The presented method is currently being implemented on a nanosatellite which is scheduled to be launched at the end of 2018.

  4. A high-order time-accurate interrogation method for time-resolved PIV

    International Nuclear Information System (INIS)

    Lynch, Kyle; Scarano, Fulvio

    2013-01-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. In

  5. Time-Driven Activity-Based Costing in Emergency Medicine.

    Science.gov (United States)

    Yun, Brian J; Prabhakar, Anand M; Warsh, Jonathan; Kaplan, Robert; Brennan, John; Dempsey, Kyle E; Raja, Ali S

    2016-06-01

    Value in emergency medicine is determined by both patient-important outcomes and the costs associated with achieving them. However, measuring true costs is challenging. Without an understanding of costs, emergency department (ED) leaders will be unable to determine which interventions might improve value for their patients. Although ongoing research may determine which outcomes are meaningful, an accurate costing system is also needed. This article reviews current costing mechanisms in the ED and their pitfalls. It then describes how time-driven activity-based costing may be superior to these current costing systems. Time-driven activity-based costing, in addition to being a more accurate costing system, can be used for process improvements in the ED. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
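    The arithmetic behind time-driven activity-based costing is compact: each resource gets a capacity cost rate (total cost divided by practical capacity in minutes), and a visit's cost is the minutes it consumes of each resource multiplied by that rate. A sketch with made-up ED numbers:

```python
# Hypothetical capacity cost rates in $/minute, each computed elsewhere as
# (total resource cost) / (practical capacity in minutes).
RATES = {"physician": 5.00, "nurse": 1.50, "exam_room": 0.40}

def visit_cost(process_map):
    """TDABC cost of one ED visit: each step consumes minutes of a resource."""
    return sum(RATES[resource] * minutes for resource, minutes in process_map)

# Process map for a hypothetical low-acuity visit.
example_visit = [("nurse", 15), ("physician", 20), ("exam_room", 90)]
cost = visit_cost(example_visit)
```

Because the cost is attached to mapped process steps, shortening or re-assigning a step (say, moving triage tasks from physician to nurse) immediately shows up in the computed cost, which is what makes TDABC useful for process improvement.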

  6. Decision peptide-driven: a free software tool for accurate protein quantification using gel electrophoresis and matrix assisted laser desorption ionization time of flight mass spectrometry.

    Science.gov (United States)

    Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L

    2010-09-15

    The decision peptide-driven tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse (18)O-labeling and (4) matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI) analysis. The DPD software compares the MALDI results of the direct and inverse (18)O-labeling experiments and quickly identifies those peptides with paralleled losses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming, requiring a significant amount of time to do all comparisons manually. The DPD software shortens and simplifies the search for the peptides that must be used for quantification from a week to just a few minutes. To do so, it takes as input several MALDI spectra and aids the researcher in an automatic mode (i) to compare data from direct and inverse (18)O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) to use those peptides as internal standards for subsequent accurate protein quantification using (18)O-labeling. In this work the DPD software is presented and explained with the quantification of the protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.
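    The selection idea, keeping only peptides whose direct-labeling ratio agrees with the reciprocal of the inverse-labeling ratio, can be sketched as below. The agreement criterion, tolerance, and intensities are assumptions for illustration; DPD's actual decision rule may differ.

```python
def select_quantification_peptides(direct, inverse, tol=0.15):
    """Keep peptides whose direct 18O heavy/light ratio agrees, within a
    relative tolerance, with the reciprocal of the inverse-labeling ratio,
    i.e. peptides showing paralleled losses across both experiments."""
    selected = []
    for peptide, (light, heavy) in direct.items():
        r_direct = heavy / light
        inv_light, inv_heavy = inverse[peptide]
        r_inverse = inv_heavy / inv_light
        if abs(r_direct - 1.0 / r_inverse) <= tol * r_direct:
            selected.append(peptide)
    return selected

# Toy MALDI peak intensities (light, heavy) for two peptides: peptide_A's
# direct and inverse ratios are consistent (2.0 vs 1/0.5); peptide_B's are not.
direct = {"peptide_A": (100.0, 200.0), "peptide_B": (100.0, 200.0)}
inverse = {"peptide_A": (200.0, 100.0), "peptide_B": (100.0, 100.0)}
usable = select_quantification_peptides(direct, inverse)
```

Peptides passing this cross-check then serve as internal standards for the final quantification step, exactly because their label incorporation behaved reproducibly in both directions.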

  7. Real-time feedback can improve infant manikin cardiopulmonary resuscitation by up to 79%--a randomised controlled trial.

    Science.gov (United States)

    Martin, Philip; Theobald, Peter; Kemp, Alison; Maguire, Sabine; Maconochie, Ian; Jones, Michael

    2013-08-01

    European and Advanced Paediatric Life Support training courses. Sixty-nine certified CPR providers. CPR providers were randomly allocated to a 'no-feedback' or 'feedback' group, performing two-thumb and two-finger chest compressions on a "physiological", instrumented resuscitation manikin. Baseline data was recorded without feedback, before chest compressions were repeated with one group receiving feedback. Indices were calculated that defined chest compression quality, based upon comparison of the chest wall displacement to the targets of four, internationally recommended parameters: chest compression depth, release force, chest compression rate and compression duty cycle. Baseline data were consistent with other studies, with <1% of chest compressions performed by providers simultaneously achieving the target of the four internationally recommended parameters. During the 'experimental' phase, 34 CPR providers benefitted from the provision of 'real-time' feedback which, on analysis, coincided with a statistical improvement in compression rate, depth and duty cycle quality across both compression techniques (all measures: p<0.001). Feedback enabled providers to simultaneously achieve the four targets in 75% (two-finger) and 80% (two-thumb) of chest compressions. Real-time feedback produced a dramatic increase in the quality of chest compression (i.e. from <1% to 75-80%). If these results transfer to a clinical scenario this technology could, for the first time, support providers in consistently performing accurate chest compressions during infant CPR and thus potentially improving clinical outcomes. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
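    The all-four-targets compliance metric reported in the study (under 1% at baseline versus 75-80% with feedback) can be computed as below; the target windows are placeholders for illustration, not clinical guideline values.

```python
# Illustrative target windows for the four assessed parameters
# (placeholder values, not clinical guidance).
TARGETS = {
    "depth_mm": (34.0, 44.0),
    "rate_per_min": (100.0, 120.0),
    "residual_lean_force_N": (0.0, 2.5),
    "duty_cycle_pct": (30.0, 50.0),
}

def meets_all_targets(compression):
    """A compression counts only if all four parameters hit their windows."""
    return all(lo <= compression[k] <= hi for k, (lo, hi) in TARGETS.items())

def percent_compliant(compressions):
    return 100.0 * sum(map(meets_all_targets, compressions)) / len(compressions)

sample = [
    {"depth_mm": 40, "rate_per_min": 110,
     "residual_lean_force_N": 1.0, "duty_cycle_pct": 40},   # compliant
    {"depth_mm": 25, "rate_per_min": 140,
     "residual_lean_force_N": 4.0, "duty_cycle_pct": 60},   # misses all four
]
rate = percent_compliant(sample)
```

The simultaneous criterion is deliberately strict: a provider can score well on each parameter in isolation while almost never satisfying all four within the same compression, which is exactly what the baseline figure shows.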

  8. Improving a real-time object detector with compact temporal information

    DEFF Research Database (Denmark)

    Ahrnbom, Martin; Jensen, Morten Bornø; Åström, Kalle

    2017-01-01

    Neural networks designed for real-time object detection have recently improved significantly, but in practice, looking at only a single RGB image at a time may not be ideal. For example, when detecting objects in videos, a foreground detection algorithm can be used to obtain compact temporal information..., a problem this approach is well suited for. The accuracy was found to improve significantly (up to 66%), with a roughly 40% increase in computational time.

  9. An improved energy-range relationship for high-energy electron beams based on multiple accurate experimental and Monte Carlo data sets

    International Nuclear Information System (INIS)

    Sorcini, B.B.; Andreo, P.; Hyoedynmaa, S.; Brahme, A.; Bielajew, A.F.

    1995-01-01

    A theoretically based analytical energy-range relationship has been developed and calibrated against well-established experimental and Monte Carlo calculated energy-range data. Only published experimental data with a clear statement of accuracy and method of evaluation have been used. Besides published experimental range data for different uniform media, new accurate experimental data on the practical range of high-energy electron beams in water for the energy range 10-50 MeV, from accurately calibrated racetrack microtrons, have been used. Largely due to the simultaneous pooling of accurate experimental and Monte Carlo data for different materials, the fit has resulted in an increased accuracy of the resultant energy-range relationship, particularly at high energies. Up-to-date Monte Carlo data from the latest versions of the codes ITS3 and EGS4, for absorbers of atomic numbers between 4 and 92 (Be, C, H2O, PMMA, Al, Cu, Ag, Pb and U) and incident electron energies between 1 and 100 MeV, have been used as a complement where experimental data are sparse or missing. The standard deviation of the experimental data relative to the new relation is slightly larger than that of the Monte Carlo data. This is partly because theoretically based stopping and scattering cross-sections are used both to account for the material dependence of the analytical energy-range formula and to calculate ranges with the Monte Carlo programs. For water the deviation from the traditional energy-range relation of ICRU Report 35 is only 0.5% at 20 MeV but as high as -2.2% at 50 MeV. An improved method for divergence and ionization correction in high-energy electron beams has also been developed to enable use of a wider range of experimental results. (Author)

  10. An accurate real-time model of maglev planar motor based on compound Simpson numerical integration

    Directory of Open Access Journals (Sweden)

    Baoquan Kou

    2017-05-01

    Full Text Available To realize high-speed and precise control of the maglev planar motor, a more accurate real-time electromagnetic model, which considers the influence of the coil corners, is proposed in this paper. Three coordinate systems for the stator, mover and corner coil are established. The coil is divided into two segments, the straight coil segment and the corner coil segment, in order to obtain a complete electromagnetic model. When only the first harmonic of the flux density distribution of a Halbach magnet array is taken into account, the integration method can be applied to the two segments according to the Lorentz force law. The force and torque formulas for the straight coil segment can be derived directly from the Newton-Leibniz formula; however, this is not applicable to the corner coil segment. Therefore, the compound Simpson numerical integration method is proposed in this paper to solve the corner segment. Validated by simulation and experiment, the proposed model has high accuracy and can easily be applied in practice.

  11. A stabilized second-order time accurate finite element formulation for incompressible viscous flow with heat transfer

    International Nuclear Information System (INIS)

    Curi, Marcos Filardy

    2011-01-01

    In view of the problem of global warming and the search for clean energy sources, a worldwide expansion of the use of nuclear energy is foreseen. Thus, the development of science and technology regarding nuclear power plants is essential, in particular in the field of reactor engineering. Fluid mechanics and heat transfer play an important role in the development of nuclear reactors. Computational Fluid Dynamics (CFD) is becoming ever more important in the optimization of cost and safety of the designs. This work presents a stabilized second-order time accurate finite element formulation for incompressible flows with heat transfer. A second-order time discretization precedes a spatial discretization using finite elements. The terms that stabilize the finite element method arise naturally from the discretization process, rather than being introduced a priori in the variational formulation. The method was implemented in the program ns_new_solvec2d_av_2_MPI, written in FORTRAN90 and developed in the Parallel Computing Laboratory at the Institute of Nuclear Engineering (LCP/IEN). Numerical solutions of some representative examples, including free, mixed and forced convection, demonstrate that the proposed stabilized formulation attains very good agreement with experimental and computational results available in the literature. (author)

  12. Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery

    Science.gov (United States)

    Sivaraks, Haemwaan

    2015-01-01

    Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, which helps identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because these methods are not able to differentiate ECG artifacts from the real ECG signal, especially artifacts that are similar to ECG signals in terms of shape and/or frequency. The problem leads to high vigilance for physicians and misinterpretation risk for nonspecialists. Therefore, this work proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design. In addition, every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets are interpreted and evaluated by cardiologists. Our proposed algorithm can mostly achieve 100% accuracy on detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts, compared with competitive anomaly detection methods. PMID:25688284
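    One common motif-discovery primitive for this kind of anomaly detection is discord discovery: the subsequence farthest from its nearest non-overlapping neighbor is the most anomalous beat. The brute-force sketch below illustrates the primitive on a synthetic periodic trace; the authors' method layers cardiologist knowledge and artifact handling on top of this kind of search.

```python
import math

def znorm(seq):
    """Z-normalize a subsequence so comparisons are shape-based."""
    mu = sum(seq) / len(seq)
    sd = math.sqrt(sum((x - mu) ** 2 for x in seq) / len(seq)) or 1.0
    return [(x - mu) / sd for x in seq]

def discord_index(series, w):
    """Brute-force discord discovery: return the start of the window whose
    distance to its nearest non-overlapping neighbor is largest."""
    n = len(series) - w + 1
    subs = [znorm(series[i:i + w]) for i in range(n)]
    best_i, best_d = -1, -1.0
    for i in range(n):
        nearest = min(
            math.dist(subs[i], subs[j])
            for j in range(n) if abs(i - j) >= w   # exclude trivial matches
        )
        if nearest > best_d:
            best_i, best_d = i, nearest
    return best_i

# Periodic "heartbeat" signal with one distorted stretch around sample 100.
ecg = [math.sin(0.4 * k) for k in range(200)]
for k in range(100, 108):
    ecg[k] = 3.0
anomaly = discord_index(ecg, w=16)
```

Normal beats in a quasi-periodic signal always have a close non-overlapping match elsewhere, so only a genuinely dissimilar beat stands out; the O(n^2) scan here is where practical implementations substitute faster motif-discovery index structures.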

  13. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components at sub-hourly time steps. However, EnergyPlus runs much slower than current-generation simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzes EnergyPlus run time from several perspectives to identify the key issues and challenges in speeding it up: studying the historical trends of EnergyPlus run time against advances in computers and improvements to the EnergyPlus code, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that significantly impact run time, and profiling the code to identify which EnergyPlus subroutines consume the most run time. The paper provides recommendations to improve EnergyPlus run time from the modeler's perspective, along with guidance on adequate computing platforms. Software code and architecture changes to improve EnergyPlus run time, suggested by the profiling results, are also discussed.

  14. A Review of Wearable Technologies for Elderly Care that Can Accurately Track Indoor Position, Recognize Physical Activities and Monitor Vital Signs in Real Time

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2017-02-01

    Full Text Available Rapid growth of the aged population has caused an immense increase in the demand for healthcare services. Generally, the elderly are more prone to health problems than other age groups. With effective monitoring and alarm systems, the adverse effects of unpredictable events such as sudden illnesses and falls can be ameliorated to some extent. Recently, advances in wearable and sensor technologies have improved the prospects of these service systems for assisting elderly people. In this article, we review state-of-the-art wearable technologies that can be used for elderly care. These technologies fall into three categories: indoor positioning, activity recognition and real-time vital sign monitoring. Positioning is the process of accurate localization and is particularly important for elderly people so that they can be found in a timely manner. Activity recognition not only helps ensure that sudden events (e.g., falls) raise alarms but also serves as a feasible way to guide people's activities away from dangerous behaviors. Since most elderly people suffer from age-related problems, vital signs that can be monitored comfortably and continuously via existing techniques are also summarized. Finally, we discuss a series of considerations and future trends regarding the construction of "smart clothing" systems.

  15. Improved hybrid information filtering based on limited time window

    Science.gov (United States)

    Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo

    2014-12-01

    Adopting users' entire collection histories, the hybrid information filtering of heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was proposed to solve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective at capturing users' potential interests, we present an improved hybrid information filtering method that adopts only partial, recent information. We expand the time window to generate a series of training sets, each of which is treated as known information to predict the future links verified by the testing set. The experimental results on the benchmark Netflix dataset indicate that by using only approximately 31% of the recent rating records, accuracy could be improved by an average of 4.22% and diversity by 13.74%. In addition, performance on the MovieLens dataset could be preserved by considering approximately 60% of the recent records. Furthermore, we find that the improved algorithm is effective in solving the cold-start problem. This work could improve information filtering performance and shorten computational time.
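    The HHM hybrid of Zhou et al. builds an item-item weight matrix W_ab = k_a^-(1-λ) k_b^-λ Σ_l a_al a_bl / k_l, where k_a is an item's degree and k_l a user's degree; λ tunes between heat conduction (diversity) and mass diffusion (accuracy). A toy pure-Python sketch (data and variable names hypothetical):

```python
def hhm_weights(ratings, lam):
    """Hybrid heat-conduction/mass-diffusion item-item weights.
    ratings: dict user -> set of collected items; lam in [0, 1]."""
    items = sorted({i for s in ratings.values() for i in s})
    k_item = {i: sum(i in s for s in ratings.values()) for i in items}
    k_user = {u: len(s) for u, s in ratings.items()}
    W = {a: {} for a in items}
    for a in items:
        for b in items:
            # sum over users who collected both a and b, weighted by 1/k_user
            common = sum(1.0 / k_user[u]
                         for u, s in ratings.items() if a in s and b in s)
            W[a][b] = common / (k_item[a] ** (1 - lam) * k_item[b] ** lam)
    return W
```

    Recommendation scores for a user are then W applied to the user's collection vector; at λ = 0.5 the matrix is symmetric.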

  16. Improved predictive modeling of white LEDs with accurate luminescence simulation and practical inputs with TracePro opto-mechanical design software

    Science.gov (United States)

    Tsao, Chao-hsi; Freniere, Edward R.; Smith, Linda

    2009-02-01

    The use of white LEDs for solid-state lighting to address applications in the automotive, architectural and general illumination markets is just emerging. LEDs promise greater energy efficiency and lower maintenance costs. However, there is a significant amount of design and cost optimization to be done while companies continue to improve semiconductor manufacturing processes and begin to apply more efficient and better color rendering luminescent materials such as phosphor and quantum dot nanomaterials. In the last decade, accurate and predictive opto-mechanical software modeling has enabled adherence to performance, consistency, cost, and aesthetic criteria without the cost and time associated with iterative hardware prototyping. More sophisticated models that include simulation of optical phenomenon, such as luminescence, promise to yield designs that are more predictive - giving design engineers and materials scientists more control over the design process to quickly reach optimum performance, manufacturability, and cost criteria. A design case study is presented where first, a phosphor formulation and excitation source are optimized for a white light. The phosphor formulation, the excitation source and other LED components are optically and mechanically modeled and ray traced. Finally, its performance is analyzed. A blue LED source is characterized by its relative spectral power distribution and angular intensity distribution. YAG:Ce phosphor is characterized by relative absorption, excitation and emission spectra, quantum efficiency and bulk absorption coefficient. Bulk scatter properties are characterized by wavelength dependent scatter coefficients, anisotropy and bulk absorption coefficient.

  17. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    Science.gov (United States)

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted, but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller, more closely related subsets; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees, and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of

  18. Improving Retention and Enrollment Forecasting in Part-Time Programs

    Science.gov (United States)

    Shapiro, Joel; Bray, Christopher

    2011-01-01

    This article describes a model that can be used to analyze student enrollment data and can give insights for improving retention of part-time students and refining institutional budgeting and planning efforts. Adult higher-education programs are often challenged in that part-time students take courses less reliably than full-time students. For…

  19. Fast and accurate spectral estimation for online detection of partial broken bar in induction motors

    Science.gov (United States)

    Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti

    2018-01-01

    In this paper, an online, real-time system is presented for detecting partial broken rotor bars (BRB) of inverter-fed squirrel cage induction motors under light load conditions. This system, with minor modifications, can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of the Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared with available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended Kalman filter-based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and sensor cost are minimal, as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through Simulink Real-Time. Evaluation of the detection threshold and of fault detectability under different load and fault-severity conditions is carried out using the empirical cumulative distribution function.
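    The Rayleigh-quotient estimator itself is beyond a short sketch, but the underlying task, estimating the amplitude of a spectral component at a known fault-sideband frequency, can be illustrated with a plain single-frequency DFT correlation (an assumption-laden stand-in, not the paper's estimator):

```python
import math

def tone_amplitude(x, f, fs):
    """Estimate the amplitude of the sinusoidal component at frequency f (Hz)
    in samples x taken at fs Hz, by correlating with a complex exponential."""
    n = len(x)
    re = sum(x[t] * math.cos(2 * math.pi * f * t / fs) for t in range(n))
    im = sum(x[t] * math.sin(2 * math.pi * f * t / fs) for t in range(n))
    return 2.0 * math.hypot(re, im) / n
```

    For a unit-amplitude tone sampled over an integer number of periods, the estimate is exact; subspace methods like the paper's improve on this when components are closely spaced or records are short.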

  20. Real-Time Pore Pressure Detection: Indicators and Improved Methods

    Directory of Open Access Journals (Sweden)

    Jincai Zhang

    2017-01-01

    Full Text Available High uncertainties may exist in predrill pore pressure prediction in new prospects and deepwater subsalt wells; therefore, real-time pore pressure detection is highly needed to reduce drilling risks. The methods for pore pressure detection (the resistivity, sonic, and corrected d-exponent methods) are improved using depth-dependent normal compaction equations to adapt them to the requirements of real-time monitoring. A new method is proposed to calculate pore pressure from the connection gas or elevated background gas, which can be used for real-time pore pressure detection. Pore pressure detection using logging-while-drilling, measurement-while-drilling, and mud logging data is also implemented and evaluated. Abnormal pore pressure indicators from well logs, mud logs, and wellbore instability events are identified and analyzed to interpret abnormal pore pressures for guiding real-time drilling decisions. Principles for identifying abnormal pressure indicators are proposed to improve real-time pore pressure monitoring.
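    The abstract does not give its improved equations; as background, the classical Eaton-style resistivity relation on which such detection methods build can be sketched as follows (units and parameter values hypothetical):

```python
def eaton_pore_pressure(obg, p_normal, r_obs, r_normal, n=1.2):
    """Eaton-style pore pressure gradient from resistivity (e.g., psi/ft).
    obg: overburden gradient; p_normal: normal (hydrostatic) gradient;
    r_obs / r_normal: observed vs. normal-compaction-trend resistivity."""
    return obg - (obg - p_normal) * (r_obs / r_normal) ** n
```

    When observed resistivity falls below the normal compaction trend (undercompaction), the estimated pore pressure rises above hydrostatic; the paper's depth-dependent normal compaction equations refine how r_normal varies with depth.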

  1. An Improved Scheduling Technique for Time-Triggered Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    1999-01-01

    In this paper we present an improved scheduling technique for the synthesis of time-triggered embedded systems. Our system model captures both the flow of data and that of control. We have considered communication of data and conditions for a time-triggered protocol implementation that supports clock synchronization and mode changes. We have improved the quality of the schedules by introducing a new priority function that takes the communication protocol into consideration. Communication has been optimized through packaging messages into slots with properly selected order and lengths.

  2. Indexed variation graphs for efficient and accurate resistome profiling.

    Science.gov (United States)

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
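    GROOT's variation-graph and LSH Forest machinery is more elaborate than can be shown here, but the core trick of its read classification, estimating set similarity from compact MinHash signatures instead of full alignment, can be sketched in a few lines (toy k-mer sets, not GROOT's actual data structures):

```python
import hashlib

def minhash(kmers, num_hashes=32):
    """MinHash signature of a k-mer set: for each of num_hashes seeded hash
    functions, keep the minimum hash value over the set."""
    sig = []
    for i in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{i}:{k}".encode()).hexdigest(), 16)
            for k in kmers))
    return sig

def jaccard_estimate(sig_a, sig_b):
    """Fraction of matching signature positions estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

    An LSH Forest indexes such signatures so that similar reference genes are retrieved in sublinear time, which is what makes laptop-scale resistome profiling feasible.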

  3. Fast sweeping algorithm for accurate solution of the TTI eikonal equation using factorization

    KAUST Repository

    bin Waheed, Umair

    2017-06-10

    Traveltime computation is essential for many seismic data processing applications and velocity analysis tools. High-resolution seismic imaging requires eikonal solvers to account for anisotropy whenever it significantly affects the seismic wave kinematics. Moreover, computation of auxiliary quantities, such as amplitude and take-off angle, relies on highly accurate traveltime solutions. However, the finite-difference based eikonal solution for a point-source initial condition has an upwind source singularity at the source position, since the wavefront curvature is large near the source point. Therefore, all finite-difference solvers, even high-order ones, show inaccuracies as the errors due to the source singularity spread from the source point to the whole computational domain. We address the source-singularity problem for tilted transversely isotropic (TTI) eikonal solvers using factorization. We solve a sequence of factored tilted elliptically anisotropic (TEA) eikonal equations iteratively, each time updating the right-hand-side function. At each iteration, we factor the unknown TEA traveltime into two factors. One of the factors is specified analytically, such that the other factor is smooth in the source neighborhood. Through this iterative procedure we obtain an accurate solution to the TTI eikonal equation. Numerical tests show significant improvement in accuracy due to factorization. The idea can be easily extended to compute accurate traveltimes for models with lower anisotropic symmetries, such as orthorhombic, monoclinic or even triclinic media.

  4. A simple method for normalization of DNA extraction to improve the quantitative detection of soil-borne plant pathogenic oomycetes by real-time PCR.

    Science.gov (United States)

    Li, M; Ishiguro, Y; Kageyama, K; Zhu, Z

    2015-08-01

    Most of the current research into the quantification of soil-borne pathogenic oomycetes lacks determination of DNA extraction efficiency, probably leading to incorrect estimation of DNA quantity. In this study, we developed a convenient method using a 100 bp artificially synthesized DNA sequence, derived from the mitochondrial NADH dehydrogenase subunit 2 gene of Thunnus thynnus, as a control to determine DNA extraction efficiency. The control DNA was added to soils and then co-extracted along with soil genomic DNA, and the extraction efficiency was determined from the recovered control. Two different DNA extraction methods were compared and evaluated using different types of soils, and the commercial kit proved to give more consistent results. We used the control DNA combined with real-time PCR to quantify oomycete DNAs from 12 naturally infested soils. Detectable target DNA concentrations were three to five times higher after normalization. Our tests also showed that extraction efficiencies varied from sample to sample, and that the method was simple and useful for the accurate quantification of soil-borne pathogenic oomycetes. Oomycetes include many important plant pathogens, and accurate quantification of these pathogens is essential in the management of diseases. This study reports an easy method utilizing an external DNA control for the normalization of DNA extraction by real-time PCR. By combining two different efficient soil DNA extraction methods, the developed quantification method dramatically improved the results. This study also shows that the developed normalization method is necessary and useful for the accurate quantification of soil-borne plant pathogenic oomycetes. © 2015 The Society for Applied Microbiology.
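    The normalization arithmetic is simple: the fraction of spiked-in control recovered estimates the extraction efficiency, and dividing the measured target quantity by that efficiency corrects for losses. A minimal sketch (function names and copy numbers hypothetical):

```python
def extraction_efficiency(control_recovered, control_spiked):
    """Fraction of the spiked-in control DNA recovered after extraction."""
    return control_recovered / control_spiked

def normalized_copies(target_measured, efficiency):
    """Correct the measured target quantity for extraction losses."""
    return target_measured / efficiency
```

    With an efficiency of 0.25, a measured 100 copies normalizes to 400, consistent with the reported three- to five-fold increase after normalization.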

  5. Timing Calibration for Time-of-Flight PET Using Positron-Emitting Isotopes and Annihilation Targets

    Science.gov (United States)

    Li, Xiaoli; Burr, Kent C.; Wang, Gin-Chung; Du, Huini; Gagnon, Daniel

    2016-06-01

    Adding time-of-flight (TOF) technology has been proven to improve image quality in positron emission tomography (PET). In order for TOF information to significantly reduce the statistical noise in reconstructed PET images, good timing resolution is needed across the scanner field of view (FOV). This work proposes an accurate, robust, and practical crystal-based timing calibration method using 18F-FDG positron-emitting sources together with a spatially separated annihilation target. We calibrated a prototype Toshiba TOF PET scanner using this method and then assessed its timing resolution at different locations in the scanner FOV.

  6. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating short jobs from long ones is a known technique for improving scheduling performance. In this paper we describe a method we developed for accurately predicting the runtime classes of jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
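    The paper trains a CART classifier on a Gaussian-mixture representation; as a simplified illustration of how a short/long threshold falls out of two overlapping Gaussians, one can compute the point where the two class densities intersect (a sketch under the assumption of known means, spreads, and weights, not the paper's method):

```python
import math

def gaussian_threshold(m1, s1, m2, s2, w1=0.5, w2=0.5):
    """x where w1*N(x; m1, s1) = w2*N(x; m2, s2): a natural decision cut
    between a 'short' class (mean m1) and a 'long' class (mean m2)."""
    a = 1.0 / (2 * s2 ** 2) - 1.0 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2 / (2 * s2 ** 2) - m1 ** 2 / (2 * s1 ** 2)
         + math.log((w2 * s1) / (w1 * s2)))
    if abs(a) < 1e-15:                 # equal variances: linear equation
        return -c / b
    d = math.sqrt(b * b - 4 * a * c)   # quadratic case: two crossings
    roots = ((-b + d) / (2 * a), (-b - d) / (2 * a))
    return min(roots, key=lambda x: abs(x - 0.5 * (m1 + m2)))
```

    With equal spreads and weights the threshold is simply the midpoint of the two means; unequal spreads pull it toward the tighter class.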

  7. Improving Reports Turnaround Time: An Essential Healthcare Quality Dimension.

    Science.gov (United States)

    Khan, Mustafa; Khalid, Parwaiz; Al-Said, Youssef; Cupler, Edward; Almorsy, Lamia; Khalifa, Mohamed

    2016-01-01

    Turnaround time is one of the most important healthcare performance indicators. King Faisal Specialist Hospital and Research Center in Jeddah, Saudi Arabia worked on reducing the report turnaround time of its neurophysiology lab from more than two weeks to only five working days for 90% of cases. The main quality improvement methodology used was FOCUS PDCA. Using root cause analysis, Pareto analysis and qualitative surveys, the main factors contributing to the delayed turnaround time and the suggested improvement strategies were identified and implemented: restructuring transcriptionists' daily tasks, rescheduling physicians' time, alerting physicians to new reports, engaging consultants, coordinating consistently and prioritizing critical reports. After implementation, 92% of reports were verified within 5 days, compared with only 6% before implementation; 7% of reports were verified in 5 days to 2 weeks, and only 1% needed more than 2 weeks, compared with 76% before implementation.

  8. Predicting Charging Time of Battery Electric Vehicles Based on Regression and Time-Series Methods: A Case Study of Beijing

    Directory of Open Access Journals (Sweden)

    Jun Bi

    2018-04-01

    Full Text Available Battery electric vehicles (BEVs) reduce energy consumption and air pollution compared with conventional vehicles. However, the limited driving range and potentially long charging time of BEVs create new problems. Accurate charging time prediction helps drivers determine travel plans and alleviates their range anxiety during trips. This study proposes a combined model for charging time prediction based on regression and time-series methods, using actual data from BEVs operating in Beijing, China. After data analysis, a regression model was established that predicts charging time from the charged amount. A time-series method was then adopted to calibrate the regression model, which significantly improved its fitting accuracy. The parameters of the model were determined from the actual data. Verification results confirmed the accuracy of the model and showed that its errors were small. The proposed model can accurately depict the charging time characteristics of BEVs in Beijing.
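    The combined structure, a least-squares regression of charging time on charged amount, calibrated by a time-series correction of recent residuals, can be sketched as follows (coefficients, AR(1) correction, and data all hypothetical, since the paper's exact calibration is not given in the abstract):

```python
def fit_line(x, y):
    """Ordinary least squares for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def predict_calibrated(a, b, phi, last_residual, charged_amount):
    """Regression prediction plus an AR(1)-style correction built from the
    most recent observed residual (the time-series calibration step)."""
    return a + b * charged_amount + phi * last_residual
```

    On exactly linear data the regression recovers the true intercept and slope; the residual term then nudges predictions toward recent systematic deviations (temperature, charger condition, and so on).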

  9. Land cover change mapping using MODIS time series to improve emissions inventories

    Science.gov (United States)

    López-Saldaña, Gerardo; Quaife, Tristan; Clifford, Debbie

    2016-04-01

    MELODIES is an FP7-funded project to develop innovative and sustainable services, based upon Open Data, for users in research, government, industry and the general public in a broad range of societal and environmental benefit areas. Understanding and quantifying land surface changes is necessary for estimating greenhouse gas and ammonia emissions and for meeting air quality limits and targets. More sophisticated inventory methodologies, at least for key emission sources, are needed to satisfy policy-driven air quality directives. Quantifying land cover changes on an annual basis requires greater spatial and temporal disaggregation of input data. The main aim of this study is to develop a methodology for using Earth Observation (EO) to identify annual land surface changes that will improve emissions inventories from agriculture and land use/land use change and forestry (LULUCF) in the UK. The first goal is to find the sets of input features that most accurately describe the surface dynamics. In order to identify annual and inter-annual land surface changes, a time series of surface reflectance was used to capture seasonal variability. Daily surface reflectance images from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 500 m resolution were used to invert a Bidirectional Reflectance Distribution Function (BRDF) model to create the seamless time series. Given the limited number of cloud-free observations, a BRDF climatology was used to constrain the model inversion and, where no high-quality observations were available at all, as a gap filler. The Land Cover Map 2007 (LC2007) produced by the Centre for Ecology & Hydrology (CEH) was used for training and testing. A land cover product was created for 2003 to 2015, and a Bayesian approach was developed to identify land cover changes. We will present the results of the time series development and the first exercises in creating the land cover and land cover change products.

  10. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

    This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
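    The essence of the temporal subspace constraint is that each voxel's time series is represented by a few coefficients in a low-dimensional basis, turning reconstruction into a small least-squares problem. A toy sketch using an orthonormal DCT basis as the subspace (the actual MRF basis is derived from the simulated dictionary, which is not reproduced here):

```python
import math

def dct_basis(T, K):
    """First K orthonormal DCT-II basis vectors of length T."""
    basis = []
    for k in range(K):
        v = [math.cos(math.pi * (t + 0.5) * k / T) for t in range(T)]
        norm = math.sqrt(sum(c * c for c in v))
        basis.append([c / norm for c in v])
    return basis

def project(y, basis):
    """Least-squares projection of a time series y onto the subspace
    spanned by the (orthonormal) basis vectors."""
    coeffs = [sum(yi * vi for yi, vi in zip(y, v)) for v in basis]
    return [sum(c * v[t] for c, v in zip(coeffs, basis))
            for t in range(len(y))]
```

    A signal that already lies in the subspace is reconstructed exactly, while components outside it (aliasing, noise) are suppressed, which is the mechanism behind the reported artifact reduction.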

  11. Linear signal noise summer accurately determines and controls S/N ratio

    Science.gov (United States)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
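    The summer's function, mixing signal and noise linearly at an accurately known power ratio, can be mirrored in software by scaling the noise so the mix hits a prescribed S/N exactly (a digital analogue of the 1966 hardware, not a description of it):

```python
import math

def mix_at_snr(signal, noise, snr_db):
    """Return signal + g*noise, with gain g chosen so the signal-to-noise
    power ratio of the mix is exactly snr_db (in dB)."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(v * v for v in noise) / len(noise)
    target_noise_power = p_sig / 10 ** (snr_db / 10.0)
    g = math.sqrt(target_noise_power / p_noise)
    return [s + g * v for s, v in zip(signal, noise)]
```

    Because the noise is rescaled rather than regenerated, the achieved S/N is exact by construction, echoing the accuracy and stability claims of the analog technique.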

  12. Screening and confirmation criteria for hormone residue analysis using liquid chromatography accurate mass time-of-flight, Fourier transform ion cyclotron resonance and orbitrap mass spectrometry techniques

    NARCIS (Netherlands)

    Nielen, M.W.F.; Engelen, M.C. van; Zuiderent, R.; Ramaker, R.

    2007-01-01

    An emerging trend is recognised in hormone and veterinary drug residue analysis from liquid chromatography tandem mass spectrometry (LC/MS/MS) based screening and confirmation towards accurate mass alternatives such as LC coupled with time-of-flight (TOF), Fourier transform ion cyclotron resonance

  13. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  14. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    Science.gov (United States)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods, including linear regression, support vector machines, k-nearest neighbors, random forests, and neural network models, are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques provide the most accurate predictions in terms of root-mean-square error. We also discuss the operational complexity and uncertainties that make it difficult to predict taxi times accurately.
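    Of the methods compared, k-nearest neighbors is the simplest to sketch: predict a flight's taxi-out time as the mean over the most similar past flights, with features (concourse, runway, and so on) encoded numerically. The encoding and data below are hypothetical, not the paper's:

```python
def knn_predict(train, query, k=3):
    """train: list of (feature_vector, taxi_out_minutes) pairs.
    Predict the mean taxi-out time of the k nearest training flights
    under squared Euclidean distance in feature space."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda fv: dist(fv[0], query))[:k]
    return sum(t for _, t in nearest) / k
```

    In practice the encoded categorical features would need scaling, and the paper found linear regression and random forests more accurate by RMSE; this sketch only illustrates the prediction setup.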

  15. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2016-08-01

    Full Text Available Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, and Big Data Analytics. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing and mining collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurately estimating the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were collected and analyzed in depth. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for regular jobs.

  16. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
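
    The report's focus is third-order Runge-Kutta schemes; as a generic illustration (not one of the report's five specific examples), the classic Shu-Osher strong-stability-preserving RK3 applied to y' = -y exhibits the expected third-order convergence:

```python
import math

def ssp_rk3_step(f, t, y, h):
    """One step of the Shu-Osher third-order SSP Runge-Kutta scheme."""
    k1 = y + h * f(t, y)
    k2 = 0.75 * y + 0.25 * (k1 + h * f(t + h, k1))
    return y / 3.0 + 2.0 / 3.0 * (k2 + h * f(t + 0.5 * h, k2))

def integrate(h, T=1.0):
    f = lambda t, y: -y          # test problem with exact solution e^{-t}
    t, y = 0.0, 1.0
    while t < T - 1e-12:
        y = ssp_rk3_step(f, t, y, h)
        t += h
    return y

# Halving h should cut the global error by about 2^3.
err = [abs(integrate(h) - math.exp(-1.0)) for h in (0.1, 0.05)]
order = math.log(err[0] / err[1], 2)
print(round(order, 1))  # ~3.0 for a third-order method
```

    Measuring the observed order this way is a standard sanity check for any of the report's candidate schemes before examining their stiff-stability properties.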

  17. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  18. Improved timing recovery in wireless mobile receivers

    CSIR Research Space (South Africa)

    Olwal, TO

    2007-06-01

    Full Text Available ... are transmitted to the receiver. In the proposed method, the receiver exploits the soft decisions computed at each turbo decoding iteration to provide reliable estimates of a soft timing signal, which in turn improves the decoding time. The derived method computes the soft symbol estimate as a*_k = η Σ_{a∈B} a · P(a | x_k^1, x_k^2, ..., x_k^Q) (29), where (x_k^1, x_k^2, ..., x_k^Q) are the Q coded bits in a multilevel symbol modulation scheme [32]. According to [29], the soft information demapper computes the a posteriori...

  19. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Science.gov (United States)

    Jiang, M.; Cui, B.-Y.; Schmid, N. A.; McLaughlin, M. A.; Cao, Z.-C.

    2017-09-01

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method including selective wavelet reconstruction and wavelet shrinkage to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratios (S/N) of most pulses are improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation errors for most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.
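
    The paper's full pipeline (selective reconstruction plus shrinkage on de-dispersed time series) is specific to its data; the generic building block, wavelet shrinkage, can be sketched with a single-level Haar transform and soft thresholding (all signals here are fabricated for illustration):

```python
import numpy as np

def haar_denoise(x, thresh):
    """Single-level Haar decomposition, soft-threshold the detail
    coefficients, then invert. x must have even length."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass channel
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass channel
    # Soft shrinkage: pull small detail coefficients to zero.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
pulse = np.exp(-((t - 0.5) ** 2) / 2e-4)        # a narrow "pulse"
noisy = pulse + rng.normal(0, 0.3, t.size)
den = haar_denoise(noisy, thresh=0.5)
# Shrinkage should reduce the residual w.r.t. the clean pulse.
print(np.std(den - pulse) < np.std(noisy - pulse))
```

    In practice multi-level decompositions and data-driven thresholds are used, but the principle is the same: noise spreads across all wavelet coefficients while a pulse concentrates in a few large ones.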

  20. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, M.; Schmid, N. A.; Cao, Z.-C. [Lane Department of Computer Science and Electrical Engineering West Virginia University Morgantown, WV 26506 (United States); Cui, B.-Y.; McLaughlin, M. A. [Department of Physics and Astronomy West Virginia University Morgantown, WV 26506 (United States)

    2017-09-20

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise, which contributes to other frequencies. We apply the wavelet denoising method including selective wavelet reconstruction and wavelet shrinkage to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratios (S/N) of most pulses are improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation errors for most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.

  1. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these

  2. Improving Autopsy Report Turnaround Times by Implementing Lean Management Principles.

    Science.gov (United States)

    Cromwell, Susan; Chiasson, David A; Cassidy, Debra; Somers, Gino R

    2018-01-01

    The autopsy is an integral part of the service of a large academic pathology department. Timely reporting is central to providing good service and is beneficial for many stakeholders, including the families, the clinical team, the hospital, and the wider community. The current study aimed to improve hospital-consented autopsy reporting times (turnaround time, TAT) by using lean principles modified for a healthcare setting, with an aim of signing out 90% of autopsies in 90 days. An audit of current and historical TATs was performed, and a working group incorporating administrative, technical, and professional staff constructed a value stream map documenting the steps involved in constructing an autopsy report. Two areas of delay were noted: examination of the microscopy and time taken to sign out the report after the weekly autopsy conference. Several measures were implemented to address these delays, including visual tracking using a whiteboard and individualized tracking sheets, weekly whiteboard huddles, and timelier scheduling of clinicopathologic conference rounds. All measures resulted in an improvement of TATs. In the 30 months prior to the institution of lean, 37% of autopsies (53/144) were signed out in 90 days, with a wide variation in reporting times. In the 30 months following the institution of lean, this improved to 74% (136/185), with mean reporting time falling to 63 days post-lean. The application of lean principles to autopsy sign-out workflow can significantly improve TATs and reduce variability, without changing staffing levels or significantly altering scheduling structure.

  3. On the use of spring baseflow recession for a more accurate parameterization of aquifer transit time distribution functions

    Directory of Open Access Journals (Sweden)

    J. Farlin

    2013-05-01

    Full Text Available Baseflow recession analysis and groundwater dating have up to now developed as two distinct branches of hydrogeology and have been used to solve entirely different problems. We show that by combining two classical models, namely the Boussinesq equation describing spring baseflow recession, and the exponential piston-flow model used in groundwater dating studies, the parameters describing the transit time distribution of an aquifer can be in some cases estimated to a far more accurate degree than with the latter alone. Under the assumption that the aquifer basis is sub-horizontal, the mean transit time of water in the saturated zone can be estimated from spring baseflow recession. This provides an independent estimate of groundwater transit time that can refine those obtained from tritium measurements. The approach is illustrated in a case study predicting atrazine concentration trend in a series of springs draining the fractured-rock aquifer known as the Luxembourg Sandstone. A transport model calibrated on tritium measurements alone predicted different times to trend reversal following the nationwide ban on atrazine in 2005 with different rates of decrease. For some of the springs, the actual time of trend reversal and the rate of change agreed extremely well with the model calibrated using both tritium measurements and the recession of spring discharge during the dry season. The agreement between predicted and observed values was however poorer for the springs displaying the most gentle recessions, possibly indicating a stronger influence of continuous groundwater recharge during the summer months.
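
    The paper couples the Boussinesq recession solution with the exponential piston-flow model; as a simpler stand-in (a linear-reservoir assumption, not the paper's Boussinesq form), the mean turnover time can be read directly off the slope of ln Q during a dry-season recession, using fabricated discharge data:

```python
import numpy as np

# Hypothetical daily spring discharge during a dry-season recession (L/s),
# following Q(t) = Q0 * exp(-t / tau) with tau = 200 days plus small noise.
rng = np.random.default_rng(2)
t = np.arange(0, 120.0)                                  # days
q = 50.0 * np.exp(-t / 200.0) * np.exp(rng.normal(0, 0.01, t.size))

# For a linear reservoir, the slope of ln Q vs t is -1/tau, where tau
# is the mean turnover (transit) time of water in storage.
slope, _ = np.polyfit(t, np.log(q), 1)
tau = -1.0 / slope
print(round(tau))  # ≈ 200 days
```

    An estimate like this is the kind of independent constraint the paper uses to refine transit-time parameters otherwise fitted to tritium data alone.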

  4. Acute physical exercise under hypoxia improves sleep, mood and reaction time.

    Science.gov (United States)

    de Aquino-Lemos, Valdir; Santos, Ronaldo Vagner T; Antunes, Hanna Karen Moreira; Lira, Fabio S; Luz Bittar, Irene G; Caris, Aline V; Tufik, Sergio; de Mello, Marco Tulio

    2016-02-01

    This study aimed to assess the effect of two sessions of acute physical exercise at 50% VO2peak performed under hypoxia (equivalent to an altitude of 4500 m, for 28 h) on sleep, mood and reaction time. Forty healthy men were randomized into 4 groups: Normoxia (NG) (n = 10); Hypoxia (HG) (n = 10); Exercise under Normoxia (ENG) (n = 10); and Exercise under Hypoxia (EHG) (n = 10). All mood and reaction time assessments were performed 40 min after awakening. Sleep was reassessed on the first day, 14 h after the initiation of hypoxia; mood and reaction time were measured 28 h later. The two sessions of acute physical exercise at 50% VO2peak were performed for 60 min on the first and second days, 3 and 27 h, respectively, after the initiation of hypoxia. Improved sleep efficiency, stage N3 and REM sleep, and reduced wake after sleep onset were observed under hypoxia after acute physical exercise. Tension, anger, depressed mood, vigor and reaction time scores improved after exercise under hypoxia. We conclude that hypoxia impairs sleep, reaction time and mood, and that acute physical exercise at 50% VO2peak under hypoxia improves sleep efficiency, reversing the aspects that had been adversely affected under hypoxia, possibly contributing to improved mood and reaction time.

  5. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    Science.gov (United States)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  6. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal-to-noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft shrinkage real-time denoising method based on the lifting wavelet transform is proposed. A moving data window enables real-time wavelet denoising, while the lifting-scheme wavelet transform reduces computational complexity. A hyperbolic threshold function and recursive threshold computation ensure the dynamic characteristics of the system and further improve real-time computational efficiency. The simulation results show that the semisoft shrinkage real-time denoising method performs considerably better than the traditional soft-thresholding and hard-thresholding methods, and can therefore address more practical engineering problems.
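
    The lifting scheme referred to here replaces convolution filters with in-place predict/update steps; a minimal Haar lifting step (illustrative, not the paper's exact filter or threshold function) with its exact inverse looks like:

```python
import numpy as np

def haar_lift_forward(x):
    """Haar wavelet via lifting: split into even/odd, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even           # predict odd samples from even neighbors
    approx = even + detail / 2    # update so approx preserves the mean
    return approx, detail

def haar_lift_inverse(approx, detail):
    even = approx - detail / 2    # undo update
    odd = detail + even           # undo predict
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([3.0, 5.0, 4.0, 4.0, 1.0, 7.0])
a, d = haar_lift_forward(x)
print(np.allclose(haar_lift_inverse(a, d), x))  # True: perfect reconstruction
```

    Because each lifting step is invertible by construction, thresholding can be applied to the detail channel inside a moving window and the signal rebuilt sample-by-sample, which is what makes the scheme attractive for real-time use.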

  7. Course Development Cycle Time: A Framework for Continuous Process Improvement.

    Science.gov (United States)

    Lake, Erinn

    2003-01-01

    Details Edinboro University's efforts to reduce the extended cycle time required to develop new courses and programs. Describes a collaborative process improvement framework, illustrated data findings, the team's recommendations for improvement, and the outcomes of those recommendations. (EV)

  8. Accuracy Improvement of Real-Time Location Tracking for Construction Workers

    Directory of Open Access Journals (Sweden)

    Hyunsoo Kim

    2018-05-01

    Full Text Available Extensive research has been conducted on real-time locating systems (RTLS) for tracking construction components, including workers, equipment, and materials, in order to improve construction performance (e.g., productivity improvement or accident prevention). Preventing safety accidents and making construction job sites more sustainable requires higher RTLS accuracy. To improve the accuracy of RTLS in construction projects, this paper presents an RTLS using radio frequency identification (RFID). To this end, the paper develops a location tracking error mitigation algorithm and presents the concept of using assistant tags. The applicability and effectiveness of the developed RTLS are tested under eight different construction environments, and the test results confirm the system's strong potential for improving the accuracy of real-time location tracking in construction projects, thus enhancing construction performance.

  9. Measuring cross-border travel times for freight : Otay Mesa international border crossing.

    Science.gov (United States)

    2010-09-01

    Cross border movement of people and goods is a vital part of the North American economy. Accurate real-time data on travel times along the US-Mexico border can help generate a range of tangible benefits covering improved operations and security, lowe...

  10. Interrupted Time Series Versus Statistical Process Control in Quality Improvement Projects.

    Science.gov (United States)

    Andersson Hagiwara, Magnus; Andersson Gäre, Boel; Elg, Mattias

    2016-01-01

    To measure the effect of quality improvement interventions, it is appropriate to use analysis methods that measure data over time. Examples of such methods include statistical process control analysis and interrupted time series with segmented regression analysis. This article compares the use of statistical process control analysis and interrupted time series with segmented regression analysis for evaluating the longitudinal effects of quality improvement interventions, using an example study on an evaluation of a computerized decision support system.
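
    Segmented regression for an interrupted time series, as compared in this article, fits a pre/post level and slope change around the intervention; a minimal sketch on made-up monthly data (the outcome, break point, and effect size are all invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n, break_at = 24, 12                     # 24 months; intervention at month 12
t = np.arange(n, dtype=float)
post = (t >= break_at).astype(float)

# Synthetic outcome: baseline trend, then an immediate level drop of 5.
y = 20.0 + 0.5 * t - 5.0 * post + rng.normal(0, 0.5, n)

# Design matrix: intercept, time, level change, post-break slope change.
X = np.column_stack([np.ones(n), t, post, post * (t - break_at)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change = beta[2]
print(round(level_change, 1))  # ≈ -5, the immediate intervention effect
```

    A statistical process control chart would instead monitor the same series against control limits; the regression form shown here is what yields explicit level- and slope-change estimates.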

  11. Improved core monitoring for improved plant operations

    International Nuclear Information System (INIS)

    Mueller, N.P.

    1987-01-01

    Westinghouse has recently installed a core on-line surveillance, monitoring and operations system (COSMOS), which uses only currently available core and plant data to accurately reconstruct the core average axial and radial power distributions. This information is provided to the operator in an immediately usable, human-engineered format and is accumulated for use in application programs that provide improved core performance predictive tools and a data base for improved fuel management. Dynamic on-line real-time axial and radial core monitoring supports a variety of plant operations to provide a favorable cost/benefit ratio for such a system. Benefits include: (1) relaxation or elimination of certain technical specifications to reduce surveillance and reporting requirements and allow higher availability factors, (2) improved information displays, predictive tools, and control strategies to support more efficient core control and reduce effluent production, and (3) an expanded burnup data base for improved fuel management. Such systems can be backfit into operating plants without changing the existing instrumentation and control system and can frequently be implemented on existing plant computer capacity

  12. The New Aptima HBV Quant Real-Time TMA Assay Accurately Quantifies Hepatitis B Virus DNA from Genotypes A to F.

    Science.gov (United States)

    Chevaliez, Stéphane; Dauvillier, Claude; Dubernet, Fabienne; Poveda, Jean-Dominique; Laperche, Syria; Hézode, Christophe; Pawlotsky, Jean-Michel

    2017-04-01

    Sensitive and accurate hepatitis B virus (HBV) DNA detection and quantification are essential to diagnose HBV infection, establish the prognosis of HBV-related liver disease, and guide the decision to treat and monitor the virological response to antiviral treatment and the emergence of resistance. Currently available HBV DNA platforms and assays are generally designed for batching multiple specimens within an individual run and require at least one full day of work to complete the analyses. The aim of this study was to evaluate the ability of the newly developed, fully automated, one-step Aptima HBV Quant assay to accurately detect and quantify HBV DNA in a large series of patients infected with different HBV genotypes. The limit of detection of the assay was estimated to be 4.5 IU/ml. The specificity of the assay was 100%. Intra-assay and interassay coefficients of variation ranged from 0.29% to 5.07% and 4.90% to 6.85%, respectively. HBV DNA levels from patients infected with HBV genotypes A to F measured with the Aptima HBV Quant assay strongly correlated with those measured by two commercial real-time PCR comparators (Cobas AmpliPrep/Cobas TaqMan HBV test, version 2.0, and Abbott RealTime HBV test). In conclusion, the Aptima HBV Quant assay is sensitive, specific, and reproducible and accurately quantifies HBV DNA in plasma samples from patients with chronic HBV infections of all genotypes, including patients on antiviral treatment with nucleoside or nucleotide analogues. The Aptima HBV Quant assay can thus confidently be used to detect and quantify HBV DNA in both clinical trials with new anti-HBV drugs and clinical practice. Copyright © 2017 American Society for Microbiology.

  13. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    International Nuclear Information System (INIS)

    Dral, Pavlo O.; Lilienfeld, O. Anatole von; Thiel, Walter

    2015-01-01

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules
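
    The ML-SQC machinery (tuning OM2 parameters per molecule) is beyond a snippet, but the underlying idea of learning a descriptor-dependent correction to a cheap method can be sketched with a closed-form ridge regression on synthetic data (all quantities here are invented, not OM2 results):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic setup: a cheap method's prediction deviates from the
# reference by an error that depends linearly on two descriptors.
n = 100
desc = rng.normal(size=(n, 2))                 # molecular descriptors
ref = rng.normal(10.0, 2.0, n)                 # "accurate" energies
cheap = ref + 1.5 * desc[:, 0] - 0.8 * desc[:, 1] + rng.normal(0, 0.1, n)

# Ridge regression learns to predict the cheap-method error.
X = np.column_stack([np.ones(n), desc])
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ (cheap - ref))

corrected = cheap - X @ w
mae_before = np.mean(np.abs(cheap - ref))
mae_after = np.mean(np.abs(corrected - ref))
print(mae_after < mae_before)  # the learned correction shrinks the error
```

    The paper goes further by feeding the learned model back into the method's parameters rather than correcting outputs, but the accuracy gain reported (6.3 to 1.7 kcal/mol MAE) is of this error-learning kind.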

  14. Using quality improvement methods to reduce clear fluid fasting times in children on a preoperative ward.

    Science.gov (United States)

    Newton, Richard J G; Stuart, Grant M; Willdridge, Daniel J; Thomas, Mark

    2017-08-01

    We applied quality improvement (QI) methodology to identify the different aspects of why children fasted for prolonged periods in our institution. Our aim was for 75% of all children to be fasted for clear fluid for less than 4 hours. Prolonged fasting in children can increase thirst and irritability and have adverse effects on haemodynamic stability on induction. By reducing this, children may be less irritable, more comfortable and more physiologically stable, improving the preoperative experience for both children and carers. We conducted a QI project from January 2014 until August 2016 at a large tertiary pediatric teaching hospital. Baseline data and the magnitude of the problem were obtained from pilot studies. This allowed us to build a key driver diagram, a process map and conduct a failure mode and effects analysis. Using a framework of Plan-Do-Study-Act cycles our key interventions primarily focused on reducing confusion over procedure start times, giving parents accurate information, empowering staff and reducing variation by allowing children to drink on arrival (up to one hour) before surgery. Prior to this project, using the 6,4,2 fasting rule for solids, breast milk, and clear fluids, respectively, 19% of children were fasted for fluid for less than 4 hours, mean fluid fasting time was 6.3 hours (SD 4.48). At the conclusion 72% of patients received a drink within 4 hours, mean fluid fasting reduced to 3.1 hours (SD 2.33). The secondary measures of aspiration (4.14:10 000) and cancellations have not increased since starting this project. By using established QI methodology we reduced the mean fluid fasting time for day admissions at our hospital to 3.1 hours and increased the proportion of children fasting for less than 4 hours from 19% to 72%. © 2017 John Wiley & Sons Ltd.

  15. Enhancing diabetes management while teaching quality improvement methods.

    Science.gov (United States)

    Sievers, Beth A; Negley, Kristin D F; Carlson, Marny L; Nelson, Joyce L; Pearson, Kristina K

    2014-01-01

    Six medical units realized that they were having issues with accurate timing of bedtime blood glucose measurement for their patients with diabetes. They decided to investigate the issues by using their current staff nurse committee structure. The clinical nurse specialists and nurse education specialists decided to address the issue by educating and engaging the staff in the define, measure, analyze, improve, control (DMAIC) framework process. They found that two issues needed to be improved, including timing of bedtime blood glucose measurement and snack administration and documentation. Several educational interventions were completed and resulted in improved timing of bedtime glucose measurement and bedtime snack documentation. The nurses understood the DMAIC process, and collaboration and cohesion among the medical units was enhanced. Copyright 2014, SLACK Incorporated.

  16. Discrete sensors distribution for accurate plantar pressure analyses.

    Science.gov (United States)

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor layout (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any other activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
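
    The agreement metric used here, SCC (read as the Spearman correlation coefficient), is simply Pearson correlation computed on ranks; a minimal computation on hypothetical paired ML coordinates (numbers invented):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie handling, for illustration)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical paired ML coordinates of BoP (insole) and CoP (platform).
bop = np.array([1.0, 2.1, 2.9, 4.2, 5.1, 6.3])
cop = np.array([1.1, 2.0, 3.2, 4.0, 5.3, 6.1])
print(round(spearman(bop, cop), 2))  # 1.0: identical orderings
```

    Rank correlation is robust to the different absolute calibrations of an insole system and a force platform, which is presumably why it is preferred over raw Pearson correlation here.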

  17. IMPROVED MOTOR-TIMING: EFFECTS OF SYNCHRONIZED METRO-NOME TRAINING ON GOLF SHOT ACCURACY

    Directory of Open Access Journals (Sweden)

    Louise Rönnqvist

    2009-12-01

    Full Text Available This study investigates the effect of synchronized metronome training (SMT) on motor timing and how such training might affect golf shot accuracy. Twenty-six experienced male golfers (mean age 27 years; mean golf handicap 12.6) participated in this study. Pre- and post-test investigations of golf shots made with three different clubs were conducted using a golf simulator. The golfers were randomized into two groups: an SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing, while the golfers in the Control group merely trained their golf swings during the same period. No differences between the two groups were found in the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in performance. No such improvements were found for the golfers in the Control group. As with previous studies that used an SMT program, these results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy.

  18. A handy time alignment probe for timing calibration of PET scanners

    International Nuclear Information System (INIS)

    Bergeron, Melanie; Pepin, Catherine M.; Arpin, Louis; Leroux, Jean-Daniel; Tetrault, Marc-Andre; Viscogliosi, Nicolas; Fontaine, Rejean; Lecomte, Roger

    2009-01-01

    Accurate time alignment of detectors in PET scanners is required for improving overall coincidence timing resolution. This is mandatory to reduce the coincidence time window of the scanner and limit as much as possible the rate of random events in images. Several techniques have been proposed so far, but most have shortcomings relating to difficult use, the collection of huge amounts of data or long acquisition times, not to mention transport regulations for the radioactive sources embedded in time alignment probes. A handy liquid scintillation beta probe was developed to overcome these problems. It consists of a PMT coupled to a small glass container that can be filled with a liquid scintillation cocktail loaded with radioactivity (such as 18F). The PMT signal is processed by an analog CFD and a digital TDC supplying an accurate timestamp on positron detection. When tested in coincidence with a fast PMT/plastic detector, a timing resolution of 1.1 ns FWHM was obtained using a standard off-the-shelf liquid cocktail having a scintillation decay time of 6.2 ns. For time alignment, coincidences are recorded between positrons detected by the probe and one of the two 511 keV annihilation photons reaching detectors in the scanner. Using this simple probe, it is possible to determine the time offsets for individual LYSO and LGSO crystals in LabPET™ scanners in about 15 min. Due to its ease of use and short acquisition time, the proposed timing calibration method was found ideal for tuning the APD bias of individual detectors to reach optimal timing resolution on every channel.
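
    The per-channel offset determination described above can be sketched in a few lines. This is not the LabPET calibration code, only a hypothetical illustration of the idea: each probe-crystal coincidence yields a time difference, and the per-crystal mean is the offset to subtract from that channel (crystal IDs and values below are made up).

    ```python
    from collections import defaultdict

    # Hypothetical sketch of per-channel time alignment (not the LabPET
    # software). Each probe-crystal coincidence gives a time difference;
    # the mean per crystal is the offset to subtract so that all channels
    # are aligned to the probe's timestamp.

    def time_offsets(coincidences):
        """coincidences: iterable of (crystal_id, dt) pairs, dt in ns."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        for crystal, dt in coincidences:
            sums[crystal] += dt
            counts[crystal] += 1
        return {crystal: sums[crystal] / counts[crystal] for crystal in sums}

    # Made-up events: crystal 7 runs about 1.5 ns late, crystal 12 is on time.
    events = [(7, 1.4), (7, 1.6), (7, 1.5), (12, 0.1), (12, -0.1)]
    offsets = time_offsets(events)
    ```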

  19. Process improvement by cycle time reduction through Lean Methodology

    Science.gov (United States)

    Siva, R.; patan, Mahamed naveed khan; lakshmi pavan kumar, Mane; Purusothaman, M.; pitchai, S. Antony; Jegathish, Y.

    2017-05-01

    In today's world, every customer expects their products on time and with good quality, and every industry strives to satisfy these customer requirements. An aviation concern is trying to accomplish continuous improvement in all its projects. In this project, the maintenance service for the customer is analyzed. The maintenance part service is split into four levels. Of these, three levels are performed in service shops, while the fourth level falls under the customer's privilege to change the parts in their aircraft engines at their own location. An enhancement of electronics initial provisioning (eIP) is done for the fourth level. Customers request service shops to meet their requirements through a Recommended Spare Parts List (RSPL) produced by eIP. Completing this RSPL for one customer takes a cycle time of 61.5 hours, which is very high. By mapping the current-state VSM and the takt time, future-state improvements can be made to reduce the cycle time using Lean tools such as Poka-Yoke, Jidoka, and 5S, and by eliminating Muda (waste).
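
    The takt-time comparison mentioned above can be illustrated with a small sketch. Only the 61.5-hour cycle time comes from the abstract; the available-time and demand figures are hypothetical placeholders.

    ```python
    # Sketch of the Lean metrics named above. Only the 61.5 h cycle time is
    # from the abstract; available time and demand are hypothetical.

    def takt_time(available_hours: float, units_demanded: int) -> float:
        """Takt time: available working time divided by customer demand."""
        return available_hours / units_demanded

    cycle_time_h = 61.5     # reported cycle time to complete one RSPL
    available_h = 160.0     # assumed working hours per month (hypothetical)
    demand = 8              # assumed RSPL requests per month (hypothetical)

    takt_h = takt_time(available_h, demand)
    # When cycle time exceeds takt time, the process cannot keep pace with
    # demand, which is what motivates the cycle-time reduction effort.
    behind_schedule = cycle_time_h > takt_h
    ```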

  20. Lean six sigma methodologies improve clinical laboratory efficiency and reduce turnaround times.

    Science.gov (United States)

    Inal, Tamer C; Goruroglu Ozturk, Ozlem; Kibar, Filiz; Cetiner, Salih; Matyar, Selcuk; Daglioglu, Gulcin; Yaman, Akgun

    2018-01-01

    Organizing work flow is a major task of laboratory management. Recently, clinical laboratories have started to adopt methodologies such as Lean Six Sigma and some successful implementations have been reported. This study used Lean Six Sigma to simplify the laboratory work process and decrease the turnaround time by eliminating non-value-adding steps. The five-stage Six Sigma system known as define, measure, analyze, improve, and control (DMAIC) is used to identify and solve problems. The laboratory turnaround time for individual tests, total delay time in the sample reception area, and percentage of steps involving risks of medical errors and biological hazards in the overall process are measured. The pre-analytical process in the reception area was improved by eliminating 3 h and 22.5 min of non-value-adding work. Turnaround time also improved for stat samples from 68 to 59 min after applying Lean. Steps prone to medical errors and posing potential biological hazards to receptionists were reduced from 30% to 3%. Successful implementation of Lean Six Sigma significantly improved all of the selected performance metrics. This quality-improvement methodology has the potential to significantly improve clinical laboratories. © 2017 Wiley Periodicals, Inc.

  1. Electronic Timekeeping: North Dakota State University Improves Payroll Processing.

    Science.gov (United States)

    Vetter, Ronald J.; And Others

    1993-01-01

    North Dakota State University has adopted automated timekeeping to improve the efficiency and effectiveness of payroll processing. The microcomputer-based system accurately records and computes employee time, tracks labor distribution, accommodates complex labor policies and company pay practices, provides automatic data processing and reporting,…

  2. Improving medical decisions for incapacitated persons: does focusing on "accurate predictions" lead to an inaccurate picture?

    Science.gov (United States)

    Kim, Scott Y H

    2014-04-01

    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients' preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model faces significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by the 60%-80% reliability of people's preferences for future medical decisions, a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of persons may not even have preferences to predict. Third, many, perhaps most, people express their autonomy just as much by entrusting their loved ones to exercise their judgment as by desiring to specifically control future decisions. Surrogate decision making faces none of these issues and, in fact, may be more efficient, accurate, and authoritative than is commonly assumed.

  3. Infrastructure and methods for real-time predictions of the 2014 dengue fever season in Thailand

    OpenAIRE

    Reich, Nicholas G.; Lauer, Stephen A.; Sakrejda, Krzysztof; Iamsirithaworn, Sopon; Hinjoy, Soawapak; Suangtho, Paphanij; Suthachana, Suthanun; Clapham, Hannah E.; Salje, Henrik; Cummings, Derek A. T.; Lessler, Justin

    2015-01-01

    Epidemics of communicable diseases place a huge burden on public health infrastructures across the world. Producing accurate and actionable forecasts of infectious disease incidence at short and long time scales will improve public health response to outbreaks. However, scientists and public health officials face many obstacles in trying to create accurate and actionable real-time forecasts of infectious disease incidence. Dengue is a mosquito-borne virus that annually infects over 400 millio...

  4. Growing slower and less accurate: adult age differences in time-accuracy functions for recall and recognition from episodic memory.

    Science.gov (United States)

    Verhaeghen, P; Vandenbroucke, A; Dierckx, V

    1998-01-01

    In 2 experiments, time-accuracy curves were derived for recall and recognition from episodic memory for both young and older adults. In Experiment 1, time-accuracy functions were estimated for free list recall and list recall cued by rhyme words or semantic associations for 13 young and 13 older participants. In Experiment 2, time-accuracy functions were estimated for recognition of word lists with or without distractor items and with or without articulatory suppression for 29 young and 30 older participants. In both studies, age differences were found in the asymptote (i.e., the maximum level of performance attainable) and in the rate of approach toward the asymptote (i.e., the steepness of the curve). These two parameters were only modestly correlated. In Experiment 2, it was found that 89% of the age-related variance in the rate of approach and 62% of the age-related variance in the asymptote were explained by perceptual speed. The data point to the existence of 2 distinct effects of aging on episodic memory, namely a dynamic effect (growing slower) and an asymptotic effect (growing less accurate). The absence of Age x Condition interactions in the age-related parameters in either experiment points to the rather general nature of both aging effects.
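
    The two parameters described above (asymptote and rate of approach) are typically captured by a negatively accelerated time-accuracy function. The sketch below uses a common exponential parameterization; the specific numbers are hypothetical, chosen only to illustrate how a lower asymptote and a shallower rate each reduce accuracy.

    ```python
    import math

    # Negatively accelerated time-accuracy function: accuracy rises from zero
    # after an intercept toward an asymptote at a given rate. All parameter
    # values below are hypothetical illustrations of the two aging effects.

    def time_accuracy(t: float, asymptote: float, rate: float, intercept: float) -> float:
        """Predicted accuracy after processing time t (same units as intercept)."""
        if t <= intercept:
            return 0.0
        return asymptote * (1.0 - math.exp(-rate * (t - intercept)))

    # Hypothetical curves: the younger group has both a higher asymptote
    # ("growing less accurate") and a steeper rate ("growing slower").
    def young(t):
        return time_accuracy(t, asymptote=0.95, rate=2.0, intercept=0.3)

    def older(t):
        return time_accuracy(t, asymptote=0.85, rate=1.2, intercept=0.3)
    ```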

  5. Annual land cover change mapping using MODIS time series to improve emissions inventories.

    Science.gov (United States)

    López Saldaña, G.; Quaife, T. L.; Clifford, D.

    2014-12-01

    Understanding and quantifying land surface changes is necessary for estimating greenhouse gas and ammonia emissions, and for meeting air quality limits and targets. More sophisticated inventory methodologies, at least for key emission sources, are needed to satisfy policy-driven air quality directives. Quantifying land cover changes on an annual basis requires greater spatial and temporal disaggregation of input data. The main aim of this study is to develop a methodology for using Earth Observations (EO) to identify annual land surface changes that will improve emissions inventories from agriculture and land use/land use change and forestry (LULUCF) in the UK. The first goal is to find the sets of input features that most accurately describe the surface dynamics. In order to identify annual and inter-annual land surface changes, a time series of surface reflectance was used to capture seasonal variability. Daily surface reflectance images from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 500 m resolution were used to invert a Bidirectional Reflectance Distribution Function (BRDF) model to create the seamless time series. Given the limited number of cloud-free observations, a BRDF climatology was used to constrain the model inversion and, where no high-quality observations were available at all, as a gap filler. The Land Cover Map 2007 (LC2007) produced by the Centre for Ecology & Hydrology (CEH) was used for training and testing purposes. A prototype land cover product was created for 2006 to 2008. Several machine learning classifiers were tested, as well as different sets of input features ranging from the BRDF parameters to spectral albedo. We will present the results of the time series development and the first exercises in creating the prototype land cover product.

  6. Activity on improving performance of time-of-flight detector at CDF

    International Nuclear Information System (INIS)

    Menzione, A.; Cerri, C.; Vataga, E.; Prokoshin, F.; Tokar, S.

    2002-01-01

    The paper describes activity on improving the time resolution of the Time-of-Flight detector at CDF. The main goal of the detector is the identification of kaons and pions for b-quark (B-meson) flavour tagging. Construction of the detector has been described as well as proposals on detector design changes to improve its time resolution. Monte Carlo simulation of the detector response to MIP was performed. The results of the simulation showed that the proposed modifications (at least with currently available materials) bring modest or no improvement of the detector time resolution. An automated set-up was assembled to test and check out the changes in the electronic readout system of the detector. Sophisticated software has been developed for this set-up to provide control of the system as well as processing and presentation of data from the detector. This software can perform various tests using different implementations of the hardware set-up

  7. Determining the best phenological state for accurate mapping of Phragmites australis in wetlands using time series multispectral satellite data

    Science.gov (United States)

    Rupasinghe, P. A.; Markle, C. E.; Marcaccio, J. V.; Chow-Fraser, P.

    2017-12-01

    Phragmites australis (European common reed) is a relatively recent invader of wetlands and beaches in Ontario. It can establish large homogeneous stands within wetlands and disperse widely throughout the landscape by wind and vehicular traffic. A first step in managing this invasive species is the accurate mapping and quantification of its distribution. This is challenging because Phragmites is distributed over a large spatial extent, which makes mapping more costly and time consuming. Here, we used freely available multispectral satellite images taken monthly (cloud-free images as available) for the calendar year to determine the optimum phenological state of Phragmites that would allow it to be accurately identified using remote sensing data. We analyzed time series Landsat-8 OLI and Sentinel-2 images for Big Creek Wildlife Area, ON using image classification (Support Vector Machines), the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Water Index (NDWI). We used field sampling data and high-resolution imagery collected using an Unmanned Aerial Vehicle (UAV; 8 cm spatial resolution) as training data and for the validation of the classified images. The accuracy for all land cover classes and for Phragmites alone was low at both the start and end of the calendar year, but reached overall accuracy >85% by mid to late summer. The highest classification accuracies for Landsat-8 OLI were associated with late July and early August imagery. We observed similar trends using the Sentinel-2 images, with higher overall accuracy for all land cover classes and for Phragmites alone from late July to late September. During this period, we found the greatest difference between Phragmites and Typha, commonly confused classes, with respect to near-infrared and shortwave infrared reflectance. Therefore, the unique spectral signature of Phragmites can be attributed to both the level of greenness and factors related to water content in the leaves during late
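
    The two spectral indices used in the study are simple normalized band differences. A minimal sketch, assuming the McFeeters (green vs. near-infrared) formulation of NDWI and hypothetical reflectance values:

    ```python
    # The two band-ratio indices named above. Reflectances are unitless
    # values in [0, 1]; the sample numbers are hypothetical, not from the
    # study.

    def ndvi(nir: float, red: float) -> float:
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    def ndwi(green: float, nir: float) -> float:
        """Normalized Difference Water Index (McFeeters green/NIR form)."""
        return (green - nir) / (green + nir)

    # Hypothetical mid-summer reflectances for a dense reed stand: high NDVI
    # (vigorous vegetation) and negative NDWI (vegetated, not open water).
    summer_ndvi = ndvi(nir=0.45, red=0.05)
    summer_ndwi = ndwi(green=0.10, nir=0.45)
    ```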

  8. Caffeinated nitric oxide-releasing lozenge improves cycling time trial performance.

    Science.gov (United States)

    Lee, J; Kim, H T; Solares, G J; Kim, K; Ding, Z; Ivy, J L

    2015-02-01

    Boosting nitric oxide production during exercise by various means has been found to improve exercise performance. We investigated the effects of a nitric oxide releasing lozenge with added caffeine (70 mg) on oxygen consumption during steady-state exercise and cycling time trial performance using a double-blinded randomized, crossover experimental design. 15 moderately trained cyclists (7 females and 8 males) were randomly assigned to ingest the caffeinated nitric oxide lozenge or placebo 5 min before exercise. Oxygen consumption and blood lactate were assessed at rest and at 50%, 65% and 75% maximal oxygen consumption. Exercise performance was assessed by time to complete a simulated 20.15 km cycling time-trial course. No significant treatment effects for oxygen consumption or blood lactate at rest or during steady-state exercise were observed. However, time-trial performance was improved by 2.1% (p<0.01) when participants consumed the nitric oxide lozenge (2,424±69 s) compared to placebo (2,476±78 s) and without a significant difference in rating of perceived exertion. These results suggest that acute supplementation with a caffeinated nitric oxide releasing lozenge may be a practical and effective means of improving aerobic exercise performance. © Georg Thieme Verlag KG Stuttgart · New York.

  9. A simple, fast, and accurate thermodynamic-based approach for transfer and prediction of gas chromatography retention times between columns and instruments Part III: Retention time prediction on target column.

    Science.gov (United States)

    Hou, Siyuan; Stevenson, Keisean A J M; Harynuk, James J

    2018-03-27

    This is the third part of a three-part series of papers. In Part I, we presented a method for determining the actual effective geometry of a reference column as well as the thermodynamic-based parameters of a set of probe compounds in an in-house mixture. Part II introduced an approach for estimating the actual effective geometry of a target column by collecting retention data of the same mixture of probe compounds on the target column and using their thermodynamic parameters, acquired on the reference column, as a bridge between both systems. Part III, presented here, demonstrates the retention time transfer and prediction from the reference column to the target column using experimental data for a separate mixture of compounds. To predict the retention time of a new compound, we first estimate its thermodynamic-based parameters on the reference column (using geometric parameters determined previously). The compound's retention time on a second column (of previously determined geometry) is then predicted. The models and the associated optimization algorithms were tested using simulated and experimental data. The accuracy of predicted retention times shows that the proposed approach is simple, fast, and accurate for retention time transfer and prediction between gas chromatography columns. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Turning education into action: Impact of a collective social education approach to improve nurses' ability to recognize and accurately assess delirium in hospitalized older patients.

    Science.gov (United States)

    Travers, Catherine; Henderson, Amanda; Graham, Fred; Beattie, Elizabeth

    2018-03-01

    Although cognitive impairment, including dementia and delirium, is common in older hospital patients, it is not well recognized or managed by hospital staff, potentially resulting in adverse events. This paper describes, and reports on the impact of, a collective social education approach to improving both nurses' knowledge of delirium and their screening for it. Thirty-four experienced nurses from six hospital wards became Cognition Champions (CogChamps) to lead their wards in a collective social education process about cognitive impairment and the assessment of delirium. At the outset, the CogChamps were provided with comprehensive education about dementia and delirium from a multidisciplinary team of clinicians. Their knowledge was assessed to ascertain that they had the requisite understanding to engage in education as a collective social process, namely, with each other and their local teams. Following this, they developed ward-specific Action Plans in collaboration with their teams aimed at educating ward nurses and evaluating their ability to accurately assess patients for delirium and care for them. The plans were implemented over five months. The broader nursing teams' knowledge was assessed, together with their ability to accurately assess patients for delirium. Each ward implemented their Action Plan to varying degrees, and key achievements included the education of a majority of ward nurses about delirium and the certification of the majority as competent to assess patients for delirium using the Confusion Assessment Method. Two wards collected pre- and post-audit data that demonstrated a substantial improvement in delirium screening rates. The education process led by CogChamps and supported by educators and clinical experts provides an example of successfully educating nurses about delirium and improving screening rates of patients for delirium. ACTRN 12617000563369. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Development of anatomically and dielectrically accurate breast phantoms for microwave imaging applications

    Science.gov (United States)

    O'Halloran, M.; Lohfeld, S.; Ruvio, G.; Browne, J.; Krewer, F.; Ribeiro, C. O.; Inacio Pita, V. C.; Conceicao, R. C.; Jones, E.; Glavin, M.

    2014-05-01

    Breast cancer is one of the most common cancers in women. In the United States alone, it accounts for 31% of new cancer cases, and is second only to lung cancer as the leading cause of deaths in American women. More than 184,000 new cases of breast cancer are diagnosed each year resulting in approximately 41,000 deaths. Early detection and intervention is one of the most significant factors in improving the survival rates and quality of life experienced by breast cancer sufferers, since this is the time when treatment is most effective. One of the most promising breast imaging modalities is microwave imaging. The physical basis of active microwave imaging is the dielectric contrast between normal and malignant breast tissue that exists at microwave frequencies. The dielectric contrast is mainly due to the increased water content present in the cancerous tissue. Microwave imaging is non-ionizing, does not require breast compression, is less invasive than X-ray mammography, and is potentially low cost. While several prototype microwave breast imaging systems are currently in various stages of development, the design and fabrication of anatomically and dielectrically representative breast phantoms to evaluate these systems is often problematic. While some existing phantoms are composed of dielectrically representative materials, they rarely accurately represent the shape and size of a typical breast. Conversely, several phantoms have been developed to accurately model the shape of the human breast, but have inappropriate dielectric properties. This study will briefly review existing phantoms before describing the development of a more accurate and practical breast phantom for the evaluation of microwave breast imaging systems.

  12. Accurate and systematically improvable density functional theory embedding for correlated wavefunctions

    International Nuclear Information System (INIS)

    Goodpaster, Jason D.; Barnes, Taylor A.; Miller, Thomas F.; Manby, Frederick R.

    2014-01-01

    We analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using wavefunction methods, and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We test an MP2 correction for this term and demonstrate that the corrected embedding scheme accurately reproduces wavefunction calculations for a series of chemical reactions. Our projector-based embedding method uses localized occupied orbitals to partition the system; as with other local correlation methods, abrupt changes in the character of the localized orbitals along a reaction coordinate can lead to discontinuities in the embedded energy, but we show that these discontinuities are small and can be systematically reduced by increasing the size of the active region. Convergence of reaction energies with respect to the size of the active subsystem is shown to be rapid for all cases where the density functional treatment is able to capture the polarization of the environment, even in conjugated systems, and even when the partition cuts across a double bond

  13. Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques

    Science.gov (United States)

    Lee, Hanbong

    2016-01-01

    Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers to make gate pushback decisions and improve the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated with actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how the prediction accuracy can be affected by the operational complexity at this airport and how we can improve the fast-time simulation model before implementing it with an airport scheduling algorithm in a real-time environment.
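
    As a toy illustration of the data-driven alternative, not LINOS or the actual SARDA analysis, a taxi time can be regressed on a single surface feature such as taxi distance. The training data below are invented:

    ```python
    # Toy data-driven taxi-time model: ordinary least squares of taxi time on
    # a single feature (taxi distance). The data points are made up; a real
    # model would use many surface-traffic features.

    def fit_line(xs, ys):
        """Return slope and intercept of the least-squares line y = a*x + b."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx

    dist_km = [1.0, 1.5, 2.0, 2.5, 3.0]        # made-up taxi distances
    taxi_min = [6.1, 8.0, 10.2, 12.1, 13.9]    # made-up observed taxi times

    a, b = fit_line(dist_km, taxi_min)

    def predict_taxi_time(d_km: float) -> float:
        """Predicted taxi time (minutes) for a given taxi distance."""
        return a * d_km + b
    ```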

  14. Exploring the relationship between sequence similarity and accurate phylogenetic trees.

    Science.gov (United States)

    Cantarel, Brandi L; Morrison, Hilary G; Pearson, William

    2006-11-01

    significantly decrease phylogenetic accuracy. In general, although less-divergent sequence families produce more accurate trees, the likelihood of estimating an accurate tree is most dependent on whether radiation in the family was ancient or recent. Accuracy can be improved by combining genes from the same organism when creating species trees or by selecting protein families with the best bootstrap values in comprehensive studies.

  15. Accurate treatment of nanoelectronics through improved description of van der Waals Interactions

    DEFF Research Database (Denmark)

    Kelkkanen, Kari André

    , or even as broken. The hexamer experience of the criteria and effects of vdW forces can be used in interpretation of results of molecular dynamics (MD) simulations of ambient water, where vdW forces qualitatively result in liquid water with fewer, more distorted HBs. This is interesting...... and relevance of van der Waals (vdW) forces in molecular surface adsorption and water through density- functional theory (DFT), using the exchange-correlation functional vdW-DF [Dion et al., Phys. Rev. Lett. 92, 246401 (2004)] and developments based on it. Results are first computed for adsorption with vd...... functionals. DFT calculations are performed for water dimer and hexamer, and for liquid water. Calculations on four low-energetic isomers of the water hexamer show that the vdW-DF accurately determines the energetic trend on these small clusters. How- ever, the dissociation-energy values with the vd...

  16. Fast and accurate calculation of the properties of water and steam for simulation

    International Nuclear Information System (INIS)

    Szegi, Zs.; Gacs, A.

    1990-01-01

    A basic principle simulator was developed at the CRIP, Budapest, for real-time simulation of the transients of WWER-440 type nuclear power plants. Its integral part is the fast and accurate calculation of the thermodynamic properties of water and steam. To eliminate successive approximations, the model system of the secondary coolant circuit requires binary forms which are known as inverse functions, continuous when crossing the saturation line, and accurate and coherent for all argument combinations. A solution which reduces the computer memory and execution time demand is reported. (author) 36 refs.; 5 figs.; 3 tabs

  17. An accurate algorithm to calculate the Hurst exponent of self-similar processes

    International Nuclear Information System (INIS)

    Fernández-Martínez, M.; Sánchez-Granero, M.A.; Trinidad Segovia, J.E.; Román-Sánchez, I.M.

    2014-01-01

    In this paper, we introduce a new approach which generalizes the GM2 algorithm (introduced in Sánchez-Granero et al. (2008) [52]) as well as fractal dimension algorithms (FD1, FD2 and FD3) (first appeared in Sánchez-Granero et al. (2012) [51]), providing an accurate algorithm to calculate the Hurst exponent of self-similar processes. We prove that this algorithm performs properly in the case of short time series when fractional Brownian motions and Lévy stable motions are considered. We conclude the paper with a dynamic study of the Hurst exponent evolution in the S&P 500 index stocks. - Highlights: • We provide a new approach to properly calculate the Hurst exponent. • This generalizes FD algorithms and GM2, introduced previously by the authors. • This method (FD4) is especially appropriate for short time series. • FD4 may be used in both unifractal and multifractal contexts. • As an empirical application, we show that S&P 500 stocks improved their efficiency.
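
    For readers unfamiliar with the quantity being estimated, the sketch below shows a classic variance-of-increments Hurst estimator, not the FD4/GM2 algorithms of the paper: for a self-similar process the spread of lagged increments scales like lag^H, so a log-log fit of standard deviation against lag estimates H. A plain Brownian motion should give H ≈ 0.5.

    ```python
    import math
    import random

    # Classic variance-of-increments Hurst estimator (not the FD4/GM2
    # methods of the paper): std(X[t+lag] - X[t]) scales like lag**H for a
    # self-similar process, so H is the slope of log(std) against log(lag).

    def hurst_variance(series, lags=(1, 2, 4, 8, 16)):
        xs, ys = [], []
        for lag in lags:
            diffs = [series[i + lag] - series[i] for i in range(len(series) - lag)]
            m = sum(diffs) / len(diffs)
            sd = math.sqrt(sum((d - m) ** 2 for d in diffs) / len(diffs))
            xs.append(math.log(lag))
            ys.append(math.log(sd))
        # Least-squares slope of log(sd) on log(lag).
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)

    # Sanity check on a random walk, whose theoretical exponent is H = 0.5.
    random.seed(0)
    walk = [0.0]
    for _ in range(10000):
        walk.append(walk[-1] + random.gauss(0.0, 1.0))
    h = hurst_variance(walk)
    ```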

  18. An accurate algorithm to calculate the Hurst exponent of self-similar processes

    Energy Technology Data Exchange (ETDEWEB)

    Fernández-Martínez, M., E-mail: fmm124@ual.es [Department of Mathematics, Faculty of Science, Universidad de Almería, 04120 Almería (Spain); Sánchez-Granero, M.A., E-mail: misanche@ual.es [Department of Mathematics, Faculty of Science, Universidad de Almería, 04120 Almería (Spain); Trinidad Segovia, J.E., E-mail: jetrini@ual.es [Department of Accounting and Finance, Faculty of Economics and Business, Universidad de Almería, 04120 Almería (Spain); Román-Sánchez, I.M., E-mail: iroman@ual.es [Department of Accounting and Finance, Faculty of Economics and Business, Universidad de Almería, 04120 Almería (Spain)

    2014-06-27

    In this paper, we introduce a new approach which generalizes the GM2 algorithm (introduced in Sánchez-Granero et al. (2008) [52]) as well as fractal dimension algorithms (FD1, FD2 and FD3) (first appeared in Sánchez-Granero et al. (2012) [51]), providing an accurate algorithm to calculate the Hurst exponent of self-similar processes. We prove that this algorithm performs properly in the case of short time series when fractional Brownian motions and Lévy stable motions are considered. We conclude the paper with a dynamic study of the Hurst exponent evolution in the S&P 500 index stocks. - Highlights: • We provide a new approach to properly calculate the Hurst exponent. • This generalizes FD algorithms and GM2, introduced previously by the authors. • This method (FD4) is especially appropriate for short time series. • FD4 may be used in both unifractal and multifractal contexts. • As an empirical application, we show that S&P 500 stocks improved their efficiency.

  19. Incorporation of exact boundary conditions into a discontinuous galerkin finite element method for accurately solving 2d time-dependent maxwell equations

    KAUST Repository

    Sirenko, Kostyantyn

    2013-01-01

    A scheme that discretizes exact absorbing boundary conditions (EACs) to incorporate them into a time-domain discontinuous Galerkin finite element method (TD-DG-FEM) is described. The proposed TD-DG-FEM with EACs is used for accurately characterizing transient electromagnetic wave interactions on two-dimensional waveguides. Numerical results demonstrate the proposed method's superiority over the TD-DG-FEM that employs approximate boundary conditions and perfectly matched layers. Additionally, it is shown that the proposed method can produce the solution with ten-eleven digit accuracy when high-order spatial basis functions are used to discretize the Maxwell equations as well as the EACs. © 1963-2012 IEEE.

  20. Accurate pan-specific prediction of peptide-MHC class II binding affinity with improved binding core identification

    DEFF Research Database (Denmark)

    Andreatta, Massimo; Karosiene, Edita; Rasmussen, Michael

    2015-01-01

    with known binding registers, the new method NetMHCIIpan-3.1 significantly outperformed the earlier 3.0 version. We illustrate the impact of accurate binding core identification for the interpretation of T cell cross-reactivity using tetramer double staining with a CMV epitope and its variants mapped...

  1. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is formulated for fast calculation. To obtain an accurate compact resist model, it is necessary to fix a complicated non-linear model function, but it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (Convolutional Neural Networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  2. Improved test of time dilation in special relativity.

    Science.gov (United States)

    Saathoff, G; Karpuk, S; Eisenbarth, U; Huber, G; Krohn, S; Muñoz Horta, R; Reinhardt, S; Schwalm, D; Wolf, A; Gwinner, G

    2003-11-07

    An improved test of time dilation in special relativity has been performed using laser spectroscopy on fast ions at the heavy-ion storage-ring TSR in Heidelberg. The Doppler-shifted frequencies of a two-level transition in ⁷Li⁺ ions at v=0.064c have been measured in the forward and backward direction to an accuracy of Δν/ν = 1×10⁻⁹ using collinear saturation spectroscopy. The result confirms the relativistic Doppler formula and sets a new limit of 2.2×10⁻⁷ for deviations from the time dilation factor γ_SR = (1−v²/c²)^(−1/2).
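The forward/backward measurement works because special relativity predicts that the product of the parallel and antiparallel Doppler-shifted frequencies equals the squared rest frequency exactly, independent of the ion speed. A quick numerical check (normalized rest frequency; helper name is illustrative):

```python
import math

def doppler(nu0, beta, antiparallel=False):
    """Relativistic Doppler shift of rest-frame frequency nu0 for an
    emitter moving with speed beta = v/c along the line of sight."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    factor = (1.0 - beta) if antiparallel else (1.0 + beta)
    return nu0 * gamma * factor

beta = 0.064   # ion speed used at the TSR
nu0 = 1.0      # rest-frame transition frequency (normalized)
nu_p = doppler(nu0, beta)                      # forward beam
nu_a = doppler(nu0, beta, antiparallel=True)   # backward beam
# SR prediction: nu_p * nu_a = nu0**2 * gamma**2 * (1 - beta**2) = nu0**2
product = nu_p * nu_a
```

Any deviation of `product` from `nu0**2` would signal a violation of the time dilation factor, which is what the experiment bounds at the 2.2×10⁻⁷ level.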

  3. Improving The Accuracy Of Bluetooth Based Travel Time Estimation Using Low-Level Sensor Data

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Tørholm Christensen, Lars; Krishnan, Rajesh

    2013-01-01

    triggered by a single device. This could lead to location ambiguity and reduced accuracy of travel time estimation. Therefore, the accuracy of travel time estimations by Bluetooth Technology (BT) depends upon how location ambiguity is handled by the estimation method. The issue of multiple detection events...... in the context of travel time estimation by BT has been considered by various researchers. However, treatment of this issue has remained simplistic so far. Most previous studies simply used the first detection event (Enter-Enter) as the best estimate. No systematic analysis for exploring the most accurate method...... of estimating travel time using multiple detection events has been conducted. In this study different aspects of BT detection zone, including size and its impact on the accuracy of travel time estimation, are discussed. Moreover, four alternative methods are applied; namely, Enter-Enter, Leave-Leave, Peak...
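The alternative matching rules named above (Enter-Enter, Leave-Leave, ...) can be made concrete with a toy helper; the function and timestamps below are hypothetical illustrations, not the study's estimation code:

```python
def travel_time(events_a, events_b, method="enter-enter"):
    """Travel time between two Bluetooth detection zones from the sorted
    detection timestamps (seconds) of one device inside each zone."""
    if method == "enter-enter":        # first detection at each zone
        return events_b[0] - events_a[0]
    if method == "leave-leave":        # last detection at each zone
        return events_b[-1] - events_a[-1]
    raise ValueError("unknown method: %s" % method)

at_a = [10.0, 12.5, 14.0]   # multiple detections while crossing zone A
at_b = [130.0, 133.0]       # detections at the downstream zone B
ee = travel_time(at_a, at_b, "enter-enter")   # 130.0 - 10.0 = 120.0 s
ll = travel_time(at_a, at_b, "leave-leave")   # 133.0 - 14.0 = 119.0 s
```

The spread between the two estimates grows with the size of the detection zone, which is exactly the location ambiguity the study analyzes.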

  4. SU-E-T-373: A Motorized Stage for Fast and Accurate QA of Machine Isocenter

    International Nuclear Information System (INIS)

    Moore, J; Velarde, E; Wong, J

    2014-01-01

    Purpose: Precision delivery of radiation dose relies on accurate knowledge of the machine isocenter under a variety of machine motions. This is typically determined by performing a Winston-Lutz test, consisting of imaging a known object at multiple gantry/collimator/table angles and ensuring that the maximum offset is within specified tolerance. The first step in the Winston-Lutz test is careful placement of a ball bearing at the machine isocenter, as determined by repeated imaging and shifting until accurate placement has been achieved. Conventionally this is performed by adjusting a stage manually using vernier scales, which carry the limitation that each adjustment must be done inside the treatment room, with the risks of inaccurate adjustment of the scale and physical bumping of the table. It is proposed to use a motorized system controlled from outside the room to improve the required time and accuracy of these tests. Methods: The three-dimensional vernier scales are replaced by three motors with an accuracy of 1 micron and a range of 25.4 mm, connected via USB to a computer in the control room. Software is designed which automatically detects the motors, assigns them to the proper axes, and allows small shifts to be entered and performed. Input values match calculated offsets in magnitude and sign to reduce conversion errors. Speed of setup, number of iterations to setup, and accuracy of final placement are assessed. Results: Automatic BB placement required 2.25 iterations and 13 minutes on average, while manual placement required 3.76 iterations and 37.5 minutes. The average final XYZ offsets are 0.02 cm, 0.01 cm, 0.04 cm for automatic setup and 0.04 cm, 0.02 cm, 0.04 cm for manual setup. Conclusion: Automatic placement decreased the time and repeat iterations for setup while improving placement accuracy. Automatic placement greatly reduces the time required to perform QA.

  5. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Directory of Open Access Journals (Sweden)

    Fan Ni

    Full Text Available Caches play an important role in embedded systems to bridge the performance gap between fast processor and slow memory. And prefetching mechanisms are proposed to further improve the cache performance. While in real-time systems, the application of caches complicates the Worst-Case Execution Time (WCET) analysis due to its unpredictable behavior. Modern embedded processors often equip locking mechanism to improve timing predictability of the instruction cache. However, locking the whole cache may degrade the cache performance and increase the WCET of the real-time application. In this paper, we proposed an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed as BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. The estimations on typical real-time applications show that the partial cache locking mechanism shows remarkable WCET improvement over static analysis and full cache locking.

  6. Combining instruction prefetching with partial cache locking to improve WCET in real-time systems.

    Science.gov (United States)

    Ni, Fan; Long, Xiang; Wan, Han; Gao, Xiaopeng

    2013-01-01

    Caches play an important role in embedded systems to bridge the performance gap between fast processor and slow memory. And prefetching mechanisms are proposed to further improve the cache performance. While in real-time systems, the application of caches complicates the Worst-Case Execution Time (WCET) analysis due to its unpredictable behavior. Modern embedded processors often equip locking mechanism to improve timing predictability of the instruction cache. However, locking the whole cache may degrade the cache performance and increase the WCET of the real-time application. In this paper, we proposed an instruction-prefetching combined partial cache locking mechanism, which combines an instruction prefetching mechanism (termed as BBIP) with partial cache locking to improve the WCET estimates of real-time applications. BBIP is an instruction prefetching mechanism we have already proposed to improve the worst-case cache performance and in turn the worst-case execution time. The estimations on typical real-time applications show that the partial cache locking mechanism shows remarkable WCET improvement over static analysis and full cache locking.

  7. Accurate and precise DNA quantification in the presence of different amplification efficiencies using an improved Cy0 method.

    Science.gov (United States)

    Guescini, Michele; Sisti, Davide; Rocchi, Marco B L; Panebianco, Renato; Tibollo, Pasquale; Stocchi, Vilberto

    2013-01-01

    Quantitative real-time PCR represents a highly sensitive and powerful technology for the quantification of DNA. Although real-time PCR is well accepted as the gold standard in nucleic acid quantification, there is a largely unexplored area of experimental conditions that limit the application of the Ct method. As an alternative, our research team has recently proposed the Cy0 method, which can compensate for small amplification variations among the samples being compared. However, when there is a marked decrease in amplification efficiency, the Cy0 is impaired, hence determining reaction efficiency is essential to achieve a reliable quantification. The proposed improvement in Cy0 is based on the use of the kinetic parameters calculated in the curve inflection point to compensate for efficiency variations. Three experimental models were used: inhibition of primer extension, non-optimal primer annealing and a very small biological sample. In all these models, the improved Cy0 method increased quantification accuracy up to about 500% without affecting precision. Furthermore, the stability of this procedure was enhanced integrating it with the SOD method. In short, the improved Cy0 method represents a simple yet powerful approach for reliable DNA quantification even in the presence of marked efficiency variations.
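The geometry behind a Cy0-style quantification can be illustrated for a plain logistic amplification curve: Cy0 is the abscissa intercept of the tangent drawn at the curve's inflection point. The paper fits a richer sigmoid (a Richards-type function); the closed form below holds only for the simple logistic and is an illustrative sketch:

```python
import math

def logistic(x, fmax, k, x0):
    """Sigmoid amplification curve F(x) = fmax / (1 + exp(-k*(x - x0)))."""
    return fmax / (1.0 + math.exp(-k * (x - x0)))

def cy0_logistic(fmax, k, x0):
    """Abscissa intercept of the tangent at the inflection point.
    For this logistic the inflection is at x0, where F = fmax/2 and
    F' = k*fmax/4, so Cy0 = x0 - (fmax/2)/(k*fmax/4) = x0 - 2/k."""
    return x0 - 2.0 / k

# toy curve: plateau 100 a.u., slope parameter 0.8 per cycle, inflection at cycle 25
fmax, k, x0 = 100.0, 0.8, 25.0
half = logistic(x0, fmax, k, x0)   # fluorescence at the inflection: fmax/2
cy0 = cy0_logistic(fmax, k, x0)    # 25 - 2/0.8 = 22.5 cycles
```

Because Cy0 depends on the slope at the inflection, it shifts with amplification efficiency, which is why the improved method compensates using the kinetic parameters estimated at that point.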

  8. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    Science.gov (United States)

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  9. Accurate control testing for clay liner permeability

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R J

    1991-08-01

    Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10⁻⁹ m/s and a compacted illite clay having a permeability coefficient of 2.0×10⁻¹¹ m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.

  10. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire, sensors are needed to measure the environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless

  11. An accurate and rapid continuous wavelet dynamic time warping algorithm for unbalanced global mapping in nanopore sequencing

    KAUST Repository

    Han, Renmin

    2017-12-24

    Long-reads, point-of-care, and PCR-free are the promises brought by nanopore sequencing. Among various steps in nanopore data analysis, the global mapping between the raw electrical current signal sequence and the expected signal sequence from the pore model serves as the key building block to base calling, reads mapping, variant identification, and methylation detection. However, the ultra-long reads of nanopore sequencing and an order of magnitude difference in the sampling speeds of the two sequences make the classical dynamic time warping (DTW) and its variants infeasible to solve the problem. Here, we propose a novel multi-level DTW algorithm, cwDTW, based on continuous wavelet transforms with different scales of the two signal sequences. Our algorithm starts from low-resolution wavelet transforms of the two sequences, such that the transformed sequences are short and have similar sampling rates. Then the peaks and nadirs of the transformed sequences are extracted to form feature sequences with similar lengths, which can be easily mapped by the original DTW. Our algorithm then recursively projects the warping path from a lower-resolution level to a higher-resolution one by building a context-dependent boundary and enabling a constrained search for the warping path in the latter. Comprehensive experiments on two real nanopore datasets on human and on Pandoraea pnomenusa, as well as two benchmark datasets from previous studies, demonstrate the efficiency and effectiveness of the proposed algorithm. In particular, cwDTW can almost always generate warping paths that are very close to the original DTW, which are remarkably more accurate than the state-of-the-art methods including FastDTW and PrunedDTW. Meanwhile, on the real nanopore datasets, cwDTW is about 440 times faster than FastDTW and 3000 times faster than the original DTW. Our program is available at https://github.com/realbigws/cwDTW.
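For reference, the classical DTW baseline that cwDTW accelerates can be written in a few lines; this is the textbook O(nm) recurrence, not the authors' multi-level implementation:

```python
def dtw(a, b):
    """Classical dynamic time warping distance between two numeric
    sequences, using absolute difference as the local cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# the shorter sequence warps onto the longer one at zero cost
dist = dtw([1, 2, 3, 4], [1, 2, 2, 3, 4])
```

The quadratic table is exactly what becomes infeasible for ultra-long nanopore signals, motivating the wavelet-based coarse-to-fine strategy of cwDTW.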

  12. Time effectiveness of capillary effect improvement of ramie fabrics processed by RF glow discharging

    International Nuclear Information System (INIS)

    Wang Zhiwen; Wei Weixing; He Yanhe; Zhao Yuanqing; Pan Liyiji; Li Xuemei; Shi Shaodui; Li Guangxin

    2010-01-01

    The time effectiveness of the capillary effect improvement of ramie fabrics processed by RF glow discharging was studied. The ramie fabrics were treated with plasmas of different gases (O₂, N₂, Ar) under different parameters (such as pressure, power and time). The capillary effect of the processed fabrics was then tested at different times after treatment. The results indicate that the capillary effect of ramie fabrics processed by RF glow discharging is improved; the improvement decreases rapidly at first, then slowly, and becomes stable after 15 days, indicating that the improvement of the ramie fabric capillarity has good time effectiveness. The best capillary effect improvement was obtained with oxygen plasma at 100 W and 40 Pa for 20 min. (authors)

  13. From continuous improvement to collaborative improvement: scope, scale, skill and social networking in collaborative improvement

    NARCIS (Netherlands)

    Middel, H.G.A.; Groen, Arend J.; Fisscher, O.A.M.

    2004-01-01

    More than ever, companies are challenged to improve their performance and respond quickly and accurately to changes within the market. As the competitive battlefield moves towards the level of networks of organisations, the individual firm becomes an inadequate entity for identifying improvements.

  14. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications, ranging from surveying small deformations of civil engineering structures (e
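The simplest component of the functional model fitted to such coordinate series, a linear tectonic rate, reduces to ordinary least squares. A sketch under a white-noise assumption only (real analyses add seasonal terms, co-seismic offsets and power-law noise models, which change the rate uncertainty substantially):

```python
def fit_trend(t, y):
    """Least-squares linear trend for a position time series:
    returns (rate, intercept) for the model y = rate*t + intercept."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    rate = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
           sum((ti - mt) ** 2 for ti in t)
    return rate, my - rate * mt

t = [i / 365.25 for i in range(730)]   # two years of daily epochs (years)
y = [2.0 * ti + 5.0 for ti in t]       # synthetic 2 mm/yr motion, 5 mm offset
rate, offset = fit_trend(t, y)
```

With correlated (flicker or random-walk) noise the point estimate stays similar but the formal error from white-noise least squares is far too optimistic, which is the review's central point about stochastic model choice.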

  15. Fast and accurate protein substructure searching with simulated annealing and GPUs

    Directory of Open Access Journals (Sweden)

    Stivala Alex D

    2010-09-01

    Full Text Available Abstract Background Searching a database of protein structures for matches to a query structure, or occurrences of a structural motif, is an important task in structural biology and bioinformatics. While there are many existing methods for structural similarity searching, faster and more accurate approaches are still required, and few current methods are capable of substructure (motif) searching. Results We developed an improved heuristic for tableau-based protein structure and substructure searching using simulated annealing that is as fast as or faster than, and comparable in accuracy with, some widely used existing methods. Furthermore, we created a parallel implementation on a modern graphics processing unit (GPU). Conclusions The GPU implementation achieves up to 34 times speedup over the CPU implementation of tableau-based structure search with simulated annealing, making it one of the fastest available methods. To the best of our knowledge, this is the first application of a GPU to the protein structural search problem.

  16. Improved SPICE electrical model of silicon photomultipliers

    Energy Technology Data Exchange (ETDEWEB)

    Marano, D., E-mail: davide.marano@oact.inaf.it [INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy); Bonanno, G.; Belluso, M.; Billotta, S.; Grillo, A.; Garozzo, S.; Romeo, G. [INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy); Catalano, O.; La Rosa, G.; Sottile, G.; Impiombato, D.; Giarrusso, S. [INAF, Istituto di Astrofisica Spaziale e Fisica Cosmica di Palermo, Via U. La Malfa 153, I-90146 Palermo (Italy)

    2013-10-21

    The present work introduces an improved SPICE equivalent electrical model of silicon photomultiplier (SiPM) detectors, in order to simulate and predict their transient response to avalanche triggering events. In particular, the developed circuit model provides a careful investigation of the magnitude and timing of the read-out signals and can therefore be exploited to perform reliable circuit-level simulations. The adopted modeling approach is strictly related to the physics of each basic microcell constituting the SiPM device, and allows the avalanche timing as well as the photodiode current and voltage to be accurately simulated. Predictive capabilities of the proposed model are demonstrated by means of experimental measurements on a real SiPM detector. Simulated and measured pulses are found to be in good agreement with the expected results. -- Highlights: • An improved SPICE electrical model of silicon photomultipliers is proposed. • The developed model provides a truthful representation of the physics of the device. • An accurate charge collection as a function of the overvoltage is achieved. • The adopted electrical model allows reliable circuit-level simulations to be performed. • Predictive capabilities of the adopted model are experimentally demonstrated.

  17. Improved real-time dynamics from imaginary frequency lattice simulations

    Directory of Open Access Journals (Sweden)

    Pawlowski Jan M.

    2018-01-01

    Full Text Available The computation of real-time properties, such as transport coefficients or bound state spectra of strongly interacting quantum fields in thermal equilibrium, is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data needs to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information in the form of spectral functions by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular, the range between ω0 and ω1 = 2πT, which is most relevant for the inverse problem, may be more highly resolved. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function, we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1 dimensional scalar field theory toy model.

  18. Improved real-time dynamics from imaginary frequency lattice simulations

    Science.gov (United States)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

    The computation of real-time properties, such as transport coefficients or bound state spectra of strongly interacting quantum fields in thermal equilibrium, is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data needs to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information in the form of spectral functions by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular, the range between ω0 and ω1 = 2πT, which is most relevant for the inverse problem, may be more highly resolved. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function, we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1 dimensional scalar field theory toy model.

  19. Judging the Probability of Hypotheses Versus the Impact of Evidence: Which Form of Inductive Inference Is More Accurate and Time-Consistent?

    Science.gov (United States)

    Tentori, Katya; Chater, Nick; Crupi, Vincenzo

    2016-04-01

    Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent in time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability. Copyright © 2015 Cognitive Science Society, Inc.
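The two cognitive representations compared in the study can be made concrete with Bayes' rule. The impact measure below uses the simple difference P(H|E) − P(H), one of several confirmation measures in the literature (an illustrative choice, not necessarily the measure used in the experiments):

```python
def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' rule, with total probability in the denominator."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / p_e

def impact(p_h, p_e_given_h, p_e_given_not_h):
    """Evidential impact as the difference measure P(H|E) - P(H)."""
    return posterior(p_h, p_e_given_h, p_e_given_not_h) - p_h

# rare hypothesis, strongly diagnostic evidence: the posterior stays small
# while the impact of the evidence is clearly positive
post = posterior(0.01, 0.9, 0.1)   # ≈ 0.083
imp = impact(0.01, 0.9, 0.1)       # ≈ 0.073
```

The example shows why the two judgments can dissociate: evidence can have a large positive impact on a hypothesis whose posterior probability nevertheless remains low.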

  20. Accurate modeling of high frequency microelectromechanical systems (MEMS) switches in time- and frequency-domain

    Directory of Open Access Journals (Sweden)

    F. Coccetti

    2003-01-01

    Full Text Available In this contribution we present an accurate investigation of three different techniques for the modeling of complex planar circuits. The EM analysis is performed by means of different electromagnetic full-wave solvers in the time-domain and in the frequency-domain. The first one is the Transmission Line Matrix (TLM) method. In the second one the TLM method is combined with the Integral Equation (IE) method. The latter is based on the Generalized Transverse Resonance Diffraction (GTRD). In order to test the methods we model different structures and compare the calculated S-parameters to measured results, with good agreement.

  1. Performance improvements on passive activated charcoal 222Rn samplers

    International Nuclear Information System (INIS)

    Wei Suxia

    1996-01-01

    Improvements have been made on passive activated charcoal ²²²Rn samplers with sintered metal filters. Based on the samplers of good adaptability to temperature and humidity developed before, better charcoal was selected to further improve their performance in radon absorption ability and moisture resistance, and the charcoal quantity in the samplers was strictly controlled. The integration time constant of the improved samplers was about 4.3 days. When the sampler was combined with a gamma spectrometer to measure radon concentration, the calibration factor was 0.518 min⁻¹·Bq⁻¹·m³ for samplers of 7 days exposure time, and the minimum detectable concentration 0.28 Bq·m⁻³ if the counting time for both background and sample is 1000 minutes. The improved samplers are suited to accurately determine the indoor and outdoor average radon concentration under conditions of great variation in temperature and humidity.

  2. Travel-time source-specific station correction improves location accuracy

    Science.gov (United States)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance to the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may have the effect of shifting the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequence of large mislocations of seismic events in the context of CTBT verification is particularly critical for triggering a possible On Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km², and its largest linear dimension cannot be larger than 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
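The essence of a source-specific station correction, the mean travel-time residual at a station over a set of well-located reference events, can be sketched as follows (station codes and residual values are hypothetical; the actual method accounts for full 3-D path effects per source region):

```python
def station_corrections(residuals_by_station):
    """Correction per station = mean travel-time residual
    (observed - predicted, seconds) over well-located reference events."""
    return {sta: sum(r) / len(r) for sta, r in residuals_by_station.items()}

def corrected_residual(sta, residual, corrections):
    """Residual of a new event after removing the station correction."""
    return residual - corrections[sta]

# residuals from three well-located reference events at two stations
refs = {"ABC": [0.4, 0.6, 0.5], "XYZ": [-0.3, -0.1, -0.2]}
corr = station_corrections(refs)          # ABC: +0.5 s, XYZ: -0.2 s
r = corrected_residual("ABC", 0.7, corr)  # 0.7 - 0.5 = 0.2 s
```

Removing the systematic part of the residual at each station is what pulls the relocated epicenters back toward the ground-truth positions.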

  3. Suitability of the echo-time-shift method as laboratory standard for thermal ultrasound dosimetry

    Science.gov (United States)

    Fuhrmann, Tina; Georg, Olga; Haller, Julian; Jenderka, Klaus-Vitold

    2017-03-01

    Ultrasound therapy is a promising, non-invasive application with the potential to significantly improve cancer therapies such as surgery, viro- or immunotherapy. This therapy needs faster, cheaper and easier-to-handle quality assurance tools for therapy devices, as well as means to verify treatment plans and perform dosimetry; the lack of such tools limits the comparability and safety of treatments. Accurate spatial and temporal temperature maps could be used to overcome these shortcomings. In this contribution, first results of suitability and accuracy investigations of the echo-time-shift method for two-dimensional temperature mapping during and after sonication are presented. The analysis methods used to calculate time shifts were a discrete frame-to-frame and a discrete frame-to-base-frame algorithm, as well as a sigmoid fit for temperature calculation. In the future, accuracy could be significantly enhanced by using continuous methods for time-shift calculation. Further improvements can be achieved by improving filtering algorithms and interpolation of the sampled diagnostic ultrasound data. It might be a comparatively accurate, fast and affordable method for laboratory and clinical quality control.
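A crude integer-lag version of the frame-to-frame time-shift estimate can be sketched as a plain cross-correlation search; real implementations operate on windowed RF data with sub-sample interpolation (the signals and lag range below are illustrative):

```python
import math

def time_shift(ref, frame, max_lag=5):
    """Integer-lag cross-correlation search for the echo time shift
    between a reference frame and a later frame (frame-to-frame style)."""
    n = len(ref)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        lo, hi = max(0, -lag), min(n, n - lag)
        score = sum(ref[i] * frame[i + lag] for i in range(lo, hi))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

ref = [math.sin(0.3 * i) for i in range(200)]
frame = [math.sin(0.3 * (i - 2)) for i in range(200)]  # echo delayed by 2 samples
lag = time_shift(ref, frame)
```

The temperature change is then inferred from the estimated shift via the known temperature dependence of the speed of sound, which is where the sigmoid calibration mentioned above enters.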

  4. Engaging Frontline Leaders and Staff in Real-Time Improvement.

    Science.gov (United States)

    Phillips, Jennifer; Hebish, Linda J; Mann, Sharon; Ching, Joan M; Blackmore, C Craig

    2016-04-01

    The relationship of staff satisfaction and engagement to organizational success, along with the integral influence of frontline managers on this dimension, is well established in health care and other industries. To specifically address staff engagement, Virginia Mason Medical Center, an integrated, single-hospital health system, developed an approach that involved leaders, through the daily use of standard work for leaders, as well as staff, through a Lean-inspired staff idea system. Kaizen Promotion Office (KPO) staff members established three guiding principles: (1) Staff engagement begins with leader engagement; (2) Integrate daily improvement (kaizen) as a habitual way of life, not as an add-on; and (3) Create an environment in which staff feel psychologically safe and valued. Two design elements--Standard Work for Leaders (SWL) and Everyday Lean Ideas (ELIs)--were implemented. For the emergency department (ED), an early adopter of the staff engagement work, the challenge was to apply the guiding principles to improve staff engagement while improving quality and patient and staff satisfaction, even as patient volumes were increasing. Daily huddles for the KPO staff members and weekly leader rounds are used to elicit staff ideas and foster ELIs in real time. Overall progress to date has been tracked in terms of staff satisfaction surveys, voluntary staff turnover, adoption of SWL, and testing and implementation of staff ideas. For example, voluntary turnover of ED staff decreased from 14.6% in 2011 to 7.5% in 2012, and 2.0% in 2013. Organizationwide, at least 800 staff ideas are in motion at any given time, with finished ones posted in an idea supermarket website. A leadership and staff engagement approach that focuses on SWL and on capturing staff ideas for daily problem solving and improvement can contribute to organization success and improve the quality of health care delivery.

  5. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  6. Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics

    OpenAIRE

    小野, 貴継; 井上, 弘士; 村上, 和彰

    2007-01-01

    This paper proposes a fast and accurate memory architecture simulation technique. To design memory architecture, the first steps commonly involve using trace-driven simulation. However, expanding the design space makes the evaluation time increase. A fast simulation is achieved by a trace size reduction, but it reduces the simulation accuracy. Our approach can reduce the simulation time while maintaining the accuracy of the simulation results. In order to evaluate validity of proposed techniq...

  7. A simple and accurate two-step long DNA sequences synthesis strategy to improve heterologous gene expression in pichia.

    Directory of Open Access Journals (Sweden)

    Jiang-Ke Yang

    Full Text Available In vitro chemical gene synthesis is a powerful tool to improve the expression of genes in heterologous systems. In this study, a two-step gene synthesis strategy that combines an assembly PCR and an overlap extension PCR (AOE) was developed. In this strategy, the chemically synthesized oligonucleotides were assembled into several 200-500 bp fragments with 20-25 bp overlap at each end by assembly PCR, and then an overlap extension PCR was conducted to assemble all these fragments into a full-length DNA sequence. Using this method, we de novo designed and codon-optimized the Rhizopus oryzae lipase gene ROL (810 bp) and the Aspergillus niger phytase gene phyA (1404 bp). Compared with the original ROL and phyA genes, the codon-optimized genes were expressed at significantly higher levels in yeast after methanol induction. We believe this AOE method to be of special interest as it is simple, accurate and has no limitation with respect to the size of the gene to be synthesized. Combined with de novo design, this method allows the rapid synthesis of a gene optimized for expression in the system of choice and production of sufficient biological material for molecular characterization and biotechnological application.

  8. Coupled optical and thermal detailed simulations for the accurate evaluation and performance improvement of molten salts solar towers

    Science.gov (United States)

    García-Barberena, Javier; Mutuberria, Amaia; Palacin, Luis G.; Sanz, Javier L.; Pereira, Daniel; Bernardos, Ana; Sanchez, Marcelino; Rocha, Alberto R.

    2017-06-01

    The National Renewable Energy Centre of Spain, CENER, and the Technology & Innovation area of ACS Cobra, as a result of their long term expertise in the CSP field, have developed a high-quality and highly detailed optical and thermal simulation software for the accurate evaluation of Molten Salts Solar Towers. The main purpose of this software is to make a step forward in the state-of-the-art of Solar Tower simulation programs. Generally, these programs deal with the most critical systems of such plants, i.e. the solar field and the receiver, on an independent basis. Therefore, these programs typically neglect relevant aspects of the operation of the plant such as heliostat aiming strategies, solar flux shapes onto the receiver, material physical and operational limitations, and transient processes such as preheating and secure cloud-passing operating modes. The modelling approach implemented in the developed program consists of effectively coupling detailed optical simulations of the heliostat field with detailed, full-transient thermal simulations of the molten salts tube-based external receiver. The optical model is based on an accurate Monte Carlo ray-tracing method which solves the complete solar field by simulating each of the heliostats according to their specific layout in the field. On the thermal side, the tube-based cylindrical external receiver of a Molten Salts Solar Tower is modelled assuming one representative tube per panel, and implementing the specific connection layout of the panels as well as the internal receiver pipes. Each tube is longitudinally discretized, and the transient energy and mass balances in the temperature-dependent molten salt and steel tube models are solved. For this, a one-dimensional radial heat transfer model is used. The thermal model is completed with a detailed control and operation strategy module, able to represent the appropriate operation of the plant. An integration framework has been
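
    The longitudinal tube discretization and salt-side energy balance described above can be illustrated with a minimal steady-state sketch: march along the tube segments, raising the salt temperature in each segment by the absorbed power divided by (mass flow × heat capacity). The flux profile, mass flow, and heat capacity below are illustrative round numbers, not data from the CENER/ACS Cobra program, which also solves the transient and radial heat-transfer problems this sketch omits.

```python
def salt_outlet_temperature(t_in_c, mdot_kg_s, cp_j_kgk, absorbed_w_per_segment):
    """Steady-state salt temperature after a longitudinally discretized tube."""
    t = t_in_c
    for q in absorbed_w_per_segment:      # one energy balance per segment
        t += q / (mdot_kg_s * cp_j_kgk)   # dT = Q / (mdot * cp)
    return t

# Ten segments with a peaked absorbed-flux profile (hypothetical values, W).
flux = [20e3, 40e3, 60e3, 80e3, 90e3, 90e3, 80e3, 60e3, 40e3, 20e3]
t_out = salt_outlet_temperature(290.0, 1.2, 1500.0, flux)
print(round(t_out, 1))  # 612.2
```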

  9. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    Science.gov (United States)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  10. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    Science.gov (United States)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

    A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, although theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
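
    The selection step described above — evaluate every sensor's reading and keep the one whose empirical error model predicts the smallest error — can be sketched as follows. The sensor names and per-range variance tables are hypothetical placeholders, not the paper's measured sensor characteristics.

```python
def modelled_variance(sensor_model, rng):
    """Modelled error variance for a reading at range `rng` (metres)."""
    for (lo, hi), var in sensor_model:
        if lo <= rng < hi:
            return var
    return float("inf")  # outside the characterised range: treat as unusable

def select_most_accurate(readings, models):
    """Pick the (sensor, range) pair whose model predicts the least error."""
    return min(readings.items(),
               key=lambda kv: modelled_variance(models[kv[0]], kv[1]))

# Hypothetical per-sensor models: ((range_lo, range_hi), variance) brackets.
models = {
    "s1": [((0.0, 2.0), 0.01), ((2.0, 6.0), 0.25)],
    "s2": [((0.0, 2.0), 0.09), ((2.0, 6.0), 0.04)],
}
readings = {"s1": 3.1, "s2": 3.4}  # both sensors range the same target

best = select_most_accurate(readings, models)
print(best)  # ('s2', 3.4) -- s2's model is more trustworthy beyond 2 m
```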

  11. CoMik : a predictable and cycle-accurately composable real-time microkernel

    NARCIS (Netherlands)

    Nelson, A.T.; Nejad, A.B.; Molnos, A.M.; Koedam, M.L.P.J.; Goossens, K.G.W.

    2014-01-01

    The functionality of embedded systems is ever increasing. This has led to mixed time-criticality systems, where applications with a variety of real-time requirements co-exist on the same platform and share resources. Due to inter-application interference, verifying the real-time requirements of

  12. Accurate shear measurement with faint sources

    International Nuclear Information System (INIS)

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys

  13. Do physiotherapy staff record treatment time accurately? An observational study.

    Science.gov (United States)

    Bagley, Pam; Hudson, Mary; Green, John; Forster, Anne; Young, John

    2009-09-01

    To assess the reliability of duration of treatment time measured by physiotherapy staff in early-stage stroke patients. Comparison of physiotherapy staff's recording of treatment sessions and video recording. Rehabilitation stroke unit in a general hospital. Thirty-nine stroke patients without trunk control or who were unable to stand with an erect trunk without the support of two therapists recruited to a randomized trial evaluating the Oswestry Standing Frame. Twenty-six physiotherapy staff who were involved in patient treatment. Contemporaneous recording by physiotherapy staff of treatment time (in minutes) compared with video recording. Intraclass correlation with 95% confidence interval and the Bland and Altman method for assessing agreement by calculating the mean difference (standard deviation; 95% confidence interval), reliability coefficient and 95% limits of agreement for the differences between the measurements. The mean duration (standard deviation, SD) of treatment time recorded by physiotherapy staff was 32 (11) minutes compared with 25 (9) minutes as evidenced in the video recording. The mean difference (SD) was -6 (9) minutes (95% confidence interval (CI) -9 to -3). The reliability coefficient was 18 minutes and the 95% limits of agreement were -24 to 12 minutes. Intraclass correlation coefficient for agreement between the two methods was 0.50 (95% CI 0.12 to 0.73). Physiotherapy staff's recording of duration of treatment time was not reliable and was systematically greater than the video recording.
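
    The agreement statistics reported above (mean difference, SD of differences, and 95% limits of agreement) follow the standard Bland and Altman calculation, sketched below. The paired minute values are made-up illustrations, not the study's raw data; the sign convention here is video minus therapist-recorded time.

```python
import statistics

def bland_altman(recorded, observed):
    """Mean difference, SD of differences, and 95% limits of agreement."""
    diffs = [o - r for r, o in zip(recorded, observed)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)            # sample SD
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, sd_diff, loa

therapist = [30, 35, 28, 40, 25]   # therapist-recorded minutes (hypothetical)
video     = [24, 30, 25, 31, 22]   # video-verified minutes (hypothetical)
md, sd, (lo, hi) = bland_altman(therapist, video)
print(round(md, 1), round(sd, 1))  # -5.2 2.5
```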

  14. Application of an accurate thermal hydraulics solver in VTT's reactor dynamics codes

    International Nuclear Information System (INIS)

    Rajamaeki, M.; Raety, H.; Kyrki-Rajamaeki, R.; Eskola, M.

    1998-01-01

    VTT's reactor dynamics codes are developed further and new more detailed models are created for tasks related to increased safety requirements. For thermal hydraulics calculations an accurate general flow model based on a new solution method PLIM has been developed. It has been applied in VTT's one-dimensional TRAB and three-dimensional HEXTRAN codes. Results of a demanding international boron dilution benchmark defined by VTT are given and compared against results of other codes with original or improved boron tracking. The new PLIM method not only allows the accurate modelling of a propagating boron dilution front, but also the tracking of a temperature front, which is missed by the special boron tracking models. (orig.)

  15. Patient Satisfaction Is Associated With Time With Provider But Not Clinic Wait Time Among Orthopedic Patients.

    Science.gov (United States)

    Patterson, Brendan M; Eskildsen, Scott M; Clement, R Carter; Lin, Feng-Chang; Olcott, Christopher W; Del Gaizo, Daniel J; Tennant, Joshua N

    2017-01-01

    Clinic wait time is considered an important predictor of patient satisfaction. The goal of this study was to determine whether patient satisfaction among orthopedic patients is associated with clinic wait time and time with the provider. The authors prospectively enrolled 182 patients at their outpatient orthopedic clinic. Clinic wait time was defined as the time between patient check-in and being seen by the surgeon. Time spent with the provider was defined as the total time the patient spent in the examination room with the surgeon. The Consumer Assessment of Healthcare Providers and Systems survey was used to measure patient satisfaction. Factors associated with increased patient satisfaction included patient age and increased time with the surgeon (P=.024 and P=.037, respectively), but not clinic wait time (P=.625). Perceived wait time was subject to a high level of error, and most patients did not accurately report whether they had been waiting longer than 15 minutes to see a provider until they had waited at least 60 minutes (P=.007). If the results of the current study are generalizable, time with the surgeon is associated with patient satisfaction in orthopedic clinics, but wait time is not. Further, the study findings showed that patients in this setting did not have an accurate perception of actual wait time, with many patients underestimating the time they waited to see a provider. Thus, a potential strategy for improving patient satisfaction is to spend more time with each patient, even at the expense of increased wait time. [Orthopedics. 2017; 40(1):43-48.]. Copyright 2016, SLACK Incorporated.

  16. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist of a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  17. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei; Fowler, Paul J.; Schuster, Gerard T.

    2012-01-01

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist of a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  18. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use

  19. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  20. More accurate fitting of 125I and 103Pd radial dose functions

    International Nuclear Information System (INIS)

    Taylor, R. E. P.; Rogers, D. W. O.

    2008-01-01

    In this study an improved functional form for fitting the radial dose functions, g(r), of 125I and 103Pd brachytherapy seeds is presented. The new function is capable of accurately fitting radial dose functions over ranges as large as 0.05 cm ≤ r ≤ 10 cm for 125I seeds and 0.10 cm ≤ r ≤ 10 cm for 103Pd seeds. The average discrepancies between fit and calculated data are less than 0.5% over the full range of fit and maximum discrepancies are 2% or less. The fitting function is also capable of accounting for the sharp increase in g(r) (upturn) seen for some sources at small r. Fits are performed for 18 125I seeds and 9 103Pd seeds using the EGSnrc Monte Carlo user-code BrachyDose. Fitting coefficients of the new function are tabulated for all 27 seeds. Extrapolation characteristics of the function are also investigated. The new functional form is an improvement over currently used fitting functions, with its main strength being the ability to accurately fit the rapidly varying radial dose function at small distances. The new function is an excellent candidate for fitting the radial dose function of all 103Pd and 125I brachytherapy seeds and will increase the accuracy of dose distributions calculated around brachytherapy seeds using the TG-43 protocol over a wider range of data. More accurate values of g(r) for r < 0.5 cm may be particularly important in the treatment of ocular melanoma

  1. Process improvement to enhance existing stroke team activity toward more timely thrombolytic treatment.

    Science.gov (United States)

    Cho, Han-Jin; Lee, Kyung Yul; Nam, Hyo Suk; Kim, Young Dae; Song, Tae-Jin; Jung, Yo Han; Choi, Hye-Yeon; Heo, Ji Hoe

    2014-10-01

    Process improvement (PI) is an approach for enhancing the existing quality improvement process by making changes while keeping the existing process. We have shown that implementation of a stroke code program using a computerized physician order entry system is effective in reducing the in-hospital time delay to thrombolysis in acute stroke patients. We investigated whether implementation of this PI could further reduce the time delays by continuous improvement of the existing process. After determining a key indicator [time interval from emergency department (ED) arrival to intravenous (IV) thrombolysis] and conducting data analysis, the target time from ED arrival to IV thrombolysis in acute stroke patients was set at 40 min. The key indicator was monitored continuously at a weekly stroke conference. The possible reasons for the delay were determined in cases for which IV thrombolysis was not administered within the target time and, where possible, the problems were corrected. The time intervals from ED arrival to the various evaluation steps and treatment before and after implementation of the PI were compared. The median time interval from ED arrival to IV thrombolysis in acute stroke patients was significantly reduced after implementation of the PI (from 63.5 to 45 min, p=0.001). The variation in the time interval was also reduced. A reduction in the evaluation time intervals was achieved after the PI [from 23 to 17 min for computed tomography scanning (p=0.003) and from 35 to 29 min for complete blood counts (p=0.006)]. PI is effective for continuous improvement of the existing process by reducing the time delays between ED arrival and IV thrombolysis in acute stroke patients.

  2. The right care, every time: improving adherence to evidence-based guidelines.

    Science.gov (United States)

    Runnacles, Jane; Roueché, Alice; Lachman, Peter

    2018-02-01

    Guidelines are integral to reducing variation in paediatric care by ensuring that children receive the right care, every time. However, for reasons discussed in this paper, clinicians do not always follow evidence-based guidelines. Strategies to improve guideline usage tend to focus on dissemination and education. These approaches, however, do not address some of the more complex factors that influence whether a guideline is used in clinical practice. In this article, part of the Equipped Quality Improvement series, we outline the literature on barriers to guideline adherence and present practical solutions to address these barriers. Examples outlined include the use of care bundles, integrated care pathways and quality improvement collaboratives. A sophisticated information technology system can improve the use of evidence-based guidelines and provide organisations with valuable data for learning and improvement. Key to success is the support of an organisation that places reliability of service delivery as the way business is done. To do this requires leadership from clinicians in multidisciplinary teams and a system of continual improvement. By learning from successful approaches, we believe that all healthcare organisations can ensure the right care for each patient, every time. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  3. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    International Nuclear Information System (INIS)

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-01-01

    Highlights: ► Probably the most accurate method available for dissolved oxygen concentration measurement was developed. ► Careful analysis of uncertainty sources was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. ► This development enables more accurate calibration of dissolved oxygen sensors for routine analysis than has been possible before. - Abstract: A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration, the combined standard uncertainties of the method are in the range of 0.012–0.018 mg dm−3, corresponding to the k = 2 expanded uncertainty in the range of 0.023–0.035 mg dm−3 (0.27–0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
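
    The combined and expanded uncertainties quoted above follow the standard propagation recipe: independent standard-uncertainty components are combined in quadrature to give the combined standard uncertainty u_c, which is multiplied by the coverage factor k = 2 for the expanded uncertainty U. The component values below are an illustrative, hypothetical budget, not the paper's actual uncertainty contributions.

```python
import math

def expanded_uncertainty(components_mg_dm3, k=2):
    """Combine independent standard uncertainties in quadrature; expand by k."""
    u_c = math.sqrt(sum(u * u for u in components_mg_dm3))
    return u_c, k * u_c

# Hypothetical budget (mg dm-3): gravimetry, end-point detection,
# iodine volatilization, dissolved O2 in the reagents.
u_c, U = expanded_uncertainty([0.005, 0.008, 0.006, 0.007])
print(round(u_c, 4), round(U, 4))  # 0.0132 0.0264
```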

  4. Towards Real Time Simulation of Ship-Ship Interaction

    DEFF Research Database (Denmark)

    Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2012-01-01

    We present recent and preliminary work directed towards the development of a simplified, physics-based model for improved simulation of ship-ship interaction that can be used for both analysis and real-time computing (i.e. with real-time constraints due to visualization). The goal is to implement accurate (realistic) and much faster ship-wave and ship-ship simulations than are currently possible. The coupling of simulation with visualization should improve the visual experience such that it can be perceived as more realistic in training. Today the state-of-the-art in real-time ship-ship interaction is, for efficiency reasons and time constraints in visualization, based on model experiments in towing tanks and precomputed force tables. We anticipate that the fast, and highly parallel, algorithm described by Engsig-Karup et al. [2011] for execution on affordable modern high-throughput Graphics Processing Units

  5. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry.

    Science.gov (United States)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe; Linnet, Kristian; Barron, Leon Patrick

    2018-03-23

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect and non-targeted screening. These allow for tentative identification of new compounds, and in-silico predicted reference values are used for improving confidence and filtering false-positive identifications. In this work, predictions of both RT and CCS values are performed with machine learning using artificial neural networks (ANNs). Prediction was based on molecular descriptors, 827 RTs, and 357 CCS values from pharmaceuticals, drugs of abuse, and their metabolites. ANN models for the prediction of RT or CCS separately were examined, and the potential to predict both from a single model was investigated for the first time. The optimized combined RT-CCS model was a four-layered multi-layer perceptron ANN, and the 95th prediction error percentiles were within 2 min RT error and 5% relative CCS error for the external validation set (n = 36) and the full RT-CCS dataset (n = 357). 88.6% (n = 733) of predicted RTs were within 2 min error for the full dataset. Overall, when using 2 min RT error and 5% relative CCS error, 91.9% (n = 328) of compounds were retained, while 99.4% (n = 355) were retained when using at least one of these thresholds. This combined prediction approach can therefore be useful for rapid suspect/non-targeted screening involving HRMS, and will support current workflows. Copyright © 2018 Elsevier B.V. All rights reserved.
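
    The combined RT-CCS model described above is a multi-layer perceptron with two outputs (retention time and CCS) predicted from molecular descriptors. The sketch below is a minimal pure-Python two-output network without bias terms, trained by stochastic gradient descent on synthetic data; it illustrates only the shared-architecture idea, not the published four-layered model, its descriptors, or its training setup.

```python
import math
import random

random.seed(1)

def forward(W1, W2, x):
    """Tanh hidden layer followed by a linear two-output layer."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    out = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    return h, out

def mse(W1, W2, X, Y):
    total = 0.0
    for x, y in zip(X, Y):
        _, out = forward(W1, W2, x)
        total += sum((o - t) ** 2 for o, t in zip(out, y))
    return total / len(X)

def train(W1, W2, X, Y, lr=0.05, epochs=500):
    """Stochastic gradient descent on the summed squared error of both outputs."""
    n_out, hidden, n_in = len(W2), len(W1), len(W1[0])
    for _ in range(epochs):
        for x, y in zip(X, Y):
            h, out = forward(W1, W2, x)
            err = [o - t for o, t in zip(out, y)]
            # Hidden-layer gradient uses the pre-update output weights.
            hid = [sum(err[k] * W2[k][j] for k in range(n_out)) * (1 - h[j] ** 2)
                   for j in range(hidden)]
            for k in range(n_out):
                for j in range(hidden):
                    W2[k][j] -= lr * err[k] * h[j]
            for j in range(hidden):
                for i in range(n_in):
                    W1[j][i] -= lr * hid[j] * x[i]

# Synthetic "descriptors" and paired (RT, CCS)-like targets.
X = [[0.1, 0.2], [0.4, 0.1], [0.2, 0.5], [0.7, 0.6]]
Y = [[0.3, 0.2], [0.5, 0.5], [0.7, 0.3], [1.3, 0.9]]
W1 = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(4)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(2)]
before = mse(W1, W2, X, Y)
train(W1, W2, X, Y)
after = mse(W1, W2, X, Y)
print(after < before)
```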

  6. Accurate Prediction of Motor Failures by Application of Multi CBM Tools: A Case Study

    Science.gov (United States)

    Dutta, Rana; Singh, Veerendra Pratap; Dwivedi, Jai Prakash

    2018-02-01

    Motor failures are very difficult to predict accurately with a single condition-monitoring tool, as the electrical and mechanical systems are closely related. Electrical problems, like phase unbalance and stator winding insulation failures, can at times lead to vibration problems, while mechanical failures, like bearing failure, lead to rotor eccentricity. In this case study of a 550 kW blower motor it has been shown that a rotor bar crack was detected by current signature analysis and vibration monitoring confirmed the same. In later months, in a similar motor, vibration monitoring predicted bearing failure and current signature analysis confirmed the same. In both cases, after dismantling the motor, the predictions were found to be accurate. In this paper we discuss the accurate prediction of motor failures through the use of multiple condition-monitoring tools, with two case studies.

  7. An improved front tracking method for the Euler equations

    NARCIS (Netherlands)

    Witteveen, J.A.S.; Koren, B.; Bakker, P.G.

    2007-01-01

    An improved front tracking method for hyperbolic conservation laws is presented. The improved method accurately resolves discontinuities as well as continuous phenomena. The method is based on an improved front interaction model for a physically more accurate modeling of the Euler equations, as

  8. Accurate and approximate thermal rate constants for polyatomic chemical reactions

    International Nuclear Information System (INIS)

    Nyman, Gunnar

    2007-01-01

    In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H2 + CH3 → CH4 + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates are generally larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to its neglect of tunnelling, however, TST fails below 400 K if the mass of the transferred atom changes between the isotopic reactions.

  9. Accurate gradient approximation for complex interface problems in 3D by an improved coupling interface method

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Yu-Chen, E-mail: ycshu@mail.ncku.edu.tw [Department of Mathematics, National Cheng Kung University, Tainan 701, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (South), Tainan 701, Taiwan (China); Chern, I-Liang, E-mail: chern@math.ntu.edu.tw [Department of Applied Mathematics, National Chiao Tung University, Hsin Chu 300, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China); Mathematics Division, National Center for Theoretical Sciences (Taipei Office), Taipei 106, Taiwan (China); Chang, Chien C., E-mail: mechang@iam.ntu.edu.tw [Institute of Applied Mechanics, National Taiwan University, Taipei 106, Taiwan (China); Department of Mathematics, National Taiwan University, Taipei 106, Taiwan (China)

    2014-10-15

    Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complications increase especially in three dimensions, and the solvers are usually thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. The idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second-order accuracy of the proposed method in approximating the gradients of the original states for some complex interfaces previously tested in two and three dimensions, and for a real molecule (1D63), which has a double-helix shape and is composed of hundreds of atoms.

  10. Prediction of Accurate Mixed Mode Fatigue Crack Growth Curves using the Paris' Law

    Science.gov (United States)

    Sajith, S.; Krishna Murthy, K. S. R.; Robi, P. S.

    2017-12-01

    Accurate information regarding crack growth times and structural strength as a function of crack size is mandatory in damage tolerance analysis. Various equivalent stress intensity factor (SIF) models are available for predicting mixed-mode fatigue life using the Paris' law. In the present investigation these models have been compared to assess their efficacy in predicting lives close to the experimental findings, as there are no guidelines or suggestions available on selecting among these models for accurate and/or conservative predictions of fatigue life. Within the limitations of the available experimental data and currently available numerical simulation techniques, the present study attempts to outline models that would provide accurate and conservative life predictions.
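All the equivalent-SIF models compared above ultimately feed an effective ΔK into the Paris' law, da/dN = C(ΔK)^m, which is then integrated for life. A minimal numerical integration of this relation can be sketched as follows (the material constants and loading below are illustrative textbook-style values, not data from the study):

```python
import math

def cycles_to_grow(a0, ac, C, m, delta_K, da=1e-5):
    """Fatigue life N = integral of da / (C * dK(a)^m), midpoint rule.

    a0, ac : initial and critical crack lengths [m]
    C, m   : Paris' law constants
    delta_K: function a -> stress intensity factor range dK(a) [MPa*sqrt(m)]
    """
    n_steps = int(round((ac - a0) / da))
    N, a = 0.0, a0
    for _ in range(n_steps):
        dK = delta_K(a + 0.5 * da)   # dK at the interval midpoint
        N += da / (C * dK ** m)      # cycles spent growing by da
        a += da
    return N

# Illustrative case: dK = Y * dSigma * sqrt(pi*a) with Y = 1, dSigma = 100 MPa,
# C = 1e-11 (m/cycle)/(MPa*sqrt(m))^m, m = 3, crack grown from 1 mm to 10 mm.
N = cycles_to_grow(1e-3, 1e-2, 1e-11, 3.0,
                   lambda a: 100.0 * math.sqrt(math.pi * a))
```

A more severe loading (larger ΔK at every crack length) yields a shorter predicted life, which is why the choice of equivalent-SIF model directly controls how conservative the prediction is.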

  11. A Robust Motion Artifact Detection Algorithm for Accurate Detection of Heart Rates From Photoplethysmographic Signals Using Time-Frequency Spectral Features.

    Science.gov (United States)

    Dao, Duy; Salehizadeh, S M A; Noh, Yeonsik; Chong, Jo Woon; Cho, Chae Ho; McManus, Dave; Darling, Chad E; Mendelson, Yitzhak; Chon, Ki H

    2017-09-01

    Motion and noise artifacts (MNAs) impose limits on the usability of the photoplethysmogram (PPG), particularly in the context of ambulatory monitoring. MNAs can distort PPG, causing erroneous estimation of physiological parameters such as heart rate (HR) and arterial oxygen saturation (SpO2). In this study, we present a novel approach, "TifMA," based on using the time-frequency spectrum of PPG to first detect the MNA-corrupted data and next discard the nonusable part of the corrupted data. The term "nonusable" refers to segments of PPG data from which the HR signal cannot be recovered accurately. Two sequential classification procedures were included in the TifMA algorithm. The first classifier distinguishes between MNA-corrupted and MNA-free PPG data. Once a segment of data is deemed MNA-corrupted, the next classifier determines whether the HR can be recovered from the corrupted segment or not. A support vector machine (SVM) classifier was used to build a decision boundary for the first classification task using data segments from a training dataset. Features from time-frequency spectra of PPG were extracted to build the detection model. Five datasets were considered for evaluating TifMA performance: (1) and (2) were laboratory-controlled PPG recordings from forehead and finger pulse oximeter sensors with subjects making random movements, (3) and (4) were actual patient PPG recordings from UMass Memorial Medical Center with random free movements and (5) was a laboratory-controlled PPG recording dataset measured at the forehead while the subjects ran on a treadmill. The first dataset was used to analyze the noise sensitivity of the algorithm. Datasets 2-4 were used to evaluate the MNA detection phase of the algorithm. The results from the first phase of the algorithm (MNA detection) were compared to results from three existing MNA detection algorithms: the Hjorth, kurtosis-Shannon entropy, and time-domain variability-SVM approaches. This last is an approach

  12. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    International Nuclear Information System (INIS)

    Schmidt, Tobias; Kümmel, Stephan; Kraisler, Eli; Makmal, Adi; Kronik, Leeor

    2014-01-01

    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potentials is not yet satisfactory, and if we choose to optimize their prediction, a rather different value of the functional's parameter is obtained. We put this finding in a larger context by discussing similar observations for other functionals and possible directions for further functional development that our findings suggest

  13. Measures to Improve Diagnostic Safety in Clinical Practice.

    Science.gov (United States)

    Singh, Hardeep; Graber, Mark L; Hofer, Timothy P

    2016-10-20

    Timely and accurate diagnosis is foundational to good clinical practice and an essential first step to achieving optimal patient outcomes. However, a recent Institute of Medicine report concluded that most of us will experience at least one diagnostic error in our lifetime. The report argues for efforts to improve the reliability of the diagnostic process through better measurement of diagnostic performance. The diagnostic process is a dynamic team-based activity that involves uncertainty, plays out over time, and requires effective communication and collaboration among multiple clinicians, diagnostic services, and the patient. Thus, it poses special challenges for measurement. In this paper, we discuss how the need to develop measures to improve diagnostic performance could move forward at a time when the scientific foundation needed to inform measurement is still evolving. We highlight challenges and opportunities for developing potential measures of "diagnostic safety" related to clinical diagnostic errors and associated preventable diagnostic harm. In doing so, we propose a starter set of measurement concepts for initial consideration that seem reasonably related to diagnostic safety and call for these to be studied and further refined. This would enable safe diagnosis to become an organizational priority and facilitate quality improvement. Health-care systems should consider measurement and evaluation of diagnostic performance as essential to timely and accurate diagnosis and to the reduction of preventable diagnostic harm.

  14. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation time series incurs no penalty in total acquisition time.
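Relaxation rates are extracted from peak heights that decay exponentially across the series, I(t) = I0·exp(−R·t). A minimal log-linear least-squares sketch (synthetic data, not the paper's pipeline) shows how a single inconsistently reconstructed height biases the recovered rate, which is the failure mode the abstract describes:

```python
import math

def fit_rate(times, heights):
    """Recover R from I(t) = I0*exp(-R*t) by least squares on log(I)."""
    n = len(times)
    xbar = sum(times) / n
    ybar = sum(math.log(h) for h in heights) / n
    sxy = sum((t - xbar) * (math.log(h) - ybar)
              for t, h in zip(times, heights))
    sxx = sum((t - xbar) ** 2 for t in times)
    return -sxy / sxx   # slope of log(I) vs t is -R

times = [0.0, 0.05, 0.1, 0.2, 0.4]                        # relaxation delays
heights = [100.0 * math.exp(-12.0 * t) for t in times]    # ideal decay, R = 12
biased = list(heights)
biased[2] *= 0.9   # one peak height reconstructed non-linearly (10% low)
```

`fit_rate(times, heights)` returns the true rate of 12; `fit_rate(times, biased)` deviates from it, even though the other four heights are exact.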

  15. An efficient and accurate two-stage fourth-order gas-kinetic scheme for the Euler and Navier-Stokes equations

    Science.gov (United States)

    Pan, Liang; Xu, Kun; Li, Qibing; Li, Jiequan

    2016-12-01

    For computational fluid dynamics (CFD), the generalized Riemann problem (GRP) solver and the second-order gas-kinetic scheme (GKS) provide a time-accurate flux function starting from a discontinuous, piecewise-linear flow distribution around a cell interface. With the adoption of the time derivative of the flux function, a two-stage Lax-Wendroff-type (L-W for short) time-stepping method has recently been proposed in the design of a fourth-order time-accurate method for inviscid flow [21]. In this paper, based on the same time-stepping method and the second-order GKS flux function [42], a fourth-order gas-kinetic scheme is constructed for the Euler and Navier-Stokes (NS) equations. In comparison with the formal one-stage time-stepping third-order gas-kinetic solver [24], the current fourth-order method not only reduces the complexity of the flux function, but also improves the accuracy of the scheme. In terms of computational cost, a two-dimensional third-order GKS flux function takes about six times the computational time of a second-order GKS flux function, while a fifth-order WENO reconstruction may take more than ten times the cost of a second-order GKS flux function. Therefore, it is fully legitimate to develop a two-stage fourth-order time-accurate method (two reconstructions) instead of the standard four-stage fourth-order Runge-Kutta method (four reconstructions). Most importantly, the robustness of the fourth-order GKS is as good as that of the second-order one. In current computational fluid dynamics (CFD) research, it is still a difficult problem to extend a higher-order Euler solver to the NS equations, due to the change of the governing equations from hyperbolic to parabolic type and the initial interface discontinuity; this problem is especially pronounced for hypersonic viscous and heat-conducting flow. The GKS is based on the kinetic equation with hyperbolic transport and a relaxation source term. The time-dependent GKS flux function

  16. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin; Kalnis, Panos; Solovyev, Victor

    2015-01-01

    accurate than previous methods, both in terms of correcting individual-bases errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality

  17. Previous utilization of service does not improve timely booking in ...

    African Journals Online (AJOL)

    Previous utilization of service does not improve timely booking in antenatal care: Cross sectional study ... Journal Home > Vol 24, No 3 (2010) > ... Results: Past experience on antenatal care service utilization did not come out as a predictor for ...

  18. A novel method for accurate needle-tip identification in trans-rectal ultrasound-based high-dose-rate prostate brachytherapy.

    Science.gov (United States)

    Zheng, Dandan; Todor, Dorin A

    2011-01-01

    In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor, to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm of the conventional method. In gel phantom (more realistic and tissue-like), our method maintained its level of accuracy while the uncertainty of the conventional method was 3.4 mm on average with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
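The core of the measurement-based approach is simple arithmetic: the inserted length follows from the residual needle length, and a one-time calibration offset maps it into image coordinates. A sketch (the function name and all values are hypothetical; the real transformation factor comes from the probe/template measurements described above):

```python
def tip_depth_on_trus(needle_length_mm, residual_length_mm, offset_mm):
    """Needle-tip depth in TRUS image coordinates from physical measurements.

    offset_mm stands in for the pre-established coordinate transformation
    between the template face and the ultrasound image origin (the one-time
    calibration in the paper; the value used here is invented).
    """
    inserted_mm = needle_length_mm - residual_length_mm  # length past template
    return inserted_mm - offset_mm

depth = tip_depth_on_trus(200.0, 60.0, 15.0)
print(depth)  # 125.0
```

Because both inputs are physical measurements, the result is immune to the ultrasound imaging artifacts that corrupt direct tip detection.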

  19. Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring

    Science.gov (United States)

    Yin, J.; Ng, R.; Nakata, N.

    2017-12-01

    Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is common in southern Oklahoma, produces earthquakes greater than magnitude 2.0. Finding the accurate locations and mechanisms of these events provides important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source locations are also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which takes advantage of wavefield back-projection for accurate microseismic location. We apply GmRTM to microseismic data collected during hydraulic fracturing to image microseismic source locations and, potentially, fractures. Assuming an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared with HypoDD or P/S travel-time-based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.

  20. Use of Low-Level Sensor Data to Improve the Accuracy of Bluetooth-Based Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Christensen, Lars Tørholm; Krishnan, Rajesh

    2013-01-01

    by a single device. The latter situation could lead to location ambiguity and could reduce the accuracy of travel time estimation. Therefore, the accuracy of travel time estimation by Bluetooth technology depends on how location ambiguity is handled by the estimation method. The issue of multiple detection...... events in the context of travel time estimation by Bluetooth technology has been considered by various researchers. However, treatment of this issue has been simplistic. Most previous studies have used the first detection event (enter-enter) as the best estimate. No systematic analysis has been conducted...... to explore the most accurate method of travel time estimation with multiple detection events. In this study, different aspects of the Bluetooth detection zone, including size and impact on the accuracy of travel time estimation, were discussed. Four methods were applied to estimate travel time: enter...

  1. Metabolic profiling of yeast culture using gas chromatography coupled with orthogonal acceleration accurate mass time-of-flight mass spectrometry: application to biomarker discovery.

    Science.gov (United States)

    Kondo, Elsuida; Marriott, Philip J; Parker, Rhiannon M; Kouremenos, Konstantinos A; Morrison, Paul; Adams, Mike

    2014-01-07

    Yeast and yeast cultures are frequently used as additives in diets of dairy cows. Beneficial effects from the inclusion of yeast culture in diets for dairy mammals have been reported, and the aim of this study was to develop a comprehensive analytical method for the accurate mass identification of the 'global' metabolites in order to differentiate a variety of yeasts at varying growth stages (Diamond V XP, Yea-Sacc and Levucell). Microwave-assisted derivatization for metabolic profiling is demonstrated through the analysis of differing yeast samples developed for cattle feed, which include a wide range of metabolites of interest covering a large range of compound classes. Accurate identification of the components was undertaken using GC-oa-ToFMS (gas chromatography-orthogonal acceleration-time-of-flight mass spectrometry), followed by principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) for data reduction and biomarker discovery. Semi-quantification (fold changes in relative peak areas) was reported for metabolites identified as possible discriminative biomarkers (p-value 2), including D-ribose (four fold decrease), myo-inositol (five fold increase), L-phenylalanine (three fold increase), glucopyranoside (two fold increase), fructose (three fold increase) and threitol (three fold increase) respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Time-dependent resilience assessment and improvement of urban infrastructure systems

    Science.gov (United States)

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.

  3. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    Science.gov (United States)

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
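The key idea above is that interval widths should scale with the magnitude of the observations rather than being equal. A minimal sketch of ratio-based interval boundaries (assuming positive-valued data; the universe of discourse and ratio below are invented for illustration):

```python
def ratio_based_intervals(lo, hi, ratio):
    """Interval boundaries where each interval's length is `ratio` times its
    lower bound, so widths grow with the data's magnitude (assumes lo > 0)."""
    bounds = [lo]
    while bounds[-1] < hi:
        bounds.append(bounds[-1] * (1.0 + ratio))
    return bounds

# Universe of discourse [100, 200] with a 10% ratio: widths grow from
# 10 (at 100) toward ~20 (near 200), unlike equal-length partitioning.
b = ratio_based_intervals(100.0, 200.0, 0.1)
```

For algebraic-growth data such as enrollments, this keeps each interval's width roughly proportional to the local scale of the observations, which is the property the forecasting comparison above exploits.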

  4. Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics

    Science.gov (United States)

    Kukreja, Sunil L.; Boyle, Richard D.

    2014-01-01

    Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. Due to this we conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.

  5. A study of applying variable valve timing to highly rated diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Stone, C R; Leonard, H J [comps.; Brunel Univ., Uxbridge (United Kingdom); Charlton, S J [comp.; Bath Univ. (United Kingdom)

    1992-10-01

    The main objective of the research was to use Simulation Program for Internal Combustion Engines (SPICE) to quantify the potential offered by Variable Valve Timing (VVT) in improving engine performance. A model has been constructed of a particular engine using SPICE. The model has been validated with experimental data, and it has been shown that accurate predictions are made when the valve timing is changed. (author)

  6. A Time-Domain Structural Damage Detection Method Based on Improved Multiparticle Swarm Coevolution Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shao-Fei Jiang

    2014-01-01

    Optimization techniques have been applied to structural health monitoring and damage detection of civil infrastructures for two decades. The standard particle swarm optimization (PSO) easily falls into local optima, and this deficiency also exists in multiparticle swarm coevolution optimization (MPSCO). This paper first presents an improved MPSCO algorithm (IMPSCO) and then integrates it with Newmark's algorithm to localize and quantify structural damage using a proposed damage threshold. To validate the proposed method, a numerical simulation and an experimental study of a seven-story steel frame were employed, and a comparison was made with the genetic algorithm (GA). The results are threefold: (1) the proposed method is not only capable of localizing and quantifying damage, but also has good noise tolerance; (2) the damage location can be accurately detected using the damage threshold proposed in this paper; and (3) compared with the GA, the IMPSCO algorithm is more efficient and accurate for damage detection problems in general. This implies that the proposed method is applicable and effective in the community of damage detection and structural health monitoring.
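For orientation, the standard PSO that MPSCO/IMPSCO refine can be sketched in a few lines. This is the textbook baseline (not the paper's improved algorithm); the inertia and acceleration coefficients are common default choices, and the sphere function stands in for the damage-detection objective:

```python
import random

def pso(f, dim, n_particles=20, iters=300, bounds=(-5.0, 5.0), seed=1):
    """Minimal standard particle swarm optimization (minimization)."""
    random.seed(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia / cognitive / social
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                           # personal best positions
    pval = [f(x) for x in X]                        # personal best values
    gi = pval.index(min(pval))
    g, gval = P[gi][:], pval[gi]                    # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pval[i]:
                pval[i], P[i] = fx, X[i][:]
                if fx < gval:
                    gval, g = fx, X[i][:]
    return g, gval

best, val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Because every particle is drawn toward the single global best `g`, the swarm can stagnate at a local optimum; splitting the population into cooperating sub-swarms (the "multiparticle coevolution" idea) is one way to mitigate exactly this.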

  7. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving Schrödinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to high accuracy.
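The mechanism behind the accuracy gain is generic. For any O(h²) finite-difference approximation A(h), one Richardson step combines results at h and h/2 to cancel the leading error term, leaving O(h⁴). A self-contained illustration on a simple derivative (not the paper's Schrödinger solver):

```python
import math

def central_diff(f, x, h):
    """O(h^2) central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    """One Richardson extrapolation step: since the error of central_diff
    scales as h^2, the combination (4*A(h/2) - A(h)) / 3 cancels that term
    and leaves an O(h^4) approximation."""
    return (4.0 * central_diff(f, x, h / 2.0) - central_diff(f, x, h)) / 3.0
```

For f = sin at x = 1 with h = 0.1, the plain central difference is off by about 1e-3, while the extrapolated value is accurate to roughly 1e-7, and comparing A(h) with A(h/2) also yields the built-in error estimate the abstract mentions.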

  8. Accurate and precise determination of small quantity uranium by means of automatic potentiometric titration

    International Nuclear Information System (INIS)

    Liu Quanwei; Luo Zhongyan; Zhu Haiqiao; Wu Jizong

    2007-01-01

    Because of the high radioactivity of dissolved spent fuel solutions and uranium product solutions, the radiation hazard must be considered and reduced as far as possible during accurate determination of uranium. In this work automatic potentiometric titration was applied, and a sample containing only 10 mg of uranium was taken in order to reduce the radiation damage to the analyzer. The RSD was <0.06%, and at the same time the result can be corrected for more reliable and accurate measurement. The determination method effectively reduces the radiation damage to the analyzer, and meets the requirement of reliable, accurate measurement of uranium. (authors)

  9. A highly accurate wireless digital sun sensor based on profile detecting and detector multiplexing technologies

    Science.gov (United States)

    Wei, Minsong; Xing, Fei; You, Zheng

    2017-01-01

    The advancing growth of micro- and nano-satellites requires miniaturized sun sensors that can be conveniently applied in the attitude determination subsystem. In this work, a highly accurate wireless digital sun sensor based on profile-detecting technology was proposed, which transforms a two-dimensional image into two linear profile outputs so that it can achieve a high update rate at very low power consumption. A multiple-spot recovery approach with an asymmetric mask pattern design principle was introduced to fit the multiplexing image detector method, improving the accuracy of the sun sensor within a large field of view (FOV). A FOV determination principle based on the concept of FOV regions was also proposed to facilitate both sub-FOV analysis and whole-FOV determination. An RF MCU, together with solar cells, was utilized to achieve wireless and self-powered functionality. The prototype of the sun sensor is approximately 10 times smaller in size and weight than the conventional digital sun sensor (DSS). Test results indicated that the accuracy of the prototype was 0.01° within a cone FOV of 100°. Such an autonomous DSS could be equipped flexibly on a micro- or nano-satellite, especially for highly accurate remote sensing applications.
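Profile detecting reduces the 2-D sun-spot image to one line profile per axis; centroiding each profile then yields the spot displacement and hence one component of the sun angle. A sketch of that per-axis step (the pixel pitch, mask-to-detector distance, and profile values are hypothetical, not the prototype's geometry):

```python
import math

def sun_angle_deg(profile, pixel_pitch_um, focal_mm, center_px):
    """One-axis sun incidence angle from a 1-D intensity profile.

    The centroid of the profile gives the sun-spot position; the offset from
    the optical center, converted to millimetres, is divided by the mask-to-
    detector distance (focal_mm) to obtain the incidence angle.
    """
    total = float(sum(profile))
    centroid = sum(i * v for i, v in enumerate(profile)) / total
    disp_mm = (centroid - center_px) * pixel_pitch_um / 1000.0
    return math.degrees(math.atan2(disp_mm, focal_mm))
```

A spot centered on the optical axis gives 0°, and a spot shifted toward higher pixel indices gives a positive angle; processing two such profiles instead of a full image is what enables the high update rate at low power.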

  10. An Accurate Approximate-Analytical Technique for Solving Time-Fractional Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    M. Bishehniasar

    2017-01-01

    Full Text Available The demand of many scientific areas for the usage of fractional partial differential equations (FPDEs) to explain their real-world systems has been broadly identified. The solutions may portray dynamical behaviors of various particles such as chemicals and cells. The desire to obtain approximate solutions to treat these equations aims to overcome the mathematical complexity of modeling the relevant phenomena in nature. This research proposes a promising approximate-analytical scheme that is an accurate technique for solving a variety of noninteger partial differential equations (PDEs). The proposed strategy is based on approximating the derivative of fractional order and reducing the problem to a corresponding integer-order partial differential equation (PDE). Afterwards, the approximating PDE is solved using a separation-of-variables technique. The method can be applied simply to nonhomogeneous problems, reduces the computational cost, and achieves an approximate-analytical solution in excellent agreement with the exact solution of the original problem. In addition, to demonstrate its efficiency, the method is compared with two finite difference schemes: a nonstandard finite difference (NSFD) method and the standard finite difference (SFD) technique, both popular in the literature for solving engineering problems.
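    The separation-of-variables step can be illustrated on the simplest representative case, the time-fractional diffusion equation with a Caputo time derivative; this is a textbook example chosen for illustration, not necessarily one of the paper's test problems:

    ```latex
    % Time-fractional diffusion with a Caputo derivative in time:
    \begin{align*}
      D_t^{\alpha} u(x,t) &= u_{xx}(x,t), \qquad 0 < \alpha \le 1,\\
      u(x,t) = X(x)\,T(t) &\;\Longrightarrow\;
        \frac{D_t^{\alpha} T(t)}{T(t)} = \frac{X''(x)}{X(x)} = -\lambda,\\
      D_t^{\alpha} T = -\lambda T &\;\Longrightarrow\;
        T(t) = T(0)\,E_{\alpha}\!\left(-\lambda t^{\alpha}\right),
    \end{align*}
    ```

    where $E_{\alpha}$ is the one-parameter Mittag-Leffler function; for $\alpha = 1$ the temporal factor reduces to the classical $T(t) = T(0)\,e^{-\lambda t}$.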

  11. Accurate mass analysis of ethanesulfonic acid degradates of acetochlor and alachlor using high-performance liquid chromatography and time-of-flight mass spectrometry

    Science.gov (United States)

    Thurman, E.M.; Ferrer, I.; Parry, R.

    2002-01-01

    Degradates of acetochlor and alachlor (ethanesulfonic acids, ESAs) were analyzed in both standards and in a groundwater sample using high-performance liquid chromatography-time-of-flight mass spectrometry with electrospray ionization. The negative pseudomolecular ions of the secondary amides of acetochlor ESA and alachlor ESA gave average masses of 256.0750 ± 0.0049 amu and 270.0786 ± 0.0064 amu, respectively. Acetochlor ESA and alachlor ESA gave similar masses of 314.1098 ± 0.0061 amu and 314.1153 ± 0.0048 amu; however, they could not be distinguished by accurate mass because they have the same empirical formula. On the other hand, they may be distinguished using positive-ion electrospray because of different fragmentation spectra, which did not occur using negative-ion electrospray.
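    The claim that acetochlor ESA and alachlor ESA cannot be distinguished by accurate mass follows directly from their shared empirical formula. A minimal sketch of the exact-mass arithmetic, assuming the shared composition C14H21NO5S (the formula is an assumption for illustration; the monoisotopic masses are standard values):

    ```python
    # Monoisotopic masses of common elements (standard IUPAC values, amu).
    MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
            "O": 15.9949146221, "S": 31.97207069}
    ELECTRON = 0.00054858  # electron mass, amu

    def exact_mass(formula: dict) -> float:
        """Neutral monoisotopic mass from an element-count mapping."""
        return sum(MONO[el] * n for el, n in formula.items())

    def mz_deprotonated(formula: dict) -> float:
        """m/z of the [M-H]- pseudomolecular ion seen in negative-ion ESI."""
        return exact_mass(formula) - MONO["H"] + ELECTRON

    # Assumed shared formula for acetochlor ESA and alachlor ESA: because the
    # empirical formula is identical, the exact masses are identical too.
    esa = {"C": 14, "H": 21, "N": 1, "O": 5, "S": 1}
    print(round(mz_deprotonated(esa), 4))
    ```

    Any two isomers fed through `mz_deprotonated` return the same value, which is why only fragmentation spectra can separate them.
    
    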

  12. Decreasing laboratory turnaround time and patient wait time by implementing process improvement methodologies in an outpatient oncology infusion unit.

    Science.gov (United States)

    Gjolaj, Lauren N; Gari, Gloria A; Olier-Pino, Angela I; Garcia, Juan D; Fernandez, Gustavo L

    2014-11-01

    Prolonged patient wait times in the outpatient oncology infusion unit indicated a need to streamline phlebotomy processes by using existing resources to decrease laboratory turnaround time and improve patient wait time. Using the DMAIC (define, measure, analyze, improve, control) method, a project to streamline phlebotomy processes within the outpatient oncology infusion unit in an academic Comprehensive Cancer Center known as the Comprehensive Treatment Unit (CTU) was completed. Laboratory turnaround time for patients who needed same-day lab and CTU services and wait time for all CTU patients were tracked for 9 weeks. During the pilot, the wait time from arrival at the CTU to sitting in the treatment area decreased by 17% for all patients treated in the CTU. A total of 528 patients were seen at the CTU phlebotomy location, representing 16% of the total patients who received treatment in the CTU, with a mean turnaround time of 24 minutes compared with a baseline turnaround time of 51 minutes. Streamlining workflows and placing a phlebotomy station inside the CTU decreased laboratory turnaround times by 53% for patients requiring same-day lab and CTU services. The success of the pilot project prompted the team to make the station a permanent fixture. Copyright © 2014 by American Society of Clinical Oncology.

  13. Prediction of collision cross section and retention time for broad scope screening in gradient reversed-phase liquid chromatography-ion mobility-high resolution accurate mass spectrometry

    DEFF Research Database (Denmark)

    Mollerup, Christian Brinch; Mardal, Marie; Dalsgaard, Petur Weihe

    2018-01-01

    Exact mass, retention time (RT), and collision cross section (CCS) are used as identification parameters in liquid chromatography coupled to ion mobility high resolution accurate mass spectrometry (LC-IM-HRMS). Targeted screening analyses are now more flexible and can be expanded for suspect... artificial neural networks (ANNs). Prediction was based on molecular descriptors, 827 RTs, and 357 CCS values from pharmaceuticals, drugs of abuse, and their metabolites. ANN models for the prediction of RT or CCS separately were examined, and the potential to predict both from a single model...

  14. Real-time temperature field measurement based on acoustic tomography

    International Nuclear Information System (INIS)

    Bao, Yong; Jia, Jiabin; Polydorides, Nick

    2017-01-01

    Acoustic tomography can be used to measure the temperature field from the time-of-flight (TOF). In order to capture real-time temperature field changes and accurately yield quantitative temperature images, two improvements to the conventional acoustic tomography system are studied: simultaneous acoustic transmission and TOF collection along multiple ray paths, and an offline iteration reconstruction algorithm. During system operation, all the acoustic transceivers send modulated and filtered wideband Kasami sequences simultaneously to facilitate fast and accurate TOF measurements using cross-correlation detection. For image reconstruction, the iteration process is separated and executed offline beforehand to shorten computation time for online temperature field reconstruction. The feasibility and effectiveness of the developed methods are validated in the simulation study. The simulation results demonstrate that the proposed method can reduce the processing time per frame from 160 ms to 20 ms, while the reconstruction error remains less than 5%. Hence, the proposed method has great potential in the measurement of rapid temperature change with good temporal and spatial resolution. (paper)
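    The cross-correlation TOF detection described above can be sketched as follows; the sample rate, record length, and a random ±1 probe standing in for a modulated Kasami sequence are all illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100_000                                 # sample rate, Hz (assumed)
    probe = rng.choice([-1.0, 1.0], size=255)    # stand-in for a Kasami sequence

    # Simulated received signal: the probe arrives after the true TOF,
    # buried in measurement noise.
    true_delay = 412                             # samples (4.12 ms time of flight)
    received = np.zeros(2048)
    received[true_delay:true_delay + probe.size] = probe
    received += 0.5 * rng.standard_normal(received.size)

    # Cross-correlation detection: the lag of the correlation peak is the TOF.
    corr = np.correlate(received, probe, mode="valid")
    est_delay = int(np.argmax(corr))
    tof_seconds = est_delay / fs
    print(est_delay, tof_seconds)
    ```

    Because the wideband sequences are nearly orthogonal under shifts, several transceivers can transmit simultaneously and each path's TOF is recovered by correlating against that path's own probe.
    
    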

  15. Real-Time Location-Based Rendering of Urban Underground Pipelines

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-01-01

    Full Text Available The concealment and complex spatial relationships of urban underground pipelines present challenges in managing them. Recently, augmented reality (AR) has become a hot topic around the world because it can enhance our perception of reality by overlaying information about the environment and its objects onto the real world. Using AR, underground pipelines can be displayed accurately, intuitively, and in real time. We analyzed the characteristics of AR and their application in underground pipeline management. We mainly focused on the AR pipeline rendering procedure based on the BeiDou Navigation Satellite System (BDS) and simultaneous localization and mapping (SLAM) technology. First, aiming to improve the spatial accuracy of pipeline rendering, we used differential corrections received from the Ground-Based Augmentation System to compute the precise coordinates of users in real time, which helped us accurately retrieve and draw pipelines near the users; scene recognition further improved the accuracy. Second, for pipeline rendering, we used Visual-Inertial Odometry (VIO) to track the rendered objects and made some improvements to the visual effects, providing steady dynamic tracking of pipelines even in relatively markerless environments and outdoors. Finally, we used an occlusion method based on real-time 3D reconstruction to realistically express the immersion effect of underground pipelines. We compared our methods to existing methods and concluded that the method proposed in this research improves the spatial accuracy of pipeline rendering and the portability of the equipment. Moreover, the rendering procedure updates as the user's location changes, achieving dynamic rendering of pipelines in the real environment.

  16. Using lean principles to improve outpatient adult infusion clinic chemotherapy preparation turnaround times.

    Science.gov (United States)

    Lamm, Matthew H; Eckel, Stephen; Daniels, Rowell; Amerine, Lindsey B

    2015-07-01

    The workflow and chemotherapy preparation turnaround times at an adult infusion clinic were evaluated to identify opportunities to optimize workflow and efficiency. A three-phase study using Lean Six Sigma methodology was conducted. In phase 1, chemotherapy turnaround times in the adult infusion clinic were examined one year after the interim goal of a 45-minute turnaround time was established. Phase 2 implemented various experiments including a five-day Kaizen event, using lean principles in an effort to decrease chemotherapy preparation turnaround times in a controlled setting. Phase 3 included the implementation of process-improvement strategies identified during the Kaizen event, coupled with a final refinement of operational processes. In phase 1, the mean turnaround time for all chemotherapy preparations decreased from 60 to 44 minutes, and a mean of 52 orders for adult outpatient chemotherapy infusions was received each day. After installing new processes, the mean turnaround time had improved to 37 minutes for each chemotherapy preparation in phase 2. In phase 3, the mean turnaround time decreased from 37 to 26 minutes. The overall mean turnaround time was reduced by 26 minutes, representing a 57% decrease in turnaround times in 19 months through the elimination of waste and the implementation of lean principles. This reduction was accomplished through increased efficiencies in the workplace, with no addition of human resources. Implementation of Lean Six Sigma principles improved workflow and efficiency at an adult infusion clinic and reduced the overall chemotherapy turnaround times from 60 to 26 minutes. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  17. Continuous quality improvement intervention for adolescent and young adult HIV testing services in Kenya improves HIV knowledge.

    Science.gov (United States)

    Wagner, Anjuli D; Mugo, Cyrus; Bluemer-Miroite, Shay; Mutiti, Peter M; Wamalwa, Dalton C; Bukusi, David; Neary, Jillian; Njuguna, Irene N; O'Malley, Gabrielle; John-Stewart, Grace C; Slyker, Jennifer A; Kohler, Pamela K

    2017-07-01

    To determine whether continuous quality improvement (CQI) improves quality of HIV testing services for adolescents and young adults (AYA). CQI was introduced at two HIV testing settings: the Youth Centre and the Voluntary Counseling and Testing (VCT) Center, at a national referral hospital in Nairobi, Kenya. Primary outcomes were AYA satisfaction with HIV testing services, intent to return, and accurate HIV prevention and transmission knowledge. Healthcare worker (HCW) satisfaction assessed staff morale. T tests and interrupted time series analysis using Prais-Winsten regression and generalized estimating equations accounting for temporal trends and autocorrelation were conducted. There were 172 AYA (Youth Centre = 109, VCT = 63) during 6 baseline weeks and 702 (Youth Centre = 454, VCT = 248) during 24 intervention weeks. CQI was associated with an immediate increase in the proportion of AYA with accurate knowledge of HIV transmission at the Youth Centre: 18 vs. 63% [adjusted risk difference (aRD) 0.42, 95% confidence interval (CI) 0.21 to 0.63], and a trend at VCT: 38 vs. 72% (aRD 0.30, 95% CI -0.04 to 0.63). CQI was associated with an increase in the proportion of AYA with accurate HIV prevention knowledge at VCT: 46 vs. 61% (aRD 0.39, 95% CI 0.02-0.76), but not the Youth Centre (P = 0.759). In VCT, CQI showed a trend towards increased intent to retest (4.0 vs. 4.3; aRD 0.78, 95% CI -0.11 to 1.67), but not at the Youth Centre (P = 0.19). CQI was not associated with changes in AYA satisfaction, which was high during baseline and intervention at both clinics (P = 0.384, P = 0.755). HCW satisfaction remained high during intervention and baseline (P = 0.746). CQI improved AYA knowledge and did not negatively impact HCW satisfaction. Quality improvement interventions may be useful to improve adolescent-friendly service delivery.
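    Figures such as the "18 vs. 63%" knowledge gain reduce to a risk difference. A simplified, unadjusted sketch (the study reported adjusted RDs from Prais-Winsten/GEE models; the event counts below are assumptions chosen only to roughly match the reported proportions):

    ```python
    import math

    def risk_difference(x1, n1, x0, n0, z=1.96):
        """Unadjusted risk difference p1 - p0 with a Wald 95% CI.

        x1/n1: events/total during intervention; x0/n0: during baseline.
        This ignores the temporal trends and autocorrelation that the
        study's segmented-regression models adjusted for.
        """
        p1, p0 = x1 / n1, x0 / n0
        se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
        rd = p1 - p0
        return rd, (rd - z * se, rd + z * se)

    # Illustrative counts consistent with ~18% vs. ~63% accurate-knowledge
    # proportions at the Youth Centre (the denominators are assumptions).
    rd, (lo, hi) = risk_difference(x1=286, n1=454, x0=20, n0=109)
    print(round(rd, 2), round(lo, 2), round(hi, 2))
    ```
    
    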

  18. Accurate mass measurements on neutron-deficient krypton isotopes

    CERN Document Server

    Rodríguez, D.; Äystö, J.; Beck, D.; Blaum, K.; Bollen, G.; Herfurth, F.; Jokinen, A.; Kellerbauer, A.; Kluge, H.-J.; Kolhinen, V.S.; Oinonen, M.; Sauvan, E.; Schwarz, S.

    2006-01-01

    The masses of $^{72–78,80,82,86}$Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for $^{72–75}$Kr are more precise than the previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input to improve mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  19. Improving hospital discharge time: a successful implementation of Six Sigma methodology.

    Science.gov (United States)

    El-Eid, Ghada R; Kaddoum, Roland; Tamim, Hani; Hitti, Eveline A

    2015-03-01

    Delays in discharging patients can impact hospital and emergency department (ED) throughput. The discharge process is complex and involves setting-specific challenges that limit the generalizability of solutions. The aim of this study was to assess the effectiveness of using Six Sigma methods to improve the patient discharge process. This is a quantitative pre- and post-intervention study conducted at a 386-bed tertiary care hospital, using a series of Six Sigma driven interventions over a 10-month period. The primary outcome was discharge time (time from discharge order to patient leaving the room). Secondary outcome measures included the percent of patients whose discharge order was written before noon, the percent of patients leaving the room by noon, hospital length of stay (LOS), and LOS of admitted ED patients. Discharge time decreased by 22.7%, from 2.2 hours during the preintervention period to 1.7 hours post-intervention. Six Sigma methodology can be an effective change management tool to improve discharge time. The focus of institutions aspiring to tackle delays in the discharge process should be on adopting the core principles of Six Sigma rather than specific interventions that may be institution-specific.

  20. Integrating real-time subsurface hydrologic monitoring with empirical rainfall thresholds to improve landslide early warning

    Science.gov (United States)

    Mirus, Benjamin B.; Becker, Rachel E.; Baum, Rex L.; Smith, Joel B.

    2018-01-01

    Early warning for rainfall-induced shallow landsliding can help reduce fatalities and economic losses. Although these commonly occurring landslides are typically triggered by subsurface hydrological processes, most early warning criteria rely exclusively on empirical rainfall thresholds and other indirect proxies for subsurface wetness. We explore the utility of explicitly accounting for antecedent wetness by integrating real-time subsurface hydrologic measurements into landslide early warning criteria. Our efforts build on previous progress with rainfall thresholds, monitoring, and numerical modeling along the landslide-prone railway corridor between Everett and Seattle, Washington, USA. We propose a modification to previously established recent versus antecedent (RA) cumulative rainfall thresholds by replacing the antecedent 15-day rainfall component with the average saturation observed over the same timeframe. We calculate this antecedent saturation with real-time telemetered measurements from five volumetric water content probes installed in the shallow subsurface within a steep vegetated hillslope. Our hybrid rainfall versus saturation (RS) threshold still relies on the same recent 3-day rainfall component as the existing RA thresholds, to facilitate ready integration with quantitative precipitation forecasts. During the 2015–2017 monitoring period, this RS hybrid approach yields more true positives and fewer false positives and false negatives relative to the previous RA rainfall-only thresholds. We also demonstrate that alternative hybrid threshold formats could be even more accurate, suggesting that further development and testing during future landslide seasons are needed. The positive results confirm that accounting for antecedent wetness conditions with direct subsurface hydrologic measurements can improve thresholds for alert systems and early warning of rainfall-induced shallow landsliding.
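    The hybrid RS criterion amounts to comparing recent rainfall against a threshold that tightens as the hillslope wets up. A minimal sketch, with an assumed linear threshold form and made-up coefficients rather than the study's calibrated values:

    ```python
    def rs_alert(recent_3day_rain_mm: float, antecedent_saturation: float,
                 a: float = 90.0, b: float = 80.0) -> bool:
        """Hybrid recent-rainfall vs. antecedent-saturation (RS) criterion.

        Issues an alert when the 3-day rainfall exceeds a threshold that
        decreases linearly as the 15-day average saturation (0..1) rises.
        The linear form and the coefficients a, b are illustrative
        assumptions, not the thresholds calibrated in the study.
        """
        threshold_mm = a - b * antecedent_saturation
        return recent_3day_rain_mm > threshold_mm

    # The same storm triggers an alert only when antecedent conditions are wet.
    print(rs_alert(40.0, 0.3))   # drier hillslope
    print(rs_alert(40.0, 0.8))   # near-saturated hillslope
    ```
    
    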

  1. An Improved Approach for Accurate and Efficient Measurement of Common Carotid Artery Intima-Media Thickness in Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2014-01-01

    Full Text Available The intima-media thickness (IMT) of the common carotid artery (CCA) can serve as an important indicator for the assessment of cardiovascular diseases (CVDs). In this paper an improved approach for automatic IMT measurement with low complexity and high accuracy is presented. 100 ultrasound images from 100 patients were tested with the proposed approach. The ground truth (GT) IMT was manually measured six times and averaged, while the automatically segmented (AS) IMT was computed by the algorithm proposed in this paper. The mean difference ± standard deviation between AS and GT IMT is 0.0231 ± 0.0348 mm, and the correlation coefficient between them is 0.9629. The computational time is 0.3223 s per image with MATLAB under Windows XP on an Intel Core 2 Duo CPU E7500 @2.93 GHz. The proposed algorithm has the potential to achieve real-time measurement under Visual Studio.
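    The agreement statistics reported above (mean difference ± SD against ground truth, plus a correlation coefficient) can be computed as follows; the simulated IMT values are illustrative stand-ins, not the paper's measurements:

    ```python
    import numpy as np

    def agreement(auto_mm: np.ndarray, manual_mm: np.ndarray):
        """Mean difference, SD of differences, and Pearson correlation
        between automatic segmentation and manual ground truth."""
        diff = auto_mm - manual_mm
        r = np.corrcoef(auto_mm, manual_mm)[0, 1]
        return diff.mean(), diff.std(ddof=1), r

    # Synthetic IMT values (mm) standing in for the 100-image data set:
    # the automatic method is simulated with a small bias and noise.
    rng = np.random.default_rng(1)
    manual = rng.uniform(0.5, 1.2, size=100)
    auto = manual + rng.normal(0.02, 0.035, size=100)

    bias, sd, r = agreement(auto, manual)
    print(round(bias, 3), round(sd, 3), round(r, 3))
    ```
    
    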

  2. Improved time-dependent harmonic oscillator method for vibrationally inelastic collisions

    International Nuclear Information System (INIS)

    DePristo, A.E.

    1985-01-01

    A quantal solution to vibrationally inelastic collisions is presented based upon a linear expansion of the interaction potential around the time-dependent classical positions of all translational and vibrational degrees of freedom. The full time-dependent wave function is a product of a Gaussian translational wave packet and a multidimensional harmonic oscillator wave function, both centered around the appropriate classical position variables. The computational requirements are small since the initial vibrational coordinates are the equilibrium values in the classical trajectory (i.e., phase space sampling does not occur). Different choices of the initial width of the translational wave packet and the initial classical translational momenta are possible, and two combinations are investigated. The first involves setting the initial classical momenta equal to the quantal expectation value, and varying the width to satisfy normalization of the transition probability matrix. The second involves adjusting the initial classical momenta to ensure detailed balancing for each set of transitions, i→f and f→i, and varying the width to satisfy normalization. This choice illustrates the origin of the empirical correction of using the arithmetic average momenta as the initial classical momenta in the forced oscillator approximation. Both methods are tested for the collinear collision systems CO2-(He, Ne), and are found to be accurate except for near-resonant vibration-vibration exchange at low initial kinetic energies.

  3. Space-time wind speed forecasting for improved power system dispatch

    KAUST Repository

    Zhu, Xinxin

    2014-02-27

    To support large-scale integration of wind power into electric energy systems, state-of-the-art wind speed forecasting methods should be able to provide accurate and adequate information to enable efficient, reliable, and cost-effective scheduling of wind power. Here, we incorporate space-time wind forecasts into electric power system scheduling. First, we propose a modified regime-switching, space-time wind speed forecasting model that allows the forecast regimes to vary with the dominant wind direction and with the seasons, hence avoiding a subjective choice of regimes. Then, results from the wind forecasts are incorporated into a power system economic dispatch model, the cost of which is used as a loss measure of the quality of the forecast models. This, in turn, leads to cost-effective scheduling of system-wide wind generation. Potential economic benefits arise from the system-wide generation of cost savings and from the ancillary service cost savings. We illustrate the economic benefits using a test system in the northwest region of the United States. Compared with persistence and autoregressive models, our model suggests that cost savings from integration of wind power could be on the scale of tens of millions of dollars annually in regions with high wind penetration, such as Texas and the Pacific northwest. © 2014 Sociedad de Estadística e Investigación Operativa.
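    A regime-switching forecast in this spirit can be sketched minimally: select autoregressive parameters according to the dominant wind direction. The regime boundary and the (phi, mean) values below are assumptions for illustration, not the paper's fitted model:

    ```python
    def forecast_next(speed_history, direction_deg, coeffs):
        """One-step regime-switching AR(1) forecast of wind speed.

        The AR coefficient and regime mean are chosen by dominant wind
        direction -- a crude stand-in for the paper's direction- and
        season-dependent regimes. The regime boundary and the (phi, mean)
        pairs are illustrative assumptions, not fitted parameters.
        """
        regime = "westerly" if 180 <= direction_deg % 360 < 360 else "easterly"
        phi, mean = coeffs[regime]
        # AR(1): forecast reverts toward the regime mean.
        return mean + phi * (speed_history[-1] - mean)

    coeffs = {"westerly": (0.8, 9.0), "easterly": (0.6, 5.0)}
    print(forecast_next([7.0, 8.5], direction_deg=270, coeffs=coeffs))
    ```

    Letting the data select regimes by direction and season, rather than fixing them subjectively, is the modification the paper proposes over a fixed-regime model.
    
    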

  4. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    2017-02-01

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  5. Chemical dynamics in the gas phase: Time-dependent quantum mechanics of chemical reactions

    Energy Technology Data Exchange (ETDEWEB)

    Gray, S.K. [Argonne National Laboratory, IL (United States)

    1993-12-01

    A major goal of this research is to obtain an understanding of the molecular reaction dynamics of three and four atom chemical reactions using numerically accurate quantum dynamics. This work involves: (i) the development and/or improvement of accurate quantum mechanical methods for the calculation and analysis of the properties of chemical reactions (e.g., rate constants and product distributions), and (ii) the determination of accurate dynamical results for selected chemical systems, which allow one to compare directly with experiment, determine the reliability of the underlying potential energy surfaces, and test the validity of approximate theories. This research emphasizes the use of recently developed time-dependent quantum mechanical methods, i.e. wave packet methods.

  6. Improvements on Near Real Time Detection of Volcanic Ash Emissions for Emergency Monitoring with Limited Satellite Bands

    Directory of Open Access Journals (Sweden)

    Torge Steensen

    2015-03-01

    Full Text Available Quantifying volcanic ash emissions syn-eruptively is an important task for the global aviation community. However, due to the near real time nature of volcano monitoring, many parameters important for accurate ash mass estimates cannot be obtained easily. Even when using the best possible estimates of those parameters, uncertainties associated with the ash masses remain high, especially if satellite data are only available in the traditional 10.8 and 12.0 μm bands. To counteract this limitation, we developed a quantitative comparison between the ash extents in satellite and model data. The focus is a manual cloud-edge definition based on the available satellite reverse absorption (RA) data, as well as other knowledge such as pilot reports or ground-based observations, followed by application of the Volcanic Ash Retrieval to the defined subset with an RA threshold of 0 K. This manual aspect, although subject to the experience of the observer, can yield a significant improvement: it provides the ability to highlight ash that otherwise would be obscured by meteorological clouds or, by passing over different surfaces with unaccounted temperatures, might be lost entirely and thus remain undetectable by an automated satellite approach. We show comparisons to Volcanic Ash Transport and Dispersion models and outline a quantitative match as well as percentages of overestimates based on satellite or dispersion model data, which can be converted into a level of reliability for near real time volcano monitoring.

  7. NOAA's Strategy to Improve Operational Weather Prediction Outlooks at Subseasonal Time Range

    Science.gov (United States)

    Schneider, T.; Toepfer, F.; Stajner, I.; DeWitt, D.

    2017-12-01

    NOAA is planning to extend operational global numerical weather prediction to sub-seasonal time range under the auspices of its Next Generation Global Prediction System (NGGPS) and Extended Range Outlook Programs. A unification of numerical prediction capabilities for weather and subseasonal to seasonal (S2S) timescales is underway at NOAA using the Finite Volume Cubed Sphere (FV3) dynamical core as the basis for the emerging unified system. This presentation will overview NOAA's strategic planning and current activities to improve prediction at S2S time-scales that are ongoing in response to the Weather Research and Forecasting Innovation Act of 2017, Section 201. Over the short-term, NOAA seeks to improve the operational capability through improvements to its ensemble forecast system to extend its range to 30 days using the new FV3 Global Forecast System model, and by using this system to provide reforecast and re-analyses. In parallel, work is ongoing to improve NOAA's operational product suite for 30 day outlooks for temperature, precipitation and extreme weather phenomena.

  8. Accurate particle speed prediction by improved particle speed measurement and 3-dimensional particle size and shape characterization technique

    DEFF Research Database (Denmark)

    Cernuschi, Federico; Rothleitner, Christian; Clausen, Sønnik

    2017-01-01

    Accurate particle mass and velocity measurement is needed for interpreting test results in erosion tests of materials and coatings. The impact and damage of a surface is influenced by the kinetic energy of a particle, i.e. particle mass and velocity. Particle mass is usually determined with optic...

  9. Improving wheat productivity through source and timing of nitrogen fertilization

    International Nuclear Information System (INIS)

    Jan, M.T.; Khan, A.; Afridi, M.Z.; Arif, M.; Khan, M.J.; Farhatullah; Jan, D.; Saeed, M.

    2011-01-01

    Efficient nitrogen (N) fertilizer management is critical for improved production of wheat (Triticum aestivum L.) and can be achieved through the source and timing of N application. Thus, an experiment was carried out at the Research Farm of KPK Agricultural University Peshawar during 2005-06 to test the effects of source and timing of N application on yield and yield components of wheat. Nitrogen sources were ammonium (NH/sub 4/) and nitrate (NO/sub 3/) applied at the rate of 100 kg ha/sup -1/ at three different stages, i.e., at sowing (S1), tillering (S2) and boot stage (S3). Ammonium N increased yield components but did not affect the final grain yield. Split N application at sowing, tillering and boot stages increased productive tillers m/sup -2/ and thousand-grain weight, whereas grain yield was higher when N was applied at tillering and boot stages. Nitrogen fertilization increased grain yield by 20% compared to the control regardless of N application time. It was concluded from the experiment that split application of NH/sub 4/-N performed better than full dose application and/or NO/sub 3/-N for improved wheat productivity and thus is recommended for general practice in the agro-climatic conditions of Peshawar. (author)

  10. Fast and accurate edge orientation processing during object manipulation

    Science.gov (United States)

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  11. Lean-driven improvements slash wait times, drive up patient satisfaction scores.

    Science.gov (United States)

    2012-07-01

    Administrators at LifePoint Hospitals, based in Brentwood, TN, used lean manufacturing techniques to slash wait times by as much as 30 minutes and achieve double-digit increases in patient satisfaction scores in the EDs at three hospitals. In each case, front-line workers took the lead on identifying opportunities for improvement and redesigning the patient-flow process. As a result of the new efficiencies, patient volume is up by about 25% at all three hospitals. At each hospital, the improvement process began with Kaizen, a lean process that involves bringing personnel together to flow-chart the current system, identify problem areas, and redesign the process. Improvement teams found big opportunities for improvement at the front end of the flow process. Key to the approach was having a plan up front to deal with non-compliance. To sustain improvements, administrators gather and disseminate key metrics on a daily basis.

  12. Real-time optoacoustic monitoring of temperature in tissues

    International Nuclear Information System (INIS)

    Larina, Irina V; Larin, Kirill V; Esenaliev, Rinat O

    2005-01-01

    To improve the safety and efficacy of thermal therapy, it is necessary to map tissue temperature in real time with submillimetre spatial resolution. Accurate temperature maps may provide the necessary control of the boundaries of the heated regions and minimize thermal damage to surrounding normal tissues. Current imaging modalities fail to monitor tissue temperature in real time with high resolution and accuracy. We investigated a non-invasive optoacoustic method for accurate, real-time monitoring of tissue temperature during thermotherapy. In this study, we induced temperature gradients in tissue and tissue-like samples and monitored the temperature distribution using the optoacoustic technique. The fundamental harmonic of a Q-switched Nd : YAG laser (λ = 1064 nm) was used for optoacoustic wave generation and probing of tissue temperature. The tissue temperature was also monitored with a multi-sensor temperature probe inserted in the samples. Good agreement between optoacoustically measured and actual tissue temperatures was obtained. The accuracy of temperature monitoring was better than 1 °C, while the spatial resolution was about 1 mm. These data suggest that the optoacoustic technique has the potential to be used for non-invasive, real-time temperature monitoring during thermotherapy.
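    The physical basis for optoacoustic thermometry is that in aqueous tissue the Grüneisen parameter, and hence the generated signal amplitude, varies nearly linearly with temperature over physiological ranges. A hedged sketch of the resulting calibration, with an assumed slope rather than the study's calibration:

    ```python
    def temperature_from_signal(amplitude: float, amp_ref: float,
                                temp_ref_c: float = 20.0,
                                k: float = 0.02) -> float:
        """Map a normalized optoacoustic amplitude to temperature via a
        linear calibration against a reference measurement.

        k is the fractional amplitude change per degree C; the value
        0.02 and the 20 C reference are illustrative assumptions, not
        the calibration used in the study.
        """
        return temp_ref_c + (amplitude / amp_ref - 1.0) / k

    # A 20% amplitude increase over the reference maps to +10 C.
    print(temperature_from_signal(amplitude=1.2, amp_ref=1.0))
    ```
    
    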

  13. Measuring Quality Improvement in Acute Ischemic Stroke Care: Interrupted Time Series Analysis of Door-to-Needle Time

    Directory of Open Access Journals (Sweden)

    Anne Margreet van Dishoeck

    2014-06-01

    Full Text Available Background: In patients with acute ischemic stroke, early treatment with recombinant tissue plasminogen activator (rtPA) improves functional outcome by effectively reducing disability and dependency. Timely thrombolysis, within 1 h, is a vital aspect of acute stroke treatment, and is reflected in the widely used performance indicator ‘door-to-needle time' (DNT). DNT measures the time from the moment the patient enters the emergency department until he/she receives intravenous rtPA. The purpose of the study was to measure quality improvement from the first implementation of thrombolysis in stroke patients in a university hospital in the Netherlands. We further aimed to identify specific interventions that affect DNT. Methods: We included all patients with acute ischemic stroke consecutively admitted to a large university hospital in the Netherlands between January 2006 and December 2012, and focused on those treated with thrombolytic therapy on admission. Data were collected routinely for research purposes and internal quality measurement (the Erasmus Stroke Study). We used a retrospective interrupted time series design to study the trend in DNT, analyzed by means of segmented regression. Results: Between January 2006 and December 2012, 1,703 patients with ischemic stroke were admitted and 262 (17%) were treated with rtPA. Patients treated with thrombolysis were on average 63 years old at the time of the stroke and 52% were male. Mean age (p = 0.58) and sex distribution (p = 0.98) did not change over the years. The proportion treated with thrombolysis increased from 5% in 2006 to 22% in 2012. In 2006, none of the patients were treated within 1 h. In 2012, this had increased to 81%. In a logistic regression analysis, this trend was significant (OR 1.6 per year, CI 1.4-1.8). The median DNT was reduced from 75 min in 2006 to 45 min in 2012. Conclusion and Implications: The DNT steadily improved from the first implementation of thrombolysis.
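    The segmented regression used above can be sketched numerically. Everything below (the monthly DNT series, the intervention month, and the coefficient values) is hypothetical and serves only to illustrate the standard interrupted-time-series design matrix: baseline level, baseline trend, level change at the intervention, and trend change after it.

```python
# Segmented (interrupted time series) regression sketch with hypothetical data.
# Model: y = b0 + b1*t + b2*level_after + b3*trend_after + e
def fit_ols(X, y):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    b = [0.0] * k                              # back substitution
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c] for c in range(r + 1, k))) / xtx[r][r]
    return b

months = list(range(24))                       # 24 months of observation
t0 = 12                                        # hypothetical intervention month
# Synthetic DNT series: slow drift before, a drop plus steeper decline after
dnt = [75.0 - 0.1 * t for t in range(12)] + [65.0 - 1.5 * (t - 12) for t in range(12, 24)]
X = [[1.0, t, 1.0 if t >= t0 else 0.0, float(t - t0) if t >= t0 else 0.0] for t in months]
b0, b1, level_change, trend_change = fit_ols(X, dnt)
print(round(level_change, 1), round(trend_change, 2))  # change in level and slope at t0
```

    On this synthetic series the fit recovers the built-in jump and slope change exactly, which is the point of the design: the intervention's effect is read directly off the last two coefficients.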

  14. Is identification of smoking, risky alcohol consumption and overweight and obesity by General Practitioners improving? A comparison over time.

    Science.gov (United States)

    Bryant, Jamie; Yoong, Sze Lin; Sanson-Fisher, Rob; Mazza, Danielle; Carey, Mariko; Walsh, Justin; Bisquera, Alessandra

    2015-12-01

    Detection of lifestyle risk factors by GPs is the first step required for intervention. Despite significant investment in preventive health care in general practice, little is known about whether GP detection of lifestyle risk factors has improved over time. To examine whether the sensitivity and specificity of GP detection of smoking, risky alcohol consumption and overweight and obesity have increased in patients presenting to see their GP, results from four Australian studies conducted between 1982 and 2011 were compared. Demographic characteristics of patient and GP samples and the prevalence, sensitivity and specificity of detection of each risk factor were extracted from published studies. Differences between GP and patient sample characteristics were examined. To identify trends over time in the prevalence of risk factors and the sensitivity and specificity of detection across studies, the Cochran-Armitage test for trend was calculated for each risk factor for the overall sample and by male and female subgroups. There were no statistically significant changes in the sensitivity of GP detection of smoking or overweight or obesity over time. Specificity of detection of smoking increased from 64.7% to 98%. Despite investment to increase GP detection of and intervention for lifestyle risk factors, accurate detection of smoking, risky alcohol consumption and overweight and obesity occurs for less than two-thirds of all patients. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. An Accurate Technique for Calculation of Radiation From Printed Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik

    2011-01-01

    The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured...

  16. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.

    2013-01-01

    One of the prime objects of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  17. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms over a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  18. Improving optimal control of grid-connected lithium-ion batteries through more accurate battery and degradation modelling

    Science.gov (United States)

    Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.

    2018-03-01

    The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11% to 5% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.
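    The trade-off the paper quantifies, namely that accounting for degradation changes the optimal arbitrage schedule, can be illustrated with a deliberately tiny toy problem. The prices, battery size, and per-MWh "degradation cost" below are invented; the paper uses physical electrochemical degradation models rather than a simple throughput penalty, so this is only a sketch of the idea.

```python
# Toy degradation-aware price arbitrage: brute-force the hourly schedule of a
# small battery (2 MWh capacity, 1 MWh per hour charge/discharge) over
# hypothetical day-ahead prices, charging a per-MWh throughput cost as a
# crude stand-in for degradation.
import itertools

prices = [30, 20, 60, 25, 70, 40]          # EUR/MWh, invented for illustration

def best_schedule(deg_cost):
    """Exhaustively search all schedules; -1 = charge, 0 = idle, 1 = discharge."""
    best_profit, best_plan = float("-inf"), None
    for plan in itertools.product((-1, 0, 1), repeat=len(prices)):
        soc, profit, feasible = 0, 0.0, True
        for action, price in zip(plan, prices):
            soc -= action                  # discharging lowers state of charge
            if not 0 <= soc <= 2:          # respect capacity limits
                feasible = False
                break
            profit += action * price - abs(action) * deg_cost
        if feasible and soc == 0 and profit > best_profit:
            best_profit, best_plan = profit, plan
    return best_profit, best_plan

print(best_schedule(0.0)[0], best_schedule(15.0)[0])
```

    With no degradation cost the optimizer cycles the battery on every profitable spread; with a cost per MWh of throughput it drops the marginal trades, mirroring the paper's finding that degradation-aware control trades less but nets more over the battery's life.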

  19. Musically cued gait-training improves both perceptual and motor timing in Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Charles-Etienne Benoit

    2014-07-01

    Full Text Available It is well established that auditory cueing improves gait in patients with Idiopathic Parkinson’s Disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a four-week music training program with rhythmic auditory cueing. Long-term effects were assessed one month after the end of the training. Perceptual and motor timing was evaluated with the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA) and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients’ performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts). The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing.

  20. Simple, fast and accurate two-diode model for photovoltaic modules

    Energy Technology Data Exchange (ETDEWEB)

    Ishaque, Kashif; Salam, Zainal; Taheri, Hamed [Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM 81310, Skudai, Johor Bahru (Malaysia)

    2011-02-15

    This paper proposes an improved modeling approach for the two-diode model of a photovoltaic (PV) module. The main contribution of this work is the simplification of the current equation, in which only four parameters are required, compared to six or more in previously developed two-diode models. Furthermore, the values of the series and parallel resistances are computed using a simple and fast iterative method. To validate the accuracy of the proposed model, six PV modules of different types (multi-crystalline, mono-crystalline and thin-film) from various manufacturers are tested. The performance of the model is evaluated against the popular single-diode models. It is found that the proposed model is superior when subjected to irradiance and temperature variations. In particular, the model matches very accurately at all important points of the I-V curve, i.e. the peak power, short-circuit current and open-circuit voltage. The modeling method is useful for PV power converter designers and circuit simulator developers who require a simple, fast yet accurate model for the PV module. (author)
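    The two-diode equation is implicit in the terminal current, which is why an iterative solution is needed at all. The sketch below solves the standard two-diode relation by plain bisection; the module parameters are hypothetical (loosely in the range of a ~200 W multi-crystalline module) and this is not the paper's four-parameter simplification.

```python
import math

# Standard two-diode PV model (illustrative parameters, not from the paper):
#   I = Ipv - I01*(exp(Vd/(a1*Vt)) - 1) - I02*(exp(Vd/(a2*Vt)) - 1) - Vd/Rp
# with the diode voltage Vd = V + I*Rs, making the equation implicit in I.
IPV, I01, I02 = 8.21, 4.2e-10, 4.2e-10    # photocurrent, saturation currents [A]
A1, A2 = 1.0, 1.2                         # diode ideality factors
RS, RP = 0.45, 160.0                      # series / parallel resistance [ohm]
VT = 54 * 0.0257                          # thermal voltage x 54 cells in series [V]

def current(v):
    """Terminal current at terminal voltage v, found by bisection."""
    def residual(i):
        vd = v + i * RS
        return (IPV
                - I01 * (math.exp(vd / (A1 * VT)) - 1.0)
                - I02 * (math.exp(vd / (A2 * VT)) - 1.0)
                - vd / RP
                - i)
    lo, hi = -5.0, IPV                    # residual decreases in i: root is bracketed
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Near short circuit the current approaches Ipv; near open circuit it falls to ~0.
print(round(current(0.0), 2), round(current(32.9), 2))
```

    Bisection is slower than the fixed-point iteration typically used in practice, but it is unconditionally convergent here, which makes the sketch robust across the whole I-V curve.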

  1. BLESS 2: accurate, memory-efficient and fast error correction method.

    Science.gov (United States)

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when it was executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with a pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases in a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, though the step length is much less than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable in dynamical systems with hysteresis nonlinearities.
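    The key difficulty named above, locating the switching points introduced by the hysteresis, can be illustrated on a toy one-dimensional system: integrate with an ordinary step, and when the event function changes sign inside a step, bisect on the step size until the state lands on the switching surface. The dynamics and threshold below are invented stand-ins, not the aeroelastic model or the paper's precise-integration scheme.

```python
# Switching-point location by bisection inside an integration step
# (toy scalar system; the predictor-corrector idea is the same in spirit).
def rk4_step(f, x, h):
    k1 = f(x); k2 = f(x + 0.5*h*k1); k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
    return x + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

def locate_switch(f, g, x, h):
    """Advance one step; if the event function g changes sign, bisect on the
    step size to pin down the switching time within the step."""
    x_new = rk4_step(f, x, h)
    if g(x) * g(x_new) > 0:
        return h, x_new, False            # no switch inside this step
    lo, hi = 0.0, h
    for _ in range(100):                  # bisection on the partial step length
        mid = 0.5 * (lo + hi)
        if g(x) * g(rk4_step(f, x, mid)) > 0:
            lo = mid
        else:
            hi = mid
    return hi, rk4_step(f, x, hi), True

f = lambda x: 1.0 - 0.1 * x               # toy smooth dynamics, x(t) = 10(1 - e^-0.1t)
g = lambda x: x - 5.0                     # switching surface at x = 5
t, x = 0.0, 0.0
while True:
    dt, x, switched = locate_switch(f, g, x, 0.5)
    t += dt
    if switched:
        break
print(round(t, 4), round(x, 6))           # crossing time ~10*ln(2), state ~5
```

    The analytic crossing time for this toy system is 10 ln 2 ≈ 6.9315, so the bisection recovers the switching point far more accurately than simply accepting the full step that straddles it, which is exactly why a naive fixed-step Runge-Kutta can go astray on hysteretic systems.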

  3. Implementation of a Quality Improvement Initiative: Improved Congenital Muscular Torticollis Outcomes in a Large Hospital Setting.

    Science.gov (United States)

    Strenk, Mariann L; Kiger, Michelle; Hawke, Jesse L; Mischnick, Amy; Quatman-Yates, Catherine

    2017-06-01

    The American Physical Therapy Association (APTA) published a guideline for congenital muscular torticollis (CMT) in 2013. Our division adopted the guideline as the institutional practice standard and engaged in a quality improvement (QI) initiative to increase the percentage of patients who achieved resolution of CMT within 6 months of evaluation. The aims of this report are to describe the QI activities conducted to improve patient outcomes and discuss the results and implications for other institutions and patient populations. This was a quality improvement study. In alignment with the Chronic Care Model and Model of Improvement, an aim and operationally defined key outcome and process measures were established. Interventions were tested using Plan-Do-Study-Act cycles. A CMT registry was established to store and manage data extracted from the electronic record over the course of testing. Statistical process control charts were used to monitor progress over time. The QI initiative resulted in an increase in the percentage of patients who achieved full resolution of CMT within a 6-month episode of care from 42% to 61% over an 18-month period. Themes that emerged as key drivers of improvement included: (1) timely, optimal access to care, (2) effective audit and clinician feedback, and (3) accurate, timely documentation. The initiative took place at a single institution with a supportive culture and strong QI resources, which may limit direct translation of interventions and findings to other institutions and patient populations. Improvement science methodologies provided the tools and structure to improve division-wide workflow and increase consistency in the implementation of the APTA CMT guideline. In doing so, significant CMT population outcome improvements were achieved. © 2017 American Physical Therapy Association

  4. Improved Thévenin equivalent methods for real-time voltage stability assessment

    DEFF Research Database (Denmark)

    Perez, Angel; Jóhannsson, Hjörtur; Østergaard, Jacob

    2016-01-01

    An improved Thévenin equivalent method for real-time voltage stability assessment that uses wide-area information from synchrophasors is proposed. The improvements are a better modeling of the limited synchronous generators, and a processing that anticipates the effect of field current limiters, before the latter are activated. Several study cases using detailed dynamic simulations of the Nordic test system have been used to assess the performance of the proposed improvements. Their effectiveness is analyzed and, based on the results, their possible application in combination...
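    As a baseline for what the paper improves upon, a classical Thévenin equivalent can be estimated from just two phasor snapshots at a load bus, assuming the equivalent is constant between them. The per-unit values below are invented purely for a self-consistency check; the paper's contribution lies in handling generator limits that break this constancy assumption.

```python
# Classical two-snapshot Thevenin estimation from synchrophasor data.
# With E = V + Zth*I (current flowing from the grid into the load) and
# E, Zth constant between snapshots:
#   Zth = (V1 - V2) / (I2 - I1),   Eth = V1 + Zth*I1
def thevenin_from_phasors(v1, i1, v2, i2):
    zth = (v1 - v2) / (i2 - i1)
    eth = v1 + zth * i1
    return eth, zth

# Synthetic check: generate two snapshots from a known equivalent (per unit).
E, Z = 1.02 + 0.0j, 0.02 + 0.2j                 # assumed source voltage and impedance
i1 = E / (Z + (1.0 + 0.5j)); v1 = E - Z * i1    # load condition 1
i2 = E / (Z + (0.8 + 0.4j)); v2 = E - Z * i2    # load condition 2
eth, zth = thevenin_from_phasors(v1, i1, v2, i2)
print(abs(eth - E), abs(zth - Z))               # both ~0: equivalent recovered
```

    In real measurements the two snapshots are noisy and the source side moves, which is why practical schemes use many samples and, as in the paper, explicit models of machine limits.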

  5. Averaging Gone Wrong: Using Time-Aware Analyses to Better Understand Behavior

    OpenAIRE

    Barbosa, Samuel; Cosley, Dan; Sharma, Amit; Cesar-Jr, Roberto M.

    2016-01-01

    Online communities provide a fertile ground for analyzing people's behavior and improving our understanding of social processes. Because both people and communities change over time, we argue that analyses of these communities that take time into account will lead to deeper and more accurate results. Using Reddit as an example, we study the evolution of users based on comment and submission data from 2007 to 2014. Even using one of the simplest temporal differences between users---yearly cohorts...

  6. Validation of a method for accurate and highly reproducible quantification of brain dopamine transporter SPECT studies

    DEFF Research Database (Denmark)

    Jensen, Peter S; Ziebell, Morten; Skouboe, Glenna

    2011-01-01

    In nuclear medicine brain imaging, it is important to delineate regions of interest (ROIs) so that the outcome is both accurate and reproducible. The purpose of this study was to validate a new time-saving algorithm (DATquan) for accurate and reproducible quantification of the striatal dopamine transporter (DAT) with appropriate radioligands and SPECT and without the need for structural brain scanning...

  7. How accurately can 21cm tomography constrain cosmology?

    Science.gov (United States)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20: the sensitivity to noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment’s sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk ≈ 0.0002 and Δmν ≈ 0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  8. Accurate measurement of indoor radon concentration using a low-effective volume radon monitor

    International Nuclear Information System (INIS)

    Tanaka, Aya; Minami, Nodoka; Mukai, Takahiro; Yasuoka, Yumi; Iimoto, Takeshi; Omori, Yasutaka; Nagahama, Hiroyuki; Muto, Jun

    2017-01-01

    AlphaGUARD is a low-effective volume detector and one of the most popular portable radon monitors currently available. This study investigated whether AlphaGUARD can accurately measure variable indoor radon levels. The consistency of the radon-concentration data obtained by AlphaGUARD was evaluated against simultaneous measurements by two other monitors (each ∼10 times more sensitive than AlphaGUARD). For accurate measurement of radon concentration with AlphaGUARD, we found that at least 500 net counts were required, corresponding to a relative percent difference below 25%. AlphaGUARD can provide accurate measurements of radon concentration for the world average level (∼50 Bq m⁻³) and the reference level for workplaces (1000 Bq m⁻³), using data integrated over at least 3 h and 10 min, respectively. (authors)
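    The acceptance criterion described above (at least 500 net counts, relative percent difference under 25%) is easy to write out directly. The function and argument names below are illustrative, not from the study; the thresholds are the ones quoted in the abstract and should be treated as study-specific.

```python
# Acceptance check for a monitor reading against a more sensitive reference,
# using the relative percent difference (RPD) plus a minimum-counts criterion.
def relative_percent_difference(a, b):
    """RPD between two measurements, as a percentage of their mean."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

def measurement_acceptable(net_counts, monitor_bq, reference_bq,
                           min_counts=500, max_rpd=25.0):
    # Thresholds follow the abstract; treat them as study-specific defaults.
    return (net_counts >= min_counts
            and relative_percent_difference(monitor_bq, reference_bq) <= max_rpd)

print(measurement_acceptable(620, 52.0, 48.0))   # enough counts, ~8% RPD
print(measurement_acceptable(300, 52.0, 48.0))   # too few net counts
```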

  9. Real time information management for improving productivity in metallurgical complexes

    International Nuclear Information System (INIS)

    Bascur, O.A.; Kennedy, J.P.

    1999-01-01

    Applying the latest information technologies in industrial plants has become a serious challenge to management and technical teams. The availability of real-time and historical operations information to identify the most critical parts of the processing system, from a mechanical-integrity standpoint, is a must for global plant optimization. Expanded use of plant information on the desktop is a standard tool for revenue improvement, cost reduction, and adherence to production constraints. The industrial component desktop supports access to information for process troubleshooting, continuous improvement and innovation by plant and staff personnel. Collaboration between groups enables the implementation of an overall process effectiveness index based on losses due to equipment availability, production and product quality. The key to designing technology is to use the Internet-based technologies created by Microsoft for its marketplace: office automation and the Web. Time-derived variables are used for process analysis, troubleshooting and performance assessment. Connectivity between metallurgical complexes, research centers and their business systems has become a reality. Two case studies of large integrated mining/metallurgical complexes are highlighted. (author)

  10. Accurate and efficient spin integration for particle accelerators

    International Nuclear Information System (INIS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-01-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.

  11. Accurate and efficient spin integration for particle accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Abell, Dan T.; Meiser, Dominic [Tech-X Corporation, Boulder, CO (United States); Ranjbar, Vahid H. [Brookhaven National Laboratory, Upton, NY (United States); Barber, Desmond P. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2015-01-15

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.

  12. Time constraints in the treatment of nuclear transients - MONSTREAV code

    International Nuclear Information System (INIS)

    Amorim, E.S. do; Sudano, J.P.; Moura Neto, C. de; Ferreira, W.J.

    1980-08-01

    An improved approach to the spatial dynamics problem is described. This approach allows the factorization of the flux into amplitude and shape functions. Boundary conditions are treated as time-dependent functions and the coupling between functions is treated with the improved quasistatic approximation (1,2). A burnup feedback has been included, allowing extreme excursions in fast reactors to be described very accurately, even for the treatment of nonlinear problems. A benchmark analysis shows that the improved method is fully sufficient for fast reactor dynamics calculations. Computation modules embodying the improved model of neutronic behaviour will be integrated with the rest of the fast reactor dynamics analysis system now under development at EAV-IAE. (Author) [pt

  13. Improved process control through real-time measurement of mineral content

    Energy Technology Data Exchange (ETDEWEB)

    Turler, Daniel; Karaca, Murat; Davis, William B.; Giauque, Robert D.; Hopkins, Deborah

    2001-11-02

    In a highly collaborative research and development project with mining and university partners, sensors and data-analysis tools are being developed for rock-mass characterization and real-time measurement of mineral content. Determining mineralogy prior to mucking in an open-pit mine is important for routing the material to the appropriate processing stream. A possible alternative to lab assay of dust and cuttings obtained from drill holes is continuous on-line sampling and real-time x-ray fluorescence (XRF) spectroscopy. Results presented demonstrate that statistical analyses combined with XRF data can be employed to identify minerals and, possibly, different rock types. The objective is to create a detailed three-dimensional mineralogical map in real time that would improve downstream process efficiency.

  14. An improved real time superresolution FPGA system

    Science.gov (United States)

    Lakshmi Narasimha, Pramod; Mudigoudar, Basavaraj; Yue, Zhanfeng; Topiwala, Pankaj

    2009-05-01

    In numerous computer vision applications, enhancing the quality and resolution of captured video can be critical. Acquired video is often grainy and low quality due to motion, transmission bottlenecks, etc. Postprocessing can enhance it. Superresolution greatly decreases camera jitter to deliver a smooth, stabilized, high quality video. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient based technique is used to register the frames at the sub-pixel level. Once we get the high resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The algorithm was first tested in software and migrated to hardware, to achieve 320x240 -> 1280x960, about 30 fps, a stunning superresolution by 16X in total pixels. Various input parameters, such as size of input image, enlarging factor and the number of nearest neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in Matlab Simulink as well as in FPGA hardware, which gives us a fine balance between the number of bits and performance. The proposed system is robust and highly efficient. We have shown the performance improvement of the hardware superresolution over the software version (C code).

  15. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can forecast traffic state trends over a certain period in the future; from the perspective of dynamic decision making, this is far more important than knowing only the current traffic condition. Thus, in this paper, an accurate multistep traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models.
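    The multistep formulation amounts to windowing the series into lagged input vectors with one target per horizon, so that a separate regressor (an SVR, in the paper's setting) can be trained for each step ahead. The windowing itself can be sketched as follows; the traffic counts and window sizes are hypothetical.

```python
# Dataset construction for multistep traffic flow prediction: each input
# vector holds the last n_lags observed volumes, and each sample carries
# n_steps future values as targets (one regressor per horizon in practice).
def make_multistep_dataset(series, n_lags, n_steps):
    X, Y = [], []
    for t in range(n_lags, len(series) - n_steps + 1):
        X.append(series[t - n_lags:t])      # lagged volumes as features
        Y.append(series[t:t + n_steps])     # next n_steps values as targets
    return X, Y

volumes = [120, 135, 150, 160, 158, 140, 130, 125]   # hypothetical counts/interval
X, Y = make_multistep_dataset(volumes, n_lags=3, n_steps=2)
print(X[0], Y[0])   # first sample: 3 lags mapped to 2-step-ahead targets
```

    Each column of Y would then be fitted by its own SVR; this "direct" strategy avoids the error accumulation of recursively feeding one-step predictions back in.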

  16. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  17. Improved Criteria on Delay-Dependent Stability for Discrete-Time Neural Networks with Interval Time-Varying Delays

    Directory of Open Access Journals (Sweden)

    O. M. Kwon

    2012-01-01

    Full Text Available The purpose of this paper is to investigate the delay-dependent stability analysis for discrete-time neural networks with interval time-varying delays. Based on the Lyapunov method, improved delay-dependent criteria for the stability of the networks are derived in terms of linear matrix inequalities (LMIs) by constructing a suitable Lyapunov-Krasovskii functional and utilizing the reciprocally convex approach. Also, a new activation condition which has not been considered in the literature is proposed and utilized for the derivation of stability criteria. Two numerical examples are given to illustrate the effectiveness of the proposed method.

  18. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional.

    Science.gov (United States)

    Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  19. Improving prehospital trauma care in Rwanda through continuous quality improvement: an interrupted time series analysis.

    Science.gov (United States)

    Scott, John W; Nyinawankusi, Jeanne D'Arc; Enumah, Samuel; Maine, Rebecca; Uwitonze, Eric; Hu, Yihan; Kabagema, Ignace; Byiringiro, Jean Claude; Riviello, Robert; Jayaraman, Sudha

    2017-07-01

    Injury is a major cause of premature death and disability in East Africa, and high-quality pre-hospital care is essential for optimal trauma outcomes. The Rwandan pre-hospital emergency care service (SAMU) uses an electronic database to evaluate and optimize pre-hospital care through a continuous quality improvement programme (CQIP), begun in March 2014. The SAMU database was used to assess pre-hospital quality metrics including supplementary oxygen for hypoxia (O2), intravenous fluids for hypotension (IVF), cervical collar placement for head injuries (c-collar), and either splinting (splint) or administration of pain medications (pain) for long bone fractures. Targets of >90% were set for each metric, and daily team meetings and monthly feedback sessions were implemented to address opportunities for improvement. These five pre-hospital quality metrics were assessed monthly before and after implementation of the CQIP. Met and unmet needs for O2, IVF, and c-collar were combined into a summative monthly SAMU Trauma Quality Score (STQ score). An interrupted time series linear regression model compared the STQ score during the 14 months before CQIP implementation to the first 14 months after. During the 29-month study period, 3,822 patients met study criteria, and 1,028 needed one or more of the five studied interventions. All five endpoints increased significantly between the pre-CQIP and post-CQIP periods, indicating improved pre-hospital trauma care in Rwanda. This programme may be used as an example for additional efforts engaging frontline staff with real-time data feedback in order to rapidly translate data collection efforts into improved care for the injured in a resource-limited setting. Copyright © 2017 Elsevier Ltd. All rights reserved.
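    The interrupted time series analysis used here is segmented linear regression with level-change and slope-change terms at the intervention date. A minimal sketch on synthetic monthly scores (the numbers are illustrative, not SAMU data):

```python
import numpy as np

# Synthetic monthly quality scores: 14 pre- and 14 post-intervention months,
# with a level jump at implementation (illustrative values only).
rng = np.random.default_rng(0)
t = np.arange(28)
post = (t >= 14).astype(float)            # 1 after the intervention starts
t_post = post * (t - 14)                  # months elapsed since intervention
score = 60 + 0.2 * t + 15 * post + 0.5 * t_post + rng.normal(0, 1.5, 28)

# Segmented regression: score = b0 + b1*t + b2*level_change + b3*slope_change
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"estimated level change at implementation: {beta[2]:.1f} points")
```

With real data one would also model autocorrelation (e.g. Newey-West errors), which plain least squares ignores.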

  20. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B; Wahl, R L

    2011-01-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes a uniform activity distribution in each VOI, which makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
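    For contrast with the model-based QPlanar/EQPlanar estimators, the conventional conjugate-view geometric-mean estimate that the paper improves on can be sketched in a few lines. The helper below is hypothetical, not code from the study:

```python
import numpy as np

def geometric_mean_activity(counts_ant, counts_post, mu, thickness_cm, cal):
    """Conventional conjugate-view activity estimate (MBq).

    counts_ant/post : background-corrected count rates (counts/s)
    mu              : linear attenuation coefficient (1/cm)
    thickness_cm    : patient thickness along the projection axis
    cal             : system sensitivity (counts/s per MBq, attenuation-free)
    """
    return np.sqrt(counts_ant * counts_post) * np.exp(mu * thickness_cm / 2.0) / cal

# Consistency check with a synthetic source at depth x in a slab of thickness d:
mu, d, cal, A_true = 0.11, 20.0, 100.0, 5.0
x = 7.0                                   # source depth from anterior surface
I_ant = cal * A_true * np.exp(-mu * x)
I_post = cal * A_true * np.exp(-mu * (d - x))
print(geometric_mean_activity(I_ant, I_post, mu, d, cal))  # recovers A_true = 5.0
```

The depth x cancels in the geometric mean, which is exactly why the method needs only the total thickness d; its weaknesses (overlap, scatter, nonuniform activity) are what QPlanar/EQPlanar address.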

  1. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    Directory of Open Access Journals (Sweden)

    Pricila da Silva Cunha

    2014-01-01

    Full Text Available Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans, accounting for 0.5–0.7% of all cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we chose two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescence in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we were able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.
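    Relative copy number in qPCR assays of this kind is conventionally computed with the 2^-ΔΔCt method, where a heterozygous deletion gives a ratio near 0.5 relative to a two-copy control. A small sketch with hypothetical Ct values (not data from the study):

```python
def relative_copy_number(ct_target_pat, ct_ref_pat, ct_target_ctrl, ct_ref_ctrl):
    """Relative target copy number by the 2^-ddCt method (assumes ~100% PCR efficiency)."""
    ddct = (ct_target_pat - ct_ref_pat) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: one copy of the target (heterozygous deletion)
# crosses threshold ~1 cycle later than the two-copy control.
ratio = relative_copy_number(26.0, 24.0, 25.0, 24.0)
print(ratio)   # → 0.5, consistent with a heterozygous deletion
```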

  2. Diurnal patterns of salivary cortisol and DHEA using a novel collection device: electronic monitoring confirms accurate recording of collection time using this device.

    Science.gov (United States)

    Laudenslager, Mark L; Calderone, Jacqueline; Philips, Sam; Natvig, Crystal; Carlson, Nichole E

    2013-09-01

    The accurate indication of saliva collection time is important for defining the diurnal decline in salivary cortisol as well as characterizing the cortisol awakening response. We tested a convenient and novel device for collecting saliva on strips of filter paper in a specially constructed booklet for determination of both cortisol and DHEA. In the present study, 31 healthy adults (mean age 43.5 years) collected saliva samples four times a day on three consecutive days using filter paper collection devices (Saliva Procurement and Integrated Testing (SPIT) booklets), which were kept during the collection period in a large plastic bottle with an electronic monitoring cap. Subjects were asked to collect saliva samples at awakening, 30 min after awakening, before lunch and 600 min after awakening. The time of awakening and the time of collection before lunch were allowed to vary with each subject's schedule. A reliable relationship was observed between the time recorded by the subject directly on the booklet and the time recorded by the electronic monitoring device (n = 286 observations; r² = 0.98). However, subjects did not consistently collect the saliva samples at the two specific times requested, 30 and 600 min after awakening. Both cortisol and DHEA revealed diurnal declines. In spite of variance in collection times at 30 min and 600 min after awakening, the slope of the diurnal decline in both salivary cortisol and DHEA was similar when we compared collection tolerances of ±7.5 and ±15 min for each steroid. These unique collection booklets proved to be a reliable method for recording collection times by subjects as well as for estimating diurnal salivary cortisol and DHEA patterns. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Acute acetaminophen (paracetamol) ingestion improves time to exhaustion during exercise in the heat.

    Science.gov (United States)

    Mauger, Alexis R; Taylor, Lee; Harding, Christopher; Wright, Benjamin; Foster, Josh; Castle, Paul C

    2014-01-01

    Acetaminophen (paracetamol) is a commonly used over-the-counter analgesic and antipyretic and has previously been shown to improve exercise performance through a reduction in perceived pain. This study sought to establish whether its antipyretic action may also improve exercise capacity in the heat by moderating the increase in core temperature. On separate days, 11 recreationally active participants completed two experimental time-to-exhaustion trials on a cycle ergometer in hot conditions (30°C, 50% relative humidity) after ingesting a placebo control or an oral dose of acetaminophen in a randomized, double-blind design. Following acetaminophen ingestion, participants cycled for a significantly longer period of time (acetaminophen, 23 ± 15 min versus placebo, 19 ± 13 min; P = 0.005; 95% confidence interval = 90-379 s), and this was accompanied by significantly lower core (-0.15°C), skin (-0.47°C) and body temperatures (-0.19°C; P < 0.05). This is the first study to demonstrate that an acute dose of acetaminophen can improve cycling capacity in hot conditions, and that this may be due to the observed reductions in core, skin and body temperature and in the subjective perception of thermal comfort. These findings suggest that acetaminophen may reduce the thermoregulatory strain elicited by exercise, thus improving time to exhaustion.

  4. An improved method to accurately calibrate the gantry angle indicators of the radiotherapy linear accelerators

    International Nuclear Information System (INIS)

    Chang Liyun; Ho, S.-Y.; Du, Y.-C.; Lin, C.-M.; Chen Tainsong

    2007-01-01

    The calibration of the gantry angle indicator is an important and basic quality assurance (QA) item for a radiotherapy linear accelerator. In this study, we propose a new and practical method which uses only a digital level, V-film, and common solid phantoms. From a star shot alone, we can accurately calculate the true gantry angle according to the geometry of the film setup. The results on our machine showed that the gantry angle was shifted by -0.11 deg. compared with the digital indicator, with a standard deviation within 0.05 deg. This method can also be used for the simulator. In conclusion, the proposed method could be adopted as an annual mechanical QA item for the accelerator.
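    The angle recovery itself reduces to planar geometry: digitize two points along a beam's central axis on the star-shot film and take the arctangent. A hypothetical sketch of that step only; the paper's full setup also references the film orientation to a digital level:

```python
import math

def beam_angle_deg(p_center, p_edge):
    """Angle of a digitized star-shot beam axis w.r.t. the film's horizontal,
    from two points on the beam's central line (film coordinates in mm)."""
    dx = p_edge[0] - p_center[0]
    dy = p_edge[1] - p_center[1]
    return math.degrees(math.atan2(dy, dx))

# A beam strike digitized at the star centre and ~50 mm out along its axis;
# an exactly vertical strike would read 90 degrees.
indicated = 90.0                                        # gantry indicator reading
measured = beam_angle_deg((0.0, 0.0), (0.15, 49.9998))  # nearly vertical line
print(round(measured - indicated, 2))                   # → -0.17 (indicator offset)
```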

  5. An improved procedure to accurately assess the variability of the exposure to electromagnetic radiation emitted by GSM base station antennas

    International Nuclear Information System (INIS)

    Bechet, Paul; Miclaus, Simona

    2013-01-01

    Long-term human exposure around Global System for Mobile Communications (GSM) base station antennas has not yet been precisely established; this is of interest from human health and epidemiological perspectives. Actual exposure is difficult to assess accurately, mainly because there is a lack of technical information directly from the GSM operators. The in situ measurement standards available at present provide only a worst-case prediction method; the present work goes beyond this and proposes a methodology that, without the need for data from operators, allows real exposure to be expressed reliably and with greater accuracy than all other methods proposed to date. The method is based on dual measurements of the signal strengths in the frequency domain and the time domain and takes into consideration the instantaneous traffic in GSM channels. In addition, it allows a channel-individualized exposure assessment, by making possible the separate analysis of the electric field level in the two types of channel of the GSM standard: the traffic channels and the control channels. (paper)
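    The channel-resolved combination step rests on the fact that power densities add, so per-channel field strengths combine as a root sum of squares; weighting by the number of traffic channels actually active at measurement time is what separates real from worst-case exposure. A minimal sketch with illustrative field values:

```python
import numpy as np

def total_field(e_control, e_traffic_active):
    """RMS-combine per-channel electric-field strengths (V/m).

    Power densities add, so field strengths combine as a root sum of squares.
    e_control        : field of the always-on control (BCCH) channel
    e_traffic_active : fields of the traffic channels active at measurement time
    """
    e = np.concatenate([[e_control], np.asarray(e_traffic_active, dtype=float)])
    return float(np.sqrt(np.sum(e ** 2)))

# Worst case assumes all 4 traffic channels transmitting; the instantaneous
# traffic had only 1 active, so actual exposure sits well below the worst case.
worst = total_field(0.8, [0.8, 0.8, 0.8, 0.8])
actual = total_field(0.8, [0.8])
print(round(worst, 3), round(actual, 3))
```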

  6. Improvement of laboratory turnaround time using lean methodology.

    Science.gov (United States)

    Gupta, Shradha; Kapil, Sahil; Sharma, Monica

    2018-05-14

    Purpose The purpose of this paper is to discuss the implementation of lean methodology to reduce the turnaround time (TAT) of a clinical laboratory in a super-speciality hospital. Delays in report delivery lead to delayed diagnosis, increased waiting time and decreased customer satisfaction. Reducing TAT will increase patient satisfaction, quality of care, employee satisfaction and, ultimately, the hospital's revenue. Design/methodology/approach The generic causes of increased TAT in clinical laboratories were identified using lean tools and techniques such as value stream mapping (VSM), Gemba, Pareto analysis and root cause analysis. VSM was used to analyze the current state of the process, and a future-state VSM was then designed with suggestions for process improvements. Findings This study identified 12 major non-value-added factors for the hematology laboratory and 5 major non-value-added factors for the biochemistry lab, which were acting as bottlenecks limiting throughput. A four-month research study by the authors together with the hospital quality department and laboratory staff led to a reduction of the average TAT from 180 to 95 minutes in the hematology lab and from 268 to 208 minutes in the biochemistry lab. Practical implications Very few improvement initiatives in Indian healthcare are based on industrial engineering tools and techniques, which might be due to a lack of interaction between healthcare and engineering. The study provides a positive outcome in terms of improving the efficiency of services in hospitals and identifies scope for lean in the Indian healthcare sector. Social implications Applying lean in the Indian healthcare sector offers its own potential solution to the wide gap between lean accessibility and lean implementation. Lean helped in changing the mindset of the organization toward providing the highest quality of services with faster delivery.

  7. Time resolution improvement of Schottky CdTe PET detectors using digital signal processing

    International Nuclear Information System (INIS)

    Nakhostin, M.; Ishii, K.; Kikuchi, Y.; Matsuyama, S.; Yamazaki, H.; Torshabi, A. Esmaili

    2009-01-01

    We present the results of our study on the timing performance of Schottky CdTe PET detectors using the technique of digital signal processing. The coincidence signals between a CdTe detector (15 × 15 × 1 mm³) and a fast liquid scintillator detector were digitized by a fast digital oscilloscope and analyzed. In the analysis, digital versions of the elements of timing circuits, including the pulse shaper and time discriminator, were created, and a digital implementation of the Amplitude and Rise-time Compensation (ARC) mode of timing was performed. Owing to fine adjustment of the timing-measurement parameters, a good time resolution of less than 9.9 ns (FWHM) at an energy threshold of 150 keV was achieved. In the next step, a new method of time pickoff for improving the timing resolution without loss in the detection efficiency of CdTe detectors was examined. In this method, signals from a CdTe detector are grouped by their rise-times and different time-pickoff procedures are applied to the signals of each group. The time pickoffs are then synchronized by compensating the fixed time offset caused by the different procedures. This method leads to an improved time resolution of ∼7.2 ns (FWHM) at an energy threshold as low as 150 keV. The methods presented in this work are computationally fast enough to be used for online processing of data in an actual PET system.
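    A digital ARC/constant-fraction time pickoff of the kind described can be sketched in a few lines: form a bipolar signal from an attenuated prompt copy minus a delayed copy, then interpolate its zero crossing. The pulse shape and parameters below are illustrative, not the CdTe waveforms from the study:

```python
import numpy as np

def arc_time(pulse, dt, frac=0.2, delay_samples=8):
    """Digital ARC/constant-fraction time pickoff on a sampled pulse.

    Forms  b[n] = frac * pulse[n] - pulse[n - delay],  then returns the
    linearly interpolated zero-crossing time after the bipolar maximum.
    """
    delayed = np.concatenate([np.zeros(delay_samples), pulse[:-delay_samples]])
    b = frac * pulse - delayed
    i0 = int(np.argmax(b))                       # search after the positive lobe
    for i in range(i0, len(b) - 1):
        if b[i] > 0 >= b[i + 1]:
            t = i + b[i] / (b[i] - b[i + 1])     # linear interpolation
            return t * dt
    return None

# Synthetic detector pulse: fast rise, slow exponential decay (1 ns sampling).
dt, t = 1.0, np.arange(200.0)
pulse = (1 - np.exp(-(t - 20).clip(0) / 3)) * np.exp(-(t - 20).clip(0) / 60)
print(arc_time(pulse, dt))
```

Because the zero crossing is insensitive to amplitude and, with suitable parameters, to rise time, this pickoff reduces the walk that plagues simple leading-edge discrimination.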

  8. Spectrally accurate initial data in numerical relativity

    Science.gov (United States)

    Battista, Nicholas A.

    Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion precession of Mercury, and the gravitational redshift. One of the most striking predictions of GR, which has not yet been confirmed, is the existence of gravitational waves. The primary sources of gravitational waves in the universe are thought to be merging binary black hole systems and binary neutron stars. The starting point for computer simulations of black hole mergers is highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDEs). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single and binary black hole systems.
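    The appeal of pseudo-spectral (collocation) methods is their exponential convergence on smooth problems. A one-dimensional Fourier example makes the point, reaching near machine-precision derivatives with a few dozen points; the puncture-data solver itself uses collocation on elliptic PDEs, which is more involved:

```python
import numpy as np

# Spectral (Fourier) differentiation of a smooth periodic function: the error
# decays faster than any power of 1/N, the property that makes pseudo-spectral
# methods attractive for smooth initial-data problems.
N = 32
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))                       # smooth, periodic test function
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers 0..15, -16..-1
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
exact = np.cos(x) * np.exp(np.sin(x))
print(np.max(np.abs(du - exact)))           # tiny: near machine precision
```

A second-order finite difference on the same 32 points would leave an error many orders of magnitude larger.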

  9. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach combines the development of new algorithms with the use of Mathematica and Unix utilities to automate algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open-domain problems, so that the error from the boundary treatments can be driven as low as required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms considered.

  10. Improvement of Quality of Reconstructed Images in Multi-Frame Fresnel Digital Holography

    International Nuclear Information System (INIS)

    Xiao-Wei, Lu; Jing-Zhen, Li; Hong-Yi, Chen

    2010-01-01

    A modified reconstruction algorithm to improve the quality of reconstructed images in multi-frame Fresnel digital holography is presented. When the reference beams are plane or spherical waves with azimuth encoding, introducing two spherical-wave factors allows images to be reconstructed with a single Fourier transform. In numerical simulation, this algorithm simplifies the reconstruction process and improves the signal-to-noise ratio of the reconstructed images. In single-frame reconstruction experiments, an accurate reconstructed image is obtained with this simplified algorithm
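    The single-transform idea corresponds to the standard single-FFT Fresnel reconstruction: multiply the hologram by a quadratic (spherical-wave) phase factor, take one FFT, and apply a second quadratic phase factor in the image plane. A generic sketch, not the authors' azimuth-encoded multi-frame algorithm:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, z, pitch):
    """Single-FFT Fresnel reconstruction of a digital hologram (square array)."""
    n = hologram.shape[0]
    k = 2 * np.pi / wavelength
    m = np.arange(n) - n / 2
    x, y = np.meshgrid(m * pitch, m * pitch)            # hologram-plane coords
    chirp_in = np.exp(1j * k / (2 * z) * (x**2 + y**2))  # first spherical factor
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp_in)))
    dxi = wavelength * z / (n * pitch)                  # image-plane pixel size
    xi, eta = np.meshgrid(m * dxi, m * dxi)
    chirp_out = np.exp(1j * k / (2 * z) * (xi**2 + eta**2)) / (1j * wavelength * z)
    return chirp_out * field                            # second spherical factor

# A 256x256 synthetic hologram reconstructed at z = 0.1 m, 633 nm, 10 um pixels:
holo = np.random.default_rng(1).random((256, 256))
img = fresnel_reconstruct(holo, 633e-9, 0.1, 10e-6)
print(img.shape, np.iscomplexobj(img))
```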

  11. Health activism: the way forward to improve health in difficult times.

    Science.gov (United States)

    Laverack, Glenn

    2013-09-01

    Health activism is action on behalf of a cause that goes beyond what is conventional or routine in society. It involves a challenge to the existing order whenever that order is perceived to lead to social injustice or inequality. Today social injustice is killing people on a grand scale, and it is timely for health activism to be used as a way forward to improve health during difficult economic and political times. Health activism is essential because it can create the necessary conditions for people to take control over their own lives when others cannot or will not act on their behalf. Health promotion agencies and the practitioners they employ, professional organisations and researchers can also play an important role. What is clear is that if greedy corporations and complacent governments are not challenged, we will continue to have limited success in improving health.

  12. On timing response improvement of an NE213 scintillator attached to two PMTs

    International Nuclear Information System (INIS)

    Zare, S.; Ghal-Eh, N.; Bayat, E.

    2013-01-01

    A 5 cm diameter by 6 cm height NE213 scintillator attached to two XP2282 PHOTONIS photomultiplier tubes (PMTs) and exposed to a 241Am–Be (americium–beryllium) neutron–gamma source has been used for timing response studies. The neutron–gamma discrimination (NGD) measurements based on a modified zero-crossing (ZC) method show that the discrimination quality, usually expressed in figure-of-merit (FoM) and peak-to-valley (P/V) values, has been improved. The timing response evaluated with the Monte Carlo light transport code PHOTRACK also verifies this improvement. - Highlights: • An NE213 scintillator attached to two photomultiplier tubes (PMTs) has been proposed. • The neutron–gamma discrimination (NGD) quality factors have been obtained. • The results confirm that the NGD quality of the proposed assembly has been improved
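    The figure-of-merit quoted for neutron–gamma discrimination is conventionally the separation of the two PSD-parameter peaks divided by the sum of their FWHMs. A synthetic sketch assuming near-Gaussian groups (the distributions and values are illustrative, not the measured ones):

```python
import numpy as np

def psd_figure_of_merit(psd_neutron, psd_gamma):
    """FoM = peak separation / (FWHM_n + FWHM_g) for two PSD-parameter groups,
    assuming near-Gaussian distributions (FWHM = 2.355 * sigma)."""
    sep = abs(np.mean(psd_neutron) - np.mean(psd_gamma))
    fwhm_sum = 2.355 * (np.std(psd_neutron) + np.std(psd_gamma))
    return sep / fwhm_sum

# Synthetic zero-crossing-style PSD parameters for the two particle types:
rng = np.random.default_rng(2)
gammas = rng.normal(0.20, 0.02, 5000)
neutrons = rng.normal(0.35, 0.03, 5000)
print(round(psd_figure_of_merit(neutrons, gammas), 2))
```

An FoM above roughly 1 is usually taken to indicate well-separated neutron and gamma populations.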

  13. Improving GNSS time series for volcano monitoring: application to Canary Islands (Spain)

    Science.gov (United States)

    García-Cañada, Laura; Sevilla, Miguel J.; Pereda de Pablo, Jorge; Domínguez Cerdeña, Itahiza

    2017-04-01

    The number of permanent GNSS stations has increased significantly in recent years for geodetic applications such as volcano monitoring, which requires high precision. Coordinate time series are now long enough that different analyses and filters can be applied to improve the GNSS results. Following this idea, we processed data from the GNSS permanent stations used by the Spanish Instituto Geográfico Nacional (IGN) for volcano monitoring in the Canary Islands, obtaining time series by the double-difference processing method with Bernese v5.0 for the period 2007-2014. We identified the characteristics of these time series and obtained models to estimate velocities with greater accuracy and more realistic uncertainties. To improve the results we used two kinds of filters. The first, a spatial filter, was computed from the series of residuals of all Canary Islands stations without anomalous behaviour, after removing a linear trend; applying this filter to the coordinates of all permanent stations reduces their dispersion. The second filter accounts for the temporal correlation in the coordinate time series of each station individually. A study of how the estimated velocity evolves with series length demonstrated the need for time series of at least four years; therefore, in stations with more than four years of data, we calculated the velocity and the characteristic parameters in order to obtain residual time series. This methodology was applied to the GNSS network in El Hierro (Canary Islands) during the 2011-2012 eruption and the subsequent magmatic intrusions (2012-2014). The results show that anomalous behaviour in the coordinates is easier to detect in the filtered series, making them more useful for detecting crustal deformation in volcano monitoring.
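    The spatial filter described, removing the common-mode signal estimated by stacking detrended residuals from well-behaved stations, can be sketched on synthetic residuals:

```python
import numpy as np

# Common-mode ("spatial") filtering of daily GNSS position residuals: the mean
# detrended residual over well-behaved stations estimates regionally correlated
# noise, which is then removed from every station (illustrative data, mm).
rng = np.random.default_rng(3)
days, n_sta = 365, 6
common = rng.normal(0, 3.0, days)                 # regionally correlated noise
local = rng.normal(0, 1.0, (days, n_sta))         # station-specific noise
residuals = common[:, None] + local

stacked = residuals.mean(axis=1)                  # common-mode estimate
filtered = residuals - stacked[:, None]

print(residuals.std(), filtered.std())            # dispersion drops markedly
```

In practice the stack would exclude stations with anomalous (e.g. deformation-related) behaviour, precisely so that real volcanic signals are not filtered out.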

  14. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, including control systems and video-surveillance monitoring. One of the main information limitations of photo- and videocameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of temporal noise of photo- and videocameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noises of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds.
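    The two-frame idea rests on a standard identity: differencing two frames of a static scene cancels the fixed-pattern component, and the difference image carries twice the temporal-noise variance. A sketch with synthetic Poisson data (not the ASNT method itself, which additionally segments nonuniform targets):

```python
import numpy as np

# Temporal-noise estimate from just two frames of a static scene:
# var(f1 - f2) = 2 * sigma_temporal^2, since the fixed pattern cancels.
rng = np.random.default_rng(4)
signal = np.full((512, 512), 400.0)               # mean signal, electrons
fpn = rng.normal(0, 5.0, (512, 512))              # fixed-pattern (spatial) noise
f1 = rng.poisson(signal) + fpn                    # two exposures of the same scene
f2 = rng.poisson(signal) + fpn

sigma_t = np.std(f1 - f2) / np.sqrt(2.0)
print(round(sigma_t, 1))                          # ≈ sqrt(400) = 20 e- shot noise
```

Repeating this at several signal levels and fitting variance against mean separates the signal-dependent shot noise from the signal-independent dark temporal noise.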

  15. Some techniques to improve time structure of slow extracted beam

    International Nuclear Information System (INIS)

    Shoji, Y.; Sato, H.; Toyama, T.; Marutsuka, K.; Sueno, T.; Mikawa, K.; Ninomiya, S.; Yoshii, M.

    1992-01-01

    In order to improve the time structure of the slow extracted beam spill of the KEK 12 GeV PS, the spill control system has been upgraded by adding a feed-forward signal to the feedback signal. Further, the wake field in the RF cavity has been cancelled using the beam bunch signal, to reduce the re-bunching effect during the extraction period. (author)

  16. Improvements in brain activation detection using time-resolved diffuse optical means

    Science.gov (United States)

    Montcel, Bruno; Chabrier, Renee; Poulet, Patrick

    2005-08-01

    An experimental method based on time-resolved absorbance difference is described. The absorbance difference is calculated over each temporal step of the optical signal with the time-resolved Beer-Lambert law. Finite element simulations show that each step corresponds to a different scanned zone and that the cerebral contribution increases with the arrival time of photons. Experiments were conducted at 690 and 830 nm with a time-resolved system consisting of picosecond laser diodes, a micro-channel plate photomultiplier tube and photon counting modules. The hemodynamic response to a short finger-tapping stimulus was measured over the motor cortex. Time-resolved absorbance difference maps show that the variations in the optical signals are not localized in superficial regions of the head, which testifies to their cerebral origin. Furthermore, improvement in the detection of cerebral activation is achieved through an increase in the absorbance variations by a factor of almost 5 for time-resolved as compared to non-time-resolved measurements.
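    The per-time-gate absorbance difference is the time-resolved Beer-Lambert law applied to the two photon time-of-flight distributions. A toy sketch showing the expected signature of a deep (cerebral) absorption change, growing with photon arrival time (synthetic curves, not measured data):

```python
import numpy as np

# Time-resolved absorbance difference:  dA(t) = -ln(I_activated(t) / I_rest(t)),
# evaluated per time-of-flight gate. Late-arriving photons have probed deeper
# tissue, so a dA that grows with arrival time points to a cerebral rather
# than superficial origin.
t = np.linspace(0.2, 3.0, 15)                     # photon arrival time, ns
i_rest = np.exp(-t)                               # toy rest-state TPSF
i_act = i_rest * np.exp(-0.02 * t)                # absorption effect grows with t
delta_a = -np.log(i_act / i_rest)
print(np.all(np.diff(delta_a) > 0))               # → True: increases with time
```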

  17. An Accurate Method to Determine the Muzzle Leaving Time of Guns

    Directory of Open Access Journals (Sweden)

    H. X. Chao

    2014-11-01

    Full Text Available This paper states the importance of determining the muzzle leaving time of guns with a high degree of accuracy. Two commonly used methods are introduced, the high-speed photography method and the photoelectric transducer method, and the advantages and disadvantages of each are analyzed. Furthermore, a new method to determine the muzzle leaving time of guns, based on the combination of high-speed photography and synchronized trigger technology, is presented, and its principle and measurement uncertainty are evaluated. Firing experiments show that the presented method has a distinct advantage in accuracy and reliability over the other methods.

  18. Quality Improvement, Inventory Management, Lead Time Reduction and Production Scheduling in High-Mix Manufacturing Environments

    Science.gov (United States)

    2017-01-13

    Quality Improvement, Inventory Management, Lead Time Reduction and Production Scheduling in High-mix Manufacturing Environments, by Sean Daigle. Submitted to the Department of Mechanical Engineering on January 13, 2017.

  19. Late Miocene climate and time scale reconciliation: Accurate orbital calibration from a deep-sea perspective

    Science.gov (United States)

    Drury, Anna Joy; Westerhold, Thomas; Frederichs, Thomas; Tian, Jun; Wilkens, Roy; Channell, James E. T.; Evans, Helen; John, Cédric M.; Lyle, Mitch; Röhl, Ursula

    2017-10-01

    Accurate age control of the late Tortonian to early Messinian (8.3-6.0 Ma) is essential to ascertain the origin of benthic foraminiferal δ18O trends and the late Miocene carbon isotope shift (LMCIS), and to examine temporal relationships between the deep-sea, terrasphere and cryosphere. The current Tortonian-Messinian Geological Time Scale (GTS2012) is based on astronomically calibrated Mediterranean sections; however, no comparable non-Mediterranean stratigraphies exist for 8-6 Ma suitable for testing the GTS2012. Here, we present the first high-resolution, astronomically tuned benthic stable isotope stratigraphy (1.5 kyr resolution) and magnetostratigraphy from a single deep-sea location (IODP Site U1337, equatorial Pacific Ocean), which provides unprecedented insight into climate evolution from 8.3-6.0 Ma. The astronomically calibrated magnetostratigraphy provides robust ages, which differ by 2-50 kyr relative to the GTS2012 for polarity Chrons C3An.1n to C4r.1r, and eliminates the exceptionally high South Atlantic spreading rates based on the GTS2012 during Chron C3Bn. We show that the LMCIS was globally synchronous within 2 kyr, and provide astronomically calibrated ages anchored to the GPTS for its onset (7.537 Ma; 50% from base Chron C4n.1n) and termination (6.727 Ma; 11% from base Chron C3An.2n), confirming that the terrestrial C3:C4 shift could not have driven the LMCIS. The benthic records show that the transition into the 41-kyr world, when obliquity strongly influenced climate variability, already occurred at 7.7 Ma and further strengthened at 6.4 Ma. Previously unseen, distinctive, asymmetric saw-tooth patterns in benthic δ18O imply that high-latitude forcing played an important role in late Miocene climate dynamics from 7.7-6.9 Ma. This new integrated deep-sea stratigraphy from Site U1337 can act as a new stable isotope and magnetic polarity reference section for the 8.3-6.0 Ma interval.

  20. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    Science.gov (United States)

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
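The role of the two parameters can be made concrete with a short sketch of the Weibull-type conversion curve (an illustration of the distributional form described above, not the authors' fitting code; the function name and toy values are ours). At t = λ the exponent equals 1 regardless of the shape parameter n, which is why λ alone summarizes overall saccharification performance:

```python
import math

def weibull_conversion(t, y_max, lam, n):
    """Weibull-type saccharification curve: substrate conversion at time t.
    lam is the characteristic time (conversion reaches ~63.2% of y_max there);
    n is the shape parameter controlling how steeply the curve rises."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))

# At t = lam the conversion is y_max * (1 - 1/e) for ANY shape parameter n:
for n in (0.5, 1.0, 2.0):
    print(round(weibull_conversion(24.0, y_max=0.9, lam=24.0, n=n), 4))  # 0.5689 each time
```

This invariance is what lets λ be compared across saccharification systems with different kinetic shapes.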

  1. Accurate and efficient spin integration for particle accelerators

    Directory of Open Access Journals (Sweden)

    Dan T. Abell

    2015-02-01

    Full Text Available Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
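The quaternion representation of spin rotations mentioned above can be illustrated with a minimal sketch (a generic quaternion rotation of a spin vector, not gpuSpinTrack's implementation; the axis, angle, and function names are illustrative):

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about unit vector `axis`."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def rotate_spin(q, s):
    """Apply rotation quaternion q to spin vector s via s' = q s q*."""
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *s)), (q[0], -q[1], -q[2], -q[3]))
    return (x, y, z)

# Precess a transverse spin by 90 degrees about the vertical axis:
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2.0)
print([round(c, 6) + 0.0 for c in rotate_spin(q, (1.0, 0.0, 0.0))])  # → [0.0, 1.0, 0.0]
```

Composing many small rotations then reduces to quaternion multiplications, which is one reason the representation is attractive for fast spin tracking.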

  2. Just-in-Time Training: A Novel Approach to Quality Improvement Education.

    Science.gov (United States)

    Knutson, Allison; Park, Nesha D; Smith, Denise; Tracy, Kelly; Reed, Danielle J W; Olsen, Steven L

    2015-01-01

    Just-in-time training (JITT) is accepted in medical education as a training method for newer concepts or seldom-performed procedures. Providing JITT to a large nursing staff may be an effective method to teach quality improvement (QI) initiatives. We sought to determine if JITT could increase knowledge of a specific nutrition QI initiative. Members of the nutrition QI team interviewed staff using the Frontline Contextual Inquiry to assess knowledge regarding the specific QI project. The inquiry was completed pre- and post-JITT. A JITT educational cart was created, which allowed trainers to bring the educational information to the bedside for a short, small group educational session. The results demonstrated a marked improvement in the knowledge of the frontline staff regarding our Vermont Oxford Network involvement and the specifics of the nutrition QI project. Just-in-time training can be a valuable and effective method to disseminate QI principles to a large audience of staff members.

  3. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    Directory of Open Access Journals (Sweden)

    Alves-Ferreira Marcio

    2010-03-01

    Full Text Available Abstract Background Normalizing against reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, selecting suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants, and qPCR studies on important crops such as cotton have therefore been hampered by the lack of suitable reference genes. Results By the use of two distinct algorithms, implemented by geNorm and NormFinder, we have assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and in flower verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. 
As a result of this evaluation, we recommend the use of GhUBQ14 and GhPP2A1 housekeeping genes as superior references

  4. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data.

    Science.gov (United States)

    Artico, Sinara; Nardeli, Sarah M; Brilhante, Osmundo; Grossi-de-Sa, Maria Fátima; Alves-Ferreira, Marcio

    2010-03-21

    Normalizing against reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, selecting suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants, and qPCR studies on important crops such as cotton have therefore been hampered by the lack of suitable reference genes. By the use of two distinct algorithms, implemented by geNorm and NormFinder, we have assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1alpha5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhbetaTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and in flower verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene expression measures in
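The geNorm stability measure underlying this kind of ranking can be sketched in a few lines (a simplified illustration of the M value from Vandesompele et al., not the actual geNorm software; the gene names are taken from the abstract but the toy expression values are invented):

```python
import math
from statistics import stdev

def genorm_m(expr):
    """geNorm-style stability measure: expr maps gene -> list of expression
    values (one per sample). M for gene j is the mean, over every other
    gene k, of the standard deviation of log2(expr_j / expr_k) across
    samples. Lower M means more stable expression."""
    genes = list(expr)
    m = {}
    for j in genes:
        sds = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[j], expr[k])]
            sds.append(stdev(ratios))
        m[j] = sum(sds) / len(sds)
    return m

# Toy data: the first two candidates co-vary tightly, the third fluctuates
demo = {
    "GhUBQ14": [1.0, 1.1, 0.9, 1.0],
    "GhPP2A1": [2.0, 2.1, 1.9, 2.0],
    "GhMZA":   [1.0, 3.0, 0.5, 2.5],
}
scores = genorm_m(demo)
print(min(scores, key=scores.get))  # → GhUBQ14 (lowest M, most stable here)
```

Genes are then excluded iteratively from the highest M downward until the most stable pair remains, which is how a recommendation like GhUBQ14 + GhPP2A1 is reached.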

  5. Requirements for accurately diagnosing chronic partial upper urinary tract obstruction in children with hydronephrosis

    International Nuclear Information System (INIS)

    Koff, Stephen A.

    2008-01-01

    Successful management of hydronephrosis in the newborn requires early accurate diagnosis to identify or exclude ureteropelvic junction obstruction. However, the presence of hydronephrosis does not in itself define obstruction, and hydronephrosis displays unique behavior in the newborn. The hydronephrotic kidney usually has nearly normal differential renal function at birth, has not been subjected to progressive dilation and, except for pelvocaliectasis, does not often show signs of high-grade obstruction. Furthermore, severe hydronephrosis resolves spontaneously in more than 65% of newborns, with differential renal function stable or improving. The diagnosis of obstruction in newborn hydronephrosis is challenging because the currently available diagnostic tests, ultrasonography and diuretic renography, have demonstrated inaccuracy in diagnosing obstruction and predicting which hydronephrotic kidney will undergo deterioration if untreated. Accurate diagnosis of obstruction is possible, but it requires an understanding of the uniqueness of both the pathophysiology of obstruction and the biology of the kidney and renal collecting system in this age group. We examine here the requirements for making an accurate diagnosis of obstruction in the young child with hydronephrosis. (orig.)

  6. Requirements for accurately diagnosing chronic partial upper urinary tract obstruction in children with hydronephrosis

    Energy Technology Data Exchange (ETDEWEB)

    Koff, Stephen A. [Ohio State University College of Medicine, Section of Pediatric Urology, Columbus Children' s Hospital, Columbus, OH (United States)

    2008-01-15

    Successful management of hydronephrosis in the newborn requires early accurate diagnosis to identify or exclude ureteropelvic junction obstruction. However, the presence of hydronephrosis does not in itself define obstruction, and hydronephrosis displays unique behavior in the newborn. The hydronephrotic kidney usually has nearly normal differential renal function at birth, has not been subjected to progressive dilation and, except for pelvocaliectasis, does not often show signs of high-grade obstruction. Furthermore, severe hydronephrosis resolves spontaneously in more than 65% of newborns, with differential renal function stable or improving. The diagnosis of obstruction in newborn hydronephrosis is challenging because the currently available diagnostic tests, ultrasonography and diuretic renography, have demonstrated inaccuracy in diagnosing obstruction and predicting which hydronephrotic kidney will undergo deterioration if untreated. Accurate diagnosis of obstruction is possible, but it requires an understanding of the uniqueness of both the pathophysiology of obstruction and the biology of the kidney and renal collecting system in this age group. We examine here the requirements for making an accurate diagnosis of obstruction in the young child with hydronephrosis. (orig.)

  7. A security analysis of version 2 of the Network Time Protocol (NTP): A report to the privacy and security research group

    Science.gov (United States)

    Bishop, Matt

    1991-01-01

    The Network Time Protocol is being used throughout the Internet to provide an accurate time service. The security requirements of such a service are examined, version 2 of the NTP protocol is analyzed to determine how well it meets these requirements, and improvements are suggested where appropriate.
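The accuracy of the time service rests on NTP's four-timestamp exchange, which can be stated compactly (a textbook sketch of the standard on-wire arithmetic, not code from the report; the example timestamps are invented):

```python
def ntp_offset_delay(t0, t1, t2, t3):
    """Standard NTP clock arithmetic: t0 = client send, t1 = server receive,
    t2 = server send, t3 = client receive (t0/t3 on the client clock,
    t1/t2 on the server clock). Returns the estimated offset of the client
    clock relative to the server, and the round-trip network delay."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Client clock 5 ms behind the server, symmetric 20 ms round trip:
off, dly = ntp_offset_delay(100.000, 100.015, 100.016, 100.021)
print(round(off * 1000, 3), round(dly * 1000, 3))  # → 5.0 20.0 (milliseconds)
```

The offset estimate is exact only for a symmetric path, which is precisely why an attacker who can delay packets asymmetrically can skew a client's clock; that attack surface motivates the security analysis above.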

  8. Learning fast accurate movements requires intact frontostriatal circuits

    Directory of Open Access Journals (Sweden)

    Britne Shabbott

    2013-11-01

    Full Text Available The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements, and we addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington’s disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning.

  9. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    Science.gov (United States)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  10. 3D Vision Provides Shorter Operative Time and More Accurate Intraoperative Surgical Performance in Laparoscopic Hiatal Hernia Repair Compared With 2D Vision.

    Science.gov (United States)

    Leon, Piera; Rivellini, Roberta; Giudici, Fabiola; Sciuto, Antonio; Pirozzi, Felice; Corcione, Francesco

    2017-04-01

    The aim of this study is to evaluate whether 3-dimensional high-definition (3D) vision in laparoscopy can offer advantages over conventional 2D high-definition vision in hiatal hernia (HH) repair. Between September 2012 and September 2015, we randomized 36 patients affected by symptomatic HH to undergo surgery; 17 patients underwent 2D laparoscopic HH repair, whereas 19 patients underwent the same operation in 3D vision. No conversion to open surgery occurred. Overall operative time was significantly reduced in the 3D laparoscopic group compared with the 2D one (69.9 vs 90.1 minutes, P = .006). Operative time to perform laparoscopic crura closure did not differ significantly between the 2 groups. We observed a tendency to a faster crura closure in the 3D group in the subgroup of patients with mesh positioning (7.5 vs 8.9 minutes, P = .09). Nissen fundoplication was faster in the 3D group without mesh positioning (P = .07). 3D vision in laparoscopic HH repair aids the surgeon's visualization and seems to lead to a reduction in operative time. Advantages can result from the enhanced spatial perception of narrow spaces. Shorter operative time and more accurate surgery translate into benefits for patients and cost savings, compensating for the high costs of the 3D technology. However, more data from larger series are needed to firmly assess the advantages of 3D over 2D vision in laparoscopic HH repair.

  11. Improvements for real-time magnetic equilibrium reconstruction on ASDEX Upgrade

    International Nuclear Information System (INIS)

    Giannone, L.; Fischer, R.; McCarthy, P.J.; Odstrcil, T.; Zammuto, I.; Bock, A.; Conway, G.; Fuchs, J.C.; Gude, A.; Igochine, V.; Kallenbach, A.; Lackner, K.; Maraschek, M.; Rapson, C.; Ruan, Q.; Schuhbeck, K.H.; Suttrop, W.; Wenzel, L.

    2015-01-01

    Highlights: • Spline basis current functions with second-order linear regularisation. • Perturbations of magnetic probe measurements due to ferromagnetic tiles on the inner wall and from oscillations in the fast position coil current are corrected. • A constraint of the safety factor on the magnetic axis is introduced. Soft X-ray tomography is used to assess the quality of the real-time magnetic equilibrium reconstruction. • External loop voltage measurements and magnetic probe pairs inside and outside the vessel wall were used to measure the vacuum vessel wall resistivity. - Abstract: Real-time magnetic equilibria are needed for NTM stabilization and disruption avoidance experiments on ASDEX Upgrade. Five improvements to real-time magnetic equilibrium reconstruction on ASDEX Upgrade have been investigated. The aim is to include as many features of the offline magnetic equilibrium reconstruction code in the real-time equilibrium reconstruction code. Firstly, spline current density basis functions with regularization are used in the offline equilibrium reconstruction code, CLISTE [1]. It is now possible to have the same number of spline basis functions in the real-time code. Secondly, in the presence of edge localized modes (ELMs), it is found to be necessary to include the low pass filter effect of the vacuum vessel on the fast position control coil currents to correctly compensate the magnetic probes for current oscillations in these coils. Thirdly, the introduction of ferromagnetic tiles in ASDEX Upgrade means that a real-time algorithm for including the perturbations of the magnetic equilibrium generated by these tiles is required. A methodology based on tile surface currents is described. Fourthly, during current ramps it was seen that the differences between fitted and measured magnetic measurements in the equilibrium reconstruction were larger than in the constant current phase. External loop voltage measurements and magnetic probe pairs inside and

  12. Improvements for real-time magnetic equilibrium reconstruction on ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Giannone, L.; Fischer, R. [Max Planck Institute for Plasma Physics, 85748 Garching (Germany); McCarthy, P.J. [Department of Physics, University College Cork, Cork (Ireland); Odstrcil, T.; Zammuto, I.; Bock, A.; Conway, G.; Fuchs, J.C.; Gude, A.; Igochine, V.; Kallenbach, A.; Lackner, K.; Maraschek, M.; Rapson, C. [Max Planck Institute for Plasma Physics, 85748 Garching (Germany); Ruan, Q. [National Instruments, Austin, TX 78759-3504 (United States); Schuhbeck, K.H.; Suttrop, W. [Max Planck Institute for Plasma Physics, 85748 Garching (Germany); Wenzel, L. [National Instruments, Austin, TX 78759-3504 (United States)

    2015-11-15

    Highlights: • Spline basis current functions with second-order linear regularisation. • Perturbations of magnetic probe measurements due to ferromagnetic tiles on the inner wall and from oscillations in the fast position coil current are corrected. • A constraint of the safety factor on the magnetic axis is introduced. Soft X-ray tomography is used to assess the quality of the real-time magnetic equilibrium reconstruction. • External loop voltage measurements and magnetic probe pairs inside and outside the vessel wall were used to measure the vacuum vessel wall resistivity. - Abstract: Real-time magnetic equilibria are needed for NTM stabilization and disruption avoidance experiments on ASDEX Upgrade. Five improvements to real-time magnetic equilibrium reconstruction on ASDEX Upgrade have been investigated. The aim is to include as many features of the offline magnetic equilibrium reconstruction code in the real-time equilibrium reconstruction code. Firstly, spline current density basis functions with regularization are used in the offline equilibrium reconstruction code, CLISTE [1]. It is now possible to have the same number of spline basis functions in the real-time code. Secondly, in the presence of edge localized modes (ELMs), it is found to be necessary to include the low pass filter effect of the vacuum vessel on the fast position control coil currents to correctly compensate the magnetic probes for current oscillations in these coils. Thirdly, the introduction of ferromagnetic tiles in ASDEX Upgrade means that a real-time algorithm for including the perturbations of the magnetic equilibrium generated by these tiles is required. A methodology based on tile surface currents is described. Fourthly, during current ramps it was seen that the differences between fitted and measured magnetic measurements in the equilibrium reconstruction were larger than in the constant current phase. External loop voltage measurements and magnetic probe pairs inside

  13. An accurate, fast, and scalable solver for high-frequency wave propagation

    Science.gov (United States)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. 
We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and

  14. Improving Emergency Department radiology transportation time: a successful implementation of lean methodology.

    Science.gov (United States)

    Hitti, Eveline A; El-Eid, Ghada R; Tamim, Hani; Saleh, Rana; Saliba, Miriam; Naffaa, Lena

    2017-09-05

    Emergency Department overcrowding has become a global problem and a growing safety and quality concern. Radiology and laboratory turnaround time, ED boarding and increased ED visits are some of the factors that contribute to ED overcrowding. Lean methods have been used in the ED to address multiple flow challenges, from improving door-to-doctor time to reducing length of stay. The objective of this study is to determine the effectiveness of using Lean management methods to improve Emergency Department transportation times for plain radiography. We performed a before-and-after study at an academic urban Emergency Department with 49,000 annual visits after implementing a Lean-driven intervention. The primary outcome was mean radiology transportation turnaround time (TAT). Secondary outcomes included overall study turnaround time from order processing to preliminary report time as well as ED length of stay. All ED patients undergoing plain radiography 6 months pre-intervention were compared to all ED patients undergoing plain radiography 6 months post-intervention after a 1 month washout period. Post intervention there was a statistically significant decrease in the mean transportation TAT (mean ± SD: 9.87 min ± 15.05 versus 22.89 min ± 22.05, respectively, p-value <0.0001). In addition, it was found that 71.6% of patients in the post-intervention had transportation TAT ≤ 10 min, as compared to 32.3% in the pre-intervention period, p-value <0.0001, with narrower interquartile ranges in the post-intervention period. Similarly, the "study processing to preliminary report time" and the length of stay were lower in the post-intervention as compared to the pre-intervention, (52.50 min ± 35.43 versus 54.04 min ± 34.72, p-value = 0.02 and 3.65 h ± 5.17 versus 4.57 h ± 10.43, p < 0.0001, respectively), in spite of an increase in the time it took to release a preliminary report in the post-intervention period. Using Lean change management

  15. Virtual Reality Based Accurate Radioactive Source Representation and Dosimetry for Training Applications

    International Nuclear Information System (INIS)

    Molto-Caracena, T.; Vendrell Vidal, E.; Goncalves, J.G.M.; Peerani, P.; )

    2015-01-01

    Virtual Reality (VR) technologies have much potential for training applications. Success relies on the capacity to provide a real-time immersive effect to a trainee. For a training application to be an effective and meaningful tool, 3D realistic scenarios are not enough. Indeed, it is paramount to have sufficiently accurate models of the behaviour of the instruments to be used by a trainee. This will enable the required level of user interactivity. Specifically, when dealing with simulation of radioactive sources, a VR model based application must compute the dose rate with equivalent accuracy and in about the same time as a real instrument. A conflicting requirement is the need to provide a smooth visual rendering enabling spatial interactivity and interaction. This paper presents a VR based prototype which accurately computes the dose rate of radioactive and nuclear sources that can be selected from a wide library. Dose measurements reflect local conditions, i.e., the presence of (a) shielding materials of any shape and type and (b) sources of any shape and dimension. Due to a novel way of representing radiation sources, the system is fast enough to grant the necessary user interactivity. The paper discusses the application of this new method and its advantages in terms of time setting, cost and logistics. (author)
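The kind of real-time dose computation described can be approximated with a simple point-kernel estimate (an illustrative sketch, not the prototype's actual engine; the gamma constant used below is an assumed Co-60-like value and the function names are ours):

```python
import math

def dose_rate(gamma_const, activity_gbq, r_m, mu_per_cm=0.0, shield_cm=0.0):
    """Point-kernel dose-rate estimate: intensity falls off as 1/r^2 with
    distance r_m (metres) and is attenuated exponentially by a shield of
    thickness shield_cm with linear attenuation coefficient mu_per_cm."""
    return (gamma_const * activity_gbq / r_m ** 2) * math.exp(-mu_per_cm * shield_cm)

# Doubling the distance quarters the unshielded dose rate:
d1 = dose_rate(0.351, 10.0, 1.0)   # 0.351: assumed Co-60-like gamma constant
d2 = dose_rate(0.351, 10.0, 2.0)
print(round(d1 / d2, 6))  # → 4.0
```

A training simulator must evaluate such kernels (with ray-traced shield thicknesses) fast enough per frame that the simulated survey meter responds like a real one.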

  16. Effect of real-time teledermatology on diagnosis, treatment and clinical improvement.

    Science.gov (United States)

    Al Quran, Hanadi A; Khader, Yousef Saleh; Ellauzi, Ziad Mohd; Shdaifat, Amjad

    2015-03-01

    We assessed the effect of real-time teledermatology consultations on diagnosis and disease management, patients' quality of life and time- and cost-savings. All consecutive patients with skin diseases attending teledermatology clinics at two rural hospitals in Jordan were included in the study. Patients were interviewed at their initial visit and again after eight weeks. Various questionnaires and forms, including quality of life questionnaires, were used to collect the data. Ninety teledermatology consultations were performed for 88 patients between September 2013 and January 2014. A diagnosis was established as part of the teledermatology consultation in 43% of patients and changed from that of the referring provider in 19% of patients. The treatment plan was established for 67% of patients and changed for 9% patients. The mean SF-8 score increased significantly (P < 0.005). The mean DLQI score decreased significantly (P < 0.005) indicating that there had been an improvement in the patients' quality of life since baseline. Most patients perceived that the visit to the teledermatology clinic required less travel time (96%), shorter waiting time (83%) and less cost (96%) than a visit to the specialist clinic at the main hospital. The patients' mean satisfaction score was 90.5 (SD 8.5), indicating a high level of satisfaction. Teledermatology resulted in changes in the patients' diagnosis and treatment plan, and was associated with improved health state and quality of life. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  17. Improved response time of flexible microelectromechanical sensors employing eco-friendly nanomaterials.

    Science.gov (United States)

    Fan, Shicheng; Dan, Li; Meng, Lingju; Zheng, Wei; Elias, Anastasia; Wang, Xihua

    2017-11-09

    Flexible force/pressure sensors are of interest to academia and industry and have applications in wearable technologies. Most such sensors on the market or reported in journal publications are based on the operating mechanism of probing capacitance or resistance changes of the materials under pressure. Recently, we reported microelectromechanical (MEM) sensors based on a different mechanism: mechanical switches. Multiple such MEM sensors can be integrated to achieve the same function as regular force/pressure sensors while having the advantages of ease of fabrication and long-term stability in operation. Herein, we report the dramatically improved response time (more than one order of magnitude) of these MEM sensors by employing eco-friendly nanomaterials: cellulose nanocrystals. For instance, the incorporation of polydimethylsiloxane filled with cellulose nanocrystals shortened the response time of MEM sensors from sub-seconds to several milliseconds, leading to the detection of both diastolic and systolic pressures in radial arterial blood pressure measurement. Comprehensive mechanical and electrical characterization of the materials and the devices reveals that greatly enhanced storage modulus and loss modulus play key roles in this improved response time. The demonstrated fast-response flexible sensors enabled continuous monitoring of heart rate and complex cardiovascular signals using pressure sensors for future wearable sensing platforms.

  18. A hybrid method for accurate star tracking using star sensor and gyros.

    Science.gov (United States)

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
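The vector-difference estimate used under low dynamic conditions can be sketched as follows (a small-angle illustration of the idea, not the paper's exact filter; the frame rate and rotation values are invented):

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def angular_velocity(v1, v2, dt):
    """Estimate the angular velocity from two unit star vectors observed
    dt seconds apart: omega ~ (v1 x v2) / dt, valid for small rotations
    since |v1 x v2| = sin(theta) ~ theta."""
    return tuple(c / dt for c in cross(v1, v2))

# A star vector rotated by 0.001 rad about z between two frames 0.1 s apart:
th = 0.001
v1 = (1.0, 0.0, 0.0)
v2 = (math.cos(th), math.sin(th), 0.0)
print(round(angular_velocity(v1, v2, 0.1)[2], 6))  # → 0.01 (rad/s about z)
```

Under high dynamics the small-angle assumption breaks down, which is why the hybrid method switches to the calibrated gyros in that regime.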

  19. An estimation of tropospheric corrections using GPS and synoptic data: Improving Urmia Lake water level time series from Jason-2 and SARAL/AltiKa satellite altimetry

    Science.gov (United States)

    Arabsahebi, Reza; Voosoghi, Behzad; Tourian, Mohammad J.

    2018-05-01

    Tropospheric correction is one of the most important corrections in satellite altimetry measurements. Tropospheric wet and dry path delays depend strongly on temperature, pressure and humidity. The tropospheric layer is particularly variable over coastal regions due to humidity, wind and temperature gradients. Depending on the extent of the water body and the wind conditions over an inland water, the Wet Tropospheric Correction (WTC) ranges from a few centimeters to tens of centimeters. Therefore, extra care is needed when estimating tropospheric corrections for altimetric measurements over inland waters. This study assesses the role of tropospheric correction in altimetric measurements over Urmia Lake in Iran. For this purpose, four types of tropospheric corrections have been used: (i) microwave radiometer (MWR) observations, (ii) tropospheric corrections computed from meteorological models, (iii) GPS observations and (iv) synoptic station data. They have been applied to Jason-2 track no. 133 and SARAL/AltiKa tracks no. 741 and 356, corresponding to cycles 117-153 and 23-34, respectively. In addition, the corresponding measurements of PISTACH and PEACHI, which include a new retracking method and an innovative wet tropospheric correction, have also been used. Our results show that GPS observations lead to the most accurate tropospheric correction. The results obtained from the PISTACH and PEACHI projects confirm those obtained with the standard SGDR, i.e., the role of GPS in improving the tropospheric corrections. It is inferred that the MWR data from the Jason-2 mission are appropriate for the tropospheric corrections, whereas the SARAL/AltiKa data are not, because Jason-2 possesses an enhanced WTC near the coast. Furthermore, virtual stations are defined for assessing the results in terms of time series of Water Level Height (WLH). The results show that GPS tropospheric corrections lead to the most accurate WLH estimation for the selected
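    To make the dry part of the correction concrete: the zenith hydrostatic delay is routinely computed from surface pressure with the standard Saastamoinen model (this generic formula is our illustration of what "tropospheric corrections computed from meteorological/synoptic data" involves; it is not the paper's processing chain).

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Zenith hydrostatic (dry) tropospheric delay in meters from surface
    pressure, via the standard Saastamoinen model. The denominator
    accounts for the latitude/height dependence of gravity."""
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00000028 * height_m
    return 0.0022768 * pressure_hpa / f
```

    At standard sea-level pressure this gives roughly 2.3 m of path delay, which is why even percent-level errors in the correction matter for centimeter-level water level time series.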

  20. Research on Monte Carlo improved quasi-static method for reactor space-time dynamics

    International Nuclear Information System (INIS)

    Xu Qi; Wang Kan; Li Shirui; Yu Ganglin

    2013-01-01

    With large time steps, the improved quasi-static (IQS) method can improve the calculation speed of reactor dynamics simulations. The Monte Carlo IQS method is proposed in this paper, combining the advantages of both the IQS method and the MC method, which makes it well suited for solving space-time dynamics problems of new-concept reactors. Based on the theory of IQS, Monte Carlo algorithms for calculating the adjoint neutron flux, reactor kinetics parameters and shape function were designed and implemented. A simple Monte Carlo IQS code and a corresponding diffusion IQS code were developed and used for verification of the Monte Carlo IQS method. (authors)
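    The quasi-static factorization behind IQS splits the flux into an amplitude and a slowly varying shape, so between the expensive shape updates only the point-kinetics equations must be advanced on fine time steps. Below is a one-delayed-group sketch of that fine-step amplitude solve; the Monte Carlo shape and kinetics-parameter calculation is not shown, and all numbers are illustrative.

```python
def point_kinetics_step(A, C, rho, beta, lam, Lambda, dt, nsub=100):
    """Advance amplitude A and one-group precursor concentration C over dt
    by explicit sub-stepping of the point-kinetics equations:
        dA/dt = ((rho - beta) / Lambda) * A + lam * C
        dC/dt = (beta / Lambda) * A - lam * C
    rho: reactivity, beta: delayed fraction, lam: precursor decay
    constant, Lambda: neutron generation time."""
    h = dt / nsub
    for _ in range(nsub):
        dA = ((rho - beta) / Lambda) * A + lam * C
        dC = (beta / Lambda) * A - lam * C
        A, C = A + h * dA, C + h * dC
    return A, C
```

    In a full IQS driver, rho, beta and Lambda would be re-evaluated from the (Monte Carlo) shape function at every coarse step, which is exactly what makes large shape-update steps affordable.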

  1. Improvements in real time {sup 222}Rn monitoring at Stromboli volcano

    Energy Technology Data Exchange (ETDEWEB)

    Lavagno, A., E-mail: andrea.lavagno@polito.it [Dipartimento di Scienze Applicata e Tecnologia, Politecnico di Torino (Italy); INFN, Sezione di Torino (Italy); Laiolo, M. [Dipartimento di Scienze della Terra, Università di Torino (Italy); Gervino, G. [Dipartimento di Fisica, Università di Torino (Italy); INFN, Sezione di Torino (Italy); Cigolini, C.; Coppola, D.; Piscopo, D. [Dipartimento di Scienze della Terra, Università di Torino (Italy); Marino, C. [Dipartimento di Fisica, Università di Torino (Italy); INFN, Sezione di Torino (Italy)

    2013-08-01

    Monitoring gas emissions from soil allows one to obtain information on volcanic activity, hidden faults and hydrothermal dynamics. Radon activities at Stromboli were collected by means of multi-parametric real-time stations that measure radon as well as environmental parameters. The latest improvements to the detection system are presented and discussed.

  2. Time Domain Induced Polarization

    DEFF Research Database (Denmark)

    Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest

    2012-01-01

    Time-domain induced polarization has significantly broadened its field of application during the last decade, from mineral exploration to environmental geophysics, e.g., for clay and peat identification and landfill characterization. However, insufficient modeling tools have hitherto limited the use... of time-domain induced polarization for wider purposes. For these reasons, a new forward code and inversion algorithm have been developed using the full time decay of the induced polarization response, together with an accurate description of the transmitter waveform and of the receiver transfer function......, to reconstruct the distribution of the Cole-Cole parameters of the earth. The accurate modeling of the transmitter waveform had a strong influence on the forward response, and we showed that the difference between a solution using a step response and a solution using the accurate modeling often is above 100...
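    The Cole-Cole parameters recovered by the inversion describe the complex resistivity spectrum through the DC resistivity, the chargeability, the relaxation time and the frequency exponent. A minimal sketch of the textbook Cole-Cole resistivity formula (the authors' forward code, which also models the waveform and receiver transfer function, is far more involved):

```python
import cmath  # complex math, used implicitly via complex arithmetic

def cole_cole_resistivity(omega, rho0, m, tau, c):
    """Complex Cole-Cole resistivity
        rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (i*omega*tau)**c)))
    rho0: DC resistivity, m: chargeability (0..1), tau: relaxation
    time, c: frequency exponent."""
    iwt = 1j * omega * tau
    return rho0 * (1 - m * (1 - 1 / (1 + iwt ** c)))
```

    The two limits are the useful sanity checks: rho(0) = rho0 and rho(inf) = rho0 * (1 - m), so the chargeability m is the relative resistivity drop across the spectrum.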

  3. Evaluation of a timing integrated circuit architecture for continuous crystal and SiPM based PET systems

    OpenAIRE

    Monzó Ferrer, José María; Ros García, Ana; Herrero Bosch, Vicente; Perino Vicentini, Ivan Virgilio; Aliaga Varea, Ramón José; Gadea Gironés, Rafael; Colom Palero, Ricardo José

    2013-01-01

    Improving timing resolution in positron emission tomography (PET), i.e., obtaining fine time information on the detected pulses, is important to increase the reconstructed images' signal-to-noise ratio (SNR) [1]. In the present work, an integrated circuit topology for time extraction of the incoming pulses is evaluated. An accurate simulation including the detector physics and the electronics with different configurations has been developed. The selected architecture is intended for a PET sys...

  4. Improved microwave-assisted wet digestion procedures for accurate Se determination in fish and shellfish by flow injection-hydride generation-atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Lavilla, I.; Gonzalez-Costas, J.M.; Bendicho, C.

    2007-01-01

    Accurate determination of Se in biological samples, especially fish and shellfish, by hydride generation techniques has generally proven troublesome owing to the presence of organoselenium that cannot readily be converted into inorganic selenium under usual oxidising conditions. Further improvements in the oxidation procedures are needed so as to obtain accurate concentration values when this type of sample is analyzed. Microwave-assisted wet digestion (MAWD) procedures for seafood based on HNO3 or the mixture HNO3/H2O2, with further thermal reduction of the Se(VI) formed to Se(IV), were evaluated. These procedures were as follows: (I) without H2O2 and without heating to dryness; (II) without H2O2 and with heating to dryness; (III) with H2O2 and without heating to dryness; (IV) with H2O2 and with heating to dryness. In general, low recoveries of selenium are obtained for several marine species (e.g., crustaceans and cephalopods), which may be ascribed to the presence of Se forms mainly associated with nonpolar proteins and lipids. Post-digestion UV irradiation proved very efficient, since not only was complete organoselenium decomposition achieved but the final step required for prereduction of Se(VI) to Se(IV) (i.e. heating at 90 °C for 30 min in 6 M HCl) could also be avoided. With the MAWD/UV procedure, the use of strong oxidising agents (persulphate, etc.) or acids (e.g. perchloric acid), which are typically applied prior to Se determination by hydride generation techniques, is avoided, and as a result sample pre-treatment is significantly simplified. The method was successfully validated against CRM DOLT-2 (dogfish liver), CRM DORM-2 (dogfish muscle) and CRM TORT-2 (lobster hepatopancreas). Automated ultrasonic slurry sampling with electrothermal atomic absorption spectrometry was also applied for comparison. Total Se contents in ten seafood samples were established. Se levels ranged from 0.7 to 2.9 μg g-1

  5. The development of accurate data for the design of fast reactors

    International Nuclear Information System (INIS)

    Rossouw, P.A.

    1976-04-01

    The proposed use of nuclear power in the generation of electricity in South Africa and the use of fast reactors in the country's nuclear program require a method for fast reactor evaluation. The availability of accurate neutron data and neutronics computation techniques for fast reactors is required for such an evaluation. The reactor physics and reactor parameters of importance in the evaluation of fast reactors are discussed, and computer programs for the computation of reactor spectra and reactor parameters from differential nuclear data are presented in this treatise. In endeavouring to increase the accuracy of fast reactor design, two methods for the improvement of differential nuclear data were developed and are discussed in detail. The computer programs which were developed for this purpose are also given. The neutron data of the most important fissionable and breeding nuclei (U-235, U-238, Pu-239 and Pu-240) are adjusted using both methods, and the improved neutron data are tested by computation with an advanced neutronics computer program. The improved and original neutron data are compared and the use of the improved data in fast reactor design is discussed

  6. Improving automated disturbance maps using snow-covered landsat time series stacks

    Science.gov (United States)

    Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen

    2012-01-01

    Snow-covered winter Landsat time series stacks are used to develop a nonforest mask to enhance automated disturbance maps produced by the Vegetation Change Tracker (VCT). This method exploits the enhanced spectral separability between forested and nonforested areas that occurs with sufficient snow cover. This method resulted in significant improvements in Vegetation...

  7. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    International Nuclear Information System (INIS)

    Silva, Goncalo; Talon, Laurent; Ginzburg, Irina

    2017-01-01

    The present contribution focuses on the accuracy of reflection-type boundary conditions in the Stokes–Brinkman–Darcy modeling of porous flows solved with the lattice Boltzmann method (LBM), which we operate with the two-relaxation-time (TRT) collision and the Brinkman-force based scheme (BF), called the BF-TRT scheme. In parallel, we compare it with the Stokes–Brinkman–Darcy linear finite element method (FEM), where the Dirichlet boundary conditions are enforced on grid vertices. In bulk, both BF-TRT and FEM share the same defect: in their discretization a correction to the modeled Brinkman equation appears, given by the discrete Laplacian of the velocity-proportional resistance force. This correction modifies the effective Brinkman viscosity, playing a crucial role in the triggering of spurious oscillations in the bulk solution. While the exact form of this defect is available in lattice-aligned (straight or diagonal) flows, in arbitrary flow/lattice orientations its approximation is constructed. At boundaries, we verify that such a Brinkman viscosity correction has an even more harmful impact. Already at first order, it shifts the location of the no-slip wall condition supported by traditional LBM boundary schemes, such as the bounce-back rule. For that reason, this work develops a new class of boundary schemes to prescribe the Dirichlet velocity condition at an arbitrary wall/boundary-node distance and that supports higher-order accuracy in the accommodation of the TRT-Brinkman solutions. For their modeling, we consider the standard BF scheme and its improved version, called IBF; the latter is generalized in this work to suppress or reduce the viscosity correction in arbitrarily oriented flows. Our framework extends the one- and two-point families of linear and parabolic link-wise boundary schemes, respectively called B-LI and B-MLI, which avoid the interference of the Brinkman viscosity correction in their closure relations. The performance of LBM

  8. Low- and high-order accurate boundary conditions: From Stokes to Darcy porous flow modeled with standard and improved Brinkman lattice Boltzmann schemes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Goncalo, E-mail: goncalo.nuno.silva@gmail.com [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France); Talon, Laurent, E-mail: talon@fast.u-psud.fr [CNRS (UMR 7608), Laboratoire FAST, Batiment 502, Campus University, 91405 Orsay (France); Ginzburg, Irina, E-mail: irina.ginzburg@irstea.fr [Irstea, Antony Regional Centre, HBAN, 1 rue Pierre-Gilles de Gennes CS 10030, 92761 Antony cedex (France)

    2017-04-15

    The present contribution focuses on the accuracy of reflection-type boundary conditions in the Stokes–Brinkman–Darcy modeling of porous flows solved with the lattice Boltzmann method (LBM), which we operate with the two-relaxation-time (TRT) collision and the Brinkman-force based scheme (BF), called the BF-TRT scheme. In parallel, we compare it with the Stokes–Brinkman–Darcy linear finite element method (FEM), where the Dirichlet boundary conditions are enforced on grid vertices. In bulk, both BF-TRT and FEM share the same defect: in their discretization a correction to the modeled Brinkman equation appears, given by the discrete Laplacian of the velocity-proportional resistance force. This correction modifies the effective Brinkman viscosity, playing a crucial role in the triggering of spurious oscillations in the bulk solution. While the exact form of this defect is available in lattice-aligned (straight or diagonal) flows, in arbitrary flow/lattice orientations its approximation is constructed. At boundaries, we verify that such a Brinkman viscosity correction has an even more harmful impact. Already at first order, it shifts the location of the no-slip wall condition supported by traditional LBM boundary schemes, such as the bounce-back rule. For that reason, this work develops a new class of boundary schemes to prescribe the Dirichlet velocity condition at an arbitrary wall/boundary-node distance and that supports higher-order accuracy in the accommodation of the TRT-Brinkman solutions. For their modeling, we consider the standard BF scheme and its improved version, called IBF; the latter is generalized in this work to suppress or reduce the viscosity correction in arbitrarily oriented flows. Our framework extends the one- and two-point families of linear and parabolic link-wise boundary schemes, respectively called B-LI and B-MLI, which avoid the interference of the Brinkman viscosity correction in their closure relations. The performance of LBM

  9. Performance Improvement of Real-Time System for Plasma Control in RFX-mod

    International Nuclear Information System (INIS)

    Luchetta, A.; Manduchi, G.; Soppelsa, A.; Taliercio, C.

    2006-01-01

    The real-time system for plasma control has been used routinely in RFX-mod since commissioning (mid 2005). It is based on a modular hardware/software infrastructure, currently including 7 VME stations, capable of fulfilling the tight system requirements in terms of input/output channels (> 700 / > 250), real-time data flow (> 2 Mbyte/s), computation capability (> 1 GFLOP/s per station), and real-time constraints on application cycle times [33rd EPS Conf. on Plasma Physics, Rome, Italy, June 19-23, 2006]. The high flexibility of the system has stimulated the development of a large number of control schemes with progressively increasing requests in terms of computation complexity and real-time data flow, demanding, at the same time, strict control of cycle times and system latency. Even though careful optimisation of algorithm implementation and real-time data transmission has been performed, allowing the system, so far, to keep pace with the increased control requirements, future developments require evolving the current technology while retaining the basic architecture and concepts. Two system enhancements are envisaged in the near future. The 500 MHz PowerPC-based single-board computer currently in use will be substituted with the 1 GHz version, whereas the real-time communication system will increase in bandwidth from 100 Mbit/s to 1 Gbit/s. These improvements will surely enhance the overall system performance, even if it is not possible to quantify a priori the exact performance boost, since other components may limit the performance in the new configuration. The paper reports in detail on the analysis of the bottlenecks of the current architecture. Based on measurements carried out in the laboratory, it presents the results achieved with the proposed enhancements in terms of real-time data throughput, cycle times and latency. The paper analyses in detail the effects of the increased computing power on the components of the control system and of the improved bandwidth in real-time

  10. Three-dimensional ultrasound image-guided robotic system for accurate microwave coagulation of malignant liver tumours.

    Science.gov (United States)

    Xu, Jing; Jia, Zhen-zhong; Song, Zhang-jun; Yang, Xiang-dong; Chen, Ken; Liang, Ping

    2010-09-01

    The further application of conventional ultrasound (US) image-guided microwave (MW) ablation of liver cancer is often limited by two-dimensional (2D) imaging, inaccurate needle placement and the resulting skill requirement. A three-dimensional (3D) image-guided robot-assisted system provides an appealing alternative, enabling the physician to perform consistent, accurate therapy with improved treatment effectiveness. Our robotic system is constructed by integrating an imaging module, a needle-driving robot, a MW thermal field simulation module, and surgical navigation software in a practical and user-friendly manner. The robot executes precise needle placement based on the 3D model reconstructed from freehand-tracked 2D B-scans. A qualitative slice guidance method for fine registration is introduced to reduce the placement error caused by target motion. By incorporating the 3D MW specific absorption rate (SAR) model into the heat transfer equation, the MW thermal field simulation module determines the MW power level and the coagulation time for improved ablation therapy. Two types of wrists were developed for the robot: a 'remote centre of motion' (RCM) wrist and a non-RCM wrist, which is preferred in real applications. The needle placement accuracy of the robot with the RCM wrist was improved to 1.6 +/- 1.0 mm when real-time 2D US feedback was used in the artificial-tissue phantom experiment. By using the slice guidance method, the robot with the non-RCM wrist achieved an accuracy of 1.8 +/- 0.9 mm in the ex vivo experiment, even when target motion was introduced. In the thermal field experiment, a 5.6% relative mean error was observed between the experimentally coagulated necrosis volume and the simulation result. The proposed robotic system holds promise to enhance the clinical performance of percutaneous MW ablation of malignant liver tumours. Copyright 2010 John Wiley & Sons, Ltd.

  11. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-11-04

    Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent across all the angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.

  12. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms yields accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
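    The key observation, that similarity from a small-degree user toward a large-degree user differs from the reverse direction, can be sketched with a toy directed similarity that normalizes the common-item overlap by the *target* user's degree, so similarities pointing at mainstream users are depressed. This is an illustrative form only; the paper's exact second-order directed similarity is not reproduced here.

```python
def directed_similarity(ratings, u, v):
    """Directed user similarity s(u -> v): overlap of the two users'
    selections divided by the degree of the target user v. Because of
    the asymmetric normalization, s(u -> v) != s(v -> u) in general.

    ratings: dict mapping user -> set of selected items."""
    if not ratings[v]:
        return 0.0
    return len(ratings[u] & ratings[v]) / len(ratings[v])

def recommend(ratings, u, top_n=3):
    """Score items unseen by u by accumulating the directed similarities
    of the users who selected them, then return the top-scoring items."""
    scores = {}
    for v, items in ratings.items():
        if v == u:
            continue
        s = directed_similarity(ratings, u, v)
        for i in items - ratings[u]:
            scores[i] = scores.get(i, 0.0) + s
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

    The asymmetry is the whole point: a niche user resembling a blockbuster consumer does not imply the converse, and keeping the two directions distinct is what lets the algorithm trade mainstream accuracy for diversity.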

  13. Accurate 3D Localization Method for Public Safety Applications in Vehicular Ad-hoc Networks

    KAUST Repository

    Ansari, Abdul Rahim

    2018-04-10

    Vehicular ad hoc networks (VANETs) represent a very promising research area because of their ever-increasing demand, especially for public safety applications. In VANETs, vehicles communicate with each other to exchange road maps and traffic information. In many applications, location-based services are the main service, and localization accuracy is the main problem. VANETs also require accurate vehicle location information in real time. To fulfill this requirement, a number of algorithms have been proposed; however, the location accuracy required for public safety applications in VANETs has not been achieved. In this paper, an improved subspace algorithm is proposed for time of arrival (TOA) measurements in VANET localization. The proposed method gives a closed-form solution and is robust to large measurement noise, as it is based on the eigen form of a scalar product and dimensionality. Furthermore, we derived the Cramer-Rao Lower Bound (CRLB) to evaluate the performance of the proposed 3D VANET localization method. The performance of the proposed method was evaluated by comparison with the CRLB and other localization algorithms available in the literature through numerous simulations. Simulation results show that the proposed 3D VANET localization method outperforms the literature methods, especially with fewer anchors at roadside units and large noise variance.
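    For context on what TOA localization solves: the classical baseline turns four ranges r_i = ||x - a_i|| into a closed-form 3D position by linearizing the squared-range equations against a reference anchor. The sketch below implements that generic baseline (not the paper's subspace estimator) with Cramer's rule for the resulting 3x3 system.

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def toa_locate(anchors, ranges):
    """3D position from four TOA ranges. Subtracting the squared-range
    equation of anchor 0 from the others eliminates ||x||^2 and yields
    the linear system 2*(a_i - a_0) . x = ||a_i||^2 - ||a_0||^2 - r_i^2 + r_0^2,
    solved here with Cramer's rule."""
    a0, r0 = anchors[0], ranges[0]
    A, b = [], []
    for ai, ri in zip(anchors[1:], ranges[1:]):
        A.append([2.0 * (p - q) for p, q in zip(ai, a0)])
        b.append(sum(p * p for p in ai) - sum(q * q for q in a0)
                 - ri * ri + r0 * r0)
    D = det3(A)
    x = []
    for col in range(3):
        Mc = [row[:] for row in A]
        for r in range(3):
            Mc[r][col] = b[r]
        x.append(det3(Mc) / D)
    return x
```

    With noisy ranges this linearized solution degrades quickly, which is precisely the regime where noise-robust closed-form estimators such as the proposed subspace method are claimed to help.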

  14. Fault diagnosis for analog circuits utilizing time-frequency features and improved VVRKFA

    Science.gov (United States)

    He, Wei; He, Yigang; Luo, Qiwu; Zhang, Chaolong

    2018-04-01

    This paper proposes a novel scheme for analog circuit fault diagnosis utilizing features extracted from the time-frequency representations of signals and an improved vector-valued regularized kernel function approximation (VVRKFA). First, the cross-wavelet transform is employed to yield the energy-phase distribution of the fault signals over the time and frequency domains. Since the distribution is high-dimensional, a supervised dimensionality reduction technique, bilateral 2D linear discriminant analysis, is applied to build a concise feature set from the distributions. Finally, VVRKFA is utilized to locate the fault. In order to improve the classification performance, the quantum-behaved particle swarm optimization technique is employed to gradually tune the learning parameters of the VVRKFA classifier. The experimental results for analog circuit fault classification demonstrate that the proposed diagnosis scheme has an advantage over other approaches.

  15. Approximate distance oracles for planar graphs with improved query time-space tradeoff

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2016-01-01

    We consider approximate distance oracles for edge-weighted n-vertex undirected planar graphs. Given fixed ϵ > 0, we present a (1 + ϵ)-approximate distance oracle with O(n(log log n)^2) space and O((log log n)^3) query time. This improves the previous best product of query time and space of the oracles of Thorup (FOCS 2001, J. ACM 2004) and Klein (SODA 2002) from O(n log n) to O(n(log log n)^5)....

  16. Improvement of a land surface model for accurate prediction of surface energy and water balances

    International Nuclear Information System (INIS)

    Katata, Genki

    2009-02-01

    In order to predict energy and water balances between the biosphere and atmosphere accurately, sophisticated schemes to calculate evaporation and adsorption processes in the soil and cloud (fog) water deposition on vegetation were implemented in the one-dimensional atmosphere-soil-vegetation model including the CO2 exchange process (SOLVEG2). Performance tests in arid areas showed that the above schemes have a significant effect on surface energy and water balances. The framework of the above schemes incorporated in SOLVEG2 and instructions for running the model are documented. With further modifications of the model to implement carbon exchanges between vegetation and soil, deposition processes of materials on the land surface, vegetation stress-growth dynamics, etc., the model is suited to evaluating the effect on ecosystems of environmental loads from atmospheric pollutants and radioactive substances under climate changes such as global warming and drought. (author)

  17. Overlay improvements using a real time machine learning algorithm

    Science.gov (United States)

    Schmitt-Weaver, Emil; Kubis, Michael; Henke, Wolfgang; Slotboom, Daan; Hoogenboom, Tom; Mulkens, Jan; Coogans, Martyn; ten Berge, Peter; Verkleij, Dick; van de Mast, Frank

    2014-04-01

    While semiconductor manufacturing is moving towards the 14 nm node using immersion lithography, the overlay requirements are tightened to below 5 nm. In addition to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with pre-exposure wafer alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we will present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data paired with post-exposure YieldStar metrology to recognize generic behavior and train the control system to anticipate this generic behavior. Specific to this study, the data concern immersion scanner context, sensor data and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance in time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.

  18. Using time-driven activity-based costing to identify value improvement opportunities in healthcare.

    Science.gov (United States)

    Kaplan, Robert S; Witkowski, Mary; Abbott, Megan; Guzman, Alexis Barboza; Higgins, Laurence D; Meara, John G; Padden, Erin; Shah, Apurva S; Waters, Peter; Weidemeier, Marco; Wertheimer, Sam; Feeley, Thomas W

    2014-01-01

    As healthcare providers cope with pricing pressures and increased accountability for performance, they should be rededicating themselves to improving the value they deliver to their patients: better outcomes and lower costs. Time-driven activity-based costing offers the potential for clinicians to redesign their care processes toward that end. This costing approach, however, is new to healthcare and has not yet been systematically implemented and evaluated. This article describes early time-driven activity-based costing work at several leading healthcare organizations in the United States and Europe. It identifies the opportunities they found to improve value for patients and demonstrates how this costing method can serve as the foundation for new bundled payment reimbursement approaches.
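    The mechanics of time-driven activity-based costing reduce to two numbers per resource: a capacity cost rate (cost of supplying the resource per minute of practical capacity) and the minutes each activity along the care path consumes. A minimal sketch with invented figures; real implementations add indirect costs, multiple departments and bundled-payment aggregation.

```python
def capacity_cost_rate(total_cost, practical_capacity_minutes):
    """Cost per minute of supplying a resource: total resource cost
    divided by its practical capacity in minutes."""
    return total_cost / practical_capacity_minutes

def episode_cost(activities, rates):
    """Time-driven activity-based cost of one care episode.

    activities: list of (resource, minutes) pairs along the care path.
    rates: dict mapping resource -> cost per minute."""
    return sum(minutes * rates[resource] for resource, minutes in activities)
```

    The value of the method comes from the process map behind `activities`: once each step is timed, clinicians can see which minutes (and which resources) drive the cost of an outcome.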

  19. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
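    Composition-based classification starts from a fixed-length feature vector of k-mer frequencies computed on each fragment; the classifier (in PhyloPythia's case, trained on genome data per taxonomic rank) then operates on these vectors. A minimal sketch of the feature extraction step only; the training and classification machinery is not shown.

```python
from itertools import product

def kmer_composition(seq, k=4):
    """Normalized k-mer frequency vector of a DNA fragment, in the fixed
    lexicographic order of all 4**k k-mers over A, C, G, T. Windows
    containing other characters (e.g. N) are skipped."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        w = seq[i:i + k]
        if w in counts:
            counts[w] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return [counts[w] / total for w in kmers]
```

    Because the vector length depends only on k, fragments of any length (down to the ~1 kb regime discussed above) map into the same feature space and can be compared directly.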

  20. Improving your real-time data infrastructure using advanced data validation and reconciliation

    Energy Technology Data Exchange (ETDEWEB)

    Wising, Ulrika; Campan, Julien; Vrielynck, Bruno; Anjos, Cristiano dos; Kalitventzeff, Pierre-Boris [Belsim S.A., Awans (Belgium)

    2008-07-01

    'Smart fields', 'e-fields', 'field of the future', 'digital oil fields' and 'field monitoring' are all names for real-time data infrastructures aimed at providing information for decision making. This paper discusses these new real-time data infrastructures being developed and deployed in oil and gas production, and in particular the challenge of supplying them with high-quality data. For these infrastructures to be successful and to provide efficient performance management and optimization, they need access to high-quality production data. Advanced Data Validation and Reconciliation is a technology that can meet this data quality challenge: it has been successfully deployed in many industry sectors and, more recently, in oil and gas production, and it provides the coherent, accurate set of production data on which these new infrastructures can be based. There are numerous other benefits of applying advanced data validation and reconciliation in oil and gas production, such as uninterrupted well production, optimized valve opening and water or gas injection, backup values for traditional multiphase flow meters, and the avoidance of production upsets. (author)
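    At its core, data validation and reconciliation adjusts redundant measurements as little as possible, weighted by their uncertainties, so that the adjusted values satisfy the process model exactly. For a single linear balance (e.g. a feed split x1 = x2 + x3) the weighted least-squares solution has a closed form; the sketch below is that textbook case, not Belsim's implementation, and all numbers in the usage are invented.

```python
def reconcile(measured, variances, balance):
    """Reconcile measurements for one linear balance sum(a_i * x_i) = 0:
    minimize sum((x_i - m_i)^2 / var_i) subject to the balance. The
    Lagrange-multiplier solution is x_i = m_i - var_i * a_i * lam with
    lam = (a . m) / sum(a_i^2 * var_i), so less reliable (high-variance)
    measurements absorb more of the correction."""
    residual = sum(a * m for a, m in zip(balance, measured))
    denom = sum(a * a * v for a, v in zip(balance, variances))
    lam = residual / denom
    return [m - v * a * lam for m, v, a in zip(measured, variances, balance)]
```

    For example, measured flows of 100, 60 and 45 units on a split that should balance (coefficients 1, -1, -1) are nudged to values that close the balance exactly, which is what makes the reconciled set "coherent" in the sense used above.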

  1. Multiscale Analysis of Time Irreversibility Based on Phase-Space Reconstruction and Horizontal Visibility Graph Approach

    Science.gov (United States)

    Zhang, Yongping; Shang, Pengjian; Xiong, Hui; Xia, Jianan

    Time irreversibility is an important property of nonequilibrium dynamic systems. A visibility graph approach was recently proposed, and this approach is generally effective to measure time irreversibility of time series. However, its result may be unreliable when dealing with high-dimensional systems. In this work, we consider the joint concept of time irreversibility and adopt the phase-space reconstruction technique to improve this visibility graph approach. Compared with the previous approach, the improved approach gives a more accurate estimate for the irreversibility of time series, and is more effective to distinguish irreversible and reversible stochastic processes. We also use this approach to extract the multiscale irreversibility to account for the multiple inherent dynamics of time series. Finally, we apply the approach to detect the multiscale irreversibility of financial time series, and succeed in distinguishing the period of financial crisis from the plateau. In addition, the separation of Asian stock indexes from the other indexes is clearly visible at higher time scales. Simulations and real data support the effectiveness of the improved approach when detecting time irreversibility.
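
    The published method applies the irreversibility measure to delay-embedded phase-space vectors across multiple scales; as a minimal sketch of the scalar building block only, the code below constructs the directed horizontal visibility graph and compares its out- and in-degree distributions with a smoothed Kullback-Leibler divergence. The test series and the smoothing constant are illustrative, not the paper's.

```python
from math import log

def hvg_degrees(x):
    """Directed horizontal visibility graph: t -> s (t < s) iff every value
    strictly between them lies below both x[t] and x[s]."""
    n = len(x)
    k_out, k_in = [0] * n, [0] * n
    for t in range(n):
        m = float("-inf")              # running max of intermediate values
        for s in range(t + 1, n):
            if m < x[t] and m < x[s]:
                k_out[t] += 1
                k_in[s] += 1
            m = max(m, x[s])
            if m >= x[t]:              # x[t] is blocked from here on
                break
    return k_out, k_in

def degree_dist(ks, kmax, eps=1e-6):
    """Smoothed empirical distribution of degrees 0..kmax."""
    n, bins = len(ks), kmax + 1
    return [(ks.count(k) + eps) / (n + eps * bins) for k in range(bins)]

def irreversibility(x):
    """KL divergence between out- and in-degree distributions:
    ~0 for time-reversible series, positive for irreversible ones."""
    k_out, k_in = hvg_degrees(x)
    kmax = max(k_out + k_in)
    p, q = degree_dist(k_out, kmax), degree_dist(k_in, kmax)
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q))

sawtooth = [t % 5 for t in range(300)]                 # slow rise, sudden fall
palindrome = list(range(150)) + list(range(150))[::-1]  # time-symmetric
print(round(irreversibility(palindrome), 6), irreversibility(sawtooth) > 0.1)
```

    A time-symmetric series yields identical forward and backward degree distributions, so the divergence vanishes; the sawtooth's asymmetric rise/fall structure makes it clearly positive.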

  2. arXiv Mass-improvement of the vector current in three-flavor QCD

    CERN Document Server

    Fritzsch, Patrick

    2018-06-04

    We determine two improvement coefficients which are relevant to cancel mass-dependent cutoff effects in correlation functions with operator insertions of the non-singlet local QCD vector current. This determination is based on degenerate three-flavor QCD simulations of non-perturbatively O(a) improved Wilson fermions with tree-level improved gauge action. Employing a very robust strategy that has been pioneered in the quenched approximation leads to an accurate estimate of a counterterm cancelling dynamical quark cutoff effects linear in the trace of the quark mass matrix. To our knowledge this is the first time that such an effect has been determined systematically with large significance.

  3. Real-Time Electronic Dashboard Technology and Its Use to Improve Pediatric Radiology Workflow.

    Science.gov (United States)

    Shailam, Randheer; Botwin, Ariel; Stout, Markus; Gee, Michael S

    The purpose of our study was to create a real-time electronic dashboard in the pediatric radiology reading room providing a visual display of updated information regarding scheduled and in-progress radiology examinations that could help radiologists to improve clinical workflow and efficiency. To accomplish this, a script was set up to automatically send real-time HL7 messages from the radiology information system (Epic Systems, Verona, WI) to an Iguana Interface engine, with relevant data regarding examinations stored in an SQL Server database for visual display on the dashboard. Implementation of an electronic dashboard in the reading room of a pediatric radiology academic practice has led to several improvements in clinical workflow, including decreasing the time interval for radiologist protocol entry for computed tomography or magnetic resonance imaging examinations as well as fewer telephone calls related to unprotocoled examinations. Other advantages include enhanced ability of radiologists to anticipate and attend to examinations requiring radiologist monitoring or scanning, as well as to work with technologists and operations managers to optimize scheduling in radiology resources. We foresee increased utilization of electronic dashboard technology in the future as a method to improve radiology workflow and quality of patient care. Copyright © 2017 Elsevier Inc. All rights reserved.
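
    As a hedged sketch of the pipeline's first hop (HL7 message in, queryable row out), the snippet below parses a hypothetical pipe-delimited HL7 v2-style message and stores the fields a dashboard would display. The message content and field positions are illustrative only, and sqlite3 stands in for the production interface engine and SQL Server store described above.

```python
import sqlite3

# Hypothetical HL7 v2-style message; segments and field positions are
# illustrative only, not a validated HL7 implementation.
raw = ("MSH|^~\\&|RIS|HOSP|DASH|HOSP|20240101120000||ORM^O01|123|P|2.3\r"
       "OBR|1|ACC001||CT^CT CHEST|||20240101113000")

def parse_exam(message):
    """Pull accession, modality, and scheduled time out of the OBR segment."""
    for seg in message.split("\r"):
        f = seg.split("|")
        if f[0] == "OBR":
            return {"accession": f[2], "modality": f[4].split("^")[0],
                    "scheduled": f[7]}
    return None

# In-memory table standing in for the dashboard's backing database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE exams (accession TEXT, modality TEXT, scheduled TEXT)")
conn.execute("INSERT INTO exams VALUES (:accession, :modality, :scheduled)",
             parse_exam(raw))
print(conn.execute("SELECT accession, modality FROM exams").fetchone())
```

    The dashboard itself then becomes a simple periodic query over this table, refreshed as new messages arrive.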

  4. Accurate determination of 3-alkyl-2-methoxypyrazines in wines by gas chromatography quadrupole time-of-flight tandem mass spectrometry following solid-phase extraction and dispersive liquid-liquid microextraction.

    Science.gov (United States)

    Fontana, Ariel; Rodríguez, Isaac; Cela, Rafael

    2017-09-15

    A new reliable method for the determination of 3-alkyl-2-methoxypyrazines (MPs) in wine samples based on the sequential combination of solid-phase extraction (SPE), dispersive liquid-liquid microextraction (DLLME) and gas chromatography (GC) quadrupole time-of-flight accurate tandem mass spectrometry (QTOF-MS/MS) is presented. Primary extraction of target analytes was carried out by using a reversed-phase Oasis HLB (200 mg) SPE cartridge combined with acetonitrile as elution solvent. Afterwards, the SPE extract was submitted to DLLME concentration using 0.06 mL carbon tetrachloride (CCl4) as extractant. Under final working conditions, sample concentration factors above 379 times and limits of quantification (LOQs) between 0.3 and 2.1 ng/L were achieved. Moreover, the overall extraction efficiency of the method was unaffected by the particular characteristics of each wine; thus, accurate results (relative recoveries from 84 to 108% for samples spiked at concentrations from 5 to 25 ng/L) were obtained using matrix-matched standards, without using standard additions over every sample. Highly selective chromatographic records were achieved considering a mass window of 5 mDa, centered in the quantification product ion corresponding to each compound. Twelve commercial wines, elaborated with grapes from different varieties and geographical origins, were processed with the optimized method. The 2-isobutyl-3-methoxypyrazine (IBMP) was determined at levels above the LOQs of the method in half of the samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. How Photonic Crystals Can Improve the Timing Resolution of Scintillators

    CERN Document Server

    Lecoq, P; Knapitsch, A

    2013-01-01

    Photonic crystals (PhCs) and quantum optics phenomena open interesting perspectives to enhance the light extraction from scintillating media with high refractive indices, as demonstrated by our previous work. By doing so, they also influence the timing resolution of scintillators by improving the photostatistics. The present contribution will demonstrate that they are actually doing much more. Indeed, photonic crystals, if properly designed, allow the extraction of fast light propagation modes in the crystal with higher efficiency, therefore contributing to increasing the density of photons in the early phase of the light pulse. This is of particular interest to tag events at future high-energy physics colliders, such as CLIC, with a bunch-crossing rate of 2 GHz, as well as for a new generation of time-of-flight positron emission tomographs (TOFPET) aiming at a coincidence timing resolution of 100 ps FWHM. At this level of precision, good control of the light propagation modes is crucial if we consid...

  6. Surgical Outcome of Intermittent Exotropia With Improvement in Control Grade Subsequent to Part-time Preoperative Occlusion Therapy.

    Science.gov (United States)

    Bang, Seung Pil; Lee, Dong Cheol; Lee, Se Youp

    2018-01-01

    To evaluate the effect of improvement in the control grade of intermittent exotropia using part-time occlusion therapy on the final postoperative outcome. Control of intermittent exotropia was graded as good, fair, or poor in 89 consecutive patients with intermittent exotropia during their first visit. The patients were reevaluated after part-time preoperative occlusion therapy and divided into two groups (improvement and no improvement) according to whether they showed improvement in control grade. The surgical success rate was compared retrospectively between the two groups. The mean angle of deviation on the first visit was 27.61 ± 5.40 prism diopters (PD) at distance and 29.82 ± 5.28 PD at near. There were significant improvements in the angles of deviation for distance (26.17 ± 5.09 PD) and near (27.26 ± 5.56 PD) after part-time occlusion (both P Part-time occlusion therapy improves the control grade of intermittent exotropia, leading to a better likelihood of successful surgery and a reduction of the angles of deviation for distance and near. [J Pediatr Ophthalmol Strabismus. 2018;55(1):59-64.]. Copyright 2017, SLACK Incorporated.

  7. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    Science.gov (United States)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving efficiency and accuracy of the geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted at Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed to catch global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square errors over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various
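
    As a hedged sketch of the progressive-sampling idea only (not the authors' neural-network implementation), the code below grows chunk size and model complexity together, averages the per-chunk models into an ensemble, and stops once held-out error no longer improves. Piecewise-constant regressors stand in for the neural networks, and the stream data are synthetic.

```python
import random

def fit_binned(data, bins):
    """Piecewise-constant regressor on [0, 1): the mean of y in each bin.
    More bins = a more complex model."""
    sums, counts = [0.0] * bins, [0] * bins
    for x, y in data:
        b = min(int(x * bins), bins - 1)
        sums[b] += y
        counts[b] += 1
    overall = sum(sums) / max(sum(counts), 1)
    means = [s / c if c else overall for s, c in zip(sums, counts)]
    return lambda x: means[min(int(x * bins), bins - 1)]

def mse(predict, data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

random.seed(0)
points = [(x, 2 * x + random.gauss(0, 0.1))
          for x in (random.random() for _ in range(3000))]
holdout, stream = points[:500], points[500:]

# Progressive sampling: train on growing chunks with growing complexity,
# average the models, and stop once the holdout error stops improving.
ensemble, pos, chunk, bins, best = [], 0, 100, 2, float("inf")
while pos + chunk <= len(stream):
    ensemble.append(fit_binned(stream[pos:pos + chunk], bins))
    members = list(ensemble)
    avg = lambda x, ms=members: sum(m(x) for m in ms) / len(ms)
    err = mse(avg, holdout)
    if err >= best:
        ensemble.pop()        # extra complexity no longer pays off
        break
    best, pos, chunk, bins = err, pos + chunk, chunk * 2, bins * 2
print(len(ensemble), best < 0.2)
```

    Doubling both the chunk and the bin count each round mirrors the idea of matching model complexity to the amount of data seen so far, at near-optimal computational effort.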

  8. Improving photometric redshift estimation using GPZ: size information, post processing, and improved photometry

    Science.gov (United States)

    Gomes, Zahra; Jarvis, Matt J.; Almosallam, Ibrahim A.; Roberts, Stephen J.

    2018-03-01

    The next generation of large-scale imaging surveys (such as those conducted with the Large Synoptic Survey Telescope and Euclid) will require accurate photometric redshifts in order to optimally extract cosmological information. Gaussian Process for photometric redshift estimation (GPZ) is a promising new method that has been proven to provide efficient, accurate photometric redshift estimations with reliable variance predictions. In this paper, we investigate a number of methods for improving the photometric redshift estimations obtained using GPZ (but which are also applicable to others). We use spectroscopy from the Galaxy and Mass Assembly Data Release 2 with a limiting magnitude of r Program Data Release 1 and find that it produces significant improvements in accuracy, similar to the effect of including additional features.
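
    GPZ is built on sparse Gaussian processes with input-dependent noise; as a minimal sketch of plain GP regression only (dense, one feature, fixed RBF kernel, all numbers invented), the code below shows how such a model returns both a redshift estimate and a variance for it.

```python
from math import exp

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_predict(X, y, xq, ell=0.5, noise=1e-2):
    """GP regression mean/variance at xq with an RBF kernel (1-D inputs)."""
    k = lambda a, b: exp(-(a - b) ** 2 / (2 * ell ** 2))
    K = [[k(a, b) + (noise if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    alpha = solve(K, y)
    kq = [k(xi, xq) for xi in X]
    mean = sum(ai * ki for ai, ki in zip(alpha, kq))
    w = solve(K, kq)
    var = k(xq, xq) - sum(wi * ki for wi, ki in zip(w, kq))
    return mean, var

# Toy "photometric feature -> spectroscopic redshift" pairs (illustrative).
X = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [0.1, 0.3, 0.5, 0.7, 0.9]
m, v = gp_predict(X, y, 1.25)
print(round(m, 2), v > 0)
```

    The variance output is what makes such models attractive for surveys: unreliable photometric redshifts can be flagged and down-weighted in the cosmological analysis.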

  9. Real-time fMRI feedback training may improve chronic tinnitus

    Energy Technology Data Exchange (ETDEWEB)

    Haller, Sven [University Hospital Basel, Institute of Radiology, Department of Neuroradiology, Basel (Switzerland); Department of Imaging and Medical Informatics, Geneva University Hospital, Institute of Neuroradiology, Geneva (Switzerland); Birbaumer, Niels [University of Tuebingen, Institute of Medical Psychology and Behavioral Neurobiology, Tuebingen (Germany); Instituto di Ricovero e Cura a Carattere Scientifico, Ospedale San Camillo, Venezia (Italy); Veit, Ralf [University of Tuebingen, Institute of Medical Psychology and Behavioral Neurobiology, Tuebingen (Germany)

    2010-03-15

    Tinnitus consists of a more or less constant aversive tone or noise and is associated with excess auditory activation. Transient distortion of this activation (repetitive transcranial magnetic stimulation, rTMS) may improve tinnitus. Recently proposed operant training in real-time functional magnetic resonance imaging (rtfMRI) neurofeedback allows voluntary modification of specific circumscribed neuronal activations. Combining these observations, we investigated whether patients suffering from tinnitus can (1) learn to voluntarily reduce activation of the auditory system by rtfMRI neurofeedback and whether (2) successful learning improves tinnitus symptoms. Six participants with chronic tinnitus were included. First, location of the individual auditory cortex was determined in a standard fMRI auditory block-design localizer. Then, participants were trained to voluntarily reduce the auditory activation (rtfMRI) with visual biofeedback of the current auditory activation. Auditory activation significantly decreased after rtfMRI neurofeedback. This reduced the subjective tinnitus in two of six participants. These preliminary results suggest that tinnitus patients learn to voluntarily reduce spatially specific auditory activations by rtfMRI neurofeedback and that this may reduce tinnitus symptoms. Optimized training protocols (frequency, duration, etc.) may further improve the results. (orig.)

  10. Real-time fMRI feedback training may improve chronic tinnitus

    International Nuclear Information System (INIS)

    Haller, Sven; Birbaumer, Niels; Veit, Ralf

    2010-01-01

    Tinnitus consists of a more or less constant aversive tone or noise and is associated with excess auditory activation. Transient distortion of this activation (repetitive transcranial magnetic stimulation, rTMS) may improve tinnitus. Recently proposed operant training in real-time functional magnetic resonance imaging (rtfMRI) neurofeedback allows voluntary modification of specific circumscribed neuronal activations. Combining these observations, we investigated whether patients suffering from tinnitus can (1) learn to voluntarily reduce activation of the auditory system by rtfMRI neurofeedback and whether (2) successful learning improves tinnitus symptoms. Six participants with chronic tinnitus were included. First, location of the individual auditory cortex was determined in a standard fMRI auditory block-design localizer. Then, participants were trained to voluntarily reduce the auditory activation (rtfMRI) with visual biofeedback of the current auditory activation. Auditory activation significantly decreased after rtfMRI neurofeedback. This reduced the subjective tinnitus in two of six participants. These preliminary results suggest that tinnitus patients learn to voluntarily reduce spatially specific auditory activations by rtfMRI neurofeedback and that this may reduce tinnitus symptoms. Optimized training protocols (frequency, duration, etc.) may further improve the results. (orig.)

  11. Measuring the value of process improvement initiatives in a preoperative assessment center using time-driven activity-based costing.

    Science.gov (United States)

    French, Katy E; Albright, Heidi W; Frenzel, John C; Incalcaterra, James R; Rubio, Augustin C; Jones, Jessica F; Feeley, Thomas W

    2013-12-01

    The value and impact of process improvement initiatives are difficult to quantify. We describe the use of time-driven activity-based costing (TDABC) in a clinical setting to quantify the value of process improvements in terms of cost, time and personnel resources. Difficulty in identifying and measuring the cost savings of process improvement initiatives in a Preoperative Assessment Center (PAC). Use TDABC to measure the value of process improvement initiatives that reduce the costs of performing a preoperative assessment while maintaining the quality of the assessment. Apply the principles of TDABC in a PAC to measure the value, from baseline, of two phases of performance improvement initiatives and determine the impact of each implementation in terms of cost, time and efficiency. Through two rounds of performance improvements, we quantified an overall reduction in time spent by patient and personnel of 33% that resulted in a 46% reduction in the costs of providing care in the center. The performance improvements resulted in a 17% decrease in the total number of full time equivalents (FTE's) needed to staff the center and a 19% increase in the numbers of patients assessed in the center. Quality of care, as assessed by the rate of cancellations on the day of surgery, was not adversely impacted by the process improvements. © 2013 Published by Elsevier Inc.
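
    The TDABC arithmetic itself is simple: a capacity cost rate (resource cost per minute of practical capacity) multiplied by the minutes each process step consumes. The sketch below uses invented figures, not the paper's actual PAC data.

```python
def capacity_cost_rate(total_cost, practical_minutes):
    """TDABC: cost per minute of supplying capacity
    (total resource cost / practical capacity in minutes)."""
    return total_cost / practical_minutes

def process_cost(steps, rate):
    """Cost of one visit: total step minutes times the cost rate."""
    return sum(minutes for _, minutes in steps) * rate

# Illustrative figures only.
rate = capacity_cost_rate(total_cost=50_000.0, practical_minutes=20_000)
baseline = [("check-in", 10), ("nursing intake", 30), ("anesthesia review", 25)]
improved = [("check-in", 5), ("nursing intake", 20), ("anesthesia review", 20)]
c0, c1 = process_cost(baseline, rate), process_cost(improved, rate)
print(c0, c1, round(100 * (c0 - c1) / c0))  # 162.5 112.5 31
```

    Because every step is costed in minutes, a process map before and after an improvement initiative translates directly into a percentage cost saving, which is how the study quantifies its 46% reduction.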

  12. Accurate estimation of influenza epidemics using Google search data via ARGO.

    Science.gov (United States)

    Yang, Shihao; Santillana, Mauricio; Kou, S C

    2015-11-24

    Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
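
    As an ARGO-flavored sketch on synthetic data: the regression combines last week's observed incidence (the autoregressive part) with this week's search volume (the "GO" part), and is compared against naive persistence. The series, coefficients, and noise levels are invented; the real model uses many lags and many search terms with regularization.

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (tiny dense systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols(X, y):
    """Ordinary least squares via the normal equations."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    return solve(A, b)

# Synthetic weekly ILI series; "search volume" tracks the *current* week's
# flu level with noise, standing in for Google Trends data.
random.seed(1)
ili = [5.0]
for _ in range(199):
    ili.append(0.8 * ili[-1] + 1.0 + random.gauss(0, 0.3))
search = [v + random.gauss(0, 0.3) for v in ili]

# Features: intercept, last week's ILI (autoregression), this week's search.
X = [[1.0, ili[t - 1], search[t]] for t in range(1, 200)]
y = [ili[t] for t in range(1, 200)]
beta = ols(X[:150], y[:150])

pred = [sum(b * f for b, f in zip(beta, row)) for row in X[150:]]
mse_argo = sum((p - t) ** 2 for p, t in zip(pred, y[150:])) / len(pred)
mse_naive = sum((ili[t - 1] - ili[t]) ** 2 for t in range(151, 200)) / 49
print(round(mse_argo, 3), round(mse_naive, 3))
```

    The key structural point survives the simplification: search data carry information about the *current* week, which official surveillance reports only with a lag, so the combined model beats pure persistence.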

  13. A miniature shoe-mounted orientation determination system for accurate indoor heading and trajectory tracking.

    Science.gov (United States)

    Zhang, Shengzhi; Yu, Shuai; Liu, Chaojun; Liu, Sheng

    2016-06-01

    Tracking the position of a pedestrian is in urgent demand when the most commonly used GPS (Global Positioning System) is unavailable. Benefiting from their small size, low power consumption, and relatively high reliability, micro-electro-mechanical system sensors are well suited for GPS-denied indoor pedestrian heading estimation. In this paper, a real-time miniature orientation determination system (MODS) was developed for indoor heading and trajectory tracking based on a novel dual-linear Kalman filter. The proposed filter precludes the impact of geomagnetic distortions on the pitch and roll to which the heading is subjected. A robust calibration approach was designed to improve the accuracy of sensor measurements based on a unified sensor model. Online tests were performed on the MODS with an improved turntable. The results demonstrate that the average RMSE (root-mean-square error) of heading estimation is less than 1°. Indoor heading experiments were carried out with the MODS mounted on the shoe of a pedestrian. Besides, we integrated the existing MODS into an indoor pedestrian dead-reckoning application as an example of its utility in realistic actions. A human attitude-based walking model was developed to calculate the walking distance. Test results indicate that the mean percentage error of indoor trajectory tracking achieves 2% of the total walking distance. This paper provides a feasible alternative for accurate indoor heading and trajectory tracking.
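
    Once a heading and a step length are available, pedestrian dead reckoning reduces to integrating step vectors. The toy below assumes a fixed step length and per-step headings already estimated (the paper's walking model and Kalman filter are far richer); it only illustrates why heading accuracy dominates trajectory accuracy.

```python
from math import sin, cos, radians

def dead_reckon(headings, step_length=0.7):
    """Integrate one (step, heading) pair at a time into a 2-D path.
    Headings are in degrees clockwise from north; step length is fixed."""
    x = y = 0.0
    path = [(x, y)]
    for h in headings:
        x += step_length * sin(radians(h))   # east component
        y += step_length * cos(radians(h))   # north component
        path.append((x, y))
    return path

# Walk a 7 m x 7 m square: 10 steps in each cardinal direction.
headings = [0] * 10 + [90] * 10 + [180] * 10 + [270] * 10
path = dead_reckon(headings)
end = path[-1]
print(len(path) - 1, abs(end[0]) < 1e-6 and abs(end[1]) < 1e-6)  # 40 True
```

    With perfect headings the square closes on the origin; a constant heading bias instead rotates the whole path, which is why sub-degree heading RMSE matters for keeping trajectory error near 2% of distance walked.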

  14. Accurate calculation of Green functions on the d-dimensional hypercubic lattice

    International Nuclear Information System (INIS)

    Loh, Yen Lee

    2011-01-01

    We write the Green function of the d-dimensional hypercubic lattice in a piecewise form covering the entire real frequency axis. Each piece is a single integral involving modified Bessel functions of the first and second kinds. The smoothness of the integrand allows both real and imaginary parts of the Green function to be computed quickly and accurately for any dimension d and any real frequency, and the computational time scales only linearly with d.
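
    The paper covers the whole real frequency axis piecewise; the sketch below implements only the simplest regime. For the dispersion eps(k) = 2*sum_i cos(k_i) and real w above the band (w > 2d), the Laplace representation G(w) = Integral_0^inf exp(-w*t) * [I0(2t)]^d dt holds, with I0 the modified Bessel function of the first kind (conventions may differ from the paper's). A power-series I0 and Simpson's rule suffice here.

```python
from math import exp

def i0(x):
    """Modified Bessel function of the first kind, order 0 (power series)."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-17 * total:
        k += 1
        term *= (x / (2 * k)) ** 2
        total += term
    return total

def green_hypercubic(w, d, T=40.0, n=4000):
    """G(w) = Integral_0^T exp(-w*t) * [I0(2t)]^d dt for w above the band
    (w > 2d), evaluated with composite Simpson's rule."""
    h = T / n
    f = lambda t: exp(-w * t) * i0(2 * t) ** d
    s = f(0.0) + f(T)
    for i in range(1, n):
        s += f(i * h) * (4 if i % 2 else 2)
    return s * h / 3

# d = 1 check: the exact result is 1/sqrt(w^2 - 4).
print(round(green_hypercubic(3.0, 1), 4))  # 1/sqrt(5) ~ 0.4472
print(round(green_hypercubic(7.0, 3), 4))  # 3-D lattice, w above the band
```

    Note the linear scaling with d is visible even here: raising the Bessel factor to the d-th power is the only place the dimension enters the integrand.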

  15. Improvement of the real-time processor in JT-60 data processing system

    International Nuclear Information System (INIS)

    Sakata, S.; Kiyono, K.; Sato, M.; Kominato, T.; Sueoka, M.; Hosoyama, H.; Kawamata, Y.

    2009-01-01

    The real-time processor (RTP) is a basic subsystem of the JT-60 data processing system and plays an important role in JT-60 feedback control for plasma experiments. During an experiment, the RTP acquires various diagnostic signals, processes them into physical values, and transfers them as sensor signals to the particle supply and heating control supervisor for feedback control via reflective memory, synchronized with 1 ms clock signals. After the start of RTP operation in 1997, the RTP was improved continuously to meet the demands of advanced plasma experiments, for example by adding diagnostic signals with faster digitizers and by reducing data transfer time through the use of reflective memory instead of CAMAC. However, it is becoming increasingly difficult to maintain, manage, and improve the outdated RTP with its limited CPU capability. Currently, a prototype RTP is being developed for the next real-time processing system, which is composed of a clustered system of VxWorks computers. The processes of the existing RTP will be decentralized to the VxWorks computers to solve the issues of the existing system. The prototype RTP system will start to operate in August 2008.

  16. Improvements in Cycling Time Trial Performance Are Not Sustained Following the Acute Provision of Challenging and Deceptive Feedback

    Directory of Open Access Journals (Sweden)

    Hollie S Jones

    2016-09-01

    Full Text Available The provision of performance-related feedback during exercise is acknowledged as an influential external cue used to inform pacing decisions. The provision of this feedback in a challenging or deceptive context allows research to explore how feedback can be used to improve performance and influence perceptual responses. However, the effects of deception on both acute and residual responses have yet to be explored, despite potential application for performance enhancement. Therefore, this study investigated the effects of challenging and deceptive feedback on perceptual responses and performance in self-paced cycling time trials (TTs) and explored whether changes in performance are sustained in a subsequent TT following the disclosure of the deception. Seventeen trained male cyclists were assigned to either an accurate or deceptive feedback group and performed four 16.1 km cycling TTs: (1, 2) two ride-alone baseline TTs from which a fastest baseline (FBL) performance was identified; (3) a TT against a virtual avatar representing 102% of their FBL performance (PACER); and (4) a subsequent ride-alone TT (SUB). The deception group, however, were initially informed that the avatar accurately represented their FBL, but prior to SUB were correctly informed of the nature of the avatar. Affect, self-efficacy and RPE were measured every quartile. Both groups performed PACER faster than FBL and SUB (p < 0.05) and experienced lower affect (p = 0.016), lower self-efficacy (p = 0.011), and higher RPE (p < 0.001) in PACER than FBL. No significant differences were found between FBL and SUB for any variable. The presence of the pacer, rather than the manipulation of performance beliefs, acutely facilitates TT performance and perceptual responses. Revealing that athletes' performance beliefs were falsely negative due to deceptive feedback provision has no effect on subsequent perceptions or performance. A single experiential exposure may not be sufficient to produce meaningful

  17. Fifty years of atomic time-keeping at VNIIFTRI

    Science.gov (United States)

    Domnin, Yu; Gaigerov, B.; Koshelyaevsky, N.; Poushkin, S.; Rusin, F.; Tatarenkov, V.; Yolkin, G.

    2005-06-01

    Time metrology in Russia in the second half of the twentieth century has been marked, as in other advanced countries, by the rapid development of time and frequency quantum standards and the beginning of atomic time-keeping. This brief review presents the main developments and studies in time and frequency measurement, and the improvement of accuracy and atomic time-keeping at the VNIIFTRI—the National Metrology Institute keeping primary time and frequency standards and ensuring unification of measurement. The milestones along the way have been the ammonia and hydrogen masers, primary caesium beam and fountain standards and laser frequency standards. For many years, VNIIFTRI was the only world laboratory that applied hydrogen-maser clock ensembles for time-keeping. VNIIFTRI's work on international laser standard frequency comparisons and absolute frequency measurements contributed greatly to the adoption by the CIPM of a highly accurate value for the He-Ne/CH4 laser frequency. VNIIFTRI and the VNIIM were the first to establish a united time, frequency and length standard.

  18. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...

  19. Digital holography super-resolution for accurate three-dimensional reconstruction of particle holograms.

    Science.gov (United States)

    Verrier, Nicolas; Fournier, Corinne

    2015-01-15

    In-line digital holography (DH) is used in many fields to locate and size micro- or nano-objects spread in a volume. To reconstruct simple-shaped objects, the optimal approach is to fit an imaging model to accurately estimate their position and characteristic parameters. Increasing the accuracy of the reconstruction is a big issue in DH, particularly when the pixel is large or the signal-to-noise ratio is low. We suggest exploiting the information redundancy of videos to improve the reconstruction of the holograms by jointly estimating the position of the objects and the characteristic parameters. Using synthetic and experimental data, we verified experimentally that this approach can improve the accuracy of the reconstruction by a factor greater than the square root of the number of images.

  20. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which entails the overhead of saving data and requires an accurate estimation of the target's location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information is fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes required for an accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.

  1. The accurate assessment of small-angle X-ray scattering data.

    Science.gov (United States)

    Grant, Thomas D; Luft, Joseph R; Carter, Lester G; Matsui, Tsutomu; Weiss, Thomas M; Martel, Anne; Snell, Edward H

    2015-01-01

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  2. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    Science.gov (United States)

    Anderson, Amos Gerald

    2010-06-01

    The Schrodinger Equation has been available for about 83 years, but today, we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by lack of sufficient computing power. Consequently, effort is applied to find acceptable approximations to facilitate real time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing processor quantity instead of quality, they have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known use of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, and yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation, by manipulating configuration weights, thus facilitating efficient and robust calculations. Our

  3. Time to improvement of pain and sleep quality in clinical trials of pregabalin for the treatment of fibromyalgia.

    Science.gov (United States)

    Arnold, Lesley M; Emir, Birol; Pauer, Lynne; Resnick, Malca; Clair, Andrew

    2015-01-01

    To determine the time to immediate and sustained clinical improvement in pain and sleep quality with pregabalin in patients with fibromyalgia. A post hoc analysis of four 8- to 14-week phase 2-3, placebo-controlled trials of fixed-dose pregabalin (150-600 mg/day) for fibromyalgia, comprising 12 pregabalin and four placebo treatment arms. A total of 2,747 patients with fibromyalgia, aged 18-82 years. Pain and sleep quality scores, recorded daily on 11-point numeric rating scales (NRSs), were analyzed to determine time to immediate improvement with pregabalin, defined as the first of ≥2 consecutive days when the mean NRS score was significantly lower for pregabalin vs placebo in those treatment arms with a significant improvement at endpoint, and time to sustained clinical improvement with pregabalin, defined as a ≥1-point reduction of the baseline NRS score of patient responders who had a ≥30% improvement on the pain NRS, sleep NRS, or Fibromyalgia Impact Questionnaire (FIQ) from baseline to endpoint, or who reported "much improved" or "very much improved" on the Patient Global Impression of Change (PGIC) at endpoint. Significant improvements in pain and sleep quality scores at endpoint vs placebo were seen in 8/12 and 11/12 pregabalin treatment arms, respectively (P < 0.05). In these arms, time to immediate improvements in pain or sleep occurred by day 1 or 2. Time to sustained clinical improvement occurred significantly earlier in pain, sleep, PGIC, and FIQ responders (P < 0.02) with pregabalin vs placebo. Both immediate and sustained clinical improvements in pain and sleep quality occurred faster with pregabalin vs placebo. Wiley Periodicals, Inc.
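
As a rough illustration (not the trial's analysis code), the two responder rules quoted above can be combined into a single check; the function name and the simplification of applying both rules to one baseline/endpoint NRS pair are assumptions:

```python
def sustained_clinical_improvement(baseline_nrs, endpoint_nrs):
    """Combine the abstract's two rules into one check: a >=30%
    improvement from baseline (the responder criterion) together with
    the >=1-point NRS reduction defining sustained improvement."""
    reduction = baseline_nrs - endpoint_nrs
    pct = reduction / baseline_nrs if baseline_nrs else 0.0
    return reduction >= 1.0 and pct >= 0.30

print(sustained_clinical_improvement(7, 4))  # True: 3-point, ~43% drop
print(sustained_clinical_improvement(7, 6))  # False: 1-point, ~14% drop
```

In the study itself the two criteria apply to different endpoints (pain, sleep, FIQ, PGIC); this sketch collapses them onto a single NRS scale for clarity.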

  4. Improving the time efficiency of identifying dairy herds with poorer welfare in a population.

    Science.gov (United States)

    de Vries, M; Bokkers, E A M; van Schaik, G; Engel, B; Dijkstra, T; de Boer, I J M

    2016-10-01

    Animal-based welfare assessment is time consuming and expensive. A promising strategy for improving the efficiency of identifying dairy herds with poorer welfare is to first estimate levels of welfare in herds based on data that are more easily obtained. Our aims were to evaluate the potential of herd housing and management data for estimating the level of welfare in dairy herds, and to estimate the associated reduction in the number of farm visits required for identification of herds with poorer welfare in a population. Seven trained observers collected data on 6 animal-based welfare indicators in a selected sample of 181 loose-housed Dutch dairy herds (herd size: 22 to 211 cows). Severely lame cows, cows with lesions or swellings, cows with a dirty hindquarter, and very lean cows were counted, and avoidance distance was assessed for a sample of cows. Occurrence of displacements (social behavior) was recorded in the whole barn during 120 min of observation. For the same herds, data regarding cattle housing and management were collected on farms, and data relating to demography, management, milk production and composition, and fertility were extracted from national databases. A herd was classified as having poorer welfare when it belonged to the 25% worst-scoring herds. We used variables of herd housing and management data as potential predictors for individual animal-based welfare indicators in logistic regressions at the herd level. Prediction was less accurate for the avoidance distance index [area under the curve (AUC)=0.69], and moderately accurate for prevalence of severely lame cows (AUC=0.83), prevalence of cows with lesions or swellings (AUC=0.81), prevalence of cows with a dirty hindquarter (AUC=0.74), prevalence of very lean cows (AUC=0.83), and frequency of displacements (AUC=0.72). We compared the number of farm visits required for identifying herds with poorer welfare in a population for a risk-based screening with predictions based on herd housing
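
The AUC figures above can be reproduced from any set of predicted scores and true labels with the rank-based (Mann-Whitney) formulation; this generic sketch is independent of the authors' logistic regression models:

```python
def auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen positive
    case receives a higher score than a randomly chosen negative case
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0; an uninformative ranking ~0.5.
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

An AUC of 0.83 (as for severe lameness above) means a randomly chosen poorer-welfare herd outranks a randomly chosen other herd 83% of the time.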

  5. Highly accurate surface maps from profilometer measurements

    Science.gov (United States)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer, using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements: the part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
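
A map of low-order form error like the one described can, in principle, be obtained by least-squares fitting Zernike-style terms to the sampled heights. The sketch below fits only four terms to synthetic data and is an assumption-laden illustration, not the authors' algorithm:

```python
import numpy as np

def fit_low_order_surface(r, theta, z):
    """Least-squares fit of four low-order Zernike-style terms
    (piston, x-tilt, y-tilt, defocus) to sampled surface heights."""
    A = np.column_stack([
        np.ones_like(r),        # piston
        r * np.cos(theta),      # tilt in x
        r * np.sin(theta),      # tilt in y
        2.0 * r**2 - 1.0,       # defocus
    ])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Synthetic example: a pure-defocus surface sampled at scattered points
# should be recovered with a defocus coefficient of 1 and zeros elsewhere.
rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 200)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
z = 2.0 * r**2 - 1.0
print(np.round(fit_low_order_surface(r, theta, z), 6))
```

The real method fits 36 terms to 6 rotated diameter traces; the principle (solve an overdetermined linear system in the Zernike coefficients) is the same.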

  6. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is a very important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and micro-joule power; photoacoustic signals containing the BGC information are generated through the thermoelastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by a variety of noises, e.g., interference from background sounds and the multi-component nature of blood. The quality of the photoacoustic signal directly impacts the precision of the BGC measurement. An improved wavelet denoising method was therefore proposed to eliminate the noise contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function was proposed in this paper. Simulation results illustrated that the denoising performance of this improved wavelet method was better than that of the traditional soft and hard threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the results demonstrated that the signal-to-noise ratio (SNR) with the improved function increases by about 40-80%, and the root-mean-square error (RMSE) decreases by about 38.7-52.8%.
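
The abstract does not give the dual-threshold function itself, but the traditional hard and soft threshold rules it is designed to improve on can be sketched as follows (a minimal illustration applied to individual wavelet coefficients):

```python
import math

def hard_threshold(coeff, thr):
    # Hard thresholding: keep coefficients whose magnitude exceeds the
    # threshold unchanged, and zero out the rest.
    return coeff if abs(coeff) > thr else 0.0

def soft_threshold(coeff, thr):
    # Soft thresholding: zero out small coefficients and shrink the
    # surviving ones toward zero by the threshold amount.
    if abs(coeff) <= thr:
        return 0.0
    return math.copysign(abs(coeff) - thr, coeff)

coeffs = [4.0, -0.25, 1.25, 0.1, -2.5]
print([hard_threshold(c, 0.5) for c in coeffs])  # [4.0, 0.0, 1.25, 0.0, -2.5]
print([soft_threshold(c, 0.5) for c in coeffs])  # [3.5, 0.0, 0.75, 0.0, -2.0]
```

Hard thresholding preserves amplitude but can leave noise spikes; soft thresholding is smoother but biases large coefficients, which is the trade-off a dual-threshold function seeks to balance.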

  7. Improving the distinguishable cluster results: spin-component scaling

    Science.gov (United States)

    Kats, Daniel

    2018-06-01

    The spin-component scaling is employed in the energy evaluation to improve the distinguishable cluster approach. SCS-DCSD reaction energies reproduce reference values with a root-mean-squared deviation well below 1 kcal/mol, the interaction energies are three to five times more accurate than DCSD, and molecular systems with a large amount of static electron correlation are still described reasonably well. SCS-DCSD represents a pragmatic approach to achieve chemical accuracy with a simple method without triples, which can also be applied to multi-configurational molecular systems.

  8. Improved Resolution Optical Time Stretch Imaging Based on High Efficiency In-Fiber Diffraction.

    Science.gov (United States)

    Wang, Guoqing; Yan, Zhijun; Yang, Lei; Zhang, Lin; Wang, Chao

    2018-01-12

    The most overlooked challenges in ultrafast optical time stretch imaging (OTSI) are sacrificed spatial resolution and higher optical loss. These challenges originate from the optical diffraction devices used in OTSI, which encode the image into the spectra of ultrashort optical pulses. Conventional free-space diffraction gratings, as widely used in existing OTSI systems, suffer from several inherent drawbacks: limited diffraction efficiency in a non-Littrow configuration due to inherent zeroth-order reflection, high coupling loss between free-space gratings and optical fibers, a bulky footprint, and, more importantly, sacrificed imaging resolution due to non-full-aperture illumination for individual wavelengths. Here we report resolution-improved and diffraction-efficient OTSI using in-fiber diffraction for the first time to our knowledge. The key to overcoming the existing challenges is a 45° tilted fiber grating (TFG), which serves as a compact in-fiber diffraction device offering improved diffraction efficiency (up to 97%), inherent compatibility with optical fibers, and improved imaging resolution owing to almost full-aperture illumination for all illumination wavelengths. Imaging of a fast-moving object at 46 m/s, at 50 million frames per second and with improved imaging resolution, has been demonstrated. This conceptually new in-fiber diffraction design opens the way towards cost-effective, compact and high-resolution OTSI systems for image-based high-throughput detection and measurement.

  9. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    Science.gov (United States)

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  10. 3T MRI of the knee with optimised isotropic 3D sequences. Accurate delineation of intra-articular pathology without prolonged acquisition times

    Energy Technology Data Exchange (ETDEWEB)

    Abdulaal, Osamah M.; Rainford, Louise; Galligan, Marie; McGee, Allison [University College Dublin, Radiography and Diagnostic Imaging, School of Medicine, Belfield, Dublin (Ireland); MacMahon, Peter; Kavanagh, Eoin [Mater Misericordiae University Hospital, Department of Radiology, Dublin (Ireland); University College Dublin, School of Medicine, Dublin (Ireland); Cashman, James [Mater Misericordiae University Hospital, Department of Orthopaedics, Dublin (Ireland); University College Dublin, School of Medicine, Dublin (Ireland)

    2017-11-15

    To investigate optimised isotropic 3D turbo spin echo (TSE) and gradient echo (GRE)-based pulse sequences for visualisation of articular cartilage lesions within the knee joint. Optimisation of experimental imaging sequences was completed using healthy volunteers (n=16) with a 3-Tesla (3T) MRI scanner. Imaging of patients with knee cartilage abnormalities (n=57) was then performed. Acquired sequences included 3D proton density-weighted (PDW) TSE (SPACE) with and without fat-suppression (FS), and T2*W GRE (TrueFISP) sequences, with acquisition times of 6:51, 6:32 and 5:35 min, respectively. One hundred sixty-one confirmed cartilage lesions were detected and categorised (Grade II n=90, Grade III n=71). The highest sensitivity and specificity for detecting cartilage lesions were obtained with TrueFISP with values of 84.7% and 92%, respectively. Cartilage SNR mean for PDW SPACE-FS was the highest at 72.2. TrueFISP attained the highest CNR means for joint fluid/cartilage (101.5) and joint fluid/ligament (156.5), and the lowest CNR for cartilage/meniscus (48.5). Significant differences were identified across the three sequences for all anatomical structures with respect to SNR and CNR findings (p-value <0.05). Isotropic TrueFISP at 3T, optimised for acquisition time, accurately detects cartilage defects, although it demonstrated the lowest contrast between cartilage and meniscus. (orig.)

  11. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth

    2014-12-01

    © 2014 Elsevier Inc. In this paper, we develop a stable and systematic procedure for numerical treatment of elastic waves in discontinuous and layered media. We consider both planar and curved interfaces where media parameters are allowed to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions at layer interfaces are imposed weakly using penalties. By deriving lower bounds of the penalty strength and constructing discrete energy estimates we prove time stability. We present numerical experiments in two space dimensions to illustrate the usefulness of the proposed method for simulations involving typical interface phenomena in elastic materials. The numerical experiments verify high order accuracy and time stability.

  12. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    Science.gov (United States)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate, analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations are able to describe the actual spacecraft trajectory accurately over a long time interval when the propulsive acceleration magnitude is sufficiently small.

  13. Incorporating geostrophic wind information for improved space–time short-term wind speed forecasting

    KAUST Repository

    Zhu, Xinxin; Bowman, Kenneth P.; Genton, Marc G.

    2014-01-01

    pressure, temperature, and other meteorological variables, no improvement in forecasting accuracy was found by incorporating air pressure and temperature directly into an advanced space-time statistical forecasting model, the trigonometric direction diurnal

  14. A Modified Proportional Navigation Guidance for Accurate Target Hitting

    Directory of Open Access Journals (Sweden)

    A. Moharampour

    2010-03-01

    First, the pure proportional navigation guidance (PPNG) in the 3-dimensional state is explained from a new point of view. The main idea is based on the distinction between the angular rate vector and rotation vector conceptions. The current innovation is based on the selection of line-of-sight (LOS) coordinates. A comparison between the two available choices for the LOS coordinate system is proposed. An improvement is made by adding two additional terms. The first term is a cross-range compensator, which is used to provide and enhance path observability and obtain convergent estimates of the state variables. The second term is a new concept, the lead bias term, which is calculated by assuming an equivalent acceleration along the target longitudinal axis. Simulation results indicate that the lead bias term properly provides terminal conditions for accurate target interception.

  15. On the accurate fast evaluation of finite Fourier integrals using cubic splines

    International Nuclear Information System (INIS)

    Morishima, N.

    1993-01-01

    Finite Fourier integrals based on a cubic-splines fit to equidistant data are shown to be evaluated fast and accurately. Good performance, especially on computational speed, is achieved by the optimization of the spline fit and the internal use of the fast Fourier transform (FFT) algorithm for complex data. The present procedure provides high accuracy with much shorter CPU time than a trapezoidal FFT. (author)

  16. [Accurate 3D free-form registration between fan-beam CT and cone-beam CT].

    Science.gov (United States)

    Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun

    2012-06-01

    Because of X-ray scatter, the CT numbers in cone-beam CT do not exactly correspond to the electron densities. This, therefore, results in registration error when an intensity-based registration algorithm is used to register planning fan-beam CT and cone-beam CT. In order to reduce the registration error, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as the minimization of an energy functional. Through the calculus of variations and the Gauss-Seidel finite difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented on the GPU through the OpenCL framework, with which the registration time was greatly reduced. Our experimental results showed that the proposed gradient-based registration algorithm registers clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement of online adaptive radiotherapy.

  17. An accurate nonlinear Monte Carlo collision operator

    International Nuclear Information System (INIS)

    Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.

    1995-03-01

    A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum, and energy conservation laws and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it can effectively assure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and the timing results on vector computers show that the operator is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)

  18. Physical characterization and preliminary results of a PET system using time-of-flight for quantitative studies

    International Nuclear Information System (INIS)

    Soussaline, F.; Verrey, B.; Comar, D.; Campagnolo, R.; Bouvier, A.; Lecomte, J.L.

    1984-01-01

    A positron camera was designed to meet the need for a high-sensitivity, high-resolution, multislice system capable of dynamic imaging at high count rates, for quantitative measurements. Indeed, the goals of present positron camera design are clearly to provide accurate quantitative images of physiological or biochemical parameters with dramatically improved spatial, temporal, and contrast resolutions. The use of time-of-flight (TOF) information, which produces more accurate images with fewer detected events, provides an approach to these identified needs. This paper first presents the physical characterization of this system, called TTVO1, which confirms the TOF system's capabilities and its main advantages over a system without TOF, namely: the improvement of the signal-to-noise ratio due to the better, though approximate, localization of the source position, providing an equivalent gain in sensitivity; the good elimination of accidental (or random) coincidences due to the short time window (3 nsec for a whole-body inner ring); and the ability to handle very high count rates without pile-up of the detectors or electronics, due to the short scintillation decay time in fast crystals such as CsF or BaF2 (barium fluoride)

  19. A novel imaging technique to measure capillary-refill time: improving diagnostic accuracy for dehydration in young children with gastroenteritis.

    Science.gov (United States)

    Shavit, Itai; Brant, Rollin; Nijssen-Jordan, Cheri; Galbraith, Roger; Johnson, David W

    2006-12-01

    Assessment of dehydration in young children currently depends on clinical judgment, which is relatively inaccurate. By using digital videography, we developed a way to assess capillary-refill time more objectively. Our goal was to determine whether digitally measured capillary-refill time assesses the presence of significant dehydration (> or = 5%) in young children with gastroenteritis more accurately than conventional capillary refill and overall clinical assessment. We prospectively enrolled children with gastroenteritis, 1 month to 5 years of age, who were evaluated in a tertiary-care pediatric emergency department and judged by a triage nurse to be at least mildly dehydrated. Before any treatment, we measured the weight and digitally measured capillary-refill time of these children. Pediatric emergency physicians determined capillary-refill time by using conventional methods and the degree of dehydration by overall clinical assessment using a 7-point Likert scale. Postillness weight gain was used to estimate the fluid deficit; beginning 48 hours after assessment, children were reweighed every 24 hours until 2 sequential weights differed by no more than 2%. We compared the accuracy of digitally measured capillary-refill time with conventional capillary refill and overall clinical assessment by determining sensitivities, specificities, likelihood ratios, and areas under the receiver operator characteristic curves. A total of 83 patients were enrolled and had complete follow-up; 13 of these patients had significant dehydration (> or = 5% of body weight). The areas under the receiver operator characteristic curves for digitally measured capillary-refill time and overall clinical assessment relative to fluid deficit (> or = 5%) were 0.99 and 0.88, respectively. Positive likelihood ratios were 11.7 for digitally measured capillary-refill time, 4.5 for conventional capillary refill, and 4.1 for overall clinical assessment. Results of this prospective cohort study suggest
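
The positive likelihood ratios reported above follow directly from sensitivity and specificity; a generic sketch (the 2x2 counts below are hypothetical, not the study's data):

```python
def positive_likelihood_ratio(tp, fn, fp, tn):
    # LR+ = sensitivity / (1 - specificity), computed from a 2x2
    # diagnostic table: true/false positives and true/false negatives.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity / (1.0 - specificity)

# Hypothetical counts for illustration: 12 of 13 dehydrated children
# test positive, and 6 of 70 non-dehydrated children also test positive.
print(round(positive_likelihood_ratio(12, 1, 6, 64), 1))  # 10.8
```

An LR+ above 10 (as the 11.7 reported for the digital measurement) is conventionally taken as strong evidence that a positive test raises the probability of disease.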

  20. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    Science.gov (United States)

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode-activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.

  1. The X3LYP extended density functional for accurate descriptions of nonbond interactions, spin states, and thermochemical properties

    Science.gov (United States)

    Xu, Xin; Goddard, William A.

    2004-01-01

    We derive the form for an exact exchange energy density for a density decaying with Gaussian-like behavior at long range. Based on this, we develop the X3LYP (extended hybrid functional combined with Lee–Yang–Parr correlation functional) extended functional for density functional theory to significantly improve the accuracy for hydrogen-bonded and van der Waals complexes while also improving the accuracy in heats of formation, ionization potentials, electron affinities, and total atomic energies [over the most popular and accurate method, B3LYP (Becke three-parameter hybrid functional combined with Lee–Yang–Parr correlation functional)]. X3LYP also leads to a good description of dipole moments, polarizabilities, and accurate excitation energies from s to d orbitals for transition metal atoms and ions. We suggest that X3LYP will be useful for predicting ligand binding in proteins and DNA. PMID:14981235

  2. Transport Time and Preoperating Room Hemostatic Interventions Are Important: Improving Outcomes After Severe Truncal Injury.

    Science.gov (United States)

    Holcomb, John B

    2018-03-01

    Experience in the ongoing wars in Iraq and Afghanistan confirm that faster transport combined with effective prehospital interventions improves the outcomes of patients suffering hemorrhagic shock. Outcomes of patients with hemorrhagic shock and extremity bleeding have improved with widespread use of tourniquets and early balanced transfusion therapy. Conversely, civilian patients suffering truncal bleeding and shock have the same mortality (46%) over the last 20 years. To understand how to decrease this substantial mortality, one must first critically evaluate all phases of care from point of injury to definitive hemorrhage control in the operating room. Limited literature review. The peak time to death after severe truncal injury is within 30 minutes of injury. However, when adding prehospital transport time, time spent in the emergency department, followed by the time in the operating room, it currently takes 2.1 hours to achieve definitive truncal hemorrhage control. This disparity in uncontrolled truncal bleeding and time to hemorrhage control needs to be reconciled. Prehospital and emergency department whole blood transfusion and temporary truncal hemorrhage control are now possible. The importance of rapid transport, early truncal hemorrhage control and whole blood transfusion is now widely recognized. Prehospital temporary truncal hemorrhage control and whole blood transfusion should offer the best possibility of improving patient outcomes after severe truncal injury.

  3. Dynamic Travel Time Prediction Models for Buses Using Only GPS Data

    Directory of Open Access Journals (Sweden)

    Wei Fan

    2015-01-01

    Full Text Available Providing real-time and accurate travel time information for transit vehicles can be very helpful, as it assists passengers in planning their trips to minimize waiting times. The purpose of this research is to develop and compare dynamic travel time prediction models which can provide accurate predictions of bus travel time, in order to give real-time information at a given downstream bus stop, using only global positioning system (GPS) data. Historical Average (HA), Kalman Filtering (KF) and Artificial Neural Network (ANN) models are considered and developed in this paper. A case study was conducted using the three models. Promising results are obtained from the case study, indicating that the models can be used to implement an Advanced Public Transport System. The implementation of this system could assist transit operators in improving the reliability of bus services, thus attracting more travelers to transit vehicles and helping relieve congestion. The performances of the three models were assessed and compared with each other under two criteria: overall prediction accuracy and robustness. It was shown that the ANN outperformed the other two models in both aspects. In conclusion, it is shown that bus travel time information can reasonably be provided using only arrival and departure time information at stops, even in the absence of traffic-stream data.
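
Of the three models, the Historical Average (HA) baseline is the simplest to sketch; binning observations by stop and hour of day is an assumption here, not necessarily the paper's exact scheme:

```python
from collections import defaultdict

class HistoricalAverage:
    """Historical Average (HA) predictor: the forecast for a
    (stop, hour-of-day) pair is the mean of past observed travel times."""

    def __init__(self):
        self.history = defaultdict(list)

    def observe(self, stop, hour, travel_time):
        # Record one observed travel time (e.g. derived from GPS
        # arrival/departure timestamps) for this stop and hour bin.
        self.history[(stop, hour)].append(travel_time)

    def predict(self, stop, hour):
        # Mean of historical observations, or None if the bin is empty.
        obs = self.history.get((stop, hour))
        return sum(obs) / len(obs) if obs else None

ha = HistoricalAverage()
for t in (300, 320, 310):      # seconds taken to reach stop 12 at 8 am
    ha.observe(12, 8, t)
print(ha.predict(12, 8))       # 310.0
```

KF and ANN models improve on this by adapting to real-time deviations, which is why the paper compares them against HA as the static baseline.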

  4. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts from compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
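
The total-variation regularizer at the heart of the network Lasso can be computed directly from a graph signal and an edge list; a minimal sketch:

```python
def graph_total_variation(signal, edges):
    """Total variation of a graph signal: the sum of absolute
    differences of the signal values across the graph's edges."""
    return sum(abs(signal[i] - signal[j]) for i, j in edges)

# A 4-node path graph carrying a piecewise-constant signal:
# the TV equals the single jump between the two constant pieces.
signal = [1.0, 1.0, 3.0, 3.0]
edges = [(0, 1), (1, 2), (2, 3)]
print(graph_total_variation(signal, edges))  # 2.0
```

Penalizing this quantity favors signals that are constant over well-connected clusters, which is why the accuracy conditions derived in the paper depend on the connectivity of the sampled nodes.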

  5. Improving Elementary School Students' Understanding of Historical Time: Effects of Teaching with "Timewise"

    Science.gov (United States)

    de Groot-Reuvekamp, Marjan; Ros, Anje; van Boxtel, Carla

    2018-01-01

    The teaching of historical time is an important aspect in elementary school curricula. This study focuses on the effects of a curriculum intervention with "Timewise," a teaching approach developed to improve students' understanding of historical time using timelines as a basis with which students can develop their understanding of…

  6. Time to symptom improvement using elimination diets in non-IgE-mediated gastrointestinal food allergies.

    Science.gov (United States)

    Lozinsky, Adriana Chebar; Meyer, Rosan; De Koker, Claire; Dziubak, Robert; Godwin, Heather; Reeve, Kate; Dominguez Ortega, Gloria; Shah, Neil

    2015-08-01

    The prevalence of food allergy has increased in recent decades, and there is a paucity of data on time to symptom improvement using elimination diets in non-Immunoglobulin E (IgE)-mediated food allergies. We therefore aimed to assess the time required for symptom improvement, using a symptom questionnaire, in children with non-IgE-mediated food allergies on an elimination diet. A prospective observational study was performed on patients with non-IgE-mediated gastrointestinal food allergies on an elimination diet, who completed a questionnaire covering nine evidence-based food allergic symptoms before and after the exclusion diet. The questionnaire measured symptoms individually from 0 (no symptom) to 5 (most severe) and collectively from 0 to 45. Children were only enrolled in the study if their symptoms collectively improved with the dietary elimination within 4 or 8 weeks. Data from 131 patients were analysed, including 90 boys, with a median age of 21 months [IQR: 7 to 66]. Based on the symptom questionnaire, 129 patients (98.4%) improved after a 4-week elimination diet and only two patients improved after 8 weeks. A statistically significant difference before and after commencing the elimination diet was seen in all nine recorded symptoms (all p < 0.001), and in the median overall score (p < 0.001). This is the first study attempting to establish the time to improvement after commencing dietary elimination. Almost all children in this study improved within 4 weeks of following the elimination diet, under dietary supervision. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Improved automation of dissolved organic carbon sampling for organic-rich surface waters.

    Science.gov (United States)

    Grayson, Richard P; Holden, Joseph

    2016-02-01

    In-situ UV-Vis spectrophotometers offer the potential for improved estimates of dissolved organic carbon (DOC) fluxes for organic-rich systems such as peatlands because they are able to sample and log DOC proxies automatically through time at low cost. In turn, this could enable improved total carbon budget estimates for peatlands. The ability of such instruments to accurately measure DOC depends on a number of factors, not least of which is how absorbance measurements relate to DOC and the environmental conditions. Here we test the ability of a S::can Spectro::lyser™ for measuring DOC in peatland streams with routinely high DOC concentrations. Through analysis of the spectral response data collected by the instrument we have been able to accurately measure DOC up to 66 mg L(-1), which is more than double the original upper calibration limit for this particular instrument. A linear regression modelling approach resulted in an accuracy >95%. The greatest accuracy was achieved when absorbance values for several different wavelengths were used at the same time in the model. However, an accuracy >90% was achieved using absorbance values for a single wavelength to predict DOC concentration. Our calculations indicated that, for organic-rich systems, in-situ measurement with a scanning spectrophotometer can improve fluvial DOC flux estimates by 6 to 8% compared with traditional sampling methods. Thus, our techniques pave the way for improved long-term carbon budget calculations from organic-rich systems such as peatlands. Copyright © 2015 Elsevier B.V. All rights reserved.
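
    The linear regression modelling approach described — predicting DOC from absorbance values at one or several wavelengths — is ordinary least squares. A minimal sketch (the wavelength count, coefficients and data below are synthetic, not the paper's calibration):

```python
import numpy as np

def fit_doc_model(absorbance, doc):
    """Fit DOC (mg/L) as a linear function of absorbance at several
    wavelengths: DOC ~ b0 + b1*A1 + ... + bk*Ak (ordinary least squares).
    absorbance: array of shape (n_samples, n_wavelengths); doc: (n_samples,).
    Returns the coefficient vector [b0, b1, ..., bk].
    """
    A = np.column_stack([np.ones(len(doc)), absorbance])  # add intercept
    coef, *_ = np.linalg.lstsq(A, doc, rcond=None)
    return coef

def predict_doc(coef, absorbance):
    """Predict DOC from new absorbance spectra with fitted coefficients."""
    A = np.column_stack([np.ones(absorbance.shape[0]), absorbance])
    return A @ coef
```

    Using several wavelengths at once, as the study found most accurate, simply adds columns to the design matrix.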

  8. Genomic inference accurately predicts the timing and severity of a recent bottleneck in a non-model insect population

    Science.gov (United States)

    McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.

    2015-01-01

    The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665

  9. Achieving target voriconazole concentrations more accurately in children and adolescents.

    Science.gov (United States)

    Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, -17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was -0.7% (interquartile range, -7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, -1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was -6.7% (interquartile range, -18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, -13.1 to 18%; P = 0.32) and a dose bias of -3.5% (interquartile range, -18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.). Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  10. Improvements in longwall downtime analysis and fault identification

    Energy Technology Data Exchange (ETDEWEB)

    Daniel Bongers [CRCMining (Australia)

    2006-12-15

    In this project we have developed a computer program for recording detailed information relating to face equipment downtime in longwall mining operations. This software is intended to replace the current manual recording of delay information, which has been shown to be inaccurate. The software is intended to be operated from the maingate computer. Users are provided with a simple user interface requesting the nature of each delay in production, which is time-stamped in alignment with the SCADA system, removing the need for operators to estimate the start time and duration of each delay. Each instance of non-production is recorded to a database, which may be accessed by surface computers, removing the need to transcribe the deputy's report into the delay database. An additional suggestive element has been developed, based on sophisticated fault detection technology, which reduces the data input required by operators and provides a basis for the implementation of real-time fault detection. Both the basic recording software and the suggestive element offer improvements in efficiency and accuracy to longwall operations. More accurate data allow improved maintenance planning and improved measures of operational KPIs. The suggestive element offers the potential for rapid fault diagnosis, and potentially delay forecasting, which may be used to reduce lost time associated with machine downtime.
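
    As a rough sketch of the recording component only (the schema, table and field names are illustrative assumptions, not the project's actual software), each delay event can be time-stamped automatically and written to a database, so that the operator supplies only the delay's nature:

```python
import sqlite3
import time

def open_delay_db(path=":memory:"):
    """Create the delay-event table (illustrative schema)."""
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS delays (
               id INTEGER PRIMARY KEY,
               start_ts REAL NOT NULL,   -- stamped in alignment with SCADA
               end_ts REAL,
               category TEXT NOT NULL,   -- operator-selected delay nature
               notes TEXT
           )"""
    )
    return con

def record_delay_start(con, category, notes="", ts=None):
    """Time-stamp the start of a non-production event automatically."""
    ts = time.time() if ts is None else ts
    cur = con.execute(
        "INSERT INTO delays (start_ts, category, notes) VALUES (?, ?, ?)",
        (ts, category, notes),
    )
    con.commit()
    return cur.lastrowid

def record_delay_end(con, delay_id, ts=None):
    """Close the event; duration is derived, never operator-estimated."""
    ts = time.time() if ts is None else ts
    con.execute("UPDATE delays SET end_ts = ? WHERE id = ?", (ts, delay_id))
    con.commit()
```

    Surface computers could then query the same database directly, which is the point the abstract makes about removing manual transcription.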

  11. Improving multi-GNSS ultra-rapid orbit determination for real-time precise point positioning

    Science.gov (United States)

    Li, Xingxing; Chen, Xinghan; Ge, Maorong; Schuh, Harald

    2018-03-01

    Currently, with the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS), real-time positioning and navigation are undergoing dramatic changes, with potential for better performance. Providing more precise and reliable ultra-rapid orbits is critical for multi-GNSS real-time positioning, especially for the three emerging constellations Beidou, Galileo and QZSS, which are still under construction. In this contribution, we present a five-system precise orbit determination (POD) strategy to fully exploit the GPS + GLONASS + BDS + Galileo + QZSS observations from CDDIS + IGN + BKG archives for the realization of hourly five-constellation ultra-rapid orbit updates. After adopting the optimized 2-day POD solution (updated every hour), the predicted orbit accuracy is noticeably improved for all five satellite systems in comparison to the conventional 1-day POD solution (updated every 3 h). The orbit accuracy for the BDS IGSO satellites can be improved by about 80, 45 and 50% in the radial, cross and along directions, respectively, while the corresponding accuracy improvement for the BDS MEO satellites reaches about 50, 20 and 50% in the three directions, respectively. Furthermore, multi-GNSS real-time precise point positioning (PPP) ambiguity resolution has been performed using the improved precise satellite orbits. Numerous results indicate that combined GPS + BDS + GLONASS + Galileo (GCRE) kinematic PPP ambiguity resolution (AR) solutions can achieve the shortest time to first fix (TTFF) and highest positioning accuracy in all coordinate components. With the addition of the BDS, GLONASS and Galileo observations to the GPS-only processing, the GCRE PPP AR solution achieves the shortest average TTFF of 11 min with a 7° cutoff elevation, while the TTFF of the GPS-only, GR, GE and GC PPP AR solutions is 28, 15, 20 and 17 min, respectively. As the cutoff elevation increases, the reliability and accuracy of GPS-only PPP AR solutions…

  12. Exploratory data analysis of acceleration signals to select light-weight and accurate features for real-time activity recognition on smartphones.

    Science.gov (United States)

    Khan, Adil Mehmood; Siddiqi, Muhammad Hameed; Lee, Seok-Won

    2013-09-27

    Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or long data-window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and an accurate AR-model in this case can be built using a low SR (20 Hz) and a small DW (3 s). The high within class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, true subject-independent evaluation was performed, where 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW.
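
    The AR-model features described above can be sketched as least-squares autoregressive coefficients computed per window; the exact estimator and axis handling in the paper are not specified, so the details below are assumptions:

```python
import numpy as np

def ar_features(window, order=3):
    """Least-squares AR(order) coefficients of one acceleration axis,
    for use as classifier features. With the abstract's settings, one
    window is 3 s at a 20 Hz sampling rate, i.e. 60 samples.
    Model: x[t] ~ a1*x[t-1] + ... + ap*x[t-p] (after removing the mean).
    """
    x = np.asarray(window, dtype=float)
    x = x - x.mean()          # drop the gravity/DC component
    y = x[order:]             # regression target
    # lagged design matrix: column k holds x[t-k]
    X = np.column_stack([x[order - k: len(x) - k] for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

    A quick sanity check: on a pure sinusoid of angular frequency w, an AR(2) fit recovers the coefficients 2*cos(w) and -1.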

  13. Automatic and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI and new types of imaging technology, such as optical imaging, used to obtain functional images. The fusion process requires precisely extracted structural information in order to register the functional image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues, such as the skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to perform automatic segmentation in a deep-learning fashion. This approach greatly reduced processing time compared to manual and semi-automatic segmentation, and its speed and accuracy improve as more and more samples are learned. The contours of the borders of the different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, offering doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.

  14. Can Real-Time Data Also Be Climate Quality?

    Science.gov (United States)

    Brewer, M.; Wentz, F. J.

    2015-12-01

    GMI, AMSR-2 and WindSat herald a new era of highly accurate and timely microwave data products. Traditionally, there has been a large divide between real-time and re-analysis data products. What if these completely separate processing systems could be merged? Through advanced modeling and physically based algorithms, Remote Sensing Systems (RSS) has narrowed the gap between real-time and research-quality products. Satellite microwave ocean products have proven useful for a wide array of timely Earth science applications. Through-cloud SST capabilities have enormously benefited tropical cyclone forecasting and day-to-day fisheries management, to name a few. Oceanic wind vectors enhance the operational safety of shipping and recreational boating. Atmospheric rivers are of import to many human endeavors, as are cloud cover and knowledge of precipitation events. Some activities benefit from both climate and real-time operational data used in conjunction. RSS has been consistently improving microwave Earth Science Data Records (ESDRs) for several decades, while making near real-time data publicly available for semi-operational use. These data streams have often been produced in 2 stages: near real-time, followed by research-quality final files. Over the years, we have seen this time delay shrink from months or weeks to mere hours. As well, we have seen the quality of near real-time data improve to the point where the distinction starts to blur. We continue to work towards better and faster RFI filtering, adaptive algorithms and improved real-time validation statistics for earlier detection of problems. Can it be possible to produce climate-quality data in real time, and what would the advantages be? We will try to answer these questions…

  15. Sustained reductions in time to antibiotic delivery in febrile immunocompromised children: results of a quality improvement collaborative.

    Science.gov (United States)

    Dandoy, Christopher E; Hariharan, Selena; Weiss, Brian; Demmel, Kathy; Timm, Nathan; Chiarenzelli, Janis; Dewald, Mary Katherine; Kennebeck, Stephanie; Langworthy, Shawna; Pomales, Jennifer; Rineair, Sylvia; Sandfoss, Erin; Volz-Noe, Pamela; Nagarajan, Rajaram; Alessandrini, Evaline

    2016-02-01

    Timely delivery of antibiotics to febrile immunocompromised (F&I) paediatric patients in the emergency department (ED) and outpatient clinic reduces morbidity and mortality. The aim of this quality improvement initiative was to increase the percentage of F&I patients who received antibiotics within goal in the clinic and ED from 25% to 90%. Using the Model of Improvement, we performed Plan-Do-Study-Act cycles to design, test and implement high-reliability interventions to decrease time to antibiotics. Pre-arrival interventions were tested and implemented, followed by post-arrival interventions in the ED. Many processes were spread successfully to the outpatient clinic. The Chronic Care Model was used, in addition to active family engagement, to inform and improve processes. The study period was from January 2010 to January 2015. Pre-arrival planning improved our F&I time to antibiotics in the ED from 137 to 88 min. This was sustained until October 2012, when further interventions including a pre-arrival huddle decreased the median time to antibiotics within 60 min to >90%. In September 2014, we implemented a rapid response team to improve reliable venous access in the ED, which increased our mean percentage of patients receiving timely antibiotics to its highest rate (95%). This stepwise approach with pre-arrival planning using the Chronic Care Model, followed by standardisation of processes, created a sustainable improvement of timely antibiotic delivery in F&I patients. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin

    2012-08-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.

  17. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin; Sirenko, Kostyantyn; Bagci, Hakan

    2012-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
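
    The predictor-corrector family referenced in both records follows the classical predict-evaluate-correct-evaluate (PECE) pattern; the papers' contribution is the specially derived coefficients, which are not reproduced here. As a generic illustration with textbook coefficients (Adams-Bashforth-2 predictor, trapezoidal corrector, one correction), not the authors' scheme:

```python
def pece_step(f, t, y_prev, y, h):
    """One PECE step for dy/dt = f(t, y): AB2 predictor, then a single
    trapezoidal (Adams-Moulton-2) corrector evaluation. Needs the
    previous point y_prev (at t - h) as history."""
    fp = f(t - h, y_prev)
    fc = f(t, y)
    y_pred = y + h * (1.5 * fc - 0.5 * fp)   # P: predict
    f_pred = f(t + h, y_pred)                # E: evaluate
    return y + h * 0.5 * (fc + f_pred)       # C: correct (next E is implicit)

def integrate(f, y0, t0, t1, n):
    """Drive pece_step over [t0, t1] in n steps; bootstrap the missing
    history point with one explicit Euler step."""
    h = (t1 - t0) / n
    ys = [y0, y0 + h * f(t0, y0)]
    for k in range(1, n):
        t = t0 + k * h
        ys.append(pece_step(f, t, ys[k - 1], ys[k], h))
    return ys
```

    The method is second-order; the papers' point is that replacing such fixed textbook coefficients with numerically derived ones permits larger stable time steps at high accuracy.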

  18. Can free energy calculations be fast and accurate at the same time? Binding of low-affinity, non-peptide inhibitors to the SH2 domain of the src protein

    Science.gov (United States)

    Chipot, Christophe; Rozanska, Xavier; Dixit, Surjit B.

    2005-11-01

    The usefulness of free-energy calculations in non-academic environments, in general, and in the pharmaceutical industry, in particular, is a long-debated issue, often considered from the angle of cost/performance criteria. In the context of the rational drug design of low-affinity, non-peptide inhibitors of the SH2 domain of the pp60src tyrosine kinase, the continuing difficulties encountered in attempts to obtain accurate free-energy estimates are addressed. Free-energy calculations can provide a convincing answer, assuming that two key requirements are fulfilled: (i) thorough sampling of the configurational space is necessary to minimize the statistical error, hence raising the question: to what extent can we reduce the computational effort without jeopardizing the precision of the free-energy calculation? (ii) the sensitivity of binding free energies to the parameters utilized demands an appropriate parametrization of the potential energy function, especially for non-peptide molecules that are usually poorly described by multipurpose macromolecular force fields. Employing the free-energy perturbation method, accurate ranking, within ±0.7 kcal/mol, is obtained in the case of four non-peptide mimes of a sequence recognized by the pp60src SH2 domain.
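
    The free-energy perturbation method mentioned above rests on the Zwanzig identity, ΔF = -kT ln⟨exp(-ΔU/kT)⟩₀, averaged over snapshots sampled in the reference state. A minimal sketch of the estimator (the log-sum-exp arrangement and the default kT are implementation choices, not from the paper):

```python
import math

def fep_delta_f(delta_u, kT=0.593):
    """Zwanzig free-energy perturbation estimate
        dF = -kT * ln < exp(-dU / kT) >_0
    delta_u: per-snapshot energy differences U1 - U0 (kcal/mol),
    sampled in state 0; kT defaults to ~0.593 kcal/mol (~298 K).
    Uses a log-sum-exp shift for numerical stability at large |dU|.
    """
    n = len(delta_u)
    m = min(delta_u)
    s = sum(math.exp(-(du - m) / kT) for du in delta_u)
    return m - kT * math.log(s / n)
```

    The exponential average weights low-energy snapshots most heavily, which is why the thorough sampling the abstract insists on is the dominant cost.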

  19. Recent developments in KTF. Code optimization and improved numerics

    International Nuclear Information System (INIS)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin

    2012-01-01

    The rapid increase of computer power in the last decade has facilitated the development of high-fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time-dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast-running codes with the best physical models are needed for high-fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and Boiling Water Reactors (BWR). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code's numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated through examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  20. Recent developments in KTF. Code optimization and improved numerics

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin [Karlsruhe Institute of Technology (KIT) (Germany). Inst. for Neutron Physics and Reactor Technology (INR)

    2012-11-01

    The rapid increase of computer power in the last decade has facilitated the development of high-fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time-dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast-running codes with the best physical models are needed for high-fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and Boiling Water Reactors (BWR). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code's numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated through examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  1. Real-Time Mobile Device-Assisted Chest Compression During Cardiopulmonary Resuscitation.

    Science.gov (United States)

    Sarma, Satyam; Bucuti, Hakiza; Chitnis, Anurag; Klacman, Alex; Dantu, Ram

    2017-07-15

    Prompt administration of high-quality cardiopulmonary resuscitation (CPR) is a key determinant of survival from cardiac arrest. Strategies to improve CPR quality at the point of care could improve resuscitation outcomes. We tested whether a low-cost and scalable mobile phone- or smart watch-based solution could provide accurate measures of compression depth and rate during simulated CPR. Fifty health care providers (58% intensive care unit nurses) performed simulated CPR on a calibrated training manikin (Resusci Anne, Laerdal) while wearing both devices. Subjects received real-time audiovisual feedback from each device sequentially. The primary outcome was accuracy of compression depth and rate compared with the calibrated training manikin. The secondary outcome was improvement in CPR quality, as defined by meeting both guideline-recommended compression depth (5 to 6 cm) and rate (100 to 120/minute). Compared with the training manikin, typical error for compression depth was small; more sessions met guideline recommendations with mobile device feedback, although the difference was not significant (60% vs 50%; p = 0.3). Sessions that did not meet guideline recommendations failed primarily because of inadequate compression depth (46 ± 2 mm). In conclusion, mobile device application-guided CPR can accurately track compression depth and rate during simulation in a practice environment in accordance with resuscitation guidelines. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. A third order accurate Lagrangian finite element scheme for the computation of generalized molecular stress function fluids

    DEFF Research Database (Denmark)

    Fasano, Andrea; Rasmussen, Henrik K.

    2017-01-01

    A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element method…

  3. A new accurate quadratic equation model for isothermal gas chromatography and its comparison with the linear model

    Science.gov (United States)

    Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.

    2013-01-01

    The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time tRZ′ and carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for an accurate expression of the retention behavior of n-alkanes and for model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear equation (LE) model. The accuracy of the QE model was approximately 2–6 times better than that of the DE model and 18–540 times better than that of the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
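
    Fitting a QE model of this kind reduces to a quadratic least-squares fit of the adjusted retention time (tR - tM) against carbon number; the paper's exact parameterization may differ, so the sketch below, with synthetic retention data, is only illustrative:

```python
import numpy as np

def fit_qe(z, t_adj):
    """Fit a quadratic model t'_R(z) = a*z**2 + b*z + c to the adjusted
    retention times (t_adj = tR - tM) of an n-alkane series.
    z: carbon numbers; t_adj: adjusted retention times.
    Returns the coefficients (a, b, c)."""
    return tuple(np.polyfit(z, t_adj, deg=2))

def predict_qe(coeffs, z):
    """Evaluate the fitted quadratic at new carbon numbers."""
    return np.polyval(coeffs, z)
```

    An accurate tM matters here because any error in it shifts every t_adj value and degrades the quadratic fit, which is the paper's central point.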

  4. Accurately controlled sequential self-folding structures by polystyrene film

    Science.gov (United States)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

    Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing approach for self-folding structures that can be folded sequentially and accurately. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film provide the heat stimulus by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are realized to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under a controlled stimulus (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.

  5. Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Sidan Du

    2013-08-01

    Full Text Available Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion can drive clothes attached to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods.

  6. Using Quality Improvement Methods and Time-Driven Activity-Based Costing to Improve Value-Based Cancer Care Delivery at a Cancer Genetics Clinic.

    Science.gov (United States)

    Tan, Ryan Y C; Met-Domestici, Marie; Zhou, Ke; Guzman, Alexis B; Lim, Soon Thye; Soo, Khee Chee; Feeley, Thomas W; Ngeow, Joanne

    2016-03-01

    To meet increasing demand for cancer genetic testing and improve value-based cancer care delivery, National Cancer Centre Singapore restructured the Cancer Genetics Service in 2014. Care delivery processes were redesigned. We sought to improve access by increasing the clinic capacity of the Cancer Genetics Service by 100% within 1 year without increasing direct personnel costs. Process mapping and plan-do-study-act (PDSA) cycles were used in a quality improvement project for the Cancer Genetics Service clinic. The impact of interventions was evaluated by tracking the weekly number of patient consultations and access times for appointments between April 2014 and May 2015. The cost impact of implemented process changes was calculated using the time-driven activity-based costing method. Our study completed two PDSA cycles. An important outcome was achieved after the first cycle: The inclusion of a genetic counselor increased clinic capacity by 350%. The number of patients seen per week increased from two in April 2014 (range, zero to four patients) to seven in November 2014 (range, four to 10 patients). Our second PDSA cycle showed that manual preappointment reminder calls reduced the variation in the nonattendance rate and contributed to a further increase in patients seen per week to 10 in May 2015 (range, seven to 13 patients). There was a concomitant decrease in costs of the patient care cycle by 18% after both PDSA cycles. This study shows how quality improvement methods can be combined with time-driven activity-based costing to increase value. In this paper, we demonstrate how we improved access while reducing costs of care delivery. Copyright © 2016 by American Society of Clinical Oncology.
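Time-driven activity-based costing, as used in the record above, needs only two estimates per resource: a capacity cost rate (cost of supplying the resource per minute) and the minutes each process step consumes. A minimal sketch follows; the personnel rates, step times, and the before/after process maps are hypothetical, not the clinic's actual figures.

```python
# Minimal time-driven activity-based costing (TDABC) sketch.
# All rates and step times below are hypothetical illustrations.

def capacity_cost_rate(cost_per_period, practical_capacity_minutes):
    """Cost per minute of supplying a resource's capacity."""
    return cost_per_period / practical_capacity_minutes

def process_cost(steps):
    """Sum rate * minutes over every step of a care-cycle process map."""
    return sum(rate * minutes for rate, minutes in steps)

# Hypothetical monthly cost and practical capacity per resource
geneticist = capacity_cost_rate(10000, 2000)   # 5.00 $/min
counselor  = capacity_cost_rate(4000, 2000)    # 2.00 $/min

# Process maps: (cost rate $/min, minutes per patient)
before = [(geneticist, 60)]                     # geneticist does everything
after  = [(geneticist, 20), (counselor, 40)]    # counselor absorbs 40 min

print(process_cost(before), process_cost(after))  # 300.0 180.0
```

Shifting time from a higher-rate to a lower-rate resource lowers the cost per care cycle even though total contact minutes stay the same, which is the mechanism behind the cost reduction the study reports.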

  7. Automatic emissive probe apparatus for accurate plasma and vacuum space potential measurements

    Science.gov (United States)

    Jianquan, LI; Wenqi, LU; Jun, XU; Fei, GAO; Younian, WANG

    2018-02-01

    We have developed an automatic emissive probe apparatus, based on the improved inflection point method of the emissive probe, for accurate measurements of both plasma potential and vacuum space potential. The apparatus consists of a computer-controlled data acquisition card, a working circuit composed of a biasing unit and a heating unit, and an emissive probe. Given the set parameters of the probe scanning bias, the probe heating current and the fitting range, the apparatus automatically executes the improved inflection point method and reports the measured result. The validity of the automatic emissive probe apparatus is demonstrated in a test measurement of the vacuum potential distribution between two parallel plates, showing an excellent accuracy of 0.1 V. Plasma potential was also measured, demonstrating the efficiency and convenience of the apparatus for space potential measurements.

  8. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to measure the holdup quantity and location directly, so these must be inferred from measured radiation fields, primarily gamma rays and, less frequently, neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), rely primarily on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternative method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of that configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined and hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem: possible solutions are rated according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ theorem, which resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution.
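The Bayesian ranking of competing solutions described above can be sketched in a toy form: several candidate source configurations predict different detector readings, and Bayes' theorem combines a prior with a measurement likelihood to score each. The candidate configurations, the forward-model predictions, and the Gaussian noise model below are all invented for illustration; the actual project uses full radiation transport codes.

```python
# Toy Bayesian ranking of candidate holdup configurations.
# Candidates, predicted count rates, priors and noise are illustrative.
import math

def gaussian_likelihood(measured, predicted, sigma):
    """p(measurement | configuration), assuming Gaussian detector noise."""
    return math.exp(-0.5 * ((measured - predicted) / sigma) ** 2)

def posterior(measured, candidates, sigma):
    """Normalized posterior probability for each candidate configuration."""
    weights = {name: prior * gaussian_likelihood(measured, pred, sigma)
               for name, (pred, prior) in candidates.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Candidate configurations: (predicted count rate, prior probability)
candidates = {
    "thin film in duct":  (95.0, 0.5),
    "lump at pipe elbow": (110.0, 0.3),
    "uniform deposit":    (140.0, 0.2),
}
post = posterior(measured=100.0, candidates=candidates, sigma=10.0)
best = max(post, key=post.get)
print(best)  # thin film in duct
```

All candidates remain possible, but each now carries a probability, which is exactly how the probabilistic approach resolves the under-determination.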

  9. Improving Door-to-balloon Time by Decreasing Door-to-ECG time for Walk-in STEMI Patients

    Directory of Open Access Journals (Sweden)

    Coyne, Christopher J.

    2014-12-01

    Full Text Available Introduction: The American Heart Association/American College of Cardiology guidelines recommend rapid door-to-electrocardiography (ECG) times for patients with ST-segment elevation myocardial infarction (STEMI). Previous quality improvement research at our institution revealed that we were not meeting this benchmark for walk-in STEMI patients. Our objective was to investigate whether simple, directed changes in the emergency department (ED) triage process for potential cardiac patients could decrease door-to-ECG times and, secondarily, door-to-balloon times. Methods: We conducted an interventional study at a large, urban, public teaching hospital from April 2010 to June 2012. All patients who walked into the ED with a confirmed STEMI were enrolled in the study. The primary intervention involved creating a chief complaint-based “cardiac triage” designation that streamlined the evaluation of potential cardiac patients. A secondary intervention involved moving our ECG technician and ECG station to our initial triage area. The primary outcome measure was door-to-ECG time and the secondary outcome measure was door-to-balloon time. Results: We enrolled 91 walk-in STEMI patients prior to the intervention period and 141 patients after the intervention. We observed statistically significant reductions in door-to-ECG time (43±93 to 30±72 minutes; median 23 to 14 minutes; p<0.01), ECG-to-activation time (87±134 to 52±82 minutes; median 43 to 31 minutes; p<0.01), and door-to-balloon time (134±146 to 84±40 minutes; median 85 to 75 minutes; p=0.03). Conclusion: By creating a chief complaint-based cardiac triage protocol and by streamlining ECG completion, walk-in STEMI patients are systematically processed through the ED. This is associated not only with a decrease in door-to-balloon time, but also with a decrease in the variability of the time-sensitive intervals of door-to-ECG and ECG-to-balloon time. [West J Emerg Med. 2015;16(1):184–189.]

  10. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  11. ISO 14001: time for improvements?

    DEFF Research Database (Denmark)

    Jørgensen, Tine Herreborg

    2007-01-01

    The aim of this paper is to discuss a number of issues related to ISO 14001:2004, the international standard for Environmental Management Systems (EMS), with the purpose of improving the next edition so that it recognises and reflects new developments in approaches to pollution prevention. A case...

  12. [Real-time feedback systems for improvement of resuscitation quality].

    Science.gov (United States)

    Lukas, R P; Van Aken, H; Engel, P; Bohn, A

    2011-07-01

    The quality of chest compressions is a determinant of survival after cardiac arrest. Therefore, the European Resuscitation Council (ERC) 2010 guidelines on resuscitation place a strong focus on compression quality. Despite its impact on survival, observational studies have shown that adequate chest compression quality is often not achieved, even by professional rescue teams. Real-time feedback devices for resuscitation can measure chest compressions during an ongoing resuscitation attempt through a sternal sensor equipped with a motion and pressure detection system. In addition to the electrocardiogram (ECG), ventilation can be detected by transthoracic impedance monitoring. In cases of quality deviation, such as shallow chest compression depth or hyperventilation, feedback systems produce visual or acoustic alarms. Rescuers can thereby be supported and guided toward the required quality of chest compression and ventilation. Feedback technology is currently available both as a stand-alone device and as an integrated feature of a monitor/defibrillator unit. Multiple studies have demonstrated sustained improvement in resuscitation training due to the use of real-time feedback technology. There is evidence that real-time feedback for resuscitation, combined with training and debriefing strategies, can improve both resuscitation quality and patient survival. Chest compression quality is an independent predictor of survival in resuscitation and should therefore be measured and documented in further clinical multicenter trials.
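The alarm logic of such a feedback device can be sketched very simply: per analysis window, compare measured compression depth and rate against guideline targets and emit prompts on deviation. The thresholds below follow the commonly cited ERC 2010 adult targets (depth of at least 5 cm, rate of 100-120 per minute); the function shape and prompt wording are illustrative, not any vendor's implementation.

```python
# Illustrative alarm logic of a real-time CPR feedback device.
# Thresholds follow the ERC 2010 adult targets; prompts are invented.

def compression_feedback(depth_cm, rate_per_min):
    """Return the prompts a feedback device would raise for one window."""
    alarms = []
    if depth_cm < 5.0:
        alarms.append("press deeper")
    if rate_per_min < 100:
        alarms.append("compress faster")
    elif rate_per_min > 120:
        alarms.append("compress slower")
    return alarms

print(compression_feedback(4.2, 95))   # ['press deeper', 'compress faster']
print(compression_feedback(5.5, 110))  # []
```

A real device derives depth from double-integrated accelerometer data and rate from compression timing, but the guidance step reduces to threshold checks like these.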

  13. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate.

    Directory of Open Access Journals (Sweden)

    Abel B Minyoo

    2015-12-01

    Full Text Available In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere.

  14. Incentives Increase Participation in Mass Dog Rabies Vaccination Clinics and Methods of Coverage Estimation Are Assessed to Be Accurate

    Science.gov (United States)

    Steinmetz, Melissa; Czupryna, Anna; Bigambo, Machunde; Mzimbiri, Imam; Powell, George; Gwakisa, Paul

    2015-01-01

    In this study we show that incentives (dog collars and owner wristbands) are effective at increasing owner participation in mass dog rabies vaccination clinics and we conclude that household questionnaire surveys and the mark-re-sight (transect survey) method for estimating post-vaccination coverage are accurate when all dogs, including puppies, are included. Incentives were distributed during central-point rabies vaccination clinics in northern Tanzania to quantify their effect on owner participation. In villages where incentives were handed out participation increased, with an average of 34 more dogs being vaccinated. Through economies of scale, this represents a reduction in the cost-per-dog of $0.47. This represents the price-threshold under which the cost of the incentive used must fall to be economically viable. Additionally, vaccination coverage levels were determined in ten villages through the gold-standard village-wide census technique, as well as through two cheaper and quicker methods (randomized household questionnaire and the transect survey). Cost data were also collected. Both non-gold standard methods were found to be accurate when puppies were included in the calculations, although the transect survey and the household questionnaire survey over- and under-estimated the coverage respectively. Given that additional demographic data can be collected through the household questionnaire survey, and that its estimate of coverage is more conservative, we recommend this method. Despite the use of incentives the average vaccination coverage was below the 70% threshold for eliminating rabies. We discuss the reasons and suggest solutions to improve coverage. Given recent international targets to eliminate rabies, this study provides valuable and timely data to help improve mass dog vaccination programs in Africa and elsewhere. PMID:26633821
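The mark-re-sight (transect survey) estimate discussed in the two records above boils down to a simple proportion: vaccinated dogs carry a visible mark (e.g. a collar), a transect walk counts marked versus unmarked dogs, and coverage is estimated as the marked fraction. The sketch below adds a normal-approximation confidence interval; the counts are invented for illustration.

```python
# Mark-re-sight vaccination coverage estimate from a transect count.
# Counts below are invented; a binomial normal-approximation CI is used.
import math

def coverage_estimate(marked_seen, total_seen, z=1.96):
    """Point estimate and approximate 95% CI for vaccination coverage."""
    p = marked_seen / total_seen
    half = z * math.sqrt(p * (1 - p) / total_seen)
    return p, max(0.0, p - half), min(1.0, p + half)

p, lo, hi = coverage_estimate(marked_seen=130, total_seen=200)
print(f"coverage = {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

If puppies are excluded from either count, the estimate shifts, which is consistent with the records' finding that the methods are accurate only when all dogs, including puppies, are included.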

  15. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    Science.gov (United States)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
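At the core of DVC matching is a similarity metric between a reference subvolume and a candidate subvolume in the deformed image, commonly zero-normalized cross-correlation (ZNCC). The sketch below scores subsets of a synthetic random volume; a full DVC implementation adds the subvoxel optimization (e.g. the inverse-compositional Gauss-Newton step mentioned above) and the layer-wise reliability-guided scanning, which this toy omits.

```python
# Minimal ZNCC similarity metric between two subvolumes, the building
# block of DVC matching. Data are synthetic; real DVC adds subvoxel
# optimization and reliability-guided propagation on top of this.
import numpy as np

def zncc(ref, defm):
    """ZNCC in [-1, 1]; 1 means a perfect match up to intensity scale/offset."""
    a = ref - ref.mean()
    b = defm - defm.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
volume = rng.random((20, 20, 20))
ref = volume[5:13, 5:13, 5:13]        # 8^3 reference subset
shifted = volume[6:14, 5:13, 5:13]    # same window displaced by one voxel

print(round(zncc(ref, ref), 3))            # 1.0 -- identical subsets
print(round(zncc(ref, 2 * ref + 3), 3))    # 1.0 -- invariant to scale/offset
print(zncc(ref, shifted))                  # near zero for random data
```

The invariance to intensity scale and offset is what makes ZNCC robust to brightness changes between the reference and deformed scans.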

  16. Development of long-wavelength-emitting scintillators with improved decay time characteristics

    International Nuclear Information System (INIS)

    Franks, L.A.; Lutz, S.; Lyons, P.B.

    1978-01-01

    Progress is reported from efforts to develop radiation-to-light converters suitable for use with optical fibers as applied to the diagnostics of transient nuclear phenomena. Liquid and plastic fluors have been prepared which emit in the 550- to 600-nm region. Ternary liquid systems with decay times as short as 1.3 ns at 560 nm and plastic fluors with decay times less than 3 ns at 560 nm are reported. Other liquid and plastic fluors are reported with improved emission characteristics in the region of 600 nm. Conversion efficiencies, on a pulse-amplitude basis, are generally lower than that of a commercially available plastic fluor (570 nm, 16 ns).

  17. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the overlay (OVL) budget gets tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X and Y. This Qmerit score enables process owners to select compatible targets that provide accurate OVL values for their process and thereby improve yield. Together with K-T Analyzer's ability to identify symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  18. Do wavelet filters provide more accurate estimates of reverberation times at low frequencies

    DEFF Research Database (Denmark)

    Sobreira Seoane, Manuel A.; Pérez Cabo, David; Agerkvist, Finn T.

    2016-01-01

    It has been amply demonstrated in the literature that it is not possible to measure acoustic decays without significant errors for low BT values (narrow filters and or low reverberation times). Recently, it has been shown how the main source of distortion in the time envelope of the acoustic deca...
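Reverberation times in measurements like those discussed above are conventionally extracted by Schroeder backward integration of the (filtered) impulse response, fitting the decay slope of the energy decay curve; the wavelet-versus-bandpass question in the record concerns the filtering applied before this step. The sketch below recovers a known T60 from a synthetic exponential decay; the sampling rate, fit range, and decay model are illustrative assumptions.

```python
# Schroeder backward integration: estimate T60 from an impulse response.
# The synthetic impulse response below has a known T60 of 0.5 s.
import numpy as np

def schroeder_t60(h, fs, lo_db=-5.0, hi_db=-25.0):
    """Estimate T60 from the EDC slope between lo_db and hi_db."""
    edc = np.cumsum(h[::-1] ** 2)[::-1]          # backward energy integral
    edc_db = 10 * np.log10(edc / edc[0])         # energy decay curve, dB
    t = np.arange(len(h)) / fs
    mask = (edc_db <= lo_db) & (edc_db >= hi_db)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)  # dB per second
    return -60.0 / slope                         # time to fall 60 dB

fs = 8000
t = np.arange(int(fs * 1.5)) / fs
h = np.exp(-6.9078 * t / 0.5)   # exponential envelope with T60 = 0.5 s
print(round(schroeder_t60(h, fs), 2))  # 0.5
```

For short reverberation times in narrow low-frequency bands, the analysis filter's own decay contaminates this envelope, which is the low-BT distortion the record investigates.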

  19. Improved wave functions for large-N expansions

    International Nuclear Information System (INIS)

    Imbo, T.; Sukhatme, U.

    1985-01-01

    Existing large-N expansions of radial wave functions φ_{n,l}(r) are only accurate near the minimum of the effective potential. Within the framework of the shifted 1/N expansion, we use known analytic results to motivate a simple modification so that the improved wave functions are accurate over a wide range of r and for any choice of quantum numbers n and l. It is shown that these wave functions yield simple and accurate analytic expressions for certain quantities of interest in quarkonium physics.

  20. Implementation, evaluation and improvement of the diffusion code package developed by the Risø Research Center

    International Nuclear Information System (INIS)

    Koide, M.C.M.

    1983-01-01

    The evaluation and improvement of the diffusion code package developed by the Risø Research Center of Denmark have been performed. The improvements made to the package concerned the presentation of its manuals. In order to reduce the processing time of the codes, an analytical boundary condition capable of representing the effects of the baffle and the reflector on the flux distribution has been calculated. This boundary condition was obtained for a one-dimensional medium in the framework of two-group diffusion theory. The results showed that the application of this boundary condition produces very accurate results and an appreciable economy of processing time. (author) [pt]
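Replacing a reflector by an equivalent boundary condition is classically done through an albedo. As a one-group illustration (the record itself works in two-group theory), the albedo of a thick non-multiplying reflector is beta = (1 - 2D/L) / (1 + 2D/L), with diffusion length L = sqrt(D / Sigma_a), and the equivalent condition at the core edge fixes the current-to-flux ratio. The material constants below are illustrative, not the Risø package's values.

```python
# One-group albedo boundary condition for a thick reflector (illustrative).
# beta = (1 - 2D/L)/(1 + 2D/L), L = sqrt(D / Sigma_a); constants are assumed.
import math

def albedo(D_cm, sigma_a_per_cm):
    """Albedo of a semi-infinite non-multiplying reflector."""
    L = math.sqrt(D_cm / sigma_a_per_cm)  # diffusion length, cm
    x = 2.0 * D_cm / L
    return (1.0 - x) / (1.0 + x)

def current_to_flux_ratio(beta):
    """Equivalent boundary condition J/phi = (1/2)*(1 - beta)/(1 + beta)."""
    return 0.5 * (1.0 - beta) / (1.0 + beta)

beta = albedo(D_cm=0.16, sigma_a_per_cm=0.0197)  # water-like constants
print(round(beta, 3), round(current_to_flux_ratio(beta), 4))
```

Solving the core alone with this J/phi condition reproduces the reflected flux shape without meshing the reflector, which is the source of the processing-time savings the record reports.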
