WorldWideScience

Sample records for saturation correction method

  1. Test of Scintillometer Saturation Correction Methods Using Field Experimental Data

    NARCIS (Netherlands)

    Kleissl, J.; Hartogensis, O.K.; Gomez, J.D.

    2010-01-01

    Saturation of large aperture scintillometer (LAS) signals can result in sensible heat flux measurements that are biased low. A field study with LASs of different aperture sizes and path lengths was performed to investigate the onset of, and corrections for, signal saturation. Saturation already

  2. Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy

    Science.gov (United States)

    Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.

    2008-04-01

    Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.
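
    For reference, the traditional forward-rate formula that such correction terms refine is the steady-state two-site saturation-transfer result (a standard relation, not taken from this paper), with M_ss the steady-state magnetization during saturation of the exchange partner, M_0 the equilibrium magnetization, and T_1,app the apparent longitudinal relaxation time measured during saturation:

        k_f = \frac{1 - M_{ss}/M_0}{T_{1,\mathrm{app}}},
        \qquad
        \frac{1}{T_{1,\mathrm{app}}} = \frac{1}{T_1} + k_f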

  3. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit stat...

  4. Generalized subspace correction methods

    Energy Technology Data Exchange (ETDEWEB)

    Kolm, P. [Royal Institute of Technology, Stockholm (Sweden)]; Arbenz, P.; Gander, W. [Eidgenoessische Technische Hochschule, Zuerich (Switzerland)]

    1996-12-31

    A fundamental problem in scientific computing is the solution of large sparse systems of linear equations. Often these systems arise from the discretization of differential equations by finite difference, finite volume or finite element methods. Iterative methods exploiting these sparse structures have proven to be very effective on conventional computers for a wide range of applications. Due to the rapid development of parallel computers and the increasing demand for their large computing power, it has become important to design iterative methods specialized for these new architectures.
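
    The abstract does not spell out the generalized framework, so as a minimal sketch of the basic subspace correction idea, here is the additive variant (block Jacobi over disjoint index blocks) in Python; the splitting into two blocks is an illustrative assumption, not the paper's construction.

        import numpy as np

        def additive_subspace_correction(A, b, blocks, iters=50):
            # Additive subspace correction for A x = b: every sweep
            # solves the restriction of A to each index block against
            # the current global residual and sums the corrections.
            x = np.zeros(len(b))
            for _ in range(iters):
                r = b - A @ x
                dx = np.zeros(len(b))
                for idx in blocks:
                    dx[idx] = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
                x += dx
            return x

        # toy usage: 1D Poisson matrix split into two subspaces
        n = 8
        A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x = additive_subspace_correction(A, np.ones(n), [np.arange(4), np.arange(4, 8)])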

  5. Graph-analytical method for determining saturation in oil formations

    Energy Technology Data Exchange (ETDEWEB)

    Ramazanova, E.E.; Fedortsov, V.K.; Ismaylov, K.K.

    1980-01-01

    Factual material is generalized for a large number of oil fields of the Soviet Union for which probability-statistical models have been selected. A graph-analytical method is developed for determining the saturation pressure of oil by gas.

  6. Near-Saturation Single-Photon Avalanche Diode Afterpulse and Sensitivity Correction Scheme for the LHC Longitudinal Density Monitor

    CERN Document Server

    Bravin, E; Palm, M

    2014-01-01

    Single-Photon Avalanche Diodes (SPADs) monitor the longitudinal density of the LHC beams by measuring the temporal distribution of synchrotron radiation. The relative population of nominally empty RF-buckets (satellites or ghosts) with respect to filled bunches is a key figure for the luminosity calibration of the LHC experiments. Since afterpulsing from a main bunch avalanche can be as high as, or higher than, the signal from satellites or ghosts, an accurate correction algorithm is needed. Furthermore, to reduce the integration time, the amount of light sent to the SPAD is large enough that pile-up effects and afterpulsing cannot be neglected. The SPAD sensitivity has also been found to vary at the end of the active quenching phase. We present a method to characterize and correct for SPAD deadtime, afterpulsing and sensitivity variation near saturation, together with laboratory benchmarking.

  7. A Single Image Dehazing Method Using Average Saturation Prior

    Directory of Open Access Journals (Sweden)

    Zhenfei Gu

    2017-01-01

    Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
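
    The paper's improved (inhomogeneous-atmosphere) model is not reproduced in this abstract; for orientation, the classical homogeneous model it generalizes is I(x) = J(x)t(x) + A(1 - t(x)), and the sketch below inverts that standard model for the scene radiance, assuming the airlight A and transmission map t have already been estimated (all names are illustrative).

        import numpy as np

        def dehaze_classical(I, A, t, t_min=0.1):
            # Invert I = J*t + A*(1 - t) for the scene radiance J.
            # I: HxWx3 image in [0, 1]; A: airlight vector, shape (3,);
            # t: HxW transmission map; t_min guards the division.
            t = np.clip(t, t_min, 1.0)[..., None]
            return np.clip((I - A) / t + A, 0.0, 1.0)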

  8. A Design Method of Saturation Test Image Based on CIEDE2000

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2012-01-01

    In order to generate color test images consistent with human perception in terms of saturation, lightness, and hue, we propose a saturation test image design method based on the CIEDE2000 color difference formula. This method exploits the subjective saturation parameter C′ of CIEDE2000 to get a series of test images with different saturation but the same lightness and hue. It is found experimentally that visual perception has a linear relationship with the saturation parameter C′. This kind of saturation test image has various applications, such as checking the color masking effect in visual experiments and testing the visual effects of image similarity components.
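
    A hedged sketch of the underlying color construction: hold lightness and hue fixed and sweep chroma, converting each LCh triple to CIELAB via a* = C cos h, b* = C sin h. Plain CIELAB chroma is used here as a stand-in for the CIEDE2000 C′ term, which is an assumption made for brevity.

        import numpy as np

        def chroma_series(L=60.0, hue_deg=30.0, chromas=(0, 10, 20, 30, 40, 50)):
            # CIELAB colors with constant lightness and hue but
            # increasing chroma (saturation test patches).
            h = np.deg2rad(hue_deg)
            return [(L, C * np.cos(h), C * np.sin(h)) for C in chromas]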

  9. Off-Angle Iris Correction Methods

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Thompson, Joseph T [ORNL; Karakaya, Mahmut [ORNL; Boehnen, Chris Bensing [ORNL

    2016-01-01

    In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images so that they appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction. The ray-traced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. We hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and therefore the two data-driven approaches should yield better performance. Results are presented using the commercial VeriEye matcher and show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.

  10. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Counts Per Second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1-2%) in restoring the corrected count rate. Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from MINERVE. • Results show very good correspondence to empirical results.
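
    A minimal sketch of the backward-extrapolation idea, assuming a list of event timestamps and a non-paralyzing dead time model (the measurement chain in the paper is more general); the quadratic fit and function names are illustrative assumptions.

        import numpy as np

        def impose_dead_time(timestamps, tau):
            # Count only events separated from the previously accepted
            # event by more than tau (non-paralyzing dead time).
            kept, last = 0, -np.inf
            for t in timestamps:
                if t - last > tau:
                    kept += 1
                    last = t
            return kept

        def backward_extrapolate(timestamps, taus, duration):
            # Impose increasingly large artificial dead times, fit the
            # resulting count rates, and extrapolate back to tau = 0.
            cps = [impose_dead_time(timestamps, tau) / duration for tau in taus]
            coeff = np.polyfit(taus, cps, deg=2)
            return np.polyval(coeff, 0.0)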

  11. Model correction factor method for system analysis

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Johannesen, Johannes M.

    2000-01-01

    The Model Correction Factor Method is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which ... several locally most central points exist without there being a simple geometric definition of the corresponding failure modes such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model ... surface than existing in the idealized model....

  12. Direct anharmonic correction method by molecular dynamics

    Science.gov (United States)

    Liu, Zhong-Li; Li, Rui; Zhang, Xiu-Lu; Qu, Nuo; Cai, Ling-Cang

    2017-04-01

    The quick calculation of accurate anharmonic effects of lattice vibrations is crucial to the calculation of thermodynamic properties, the construction of multi-phase diagrams and equations of state of materials, and the theoretical design of new materials. In this paper, we propose a direct free energy interpolation (DFEI) method based on the temperature-dependent phonon density of states (TD-PDOS) reduced from molecular dynamics simulations. Using the DFEI method, after anharmonic free energy corrections we reproduced the thermal expansion coefficients, the specific heat, the thermal pressure, the isothermal bulk modulus, and the Hugoniot P-V-T relationships of Cu easily and accurately. Extensive tests on other materials including metals, alloys, semiconductors and insulators also show that the DFEI method can easily uncover the residual anharmonicity that the quasi-harmonic approximation (QHA) omits. It is thus evidenced that the DFEI method is indeed a very efficient method for conducting anharmonic corrections beyond the QHA. More importantly, it is much more straightforward and easier than previous anharmonic methods.
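
    For context, the harmonic vibrational free energy that such schemes evaluate from a phonon DOS g(ω) at temperature T is the standard statistical-mechanics expression F_vib = kB T ∫ g(ω) ln[2 sinh(ħω/2kBT)] dω; the sketch below computes it numerically, assuming a normalized DOS on a strictly positive frequency grid. It is not the authors' DFEI interpolation itself.

        import numpy as np

        HBAR = 1.054571817e-34   # J*s
        KB = 1.380649e-23        # J/K

        def vibrational_free_energy(omega, g, T):
            # Harmonic vibrational free energy (J) from a normalized
            # phonon DOS g(omega); omega in rad/s must be positive so
            # that log(2*sinh(x)) stays finite.
            x = HBAR * omega / (2.0 * KB * T)
            return KB * T * np.trapz(g * np.log(2.0 * np.sinh(x)), omega)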

  13. Methods of integral saturation elimination in automatic regulation systems with PID-controllers

    Directory of Open Access Journals (Sweden)

    Дмитро Олегович Кроніковський

    2014-11-01

    The effect of integral saturation, which reduces the quality of regulation, appears when a classic PID-controller is used under real conditions. The pulp dryer of a sugar factory is considered as the real automation object. The impact of integral saturation is demonstrated by modeling the control processes in the pulp dryer. Modern methods to eliminate integral saturation are considered.
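
    The abstract does not name the specific anti-windup schemes compared; one widely used representative is back-calculation, sketched below, where the anti-windup gain kt and the other tuning parameters are hypothetical.

        def pid_antiwindup_step(e, state, kp, ki, kd, kt, dt, u_min, u_max):
            # One step of a PID controller with back-calculation
            # anti-windup: when the output saturates, the integrator is
            # pulled back by kt*(u_sat - u_raw), which stops windup.
            integ, e_prev = state
            deriv = (e - e_prev) / dt
            u_raw = kp * e + ki * integ + kd * deriv
            u_sat = min(max(u_raw, u_min), u_max)
            integ += (e + kt * (u_sat - u_raw)) * dt
            return u_sat, (integ, e)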

  14. Generalized Density-Corrected Model for Gas Diffusivity in Variably Saturated Soils

    DEFF Research Database (Denmark)

    Chamindu, Deepagoda; Møldrup, Per; Schjønning, Per

    2011-01-01

    Accurate predictions of the soil-gas diffusivity (Dp/Do, where Dp is the soil-gas diffusion coefficient and Do is the diffusion coefficient in free air) from easily measurable parameters like air-filled porosity (ε) and soil total porosity (φ) are valuable when predicting soil aeration and the emission of greenhouse gases and gaseous-phase contaminants from soils. Soil type (texture) and soil density (compaction) are two key factors controlling gas diffusivity in soils. We extended a recently presented density-corrected Dp(ε)/Do model by letting both model parameters (α and β) be interdependent ... models. The GDC model was further extended to describe two-region (bimodal) soils and could describe and predict Dp/Do well for both different soil aggregate size fractions and variably compacted volcanic ash soils. A possible use of the new GDC model is engineering applications such as the design...

  15. Finite analytic method for modeling variably saturated flows.

    Science.gov (United States)

    Zhang, Zaiyong; Wang, Wenke; Gong, Chengcheng; Yeh, Tian-Chyi Jim; Wang, Zhoufeng; Wang, Yu-Li; Chen, Li

    2017-11-13

    This paper develops a finite analytic method (FAM) for solving the two-dimensional Richards' equation. The FAM incorporates the analytic solution in local elements to formulate the algebraic representation of the partial differential equation of unsaturated flow so as to effectively control both numerical oscillation and dispersion. The FAM model is then verified using four examples, in which the numerical solutions are compared with analytical solutions, solutions from VSAFT2, and observational data from a field experiment. These numerical experiments show that the method is not only accurate but also efficient, when compared with other numerical methods.

  16. A new automatic baseline correction method based on iterative method

    Science.gov (United States)

    Bao, Qingjia; Feng, Jiwen; Chen, Fang; Mao, Wenping; Liu, Zao; Liu, Kewen; Liu, Chaoyang

    2012-05-01

    A new automatic baseline correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. It is based on an improved baseline recognition method and a new iterative baseline modeling method. The presented baseline recognition method takes advantage of three baseline recognition algorithms in order to recognize all signals in spectra. In the iterative baseline modeling method, besides the well-recognized baseline points in signal-free regions, the 'quasi-baseline points' in the signal-crowded regions are also identified and then utilized to improve robustness by preventing negative regions. The experimental results on both simulated data and real metabolomics spectra with over-crowded peaks show the efficiency of this automatic method.
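
    A hedged sketch of a generic iterative baseline model (not necessarily the authors' exact algorithm): fit a low-order polynomial, clamp the spectrum down to the fit so peaks stop pulling it upward, and repeat until the fit settles onto the baseline.

        import numpy as np

        def iterative_baseline(x, y, degree=4, iters=20):
            # Iterative polynomial baseline estimation: points above
            # the current fit (signal peaks) are replaced by the fit,
            # driving successive fits toward the true baseline.
            yb = y.copy()
            for _ in range(iters):
                base = np.polyval(np.polyfit(x, yb, degree), x)
                yb = np.minimum(yb, base)
            return base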

  17. A new experimental method to determine the saturation voltage of a small-geometry MOSFET

    Science.gov (United States)

    Jang, Wen-Yueh; Wu, Chung-Yu; Wu, Hong-Jen

    1988-09-01

    A new extraction method which determines the saturation voltage of a small-geometry MOSFET directly from the measured data is proposed and investigated. In this method, a special function G is formed and the drain-source saturation voltage is identified as the voltage of the peak point in a plot of G vs the drain-source voltage. Since the method is based on a general device theory, it is virtually independent of any device model and quite versatile and applicable for all MOSFETs. In addition, no given device parameters or iterations are required in the method. To verify the new method, SPICE MOS models are used as a calculation example. Moreover, the method is also applied to various fabricated MOSFETs to determine the saturation voltage. It is found that the saturation voltage can be definitely determined without ambiguity and the determined saturation voltage is quite close to that from the optimal extractions. Thus the method can be incorporated into the parameter extraction and the device modeling for small-geometry MOSFETs.

  18. Saturated hydraulic conductivity as parameter for modeling applications - comparison of determination methods

    Science.gov (United States)

    Weninger, Thomas; Kreiselmeier, Janis; Chandrasekhar, Parvarthy; Julich, Stefan; Feger, Karl-Heinz; Schwärzel, Kai; Schwen, Andreas

    2017-04-01

    Saturated hydraulic conductivity is broadly used to parametrize the physical characteristics of soil. Many methods for its determination have been developed, but no standard has yet been established. When interpreting results it has to be considered that different methods yield varying values. In this study, values of saturated hydraulic conductivity were measured directly by the falling-head laboratory method as well as derived indirectly by model fitting to data from hood-infiltrometer experiments in the field and evaporation experiments in the lab. Successive sampling of exactly the same soil body for all three methods ensured the highest possible comparability. Additional physical soil parameters were measured and tested for their suitability as predictors in pedotransfer functions. The experiments were conducted throughout the 2016 vegetation period at four sites in Lower Austria and Saxony, Germany. Sampled soils had a sandy loam or loamy silt texture and were cultivated with regionally common annual field crops. Subsequently, the results were evaluated with regard to their further use as a key parameter in the expression of hydraulic soil properties. Significant differences were found between the evaporation method and the two other methods, with the former underestimating the saturated conductivity considerably. Consequently, an appropriate procedure for the determination of saturated hydraulic conductivity was formulated which combines results of hood infiltrometry and the falling-head method.
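
    For reference, the falling-head measurement mentioned above evaluates the standard relation Ksat = (aL)/(At) ln(h1/h2); the helper below simply encodes that textbook formula with illustrative variable names.

        import math

        def ksat_falling_head(a, A, L, t, h1, h2):
            # Falling-head test: a = standpipe cross-section, A = sample
            # cross-section, L = sample length, t = elapsed time,
            # h1/h2 = initial/final hydraulic head (consistent units).
            return (a * L) / (A * t) * math.log(h1 / h2)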

  19. Resolution methods in proving the program correctness

    Directory of Open Access Journals (Sweden)

    Markoski Branko

    2007-01-01

    Program testing determines whether a program's behavior matches its specification, and also how it behaves under different exploitation conditions. Proving program correctness is reduced to finding a proof for the assertion that a given sequence of formulas represents a derivation within a formal theory of a special predicate calculus. A well-known variant of this conception is described: correctness based on programming logic rules. It is shown that programming logic rules may be used in an automatic resolution procedure. Illustrative examples are given, realized in a Prolog-like LP-language (with no restriction to Horn clauses and without finite failure). Basic information on the LP-language is also given. It is also shown how a Pascal program is executed in the LP system.

  20. A graphic-analytical method for determining saturation pressure in oil deposits

    Energy Technology Data Exchange (ETDEWEB)

    Ramazanova, E.E.; Fedortsov, V.K.; Ismaylov, K.K.

    1980-01-01

    This article summarizes factual material concerning a large number of oil deposits in the Soviet Union, and statistical-probability models are selected for these deposits. A graphic-analytical method is developed for determining the saturation pressure of oil by gas.

  1. Paneling methods with vorticity effects and corrections for nonlinear compressibility

    Science.gov (United States)

    Dillenius, Marnix F. E.; Allen, Jerry M.

    1992-01-01

    Supersonic panel methods and axisymmetric body-modeling singularity methods are combined with corrections for nonlinear flow phenomena and applied to a complete missile, its airbreathing inlets, and wing-body combinations. The computer code LRCDM2 is used as an illustrative example of the methods in question. Attention is given to a preliminary method which employs panels to estimate additive drag and lift acting on supersonic rectangular inlets, as well as to the method used to correct off-body flowfields for the presence of a shock. Examples of missile applications of these methods with the appropriate nonlinear corrections are presented.

  2. A method for the correction of drift in movement analysis.

    Science.gov (United States)

    Stokes, V P

    1991-04-01

    A method is proposed for the correction of drift in cyclic three-dimensional kinematic data recorded during treadmill locomotion. An adaptive least-squares drift correction algorithm (ALSDC) is developed from the operational definition of no drift. This method includes automatic selection of the least-squares polynomial degree and sequential processing of large data sets.
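
    The ALSDC itself is not detailed in the abstract; the sketch below only illustrates the two ingredients it names, a least-squares polynomial drift fit with automatic degree selection, with the stopping tolerance an assumption.

        import numpy as np

        def detrend_ls(t, y, max_degree=3, tol=1e-3):
            # Raise the polynomial degree only while the RMS residual
            # improves by more than tol, then subtract the chosen trend.
            trend = np.full_like(y, y.mean())
            prev_rms = np.sqrt(np.mean((y - trend) ** 2))
            for d in range(1, max_degree + 1):
                cand = np.polyval(np.polyfit(t, y, d), t)
                rms = np.sqrt(np.mean((y - cand) ** 2))
                if prev_rms - rms < tol:
                    break
                trend, prev_rms = cand, rms
            return y - trend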

  3. Three methods for correction of astigmatism during phacoemulsification

    Directory of Open Access Journals (Sweden)

    Hossein Mohammad-Rabei

    2016-01-01

    Conclusion: There was no significant difference in astigmatism reduction among the three methods of astigmatism correction during phacoemulsification. Each of these methods can be used at the discretion of the surgeon.

  4. CIDRE: an illumination-correction method for optical microscopy.

    Science.gov (United States)

    Smith, Kevin; Li, Yunpeng; Piccinini, Filippo; Csucs, Gabor; Balazs, Csaba; Bevilacqua, Alessandro; Horvath, Peter

    2015-05-01

    Uneven illumination affects every image acquired by a microscope. It is often overlooked, but it can introduce considerable bias to image measurements. The most reliable correction methods require special reference images, and retrospective alternatives do not fully model the correction process. Our approach overcomes these issues for most optical microscopy applications without the need for reference images.

  5. Comparison of Minimally and More Invasive Methods of Determining Mixed Venous Oxygen Saturation.

    Science.gov (United States)

    Smit, Marli; Levin, Andrew I; Coetzee, Johan F

    2016-04-01

    To investigate the accuracy of a minimally invasive, 2-step, lookup method for determining mixed venous oxygen saturation compared with conventional techniques. Single-center, prospective, nonrandomized, pilot study. Tertiary care hospital, university setting. Thirteen elective cardiac and vascular surgery patients. All participants received intra-arterial and pulmonary artery catheters. Minimally invasive oxygen consumption and cardiac output were measured using a metabolic module and lithium-calibrated arterial waveform analysis (LiDCO; LiDCO, London), respectively. For the minimally invasive method, Step 1 entered these minimally invasive measurements and arterial oxygen content into the Fick equation to calculate mixed venous oxygen content. Step 2 used an oxyhemoglobin curve spreadsheet to look up mixed venous oxygen saturation from the calculated mixed venous oxygen content. The conventional "invasive" technique used pulmonary artery intermittent thermodilution cardiac output, direct sampling of mixed venous and arterial blood, and the "reverse-Fick" method of calculating oxygen consumption. LiDCO overestimated thermodilution cardiac output by 26%. Pulmonary artery catheter-derived oxygen consumption underestimated metabolic module measurements by 27%. Mixed venous oxygen saturation differed between techniques; the calculated values underestimated the direct measurements by 12% to 26.3%, a statistically significant difference. The magnitude of the differences between the minimally invasive and invasive techniques was too great for the former to act as a surrogate of the latter and could adversely affect clinical decision making.
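
    A hedged sketch of the described two-step calculation, neglecting dissolved oxygen in Step 2 (the paper's oxyhemoglobin-curve spreadsheet lookup is more exact); units and names are illustrative.

        def mixed_venous_saturation(vo2, co, cao2, hb):
            # Step 1 (Fick): CvO2 = CaO2 - VO2 / (CO * 10), with VO2 in
            # mL/min, CO in L/min, and contents in mL O2/dL.
            # Step 2: invert CvO2 ~ 1.34 * Hb * SvO2 (dissolved O2
            # neglected) to get the saturation as a fraction.
            cvo2 = cao2 - vo2 / (co * 10.0)
            return cvo2 / (1.34 * hb)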

  6. Passive background correction method for spatially resolved detection

    Science.gov (United States)

    Schmitt, Randal L [Tijeras, NM; Hargis, Jr., Philip J.

    2011-05-10

    A method for passive background correction during spatially or angularly resolved detection of emission that is based on the simultaneous acquisition of both the passive background spectrum and the spectrum of the target of interest.

  7. An Ensemble Method for Spelling Correction in Consumer Health Questions.

    Science.gov (United States)

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

    Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29 achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant for spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features.
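
    The full ensemble is beyond the abstract, but its edit-distance-plus-frequency ingredient can be sketched in a few lines: a classic single-edit candidate generator ranked by corpus frequency (illustrative, not the authors' system).

        from collections import Counter

        def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
            # All strings one edit away: deletes, transposes,
            # replaces, and inserts.
            splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
            deletes = [L + R[1:] for L, R in splits if R]
            transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
            replaces = [L + c + R[1:] for L, R in splits if R for c in alphabet]
            inserts = [L + c + R for L, R in splits for c in alphabet]
            return set(deletes + transposes + replaces + inserts)

        def correct(word, freq: Counter):
            # Return the most frequent in-vocabulary candidate within
            # one edit, or the word itself if none is known.
            if word in freq:
                return word
            candidates = [w for w in edits1(word) if w in freq]
            return max(candidates, key=freq.get) if candidates else word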

  8. Automated general temperature correction method for dielectric soil moisture sensors

    Science.gov (United States)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks extensively use highly temperature-sensitive dielectric sensors due to their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and those that cover diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors which can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. In this work an automated general temperature correction method was developed by adapting previously developed temperature correction algorithms based on time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The rainy-day effects removal procedure for SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The temperature correction method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia. The soil moisture monitoring networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it has been found that actual daily average of SWC has been changed due to temperature effects of dielectric sensors with a

  9. Proposal on dynamic correction method for resonance ionization mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Noto, Takuma, E-mail: noto.takuma@d.mbox.nagoya-u.ac.jp; Tomita, Hideki [Nagoya University, Department of Quantum Engineering (Japan); Richter, Sven; Schneider, Fabian; Wendt, Klaus [Johannes Gutenberg University Mainz, Institute of Physics (Germany); Iguchi, Tetsuo; Kawarabayashi, Jun [Nagoya University, Department of Quantum Engineering (Japan)

    2013-04-15

    For high precision and accuracy in isotopic ratio measurement of transuranic elements using laser ablation assisted resonance ionization mass spectrometry, a dynamic correction method based on correlation of ion signals with energy and timing of each laser pulse was proposed. The feasibility of this dynamic correction method was investigated through the use of a programmable electronics device for fast acquisition of the energy and timing of each laser pulse.

  10. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    The work highlights the most important principles of software reliability management (SRM). The SRM concept constitutes a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double sorting technique evaluating the priority and complexity of a particular requirement. The method improves requirements correctness by identifying a larger number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.
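
    A hedged sketch of one plausible reading of the double sorting technique (the complexity metric itself is not given in the abstract): order requirements by priority, breaking ties by complexity, so review effort reaches the likeliest defect carriers first.

        def double_sort(requirements):
            # requirements: list of dicts with numeric 'priority' and
            # 'complexity' fields (hypothetical names); higher values
            # are reviewed first.
            return sorted(requirements,
                          key=lambda r: (r["priority"], r["complexity"]),
                          reverse=True)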

  11. CUDA accelerated method for motion correction in MR PROPELLER imaging.

    Science.gov (United States)

    Feng, Chaolu; Yang, Jingzhu; Zhao, Dazhe; Liu, Jiren

    2013-10-01

    In PROPELLER, raw data are collected in N strips, each passing through the center of k-space and consisting of Mx sampling points in the frequency encoding direction and L lines in the phase encoding direction. Phase correction, rotation correction, and translation correction are used to remove artifacts caused by physiological motion and physical movement, but their time complexities reach O(Mx×Mx×L×N), O(N×RA×Mx×L×(Mx×L+RN×RN)), and O(N×(RN×RN+Mx×L)), where RN×RN is the coordinate space each strip is gridded onto and RA denotes the rotation range. A CUDA accelerated method is proposed in this paper to improve their performance. Although our method is implemented on a general PC with a Geforce 8800GT and an Intel Core(TM)2 E6550 2.33GHz, it can directly run on more modern GPUs and achieve a greater speedup ratio without being changed. Experiments demonstrate that (1) our CUDA accelerated phase correction achieves exactly the same result as the non-accelerated implementation, (2) the results of our CUDA accelerated rotation correction and translation correction have only slight differences from those of their non-accelerated implementations, (3) images reconstructed from the motion correction results of the CUDA accelerated methods proposed in this paper satisfy the clinical requirements, and (4) the speedup ratio is close to 6.5.

  12. A corrected solid boundary treatment method for Smoothed Particle Hydrodynamics

    Science.gov (United States)

    Chen, Yun-sai; Zheng, Xing; Jin, Shan-qin; Duan, Wen-yang

    2017-04-01

    The Smoothed Particle Hydrodynamics (SPH) method adapts well to simulating free surface flow problems. However, some shortcomings of SPH are still under open discussion. This paper presents a corrected solid boundary handling method for weakly compressible SPH. The improved method benefits numerical stability and pressure distribution. Compared with other solid boundary handling methods, this corrected method is simpler for virtual ghost particle interpolation and the ghost particle evaluation relationship is clearer. Several numerical tests are given, such as dam breaking, solitary wave impact and sloshing tank waves. The results show that the corrected solid boundary processing method can remove the spurious oscillations of the pressure distribution when simulating problems with complex geometry boundaries.

  13. Correction method for influence of tissue scattering for sidestream dark-field oximetry using multicolor LEDs

    Science.gov (United States)

    Kurata, Tomohiro; Oda, Shigeto; Kawahira, Hiroshi; Haneishi, Hideaki

    2016-12-01

    We have previously proposed an estimation method for intravascular oxygen saturation (SO2) from images obtained by sidestream dark-field (SDF) imaging (we call it SDF oximetry) and investigated its fundamental characteristics by Monte Carlo simulation. In this paper, we propose a correction method for scattering by the tissue and performed experiments with turbid phantoms as well as Monte Carlo simulation experiments to investigate the influence of tissue scattering in SDF imaging. In the estimation method, we used modified extinction coefficients of hemoglobin called average extinction coefficients (AECs) to correct for the influence of the bandwidth of the illumination sources, the imaging camera characteristics, and the tissue scattering. We estimate the scattering coefficient of the tissue from the maximum slope of the pixel value profile along a line perpendicular to the blood vessel running direction in an SDF image and correct the AECs using the scattering coefficient. To evaluate the proposed method, we developed a trial SDF probe to obtain three-band images by switching multicolor light-emitting diodes and imaged turbid phantoms composed of agar powder, fat emulsion, and bovine blood-filled glass tubes. As a result, we found that the increase of scattering by the phantom body brought about a decrease of the AECs. The experimental results showed that the use of suitable values for the AECs led to more accurate SO2 estimation. We also confirmed the validity of the proposed correction method to improve the accuracy of SO2 estimation.
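
    The corrected AECs feed a standard oximetry step: with extinction coefficients at two or more wavelengths, the oxy- and deoxyhemoglobin contributions are separated by solving a small Beer-Lambert system. The sketch below shows the generic two-wavelength case (illustrative, not the paper's three-band pipeline).

        import numpy as np

        def so2_two_wavelengths(od, eps_hbo2, eps_hb):
            # od: measured optical densities at two wavelengths, (2,).
            # eps_hbo2, eps_hb: extinction coefficients (e.g., the
            # scattering-corrected AECs) at the same wavelengths.
            # Solve od = E @ [C_HbO2, C_Hb]; SO2 = C_HbO2 / total.
            E = np.column_stack([eps_hbo2, eps_hb])
            c_hbo2, c_hb = np.linalg.solve(E, od)
            return c_hbo2 / (c_hbo2 + c_hb)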

  14. System and method for generating motion corrected tomographic images

    Science.gov (United States)

    Gleason, Shaun S [Knoxville, TN; Goddard, Jr., James S.

    2012-05-01

    A method and related system for generating motion corrected tomographic images includes the steps of illuminating a region of interest (ROI) to be imaged, the ROI being part of an unrestrained live subject and having at least three spaced-apart optical markers thereon. Simultaneous images of the markers are acquired from different angles by a first and a second camera. Motion data comprising the 3D position and orientation of the markers relative to an initial reference position are then calculated. Motion corrected tomographic data are then obtained from the ROI using the motion data, and motion corrected tomographic images are obtained therefrom.
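
    One standard way to recover the rigid pose of such a marker set relative to its reference position is the SVD-based (Kabsch) alignment sketched below; this is a common technique and not necessarily the patented algorithm.

        import numpy as np

        def rigid_pose(ref, cur):
            # Least-squares rotation R and translation t mapping
            # reference marker positions (N,3) onto current positions.
            mu_r, mu_c = ref.mean(axis=0), cur.mean(axis=0)
            H = (ref - mu_r).T @ (cur - mu_c)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = mu_c - R @ mu_r
            return R, t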

  15. Simple spectral stray light correction method for array spectroradiometers

    Science.gov (United States)

    Zong, Yuqin; Brown, Steven W.; Johnson, B. Carol; Lykke, Keith R.; Ohno, Yoshi

    2006-02-01

    A simple, practical method has been developed to correct a spectroradiometer's response for measurement errors arising from the instrument's spectral stray light. By characterizing the instrument's response to a set of monochromatic laser sources that cover the instrument's spectral range, one obtains a spectral stray light signal distribution matrix that quantifies the magnitude of the spectral stray light signal within the instrument. By use of these data, a spectral stray light correction matrix is derived and the instrument's response can be corrected with a simple matrix multiplication. The method has been implemented and validated with a commercial CCD-array spectrograph. Spectral stray light errors after the correction was applied were reduced by 1-2 orders of magnitude, to a level of approximately 10^-5 for a broadband source measurement, equivalent to less than one count of the 15-bit-resolution instrument. This method is fast enough to be integrated into an instrument's software to perform real-time corrections with minimal effect on acquisition speed. Using instruments that have been corrected for spectral stray light, we expect significant reductions in overall measurement uncertainties in many applications in which spectrometers are commonly used, including radiometry, colorimetry, photometry, and biotechnology.
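
    Per the abstract, the correction reduces to one matrix multiplication. A minimal sketch under the usual formulation of this approach, where the measured spectrum is y = (I + D)x and D is the stray-light distribution matrix assembled from the laser-line measurements:

        import numpy as np

        def stray_light_correct(D, y):
            # D: spectral stray-light signal distribution matrix whose
            # columns hold the normalized stray-light response to each
            # monochromatic laser line; y: measured spectrum.
            C = np.linalg.inv(np.eye(D.shape[0]) + D)
            return C @ y   # corrected spectrum: one matrix multiply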

  16. An introduction to the locally-corrected Nystrom method

    CERN Document Server

    Peterson, Andrew; Balanis, Constantine

    2010-01-01

    This lecture provides a tutorial introduction to the Nyström and locally-corrected Nyström methods when used for the numerical solution of the common integral equations of two-dimensional electromagnetic fields. These equations exhibit kernel singularities that complicate their numerical solution. Classical and generalized Gaussian quadrature rules are reviewed. The traditional Nyström method is summarized, and applied to the magnetic field equation for illustration. To obtain high order accuracy in the numerical results, the locally-corrected Nyström method is developed and applied to both t
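
    A minimal sketch of the traditional Nyström idea in its simplest setting, a 1D Fredholm equation of the second kind x(s) + ∫ k(s,t)x(t)dt = b(s) with the integral replaced by a trapezoidal quadrature rule; the lecture's electromagnetic kernels and local corrections are far more involved.

        import numpy as np

        def nystrom_solve(kernel, rhs, a, b, n):
            # Discretize x(s) + int_a^b k(s,t) x(t) dt = rhs(s) on n
            # trapezoidal nodes; the quadrature turns the integral
            # equation into the linear system (I + K*w) x = rhs.
            t = np.linspace(a, b, n)
            w = np.full(n, (b - a) / (n - 1))
            w[0] = w[-1] = 0.5 * (b - a) / (n - 1)
            K = kernel(t[:, None], t[None, :])
            A = np.eye(n) + K * w[None, :]
            return t, np.linalg.solve(A, rhs(t))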

  17. SPECT deadtime count loss correction using monitor source method

    Directory of Open Access Journals (Sweden)

    Wendy Siman

    2014-03-01

    Purpose: Deadtime count loss (DTloss) correction using a monitor source (MS) requires: 1) uniform fractional DTloss across the FOV, 2) high-statistics MS images both with and without the object. The aims are validating condition 1 and developing a practical protocol that satisfies condition 2 with minimal additional study duration. Methods and Materials: SPECT images of non-uniform phantoms (4 GBq 99mTc) along with an MS (20 MBq 99mTc) attached to each detector were acquired multiple times over 48 hours in photopeak and scatter energy windows (EW) using a Siemens Symbia S and a GE D670. Planar images of the MS alone were acquired. Photopeak counts for the MS ROIs were > 100 kcts. Fractional DTloss uniformity across the FOV was evaluated by correlating count rates in different ROIs on projection images at different DTloss levels. The correction factor for each SPECT projection at every time point was calculated as the ratio of time-corrected MS count rates with and without the phantom. The DTloss-corrected projections for each SPECT acquisition were decay corrected to one time point. The correction accuracy was assessed against DTloss estimated by a paralyzable model. The accuracy of projection-based DTloss correction for SPECT was evaluated. A method to model projection DTloss based on a subset of measured projection DTloss was investigated. The relation of DTloss between photopeak and scatter EW was explored. Results: The fractional DTloss was uniform across the FOV (r > 0.99), validating condition 1. The MS method was accurate to > 99% for planar and SPECT. Measured DTloss from 3-to-5 projections/detector may be used to estimate DTloss with accuracy > 98% for all SPECT projections by modeling DTloss with the measured projection rate. The correction factors in photopeak and scatter EW are equivalent with > 99% agreement. Conclusion: The MS method can accurately correct planar and SPECT DTloss. Sparse sampling of the projection DTloss allows acquiring MS counts with high statistics with
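
    The per-projection correction described above reduces to a single ratio; a minimal sketch with illustrative names:

        def dtloss_correct(measured_counts, ms_cps_alone, ms_cps_with_object):
            # The monitor source sees the same fractional deadtime loss
            # as the object counts, so scaling by the loss-free/lossy
            # MS count-rate ratio restores the measured counts.
            return measured_counts * (ms_cps_alone / ms_cps_with_object)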

  18. Effect of hydrocarbon to nuclear magnetic resonance (NMR) logging in tight sandstone reservoirs and method for hydrocarbon correction

    Science.gov (United States)

    Xiao, Liang; Mao, Zhi-qiang; Xie, Xiu-hong

    2017-04-01

    ... 100 ms, 300 ms and 1000 ms are first used to separate the T2 distributions at residual oil saturation into 8 parts, and 8 pore component percentage compositions are calculated; second, an optimal T2 cutoff is determined to cut the T2 spectra of the fully brine saturated condition into two parts, where the left part (with short T2 times) represents the irreducible water and does not need to be corrected, so that only the shape of the right part of the T2 spectrum needs to be corrected. Third, the relationships among the amplitudes corresponding to T2 times larger than the optimal T2 cutoff and the 8 pore component percentage compositions are established, and they are used to predict corrected T2 amplitudes from NMR logging under residual oil saturation. Finally, the amplitudes corresponding to the left part and the estimated amplitudes are spliced together as the corrected NMR amplitudes, and a corrected T2 spectrum is obtained. The reliability of this method is verified by comparing the corrected results with the experimental measurements. This method is extended to field application: fully water saturated T2 distributions are extracted from field NMR logging and used to precisely evaluate the pore structure of hydrocarbon-bearing formations.

  19. Correcting Congenital Talipes Equinovarus in Children Using Three Different Corrective Methods

    Science.gov (United States)

    Chen, Wei; Pu, Fang; Yang, Yang; Yao, Jie; Wang, Lizhen; Liu, Hong; Fan, Yubo

    2015-01-01

    Equinus, varus, cavus, and adduction are typical signs of congenital talipes equinovarus (CTEV). Forefoot adduction remains difficult to address with previous corrective methods. This study aims to develop a corrective method to reduce the severity of forefoot adduction in CTEV children with moderate deformities during their walking age. The devised method was compared with 2 other common corrective methods to evaluate its effectiveness. A Dennis Brown (DB) splint, DB splint with orthopedic shoes (OS), and forefoot abduct shoes (FAS) with OS were, respectively, applied to 15, 20, and 18 CTEV children with moderate deformities who were scored at their first visit according to the Diméglio classification. The mean follow-up was 44 months and the orthoses were changed as the children grew. A 3D scanner and a high-resolution pedobarograph were used to record morphological characteristics and plantar pressure distribution. One-way MANOVA was used to compare the bimalleolar angle, bean-shape ratio, and pressure ratios in each study group. There were significant differences in the FAS+OS group compared to the DB and DB+OS groups (P < 0.05) for most measurements. The most salient differences were as follows: the FAS+OS group had a significantly greater bimalleolar angle (P < 0.05) and lower bean-shape ratio (P < 0.01) than the other groups; the DB+OS and FAS+OS groups had higher heel/forefoot and heel/LMF ratios (P < 0.01 and P < 0.001) than the DB group. FAS are critical for correcting improper forefoot adduction and OS are important for the correction of equinus and varus in moderately afflicted CTEV children. This study suggests that the use of FAS+OS may improve treatment outcomes for moderate CTEV children who do not show signs of serious torsional deformity. PMID:26181538

  20. Correcting Congenital Talipes Equinovarus in Children Using Three Different Corrective Methods: A Consort Study.

    Science.gov (United States)

    Chen, Wei; Pu, Fang; Yang, Yang; Yao, Jie; Wang, Lizhen; Liu, Hong; Fan, Yubo

    2015-07-01

    Equinus, varus, cavus, and adduction are typical signs of congenital talipes equinovarus (CTEV). Forefoot adduction remains difficult to address with previous corrective methods. This study aims to develop a corrective method to reduce the severity of forefoot adduction in CTEV children with moderate deformities during their walking age. The devised method was compared with 2 other common corrective methods to evaluate its effectiveness. A Dennis Brown (DB) splint, DB splint with orthopedic shoes (OS), and forefoot abduct shoes (FAS) with OS were, respectively, applied to 15, 20, and 18 CTEV children with moderate deformities who were scored at their first visit according to the Diméglio classification. The mean follow-up was 44 months and the orthoses were changed as the children grew. A 3D scanner and a high-resolution pedobarograph were used to record morphological characteristics and plantar pressure distribution. One-way MANOVA was used to compare the bimalleolar angle, bean-shape ratio, and pressure ratios in each study group. There were significant differences in the FAS+OS group compared to the DB and DB+OS groups (P < 0.05) for most measurements. The most salient differences were as follows: the FAS+OS group had a significantly greater bimalleolar angle (P < 0.05) and lower bean-shape ratio (P < 0.01) than the other groups; the DB+OS and FAS+OS groups had higher heel/forefoot and heel/LMF ratios (P < 0.01 and P < 0.001) than the DB group. FAS are critical for correcting improper forefoot adduction and OS are important for the correction of equinus and varus in moderately afflicted CTEV children. This study suggests that the use of FAS+OS may improve treatment outcomes for moderate CTEV children who do not show signs of serious torsional deformity.

  1. A vibration correction method for free-fall absolute gravimeters

    Science.gov (United States)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s‑2, plays an important role in metrology, geophysics, and geodesy. Absolute gravimetry has been experiencing rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that, for an identical group of drop data, the modified method proposed in this paper achieves better correction with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
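
    One plausible realization of a two-dimensional golden section search is alternating 1D golden-section minimizations over the two transfer-function parameters, as sketched below; the paper's exact search strategy and cost function are not given in the abstract.

        import math

        def golden_1d(f, lo, hi, tol=1e-6):
            # Standard golden-section minimization of a unimodal f.
            g = (math.sqrt(5) - 1) / 2
            a, b = lo, hi
            c, d = b - g * (b - a), a + g * (b - a)
            while b - a > tol:
                if f(c) < f(d):
                    b, d = d, c
                    c = b - g * (b - a)
                else:
                    a, c = c, d
                    d = a + g * (b - a)
            return 0.5 * (a + b)

        def golden_2d(f, xbounds, ybounds, sweeps=20):
            # Coordinate-wise golden-section search over two parameters.
            x, y = sum(xbounds) / 2, sum(ybounds) / 2
            for _ in range(sweeps):
                x = golden_1d(lambda u: f(u, y), *xbounds)
                y = golden_1d(lambda v: f(x, v), *ybounds)
            return x, y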

  2. Human motion correction and representation method from motion camera

    Directory of Open Access Journals (Sweden)

    Hong-Bo Zhang

    2017-06-01

    Motion estimation is a basic issue for many computer vision tasks, such as human-computer interaction, moving object detection and intelligent robots. In many practical scenes, object movement is accompanied by camera motion. Generally, motion descriptors based directly on optical flow are inaccurate and have low discrimination power. To this end, a novel motion correction method is proposed, together with a novel motion feature descriptor called the motion difference histogram (MDH) for recognising human action. Motion estimation results are corrected by background motion estimation, and the MDH encodes the motion difference between the background and the objects. Experimental results on videos shot with a moving camera show that the proposed motion correction method is effective and that the recognition accuracy of the MDH is better than that of state-of-the-art motion descriptors.

  3. An efficient optimization method to improve the measuring accuracy of oxygen saturation by using triangular wave optical signal

    Science.gov (United States)

    Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling

    2017-09-01

    Oxygen saturation is one of the important parameters for evaluating human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement, which employs an optical frequency division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. In comparison to the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method significantly reduces the measured RMSE to 0.0965. It is notable that the accuracy of oxygen saturation measurement has been improved significantly. The method can simplify the circuit and reduce the demands on components. Furthermore, it is a useful reference for improving the signal to noise ratio of other physiological signals.
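
    For background, the final SpO2 step in most photoplethysmographic schemes is the ratio-of-ratios with an empirically calibrated line; the coefficients below are common textbook placeholders, not this paper's calibration.

        def spo2_ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir, a=110.0, b=25.0):
            # R is the normalized red/infrared pulsatile ratio;
            # SpO2(%) ~ a - b*R with device-calibrated a, b
            # (placeholder values here).
            R = (ac_red / dc_red) / (ac_ir / dc_ir)
            return a - b * R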

  4. Analysis of Influence Factors on Nuclear magnetic Resonance Measurement and Correction Method in Igneous Rocks

    Science.gov (United States)

    Tan, M.

    2016-12-01

    Nuclear magnetic resonance (NMR) logging has significant advantages in reservoir identification, fluid typing, and the calculation of porosity and permeability in sedimentary rocks. However, NMR logging porosity is badly underestimated in igneous rocks, which has limited the application of NMR logging technology there. It is therefore necessary to analyze the factors influencing NMR measurement and to study the correction method and logging interpretation model for igneous rocks. Based on the characteristics of igneous rocks, influence factors are investigated from both experimental and theoretical perspectives. The laboratory analysis indicates that the relative error between NMR porosity and core porosity generally increases from acid tuff and rhyolite, through intermediate andesite and granite-porphyry, to mafic basalt. Moreover, the NMR porosity relative error increases with increasing concentrations of paramagnetic substances such as iron, manganese and nickel. Theoretically, the NMR relaxation mechanism indicates that the magnetic susceptibility leads to an additional internal magnetic field gradient, which makes the T2 relaxation distribution shift forward, and the porosity relative error also generally increases with rising magnetic susceptibility. Therefore, paramagnetic substances and high magnetic susceptibility lead to the NMR porosity underestimation. How to correct the NMR measurement in igneous rocks is another key problem. In view of the above analysis, an empirical formula for NMR porosity correction was built based on paramagnetic element content by using multiple regression. Moreover, we considered the influence of diffusion relaxation and improved the inversion algorithm, and the T2 distribution was subsequently corrected. A simulation experiment proves the correction method effective. In a case study, NMR logging data were reprocessed by this new method; the NMR porosity and permeability match well with core laboratory measurements, and the corrected T2 distributions shift
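
    The empirical correction described is a multiple regression of the porosity error on paramagnetic element contents; a hedged least-squares sketch, with the column layout and application step as illustrative assumptions:

        import numpy as np

        def fit_porosity_correction(elements, porosity_error):
            # elements: (n_samples, n_elements) paramagnetic contents
            # (e.g., Fe, Mn, Ni); porosity_error: core minus NMR
            # porosity. Ordinary least squares with an intercept.
            X = np.column_stack([np.ones(len(elements)), elements])
            beta, *_ = np.linalg.lstsq(X, porosity_error, rcond=None)
            return beta

        # hypothetical use: phi_corrected = phi_nmr + X_new @ beta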

  5. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  6. Stochastic methods for light propagation and recurrent scattering in saturated and nonsaturated atomic ensembles

    Science.gov (United States)

    Lee, Mark D.; Jenkins, Stewart D.; Ruostekoski, Janne

    2016-06-01

    We derive equations for the strongly coupled system of light and dense atomic ensembles. The formalism includes an arbitrary internal-level structure for the atoms and is not restricted to weak excitation of atoms by light. In the low-light-intensity limit for atoms with a single electronic ground state, the full quantum field-theoretical representation of the model can be solved exactly by means of classical stochastic electrodynamics simulations for stationary atoms that represent cold atomic ensembles. Simulations for the optical response of atoms in a quantum degenerate regime require one to synthesize a stochastic ensemble of atomic positions that generates the corresponding quantum statistical position correlations between the atoms. In the case of multiple ground levels or at light intensities where saturation becomes important, the classical simulations require approximations that neglect quantum fluctuations between the levels. We show how the model is extended to incorporate corrections due to quantum fluctuations that result from virtual scattering processes. In the low-light-intensity limit, we illustrate the simulations in a system of atoms in a Mott-insulator state in a two-dimensional optical lattice, where recurrent scattering of light induces strong interatomic correlations. These correlations result in collective many-atom subradiant and superradiant states and a strong dependence of the response on the spatial confinement within the lattice sites.

  7. Three Methods for Correction of Astigmatism during Phacoemulsification

    Science.gov (United States)

    Mohammad-Rabei, Hossein; Mohammad-Rabei, Elham; Espandar, Goldis; Javadi, Mohammad Ali; Jafarinasab, Mohammad Reza; Hashemian, Seyed Javad; Feizi, Sepehr

    2016-01-01

    Purpose: To compare the safety and efficacy of three methods for correcting pre-existing astigmatism during phacoemulsification. Methods: This prospective, comparative, non-randomized study was conducted from March 2010 to January 2011, and included patients with keratometric astigmatism ≥1.25 D undergoing cataract surgery. Astigmatism was corrected using the following approaches: limbal relaxing incisions (LRI) on the steep meridian, extension and suturing of the phaco incision created at the steep meridian (extended-on-axis incision, EOAI), and toric intraocular lens (tIOL) implantation. Keratometric and refractive astigmatism were evaluated 1, 8, and 24 weeks postoperatively. Results: Eighty-three eyes of 72 patients (35 male and 37 female) with a mean age of 62.4 ± 14.3 (range, 41-86) years were enrolled. The astigmatism was corrected by the LRI, EOAI and tIOL implantation methods in 17, 33 and 33 eyes, respectively. Postoperative uncorrected distance visual acuity (UDVA) was significantly improved in all three groups. The difference in postoperative UDVA was not statistically significant among the study groups throughout follow-up except at week 24, when UDVA was significantly better in the tIOL group than in the EOAI group (P = 0.024). There was no statistically significant difference in correction index or index of success among the three groups at week 24 (P = 0.085 and P = 0.085, respectively). Conclusion: There was no significant difference in astigmatism reduction among the three methods of astigmatism correction during phacoemulsification. Each of these methods can be used at the discretion of the surgeon. PMID:27413496

  8. Correcting FEM Stress Calculation Results Using the HSS Method

    Directory of Open Access Journals (Sweden)

    D. O. Bannikov

    2011-05-01

    The use of the Hot Spot Stress (HSS) method, by means of the linear surface extrapolation (LSE) approach, was analyzed for correcting Finite-Element Method (FEM) results in cases of stress singularity. The given example structures and test cases were computed using the design-and-computation software SCAD for Windows (version 11.3).
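
    The abstract does not state the extrapolation read-out points used, so the sketch below assumes the common IIW two-point rule (surface stresses read at 0.4t and 1.0t from the weld toe, t being the plate thickness); it only illustrates how an LSE hot-spot stress is formed from two FEM surface stresses taken away from the singularity.

```python
def hot_spot_stress(sigma_04t, sigma_10t):
    """Linear surface extrapolation of stresses at 0.4t and 1.0t to the weld toe (0t)."""
    return 1.67 * sigma_04t - 0.67 * sigma_10t

# FEM surface stresses of 210 MPa at 0.4t and 180 MPa at 1.0t (toy values)
print(hot_spot_stress(210.0, 180.0))   # ~230 MPa extrapolated hot-spot stress
```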

  9. Correction method for line extraction in vision measurement.

    Directory of Open Access Journals (Sweden)

    Mingwei Shao

    Over-exposure and perspective distortion are two of the main factors underlying inaccurate feature extraction. First, based on Steger's method, we propose a method for correcting curvilinear structures (lines) extracted from over-exposed images. A new line model based on the Gaussian line profile is developed, and its description in the scale space is provided. The line position is analytically determined by the zero crossing of its first-order derivative, and the bias due to convolution with the normal Gaussian kernel function is eliminated on the basis of the related description. The model considers over-exposure features and is capable of detecting the line position in an over-exposed image. Simulations and experiments show that the proposed method is not significantly affected by the exposure level and is suitable for correcting lines extracted from an over-exposed image. In our experiments, the corrected result is found to be more precise than the uncorrected result by around 45.5%. Second, we analyze perspective distortion, which is inevitable during line extraction owing to the projective camera model. The perspective distortion can be rectified on the basis of the bias introduced as a function of related parameters. The properties of the proposed model and its application to vision measurement are discussed. In practice, the proposed model can be adopted to correct line extraction according to specific requirements by employing suitable parameters.
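
    As a rough illustration of the detection principle named above (not of the paper's over-exposure bias correction), the following sketch locates a 1-D line centre at the zero crossing of the first derivative of the Gaussian-smoothed profile, with linear sub-pixel refinement; the sigma value and the synthetic profile are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def line_center(profile, sigma=2.0):
    """Line centre = zero crossing of the first derivative of the smoothed profile."""
    d1 = gaussian_filter1d(np.asarray(profile, float), sigma, order=1)
    idx = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0]     # + to - sign changes (ridges)
    i = idx[np.argmax(np.asarray(profile)[idx])]         # keep the strongest ridge
    return i + d1[i] / (d1[i] - d1[i + 1])               # linear sub-pixel refinement

x = np.arange(100)
profile = np.exp(-0.5 * ((x - 42.3) / 3.0) ** 2)         # synthetic Gaussian line
print(line_center(profile))                              # ~42.3
```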

  10. Different partial volume correction methods lead to different conclusions

    DEFF Research Database (Denmark)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using...... of cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas...... a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant...

  11. Suggested Methods for Preventing Core Saturation Instability in HVDC Transmission Systems

    Energy Technology Data Exchange (ETDEWEB)

    Norheim, Ian

    2002-07-01

    In this thesis, the HVDC-related phenomenon of core saturation instability and methods to prevent it are studied. There is reason to believe that this phenomenon caused the disconnection of the Skagerrak HVDC link on 10 August 1993. Internationally, core saturation instability has been reported at several HVDC schemes, and thorough, complex studies of the phenomenon have been performed. This thesis gives a detailed description of the phenomenon and suggests some interesting methods to prevent its development. Core saturation instability and its consequences can be described in a simplified way as follows. Assume that a fundamental harmonic component is present in the DC side current. Due to the coupling between the AC side and the DC side of the HVDC converter, a subsequent second harmonic positive-sequence current and DC currents will be generated on the AC side. The DC currents will cause saturation in the converter transformers. This will cause the magnetizing current to also have a second harmonic positive-sequence component. If a high second harmonic impedance is seen from the commutation bus, a high positive-sequence second harmonic component will be present in the commutation voltages. This will result in a relatively high fundamental frequency component in the DC side voltage. If the fundamental frequency impedance at the DC side is relatively low, the fundamental component in the DC side current may become larger than it originally was. In addition, the HVDC control system may contribute to the fundamental frequency component in the DC side voltage, and in this way make the system even more sensitive to core saturation instability. The large magnetizing currents that eventually flow on the AC side cause large zero-sequence currents in the neutral conductors of the AC transmission lines connected to the HVDC link. This may result in disconnection of the lines. Alternatively, the harmonics in the large magnetizing currents may cause

  12. A Spectral Deferred Correction Method for Fractional Differential Equations

    Directory of Open Access Journals (Sweden)

    Jia Xin

    2013-01-01

    A spectral deferred correction method is presented for initial value problems of fractional differential equations (FDEs) with the Caputo derivative. The method is constructed from the residual function and the error equation deduced from the Volterra integral equations equivalent to the FDEs. The proposed method allows one to use relatively few nodes to obtain high-accuracy numerical solutions of FDEs without the penalty of a huge computational cost due to the nonlocality of the Caputo derivative. Finally, preliminary numerical experiments are given to verify the efficiency and accuracy of this method.
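
    The sketch below shows classical explicit spectral deferred correction for an ordinary initial value problem y' = f(t, y); the paper's method replaces this local quadrature with the Volterra integral form appropriate to the Caputo derivative, which is omitted here. Node type (Chebyshev-Lobatto), node count, and sweep count are illustrative assumptions.

```python
import numpy as np

def lagrange_integration_matrix(tau):
    """S[m, j] = integral of the j-th Lagrange basis polynomial over [tau_m, tau_{m+1}]."""
    M = len(tau)
    S = np.zeros((M - 1, M))
    for j in range(M):
        c = np.array([1.0])                       # basis l_j in coefficient form
        for k in range(M):
            if k != j:
                c = np.polymul(c, [1.0, -tau[k]]) / (tau[j] - tau[k])
        ci = np.polyint(c)
        for m in range(M - 1):
            S[m, j] = np.polyval(ci, tau[m + 1]) - np.polyval(ci, tau[m])
    return S

def sdc_step(f, t0, y0, dt, M=5, sweeps=4):
    # Chebyshev-Lobatto nodes on [t0, t0 + dt]
    tau = t0 + dt * 0.5 * (1.0 - np.cos(np.pi * np.arange(M) / (M - 1)))
    S = lagrange_integration_matrix(tau)
    y = np.full(M, float(y0))
    for m in range(M - 1):                        # provisional forward-Euler solution
        y[m + 1] = y[m] + (tau[m + 1] - tau[m]) * f(tau[m], y[m])
    for _ in range(sweeps):                       # correction sweeps on the error equation
        F_old = np.array([f(tm, ym) for tm, ym in zip(tau, y)])
        for m in range(M - 1):
            y[m + 1] = (y[m]
                        + (tau[m + 1] - tau[m]) * (f(tau[m], y[m]) - F_old[m])
                        + S[m] @ F_old)           # spectral quadrature of the residual
    return y[-1]

# one SDC step for y' = -y, y(0) = 1, compared with the exact value
print(sdc_step(lambda t, y: -y, 0.0, 1.0, 0.5), np.exp(-0.5))
```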

  13. A rigid motion correction method for helical computed tomography (CT)

    Science.gov (United States)

    Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.

    2015-03-01

    We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion corrected images corresponded remarkably closely to images of the stationary phantom with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is not only applicable to dedicated CT imaging, but also to hybrid PET/CT and SPECT/CT, where it could also ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.

  14. Summary of the research methods of DNAPL-water interfacial area and DNAPL saturation in porous media

    Science.gov (United States)

    Li, M.; Wan, L.

    2016-12-01

    The dense non-aqueous phase liquid (DNAPL)-water interfacial area and DNAPL saturation are key factors in groundwater pollution remediation. The research methods of DNAPL-water interfacial area were summarized, including interfacial partitioning tracer tests, synchrotron X-ray microtomography and theoretical models, and the disparity of the study results with different methods was analyzed. The applications of DNAPL saturation measurement methods including tracer test method, light transmission visualization (LTV) and electrical resistivity tomography (ERT) were also summarized, especially the current applications of light transmission method in China. The partitioning tracer test, as an important method in the study of correlation between DNAPL-water interfacial areas and DNAPL saturation for porous media systems, should be given more attention in laboratory and field experiments.

  15. Laser Radar Through the Window (LRTW) Coordinate Correction Method

    Science.gov (United States)

    Hayden, Joseph Ethan (Inventor); Kubalak, David Albert (Inventor); Hadjimichael, Theodore John (Inventor); Eegholm, Bente Hoffmann (Inventor); Ohl, IV, Raymond George (Inventor); Telfer, Randal Crawford (Inventor); Coulter, Phillip (Inventor)

    2015-01-01

    A method for correcting measurements of points of interest measured by beams of radiation propagating through stratified media, including performing ray-tracing of at least one ray launched from a metrology instrument in the direction of an apparent point of interest, calculating the path length of the ray through the stratified medium, and determining the coordinates of the true position of the point of interest using the at least one path length and the direction of propagation of the ray.

  16. A New Color Correction Method for Underwater Imaging

    Science.gov (United States)

    Bianco, G.; Muzzupappa, M.; Bruno, F.; Garcia, R.; Neumann, L.

    2015-04-01

    Recovering correct or at least realistic colors of underwater scenes is a very challenging issue for imaging techniques, since illumination conditions in a refractive and turbid medium such as the sea are seriously altered. The need to correct the colors of underwater images or videos is an important task required in all image-based applications like 3D imaging, navigation, documentation, etc. Many image enhancement methods have been proposed in the literature for these purposes. The advantage of these methods is that they do not require knowledge of the medium's physical parameters, while some image adjustments can be performed manually (such as histogram stretching) or automatically, by algorithms based on criteria suggested by computational color constancy methods. One of the most popular criteria is based on the gray-world hypothesis, which assumes that the average of the captured image should be gray. An interesting application of this assumption is performed in the Ruderman opponent color space lαβ, used in a previous work for hue correction of images captured under colored light sources, which allows separating the luminance component of the scene from its chromatic components. In this work, we present the first proposal for color correction of underwater images using the lαβ color space. In particular, the chromatic components are changed by moving their distributions around the white point (white balancing), and histogram cutoff and stretching of the luminance component are performed to improve image contrast. The experimental results demonstrate the effectiveness of this method under the gray-world assumption and supposing uniform illumination of the scene. Moreover, due to its low computational cost, it is suitable for real-time implementation.

  17. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal; that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies, and the degree of improvement in classification accuracy obtained by the proposed method is assessed statistically using Kappa analysis.
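
    A minimal sketch of the sliding-window idea under stated assumptions: each sample is re-labelled with the class that minimizes the summed proximity-matrix distance to all labels in its window (a generalized median for nominal classes). The 3-class matrix and window size below are invented for illustration.

```python
import numpy as np

def correct_labels(labels, D, half_window=2):
    """Re-estimate each nominal label from its sliding window using proximity matrix D."""
    labels = np.asarray(labels)
    out = labels.copy()
    for i in range(len(labels)):
        window = labels[max(0, i - half_window): i + half_window + 1]
        costs = [D[c, window].sum() for c in range(D.shape[0])]   # cost of class c
        out[i] = int(np.argmin(costs))
    return out

# 3 classes; classes 0 and 1 are perceptually "close", class 2 is far from both
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.0],
              [4.0, 4.0, 0.0]])
noisy = [0, 0, 2, 0, 0, 1, 1, 2, 1, 1]
print(correct_labels(noisy, D))   # the isolated 2's are replaced by nearby classes
```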

  18. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided, and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
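
    The following sketch reproduces only the setup the Equation-Method addresses, not the correction itself: an OFDM symbol is formed by an IFFT, its peaks are clipped at a chosen clipping ratio, and the receiver's FFT shows the resulting in-band error (EVM). Subcarrier count, constellation, and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                                   # subcarriers (illustrative)
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)    # QPSK symbols
x = np.fft.ifft(X) * np.sqrt(N)                          # time-domain OFDM symbol

A = 1.4 * np.sqrt(np.mean(np.abs(x) ** 2))               # clipping threshold (ratio 1.4)
xc = np.where(np.abs(x) > A, A * x / np.abs(x), x)       # amplitude clipping, phase kept

Y = np.fft.fft(xc) / np.sqrt(N)                          # receiver's frequency-domain view
evm = np.sqrt(np.mean(np.abs(Y - X) ** 2) / np.mean(np.abs(X) ** 2))
print(f"{np.count_nonzero(np.abs(x) > A)} samples clipped, EVM = {evm:.3f}")
# The Equation-Method would treat the unknown pre-clipping peak amplitudes as
# variables and solve FFT-based simultaneous equations to recover X exactly.
```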

  19. The Etiology of Presbyopia, Contributing Factors, and Future Correction Methods

    Science.gov (United States)

    Hickenbotham, Adam Lyle

    Presbyopia has been a complicated problem for clinicians and researchers for centuries. Defining what constitutes presbyopia and what its primary causes are has long been a struggle for the vision and scientific community. Although presbyopia is a normal aging process of the eye, the continuous and gradual loss of accommodation is often dreaded and feared. If presbyopia were to be considered a disease, its global burden would be enormous, as it affects more than a billion people worldwide. In this dissertation, I explore factors associated with presbyopia and develop a model for explaining its onset. In this model, the onset of presbyopia is associated primarily with three factors: depth of focus, focusing ability (accommodation), and habitual reading (or task) distance. If any of these three factors could be altered sufficiently, the onset of presbyopia could be delayed or prevented. Based on this model, I then examine optical methods that would be effective in correcting presbyopia by expanding depth of focus. Two methods that have been shown to be effective at expanding depth of focus are utilizing a small pupil aperture and generating higher-order aberrations, particularly spherical aberration. I compare these two optical methods through the use of simulated designs, monitor testing, and visual performance metrics, and then apply them in subjects through an adaptive optics system that corrects aberrations using a wavefront aberrometer and deformable mirror. I then summarize my findings and speculate about the future of presbyopia correction.

  20. Research on evaluation method for water saturation of tight sandstone in Suxi region

    Science.gov (United States)

    Lv, Hong; Lai, Fuqiang; Chen, Liang; Li, Chao; Li, Jie; Yi, Heping

    2017-05-01

    The evaluation of irreducible water saturation is important for qualitative and quantitative prediction of residual oil distribution. However, the accuracy of both the experimental measurement of irreducible water saturation and its logging evaluation needs improvement. In this paper, multi-functional core flooding experiments and nuclear magnetic resonance centrifugation experiments were first carried out in the west of the Sulige gas field. Then, the influence of particle size, porosity, and permeability on irreducible water saturation was discussed. Finally, an evaluation model for irreducible water saturation was established and the evaluation was carried out. The results show that the results of the two experiments are both reliable. Irreducible water saturation is inversely proportional to median particle size, porosity, and permeability, and is most affected by median particle size. The irreducible water saturation of the dry layer is higher than that of the general reservoir; the worse the reservoir properties, the greater the irreducible water saturation. The test results show that the irreducible water saturation model can be used to evaluate water-flooded layers.

  1. An agent-based method for simulating porous fluid-saturated structures with indistinguishable components

    Science.gov (United States)

    Kashani, Jamal; Pettet, Graeme John; Gu, YuanTong; Zhang, Lihai; Oloyede, Adekunle

    2017-10-01

    Single-phase porous materials contain multiple components that intermingle down to the ultramicroscopic level. Although the structures of porous materials have been simulated with agent-based methods, the available methods continue to produce patterns of distinguishable solid and fluid agents, which do not represent materials with indistinguishable phases. This paper introduces a new agent (the hybrid agent) and a new category of rules (intra-agent rules) that can be used to create emergent structures that more accurately represent single-phase structures and materials. The novel hybrid agent carries the characteristics of the system's elements and is capable of changing within itself, while also responding to its neighbours as they change. As an example, the hybrid agent, under a one-dimensional cellular automata formalism in a two-dimensional domain, is used to generate patterns that demonstrate striking morphological similarities with porous saturated single-phase structures, where each agent of the "structure" carries a semi-permeability property and consists of both fluid and solid in space at all times. We conclude that the ability of the hybrid agent to change locally provides an enhanced protocol for simulating complex porous structures such as biological tissues, which could facilitate models for agent-based techniques and numerical methods.

  2. A Method to Modify/Correct the Performance of Amplifiers

    Directory of Open Access Journals (Sweden)

    Rohith Krishnan R

    2015-01-01

    The actual response of an amplifier may vary with the replacement of aged or damaged components, and this method compensates for that problem. Here we use the op-amp fixator as the design tool. The tool helps us to isolate the selected circuit component from the rest of the circuit, adjust its operating point to correct the performance deviations, and modify the circuit without changing other parts. A method to modify/correct the performance of amplifiers by properly redesigning the circuit is presented in this paper.

  3. Mapping topsoil field-saturated hydraulic conductivity from point measurements using different methods

    Directory of Open Access Journals (Sweden)

    Braud Isabelle

    2017-09-01

    Topsoil field-saturated hydraulic conductivity, Kfs, is a parameter that controls the partition of rainfall between infiltration and runoff and is a key parameter in most distributed hydrological models. There is a mismatch between the scale of local in situ Kfs measurements and the scale at which the parameter is required in models for regional mapping. Therefore, methods for extrapolating local Kfs values to larger mapping units are required. The paper explores the feasibility of mapping Kfs in the Cévennes-Vivarais region, in south-east France, using more easily available GIS data concerning geology and land cover. Our analysis makes use of a data set of infiltration measurements performed in the area and its vicinity over more than ten years. The data set is composed of Kfs values derived from infiltration measurements performed using various methods: Guelph permeameters, double-ring and single-ring infiltrometers, and tension infiltrometers. The different methods resulted in a large variation in Kfs, of up to several orders of magnitude. A method is proposed to pool the data from the different infiltration methods to create an equivalent set of Kfs. Statistical tests showed significant differences in Kfs distributions as a function of geological formation and land cover. Thus the mapping of Kfs at the regional scale was based on geological formations and land cover. This map was compared to a map based on the Rawls and Brakensiek (RB) pedotransfer function (mainly based on texture), and the two maps showed very different patterns. The RB values did not fit the observed equivalent Kfs at the local scale, highlighting that soil texture alone is not a good predictor of Kfs.

  4. Thermodynamic correction of numerical diffusion in WCSPH method

    Directory of Open Access Journals (Sweden)

    David López Gómez

    2015-01-01

    The SPH method has been used successfully for the numerical simulation of hydrodynamic flows. CEDEX has developed its own SPH model, SPHERIMENTAL, for quasi-compressible flow, with which several calibration studies have been performed. Problems of numerical diffusion have been observed in simulations of highly variable transient regimes; the diffusion increases the entropy of the system and damps the movement of the fluid. A dam-break test case (Lobovsky, 2013) was used, in which this effect was noted. To obtain an accurate numerical simulation it is necessary to take care of the boundary conditions, the correct spatial discretization of the fluid, and the use of a suitable turbulence model, but even so, excessive energy dissipation occurs. The causes of this problem are analyzed in this paper, and a correction is proposed.

  5. Simulation of electrochemical machining using the boundary element method with no saturation

    Science.gov (United States)

    Petrov, A. G.; Sanduleanu, S. V.

    2016-10-01

    The simulation of electrochemical machining (ECM) is based on determining the surface shape at each point in time. The change in the shape of the surface depends on the rate of the electrochemical dissolution of the metal (conducting material), which is assumed to be proportional to the electric field strength on the boundary of the workpiece. The potential of the electric field is a harmonic function outside the two domains, the tool electrode and the workpiece. Constant potentials are specified on the boundaries of the tool electrode and the workpiece. A scheme with no saturation is proposed for computing the strength of the electric field created by the potential difference on the boundary of the workpiece. The scheme converges exponentially in the number of grid elements on the workpiece boundary. Given the rate of electrochemical dissolution, the time-dependent workpiece boundary is found. The numerical solutions are compared with exact solutions, examples of the ECM simulation are discussed, and the results are compared with those obtained by other numerical methods and by ECM machines.

  6. Using automatic calibration method for optimizing the performance of Pedotransfer functions of saturated hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abdelbaki

    2016-06-01

    Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to auto-calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used for calibration. The results showed great improvement in the performance of the functions compared to the originally published functions. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of the error ratio (GSDER) were modified from the ranges (1.27–6.09) and (5.2–7.01) to (0.91–1.15) and (4.88–5.85), respectively. For group 2 PTFs, the GMER and GSDER values were modified from (0.3–1.55) and (5.9–12.38) to (1.00–1.03) and (5.5–5.9), respectively. For group 3 PTFs, the GMER and GSDER values were modified from (0.11–2.06) and (5.55–16.42) to (0.82–1.01) and (5.1–6.17), respectively. The results show that automatic calibration is an efficient and accurate method to enhance the performance of PTFs.
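
    The two statistics quoted above have standard definitions; a small sketch of their computation for one PTF follows (the Ksat values are toy numbers, not data from the study).

```python
import numpy as np

def gmer_gsder(k_predicted, k_measured):
    """Geometric mean error ratio and geometric standard deviation of the error ratio."""
    log_ratio = np.log(np.asarray(k_predicted) / np.asarray(k_measured))
    return np.exp(log_ratio.mean()), np.exp(log_ratio.std(ddof=1))

k_meas = np.array([12.0, 3.5, 40.0, 7.2, 0.9])    # measured Ksat (toy values)
k_pred = np.array([10.0, 4.1, 55.0, 6.0, 1.4])    # PTF-predicted Ksat (toy values)
gmer, gsder = gmer_gsder(k_pred, k_meas)
print(gmer, gsder)    # GMER = 1 means no bias; GSDER = 1 means perfect precision
```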

  7. Automatic correction method for AD converter precision based on Ethernet

    Directory of Open Access Journals (Sweden)

    NI Jifeng

    2013-10-01

    Ideal AD conversion should be a straight line through the origin in the Cartesian coordinate system. In practical engineering, however, the signal processing circuit, chip performance, and other factors affect the accuracy of conversion. Therefore, a linear fitting method is adopted to improve the conversion accuracy. An automatic correction of AD conversion based on Ethernet, implemented in software and hardware, is presented. With a single mouse click, the linearity correction of all AD converter channels can be completed automatically, and the error, SNR, and ENOB (effective number of bits) are calculated. The coefficients of the linear correction are then loaded into the onboard AD converter card's EEPROM. Compared with traditional methods, this method is more convenient, accurate, and efficient, and has broad application prospects.
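
    A hedged sketch of the linear-fitting idea described above: gain and offset per channel are estimated by least squares against a known reference ramp, and ENOB follows from SINAD by the standard formula. The reference levels, code model, and SINAD value below are invented for illustration.

```python
import numpy as np

def fit_linear_correction(codes, reference_volts):
    """Least-squares gain/offset so that gain * code + offset matches the reference."""
    gain, offset = np.polyfit(codes, reference_volts, 1)
    return gain, offset

def enob(sinad_db):
    """Effective number of bits from SINAD in dB (standard formula)."""
    return (sinad_db - 1.76) / 6.02

ref = np.linspace(-5.0, 5.0, 101)                               # applied reference ramp (V)
codes = ref / 5.0 * 32000 + 150 + np.random.randn(101) * 3.0    # imperfect 16-bit channel
g, o = fit_linear_correction(codes, ref)
print(g, o, enob(78.0))    # g and o are what would be stored in the card's EEPROM
```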

  8. The contour method cutting assumption: error minimization and correction

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL

    2010-01-01

    The recently developed contour method can measure a 2-D cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.

  9. Biogeosystem Technique as a method to correct the climate

    Science.gov (United States)

    Kalinitchenko, Valery; Batukaev, Abdulmalik; Batukaev, Magomed; Minkina, Tatiana

    2017-04-01

    can be produced; The less energy is consumed for climate correction, the better. The proposed algorithm was never discussed before because most of its ingredients were unenforceable. Now the possibility to execute the algorithm exists in the framework of our new scientific-technical branch - Biogeosystem Technique (BGT*). The BGT* is a transcendental (non-imitating natural processes) approach to soil processing, regulation of energy, matter, water fluxes and biological productivity of biosphere: intra-soil machining to provide the new highly productive dispersed system of soil; intra-soil pulse continuous-discrete plants watering to reduce the transpiration rate and water consumption of plants for 5-20 times; intra-soil environmentally safe return of matter during intra-soil milling processing and (or) intra-soil pulse continuous-discrete plants watering with nutrition. Are possible: waste management; reducing flow of nutrients to water systems; carbon and other organic and mineral substances transformation into the soil to plant nutrition elements; less degradation of biological matter to greenhouse gases; increasing biological sequestration of carbon dioxide in terrestrial system's photosynthesis; oxidizing methane and hydrogen sulfide by fresh photosynthesis ionized biologically active oxygen; expansion of the active terrestrial site of biosphere. The high biological product output of biosphere will be gained. BGT* robotic systems are of low cost, energy and material consumption. By BGT* methods the uncertainties of climate and biosphere will be reduced. Key words: Biogeosystem Technique, method to correct, climate

  10. A new boundary correction method for lung parenchyma

    Science.gov (United States)

    Liang, Junfang; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Toshiya, Nakaguchi

    2017-02-01

    In order to repair the boundary depressions caused by juxtapleural nodules and improve lung segmentation accuracy, we propose a new boundary correction method for lung parenchyma. Firstly, a top-hat filter is used to enhance image contrast; secondly, the Otsu algorithm is employed for image binarization; thirdly, a connected component labeling algorithm is utilized to remove the main trachea; fourthly, an initial mask image is obtained by a morphological region filling algorithm; fifthly, a boundary tracing algorithm is applied to extract the initial lung contour; afterwards, we design a sudden change degree algorithm to modify the initial lung contour; finally, the complete lung parenchyma image is obtained. The novelty is that the sudden change degree algorithm can detect inflection points more accurately than other methods, which contributes to repairing the lung contour efficiently. The experimental results show that the proposed method can incorporate juxtapleural nodules into the lung parenchyma effectively, with precision increased by 6.46% and 2.72%, respectively, compared with two other methods, providing favorable conditions for the accurate detection of pulmonary nodules and having important clinical value.
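
    The abstract does not define the sudden change degree exactly; a plausible minimal reading is scoring each contour point by the turning angle between its incoming and outgoing chords, so that sharp notches (such as the depressions left by juxtapleural nodules) stand out. The sketch below implements that reading on a toy contour.

```python
import numpy as np

def sudden_change_degree(contour, step=5):
    """Turning angle at each point of a closed contour: 0 = straight, large = sharp corner."""
    pts = np.asarray(contour, dtype=float)
    prev = np.roll(pts, step, axis=0) - pts
    nxt = np.roll(pts, -step, axis=0) - pts
    cos = (prev * nxt).sum(axis=1) / (np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1))
    return np.pi - np.arccos(np.clip(cos, -1.0, 1.0))

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
circle[100] *= 0.8                                # dent one point, mimicking a depression
scores = sudden_change_degree(circle)
print(np.argsort(scores)[-3:])                    # indices near the dent score highest
```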

  11. A method to calculate arterial and venous saturation from near infrared spectroscopy (NIRS)

    NARCIS (Netherlands)

    Menssen, J.J.M.; Colier, W.N.J.M.; Hopman, J.C.W.; Liem, D.; Korte, C.L. de

    2009-01-01

    For adequate development and functioning of the neonatal brain, sufficient oxygen (O2) should be available. With a fast sampling (f(s) > 50 Hz) continuous wave NIRS device, arterial (SaO2) and venous (SvO2) saturation can be measured using the physiological fluctuations in the oxyhemoglobin (O2Hb)

  12. Diagnostics and correction of disregulation states by physical methods

    OpenAIRE

    Gorsha, O. V.; Gorsha, V. I.

    2017-01-01

    Nicolaus Copernicus University, Toruń, Poland; Ukrainian Research Institute for Medicine of Transport, Odesa, Ukraine. Gorsha O. V., Gorsha V. I. Diagnostics and correction of disregulation states by physical methods. Toruń, Odesa, 2017.

  13. Systems and Methods for Correcting Optical Reflectance Measurements

    Science.gov (United States)

    Yang, Ye (Inventor); Soller, Babs R. (Inventor); Soyemi, Olusola O. (Inventor); Shear, Michael A. (Inventor)

    2014-01-01

    We disclose measurement systems and methods for measuring analytes in target regions of samples that also include features overlying the target regions. The systems include: (a) a light source; (b) a detection system; (c) a set of at least first, second, and third light ports which transmit light from the light source to a sample and receive and direct light reflected from the sample to the detection system, generating a first set of data including information corresponding to both an internal target within the sample and features overlying the internal target, and a second set of data including information corresponding to features overlying the internal target; and (d) a processor configured to remove information characteristic of the overlying features from the first set of data using the first and second sets of data to produce corrected information representing the internal target.

  14. [First Experience with Femtosecond Laser Presbyopia Correction Method INTRACOR].

    Science.gov (United States)

    Žiak, P; Lucká, K; Mojžiš, P; Katuščáková, I; Halička, J

    We report the first experience with the presbyopia-correcting femtosecond laser surgical procedure INTRACOR. This procedure is so far the only one performed purely intrastromally, without creating a wound connected to the corneal surface or anterior chamber. Presbyopia, caused by physiological aging and decreasing elasticity of the lens, impairs the patient's accommodative ability. With the INTRACOR method, presbyopia is corrected by steepening the corneal curvature in the central optical zone. The procedure is usually performed only in the non-dominant eye. The INTRACOR procedure was performed in 10 eyes of 10 patients (3 women and 7 men, aged 47-58 years). All procedures were performed with the femtosecond laser VICTUS (Bausch & Lomb, USA) in the non-dominant eye by an experienced surgeon, with a one-year follow-up. Mean monocular uncorrected near visual acuity (UNVA) improved from 0.2 ± 0.1 before surgery to 0.7 ± 0.3 after treatment (a mean improvement of four lines). Mean uncorrected binocular near visual acuity (UNBVA) improved from a mean preoperative value of 0.23 ± 0.08 to a mean postoperative value of 0.8 ± 0.22 (a mean improvement of about 5 lines). The mean monocular uncorrected distance visual acuity (UDVA) was 0.9 ± 0.1 before surgery and 0.8 ± 0.3 after treatment (an average loss of 1 line). The mean binocular uncorrected distance visual acuity improved from 1.0 ± 0.1 to 1.3 ± 0.3 after surgery. All patients had improvements in near vision. In 3 patients, monocular distance vision improved; in 6 patients, binocular distance vision improved. We observed a statistically significant decrease (a mean loss of 1 line) in monocular best corrected distance visual acuity (BCDVA). Patients subjectively reported satisfaction with the quality of vision achieved for near and distance, and high levels of spectacle independence under good lighting conditions. The results show that the INTRACOR method is well suited to low hyperopic patients, who because of good distance visual acuity are

  15. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  16. Correction: Electrochemical Investigation of the Corrosion of Different Microstructural Phases of X65 Pipeline Steel under Saturated Carbon Dioxide Conditions. Materials 2015, 8, 2635–2649

    Directory of Open Access Journals (Sweden)

    Yuanfeng Yang

    2015-12-01

    In the published manuscript “Electrochemical Investigation of the Corrosion of Different Microstructural Phases of X65 Pipeline Steel under Saturated Carbon Dioxide Conditions. [...]

  17. Comparison of saturated areas mapping methods in the Jizera Mountains, Czech Republic

    Directory of Open Access Journals (Sweden)

    Kulasova Alena

    2014-06-01

    Understanding and modelling the processes of flood runoff generation is still a challenge in catchment hydrology. In particular, there are issues about how best to represent the effects of the antecedent state of saturation of a catchment on runoff formation and flood hydrographs. This paper reports on the experience of mapping saturated areas using water table levels measured by piezometers and more qualitative assessments of the state of moisture at or immediately below the soil surface, to provide information that can usefully condition model predictions. Vegetation patterns can also provide useful indicators of runoff source areas, but integrated over much longer periods of time. In this way, it might be more likely that models will get the right predictions for the right reasons.

  18. Solving Fractional Partial Differential Equations with Corrected Fourier Series Method

    Directory of Open Access Journals (Sweden)

    Nor Hafizah Zainal

    2014-01-01

    The corrected Fourier series (CFS) is proposed for solving partial differential equations (PDEs) with fractional time derivatives on a finite domain. In previous work, we solved partial differential equations using the corrected Fourier series. The fractional derivatives are described in the Riemann sense. Some numerical examples are presented to show the solutions.

  19. Synthesis of high saturation magnetic iron oxide nanomaterials via low temperature hydrothermal method

    Energy Technology Data Exchange (ETDEWEB)

    Bhavani, P.; Rajababu, C.H. [Department of Materials Science & Nanotechnology, Yogivemana University, Vemanapuram 516003, Kadapa (India); Arif, M.D. [Environmental Magnetism Laboratory, Indian Institute of Geomagnetism (IIG), Navi Mumbai 410218, Mumbai (India); Reddy, I. Venkata Subba [Department of Physics, Gitam University, Hyderabad Campus, Rudraram, Medak 502329 (India); Reddy, N. Ramamanohar, E-mail: manoharphd@gmail.com [Department of Materials Science & Nanotechnology, Yogivemana University, Vemanapuram 516003, Kadapa (India)

    2017-03-15

    Iron oxide nanoparticles (IONPs) were synthesized through a simple low-temperature hydrothermal approach to obtain high saturation magnetization properties. Two series of iron precursors (sulfates and chlorides) were used in the synthesis process, varying the reaction temperature at constant pH. The X-ray diffraction patterns indicate the inverse spinel structure of the synthesized IONPs. Field emission scanning electron microscopy and high resolution transmission electron microscopy studies revealed that the particles prepared using iron sulfate at 130 °C consisted of a mixture of spherical (16–40 nm) and rod (diameter ~20–25 nm, length <100 nm) morphologies, while the IONPs synthesized from iron chlorides were well-distributed spherical particles in the size range 5–20 nm. On the other hand, the IONPs synthesized at a reaction temperature of 190 °C had spherical (16–46 nm) morphology in both series. The band gap values of the IONPs were calculated from the measured optical absorption spectra of the samples. The IONPs synthesized using iron sulfate at 130 °C exhibited a high saturation magnetization (M_S) of 103.017 emu/g and a low remanent magnetization (M_r) of 0.22 emu/g with a coercivity (H_c) of 70.9 Oe, which may be attributed to the smaller magnetic domains (d_m) and dead magnetic layer thickness (t). Highlights: • Comparison of iron oxide materials prepared with Fe2+/Fe3+ sulfates and chlorides at different temperatures. • We prepared superparamagnetic and soft ferromagnetic magnetite nanoparticles. • We report higher saturation magnetization with lower coercivity.

  20. Method of coupling 1-D unsaturated flow with 3-D saturated flow on large scale

    Directory of Open Access Journals (Sweden)

    Yan ZHU

    2011-12-01

    A coupled unsaturated-saturated water flow numerical model was developed. The water flow in the unsaturated zone is treated as one-dimensional vertical flow, which varies in the horizontal direction according to the groundwater table and the atmospheric boundary conditions. The groundwater flow is treated as three-dimensional. The recharge flux to groundwater from soil water is the bottom flux for the numerical simulation of the unsaturated zone and the upper flux for the groundwater simulation; it connects and unites the two separate water flow systems. The soil water equation is solved based on the assumed groundwater table and the subsequently predicted recharge flux. Then, the groundwater equation is solved with the predicted recharge flux as the upper boundary condition. Iteration continues until the discrepancy between the assumed and calculated groundwater nodal heads reaches a specified accuracy. Illustrative examples with different water flow scenarios regarding the Dirichlet boundary condition, the Neumann boundary condition, the atmospheric boundary condition, and the source or sink term were calculated with the coupled model. The results are compared with those of other models, including Hydrus-1D, SWMS-2D, and FEFLOW, and demonstrate that the coupled model is effective and accurate and can significantly reduce the computational time for the large numbers of nodes in saturated-unsaturated water flow simulation.
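
    A skeleton of the coupling loop described above, with the two solvers left as hypothetical placeholders (solve_unsaturated standing in for the 1-D vertical soil-water columns and solve_groundwater for the 3-D model); only the iteration logic is sketched.

```python
def coupled_step(head, columns, solve_unsaturated, solve_groundwater,
                 tol=1e-4, max_iter=50):
    """One time step of the iterative unsaturated/saturated coupling described above."""
    for _ in range(max_iter):
        # 1-D vertical soil-water solve per column; bottom boundary = assumed water table
        recharge = [solve_unsaturated(col, water_table=h)
                    for col, h in zip(columns, head)]
        # 3-D groundwater solve with the predicted recharge as the upper flux
        new_head = solve_groundwater(recharge)
        if max(abs(a - b) for a, b in zip(new_head, head)) < tol:
            return new_head                       # assumed and calculated heads agree
        head = new_head                           # iterate with the updated water table
    raise RuntimeError("coupling iteration did not converge")
```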

  1. The effect of different corrective feedback methods on the outcome and self confidence of young athletes

    National Research Council Canada - National Science Library

    Tzetzis, George; Votsis, Evandros; Kourtessis, Thomas

    2008-01-01

    This experiment investigated the effects of three corrective feedback methods, using different combinations of correction, or error cues and positive feedback for learning two badminton skills with different difficulty...

  2. Effect of methods of myopia correction on visual acuity, contrast sensitivity, and depth of focus

    NARCIS (Netherlands)

    Nio, YK; Jansonius, NM; Wijdh, RHJ; Beekhuis, WH; Worst, JGF; Noorby, S; Kooijman, AC

    2003-01-01

    Purpose. To psychophysically measure spherical and irregular aberrations in patients with various types of myopia correction. Setting: Laboratory of Experimental Ophthalmology, University of Groningen, Groningen, The Netherlands. Methods: Three groups of patients with low myopia correction

  3. Evaluating a Combined Bias Correction and Stochastic Downscaling Method

    Science.gov (United States)

    Volosciuk, Claudia; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin

    2016-04-01

    Much of our knowledge about future changes in precipitation relies on global (GCM) and/or regional climate models (RCM) that have resolutions which are much coarser than typical spatial scales of extreme precipitation. The major problems with these projections are both climate model biases and the scale gap between grid box and point scale. Wong et al. presented a first attempt to jointly bias correct and downscale precipitation at daily scales. This approach however relied on spectrally nudged RCM simulations and was not able to post-process GCM biases. Previously, we have presented an extension of this approach that separates the downscaling from the bias correction and in principle is applicable to free running RCMs, such as those available from ENSEMBLES or CORDEX. In a first step, we bias correct the RCMs (EURO-CORDEX) against gridded observational datasets (e.g., E-OBS) at the same scale using a quantile mapping approach that relies on distribution transformation. To correct the whole precipitation distribution including extreme tails we apply a mixture distribution of a gamma distribution for the precipitation mass and a generalized Pareto distribution for the extreme tail. In a second step, we bridge the scale gap: we add small scale variability to the bias corrected precipitation time series using a vector generalized linear gamma model (VGLM gamma). To calibrate the VGLM gamma model we determine the statistical relationship between precipitation observations on different scales, i.e. between gridded (e.g., E-OBS) and station (ECA&D) observations. Here we present a comprehensive evaluation of this approach against 86 weather stations in Europe based on the VALUE perfect predictor experiment, including a comparison with standard bias correction techniques.
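
    As a sketch of the first step only (distribution-wise quantile mapping of wet-day amounts with a plain gamma fit), the code below maps model values through fitted CDFs; the paper's full mixture adds a generalized Pareto tail for extremes and a separate VGLM-gamma downscaling step, both omitted here.

```python
import numpy as np
from scipy import stats

def gamma_quantile_map(values, model_ref, obs_ref):
    """Map model values through fitted gamma CDFs: F_obs^-1(F_mod(x))."""
    a_m, _, s_m = stats.gamma.fit(model_ref, floc=0)
    a_o, _, s_o = stats.gamma.fit(obs_ref, floc=0)
    return stats.gamma.ppf(stats.gamma.cdf(values, a_m, scale=s_m), a_o, scale=s_o)

rng = np.random.default_rng(1)
obs = rng.gamma(0.8, 6.0, 3000)       # "observed" wet-day precipitation (toy, mm)
mod = rng.gamma(0.9, 3.5, 3000)       # biased model precipitation (toy, mm)
corrected = gamma_quantile_map(mod, mod, obs)
print(mod.mean(), corrected.mean(), obs.mean())   # corrected mean ~ observed mean
```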

  4. [Bifocal contact lenses as a correction method in presbyopia].

    Science.gov (United States)

    Avetisov, S E; Rybakova, E G; Egorova, G B; Churkina, M N; Borodina, N V; Boev, V I

    2003-01-01

    A study aimed at assessing the efficiency of presbyopia correction by bifocal contact lenses (BCL) was undertaken; it envisaged a comprehensive evaluation of subjective data provided by patients and measurements of a number of functional parameters of visual quality for far and near, including mono- and binocular measurements with BCL of different constructions versus maximal sphero-cylindrical spectacle correction for far and for near. Soft Acuvue Bifocal BCL, soft and rigid BCL manufactured in the optical-and-mechanical laboratory of the Research Institute for Eye Disease of the Russian Academy of Medical Sciences, and Russian-made and imported bifocal soft and rigid lenses were used in the study. A reduction in contrast sensitivity (mainly at high frequencies) by 7% with Russian-made BCL, by 12.5% with Acuvue Bifocal BCL, by 8.7% with monofocal BCL, and by 13.4% with the "mono-vision" system was registered. A decrease in visual working ability by 13% with soft bifocal Russian-made BCL, by 17.3% with Acuvue Bifocal BCL, and by 20.7% with the "mono-vision" system was detected versus the spectacle correction. A reduction by 25% was noted in the stereoscopic vision indices with the "mono-vision" system. A study of sensitivity to dazzling did not show any statistically reliable differences between the various correction types.

  5. Point of data saturation was assessed using resampling methods in a survey with open-ended questions.

    Science.gov (United States)

    Tran, Viet-Thi; Porcher, Raphael; Falissard, Bruno; Ravaud, Philippe

    2016-12-01

    To describe methods to determine sample sizes in surveys using open-ended questions and to assess how resampling methods can be used to determine data saturation in these surveys. We searched the literature for surveys with open-ended questions and assessed the methods used to determine sample size in 100 studies selected at random. Then, we used Monte Carlo simulations on data from a previous study on the burden of treatment to assess the probability of identifying new themes as a function of the number of patients recruited. In the literature, 85% of researchers used a convenience sample, with a median size of 167 participants (interquartile range [IQR] = 69-406). In our simulation study, the probability of identifying at least one new theme for the next included subject was 32%, 24%, and 12% after the inclusion of 30, 50, and 100 subjects, respectively. The inclusion of 150 participants at random resulted in the identification of 92% themes (IQR = 91-93%) identified in the original study. In our study, data saturation was most certainly reached for samples >150 participants. Our method may be used to determine when to continue the study to find new themes or stop because of futility. Copyright © 2016 Elsevier Inc. All rights reserved.
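
    The Monte Carlo idea is easy to reproduce on coded data: resample participants with replacement and count how often participant n+1 contributes a theme not seen among the first n. The theme sets below are toy data, not the burden-of-treatment study's.

```python
import numpy as np

def prob_new_theme(theme_sets, n, runs=2000, rng=None):
    """Probability that participant n+1 (resampled) adds a theme unseen in the first n."""
    rng = rng or np.random.default_rng()
    hits = 0
    for _ in range(runs):
        sample = rng.choice(len(theme_sets), size=n + 1, replace=True)
        seen = set().union(*(theme_sets[i] for i in sample[:-1]))
        hits += not theme_sets[sample[-1]] <= seen
    return hits / runs

rng = np.random.default_rng(2)
themes = [set(rng.choice(25, size=rng.integers(2, 6), replace=False))  # toy coding
          for _ in range(40)]                                          # 40 participants
for n in (10, 20, 30):
    print(n, prob_new_theme(themes, n, rng=rng))   # decays toward saturation
```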

  6. A new method for calculation of water saturation in shale gas reservoirs using V_P-to-V_S ratio and porosity

    Science.gov (United States)

    Liu, Kun; Sun, Jianmeng; Zhang, Hongpan; Liu, Haitao; Chen, Xiangyang

    2018-02-01

    Total water saturation is an important parameter for calculating the free gas content of shale gas reservoirs. Owing to the limitations of the Archie formula and its extended solutions in zones rich in organic or conductive minerals, a new method was proposed to estimate total water saturation according to the relationship between total water saturation, V_P-to-V_S ratio and total porosity. Firstly, the ranges of the relevant parameters in the viscoelastic BISQ model in shale gas reservoirs were estimated. Then, the effects of relevant parameters on the V_P-to-V_S ratio were simulated based on the partially saturated viscoelastic BISQ model. These parameters were total water saturation, total porosity, permeability, characteristic squirt-flow length, fluid viscosity and sonic frequency. The simulation results showed that the main factors influencing the V_P-to-V_S ratio were total porosity and total water saturation. When the permeability and the characteristic squirt-flow length changed slightly for a particular shale gas reservoir, their influences could be neglected. Then an empirical equation for total water saturation with respect to total porosity and V_P-to-V_S ratio was obtained according to the experimental data. Finally, the new method was successfully applied to estimate total water saturation in a sequence formation of shale gas reservoirs. Practical applications have shown good agreement with the results calculated by the Archie model.
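
    The paper's empirical equation and its coefficients are not given in the abstract; the sketch below merely shows fitting a hypothetical linear form Sw = a + b·φ + c·(V_P/V_S) to toy laboratory points by least squares.

```python
import numpy as np

phi = np.array([0.03, 0.05, 0.06, 0.08, 0.10, 0.12])     # total porosity (toy)
vpvs = np.array([1.58, 1.62, 1.60, 1.67, 1.71, 1.76])    # V_P/V_S ratio (toy)
sw = np.array([0.72, 0.61, 0.58, 0.49, 0.41, 0.35])      # lab total water saturation (toy)

A = np.column_stack([np.ones_like(phi), phi, vpvs])      # design matrix [1, phi, Vp/Vs]
coef, *_ = np.linalg.lstsq(A, sw, rcond=None)
print(coef)              # a, b, c of the fitted surface
print(A @ coef - sw)     # residuals of the toy fit
```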

  7. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function....... In this paper we explain the successful bias field correction properties of N3 by showing that it implicitly uses the same generative models and computational strategies as expectation maximization (EM) based bias field correction methods. We demonstrate experimentally that purely EM-based methods are capable...... of producing bias field correction results comparable to those of N3 in less computation time....

  8. Band extension in digital methods of transfer function determination – signal conditioners asymmetry error corrections

    Directory of Open Access Journals (Sweden)

    Zbigniew Staroszczyk

    2014-12-01

    Full Text Available [b]Abstract[/b]. In the paper, the calibrating method for error correction in transfer function determination with the use of DSP has been proposed. The correction limits/eliminates influence of transfer function input/output signal conditioners on the estimated transfer functions in the investigated object. The method exploits frequency domain conditioning paths descriptor found during training observation made on the known reference object.[b]Keywords[/b]: transfer function, band extension, error correction, phase errors

  9. A multipoint correction method for environmental temperature changes in airborne double-antenna microwave radiometers.

    Science.gov (United States)

    Sun, Jian; Zhao, Kai; Jiang, Tao

    2014-04-29

    This manuscript describes a new type of Ka-band airborne double-antenna microwave radiometer (ADAMR) designed for detecting atmospheric supercooled water content (SCWC). The source of the measurement error is investigated by analyzing the model of the system gain factor and the principle of the auto-gain compensation technique utilized in the radiometer. A multipoint temperature correction method based on the two-point calibration method is then proposed for this radiometer. The multipoint temperature correction method can eliminate the effect of changes in environmental temperature by establishing the relationship between the measurement error and the physical temperatures of the temperature-sensitive units. To demonstrate the feasibility of the correction method, a long-term outdoor temperature experiment was carried out. The multipoint temperature correction equations are obtained using the least squares regression method. The comparison results show that the measuring accuracy of the radiometer can be increased more effectively by using the multipoint temperature correction method.
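
    A sketch of the regression step under stated assumptions: the measured brightness error is regressed on the physical temperatures of the temperature-sensitive units by least squares, and the modelled error is subtracted from later readings. The two-unit toy calibration data are invented.

```python
import numpy as np

def fit_correction(T_units, error):
    """T_units: (n_samples, n_units) unit temperatures; error: measured bias. Least squares."""
    A = np.column_stack([np.ones(len(error)), T_units])
    coef, *_ = np.linalg.lstsq(A, error, rcond=None)
    return coef                                   # intercept plus one weight per unit

def apply_correction(measurement, T_units_now, coef):
    return measurement - (coef[0] + np.dot(T_units_now, coef[1:]))

rng = np.random.default_rng(3)
T = 15.0 + 20.0 * rng.random((200, 2))            # toy temperatures of two internal units
err = 0.5 + 0.08 * T[:, 0] - 0.03 * T[:, 1] + rng.normal(0, 0.05, 200)
coef = fit_correction(T, err)
print(apply_correction(250.0, [28.0, 22.0], coef))   # corrected brightness reading (K)
```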

  10. A new digitized reverse correction method for hypoid gears based on a one-dimensional probe

    Science.gov (United States)

    Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo

    2017-12-01

    In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction method of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of the method of numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by the organic combination of numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.

  11. Consideration of Correction Method for Compressive Strength of Core Specimen Within Deformed Bar

    OpenAIRE

    大塚, 秀三; 中田, 善久; 大木, 崇輔

    2013-01-01

    On rare occasions, core specimens drilled from structural concrete cut through deformed bars. However, there is no correction method for the compressive strength of core specimens containing deformed bars that corresponds to current concrete. This study proposes a simple correction method for the compressive strength of core specimens containing deformed bars, regardless of the type of cement.

  12. General method of boundary correction in kernel regression estimation

    African Journals Online (AJOL)

    Kernel estimators of both density and regression functions are not consistent near the finite end points of their supports. In other words, boundary effects seriously affect the performance of these estimators. In this paper, we combine the transformation and the reflection methods in order to introduce a new general method of ...

  13. A Ring Artifact Correction Method: Validation by Micro-CT Imaging with Flat-Panel Detectors and a 2D Photon-Counting Detector

    Directory of Open Access Journals (Sweden)

    Mohamed Elsayed Eldib

    2017-01-01

    Full Text Available We introduce an efficient ring artifact correction method for cone-beam computed tomography (CT). In the first step, we correct the defective pixels whose values are close to zero or saturated in the projection domain. In the second step, we compute the mean value at each detector element along the view angle in the sinogram to obtain a one-dimensional (1D) mean vector, and we then compute the 1D correction vector by taking the inverse of the mean vector. We multiply the correction vector with the sinogram row by row over all view angles. In the third step, we apply a Gaussian filter to the difference image between the original CT image and the corrected CT image obtained in the previous step. The filtered difference image is added to the corrected CT image to compensate for the possible contrast anomaly that may appear due to the contrast change in the sinogram after removing stripe artifacts. We applied the proposed method to projection data acquired by two flat-panel detectors (FPDs) and a silicon-based photon-counting X-ray detector (PCXD). Micro-CT imaging experiments on phantoms and a small animal have shown that the proposed method can greatly reduce ring artifacts regardless of detector type. Despite the great reduction of ring artifacts, the proposed method does not compromise the original spatial resolution and contrast.
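
    The second and third steps translate directly into array operations. The following is a minimal sketch of that part of the pipeline, assuming a sinogram arranged as (views x detector elements); the defective-pixel repair of step one is omitted and the Gaussian width is a placeholder, not a value from the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def correct_rings(sinogram):
            """Step 2: the mean along the view angle gives a 1D vector of
            per-detector gains; its inverse is the correction vector applied
            row by row. Rescaling by the global mean preserves intensity."""
            mean_vec = sinogram.mean(axis=0)
            corr_vec = np.where(mean_vec > 0, 1.0 / mean_vec, 0.0)
            return sinogram * corr_vec * mean_vec.mean()

        def compensate_contrast(ct_original, ct_corrected, sigma=5.0):
            """Step 3: low-pass the difference image and add it back so that
            slowly varying contrast lost in the sinogram correction returns."""
            diff = ct_original - ct_corrected
            return ct_corrected + gaussian_filter(diff, sigma)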

  14. Saturated salt method determination of hysteresis of Pinus sylvestris L. wood for 35 ºC isotherms

    Directory of Open Access Journals (Sweden)

    García Esteban, L.

    2004-12-01

    Full Text Available The saturated salts method was used in this study to quantify hysteresis in Pinus sylvestris L. wood, in an exercise that involved plotting the 35 ºC desorption and sorption isotherms. Nine salts were used, all of which establish stable and known relative humidity values when saturated in water. The wood was kept at the relative humidity generated by each of these salts until the equilibrium moisture content (EMC) was reached, both in the water-loss (desorption) and the water-uptake (sorption) processes. The Guggenheim method was used to fit the values obtained to the respective curves. Hysteresis was evaluated in terms of the hysteresis coefficient, for which a mean value of 0.87 was found.


  15. Robust scatter correction method for cone-beam CT using an interlacing-slit plate

    CERN Document Server

    Huang, Kuidong; Zhang, Dinghua; Zhang, Hua; Shi, Wenlong

    2015-01-01

    Cone-beam computed tomography (CBCT) has been widely used in medical imaging and industrial nondestructive testing, but the presence of scattered radiation causes significant reduction of image quality. In this article, a robust scatter correction method for CBCT using an interlacing-slit plate (ISP) is developed for convenient practical use. First, a Gaussian filtering method is proposed to compensate for the missing data of the inner scatter image, while avoiding excessively large calculated inner-scatter values and smoothing the inner scatter field. Second, an interlacing-slit scan without detector gain correction is carried out to enhance the practicality and convenience of the scatter correction method. Finally, a denoising step for scatter-corrected projection images is added to the process flow to control noise amplification. The experimental results show that the improved method not only makes the scatter correction more robust and convenient, but also achieves good quality in the scatter-corrected projection images.

  16. [Method of correcting sensitivity nonuniformity using Gaussian distribution on 3.0 Tesla abdominal MRI].

    Science.gov (United States)

    Hayashi, Norio; Miyati, Tosiaki; Takanaga, Masako; Ohno, Naoki; Hamaguchi, Takashi; Kozaka, Kazuto; Sanada, Shigeru; Yamamoto, Tomoyuki; Matsui, Osamu

    2011-01-01

    In parallel magnetic resonance imaging (MRI) with a phased-array coil, sensitivity falls significantly in the direction perpendicular to the coil arrangement. Moreover, in 3.0 tesla (3T) abdominal MRI, image quality is reduced by changes in relaxation times, enhanced magnetic susceptibility effects, etc. In a 3T MRI, which has a high resonant frequency, the signal from the deep (central) part of the trunk is reduced. SCIC, a standard sensitivity correction process, corrects inadequately, such that edges are emphasized while the central part remains under-corrected. Therefore, we considered a sensitivity nonuniformity correction for 3T abdominal MR images based on a Gaussian distribution. The correction processing consisted of the following steps: 1) the center of gravity of the human-body region in the abdominal MR image was calculated; 2) a correction coefficient map was created around the center of gravity using a Gaussian distribution; 3) the sensitivity-corrected image was created from the correction coefficient map and the original image. Using the Gaussian correction processing, the uniformity calculated using the NEMA method was improved significantly compared to the original phantom image. In a visual evaluation by radiologists, the uniformity was also improved significantly. Because it homogeneously improves abdominal images taken with 3T MRI, the Gaussian correction processing is considered a very useful technique.
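
    A minimal sketch of the three-step correction described above, under the assumptions that the body region can be segmented by simple intensity thresholding and that the correction coefficient map is built from a 2D Gaussian centred on the region's centre of gravity; `sigma_scale` is a hypothetical tuning parameter, not a value from the paper.

        import numpy as np

        def gaussian_sensitivity_correction(image, sigma_scale=0.5):
            mask = image > image.mean()              # crude body segmentation
            ys, xs = np.nonzero(mask)
            cy, cx = ys.mean(), xs.mean()            # step 1: centre of gravity
            yy, xx = np.indices(image.shape)
            sigma = sigma_scale * min(image.shape)
            r2 = (yy - cy) ** 2 + (xx - cx) ** 2
            gauss = np.exp(-r2 / (2.0 * sigma ** 2)) # step 2: coefficient map
            # Deep regions (high gauss) are boosted; normalising by the mean
            # over the body keeps the overall intensity scale unchanged.
            correction = 1.0 + gauss
            return image * correction / correction[mask].mean()  # step 3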

  17. A new physiological method for heart rate correction of the QT interval

    OpenAIRE

    Davey, P.

    1999-01-01

    AIM—To reassess QT interval rate correction.
BACKGROUND—The QT interval is strongly and inversely related to heart rate. To compare QT intervals between different subjects with different heart rates requires the application of a QT interval rate correction formula. To date these formulae have inappropriately assumed a fixed relation between QT interval and heart rate. An alternative method of QT interval rate correction that makes no assumptions about the QT interval-heart rate relation is needed.

  18. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods; however, few of these have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Experiments on simulated background correction indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) after background correction, compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, and the values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are all higher).
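
    One common way to realise the smoothness assumption is to anchor a cubic spline at local minima of the spectrum, which are dominated by background rather than emission lines. The sketch below follows that idea; the paper's exact knot-selection rule may differ, and the `order` window is a placeholder.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelmin

        def spline_background(wavelengths, intensities, order=30):
            """Estimate the smooth continuous background by a cubic spline
            through local minima, then subtract it from the spectrum."""
            idx = argrelmin(intensities, order=order)[0]
            # Anchor the spline at the spectrum ends as well.
            idx = np.unique(np.concatenate(([0], idx, [len(intensities) - 1])))
            spline = CubicSpline(wavelengths[idx], intensities[idx])
            background = spline(wavelengths)
            corrected = np.clip(intensities - background, 0.0, None)
            return corrected, background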

  19. Improved PCR method for the creation of saturation mutagenesis libraries in directed evolution: application to difficult-to-amplify templates.

    Science.gov (United States)

    Sanchis, Joaquin; Fernández, Layla; Carballeira, J Daniel; Drone, Jullien; Gumulya, Yosephine; Höbenreich, Horst; Kahakeaw, Daniel; Kille, Sabrina; Lohmer, Renate; Peyralans, Jérôme J-P; Podtetenieff, John; Prasad, Shreenath; Soni, Pankaj; Taglieber, Andreas; Wu, Sheng; Zilly, Felipe E; Reetz, Manfred T

    2008-11-01

    Saturation mutagenesis constitutes a powerful method in the directed evolution of enzymes. Traditional protocols of whole-plasmid amplification such as Stratagene's QuikChange sometimes fail when the templates are difficult to amplify. In order to overcome such restrictions, we have devised a simple two-primer, two-stage polymerase chain reaction (PCR) method which constitutes an improvement over existing protocols. In the first stage of the PCR, both the mutagenic primer and the antiprimer, which are not complementary, anneal to the template. In the second stage, the amplified sequence is used as a megaprimer. Sites composed of one or more residues can be randomized in a single PCR, irrespective of their location in the gene sequence. The method has been applied successfully to several enzymes, including P450-BM3 from Bacillus megaterium, the lipases from Pseudomonas aeruginosa and Candida antarctica, and the epoxide hydrolase from Aspergillus niger. Here, we show that megaprimer size as well as the direction and design of the antiprimer are determining factors in the amplification of the plasmid. Comparison of the results with the performances of previous protocols reveals the efficiency of the improved method.

  20. [Validation of a scatter correction method for IMRT verification using portal imaging].

    Science.gov (United States)

    Kyas, Ina; Partridge, Mike; Hesse, Bernd-Michael; Oelfke, Uwe; Schlegel, Wolfgang

    2004-01-01

    Complex dose-delivery techniques, as currently applied in intensity-modulated radiation therapy (IMRT), require a highly efficient treatment-verification process. The present paper deals with the problem of scatter correction for therapy verification using portal images obtained by an electronic portal imaging device (EPID) based on amorphous silicon. It presents an iterative method for the scatter correction of portal images based on Monte Carlo-generated scatter kernels. First applications of this iterative scatter-correction method for the verification of intensity-modulated treatments are discussed on the basis of MVCT and dose reconstruction. Several experiments with homogeneous and anthropomorphic phantoms were performed in order to validate the scatter correction method and to investigate its precision and relevance in view of clinical applicability. It is shown that the devised scatter-correction concept significantly improves the results of MVCT and dose reconstruction models, which in turn is essential for exact online IMRT verification.
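
    The essence of a kernel-based iterative scatter correction can be sketched as follows, assuming scatter is modeled as a convolution of the (unknown) primary image with a normalised 2D scatter kernel. The paper derives its kernels from Monte Carlo simulation, whereas here the kernel and the iteration count are placeholders.

        import numpy as np
        from scipy.signal import fftconvolve

        def iterative_scatter_correction(portal, kernel, n_iter=5):
            """Fixed-point iteration for portal = primary + kernel * primary:
            repeatedly estimate scatter from the current primary estimate
            and subtract it from the measured portal image."""
            primary = portal.copy()
            for _ in range(n_iter):
                scatter = fftconvolve(primary, kernel, mode="same")
                primary = np.clip(portal - scatter, 0.0, None)
            return primary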

  1. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
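    For reference, the textbook form of the correction is simple: for an infinitely large, thin sample probed with equally spaced collinear probes, the RCF equals pi/ln 2 (about 4.532), and finite sample geometry modifies this value, which is exactly what the experiment probes. A minimal sketch:

        import math

        def sheet_resistance(voltage, current, rcf=math.pi / math.log(2)):
            """Four-probe sheet resistance (ohms per square) with a
            resistivity correction factor; the default RCF is the
            infinite-thin-sheet value."""
            return rcf * voltage / current

        def resistivity(voltage, current, thickness_m,
                        rcf=math.pi / math.log(2)):
            """Bulk resistivity (ohm*m) when the sample thickness is much
            smaller than the probe spacing."""
            return sheet_resistance(voltage, current, rcf) * thickness_m

        print(sheet_resistance(1.0e-3, 1.0e-3))  # ~4.532 ohms per square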

  2. Distortion Correction in EPI Using an Extended PSF Method with a Reversed Phase Gradient Approach

    Science.gov (United States)

    In, Myung-Ho; Posnansky, Oleg; Beall, Erik B.; Lowe, Mark J.; Speck, Oliver

    2015-01-01

    In echo-planar imaging (EPI), as commonly used for functional MRI (fMRI) and diffusion-tensor imaging (DTI), compressed distortion poses a more difficult challenge than local stretching, since spatial information can be lost in strongly compressed areas. In addition, the effects are more severe at ultra-high field (UHF) such as 7T due to increased field inhomogeneity. To resolve this problem, two EPIs with opposite phase-encoding (PE) polarity were acquired and combined after distortion correction. For distortion correction, a point spread function (PSF) mapping method was chosen due to its high correction accuracy and extended to perform distortion correction of both EPIs with opposite PE polarity, thus reducing the PSF reference scan time. Because the amount of spatial information differs between the opposite PE datasets, the method was further extended to incorporate a weighted combination of the two distortion-corrected images to maximize the spatial information content of the final corrected image. The correction accuracy of the proposed method was evaluated on distortion-corrected data using both forward and reverse phase-encoded PSF reference data and compared with the reversed gradient approaches suggested previously. Furthermore, we demonstrate that the extended PSF method with an improved weighted combination can recover local distortions and spatial information loss and be applied successfully not only to spin-echo EPI, but also to gradient-echo EPIs acquired with both PE directions to perform geometrically accurate image reconstruction. PMID:25707006

  3. A temperature error correction method for a naturally ventilated radiation shield

    Science.gov (United States)

    Yang, Jie; Liu, Qingquan; Dai, Wei; Ding, Rrenhui

    2016-11-01

    Due to solar radiation exposure, air flowing inside a naturally ventilated radiation shield may produce a measurement error of 0.8 °C or higher. To improve air temperature observation accuracy, a temperature error correction method is proposed. The correction method is based on a Computational Fluid Dynamics (CFD) method and a Genetic Algorithm (GA) method. The CFD method is used to analyze and calculate the temperature errors of a naturally ventilated radiation shield under various environmental conditions. A temperature error correction equation is then obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform were operated side by side in the same environment for intercomparison, with the aspirated platform serving as the air temperature reference. The mean temperature error given by the measurements is 0.36 °C, and the mean temperature error given by the correction equation is 0.34 °C. The correction equation thus allows the temperature error to be reduced by approximately 95%. The mean absolute error (MAE) and the root mean square error (RMSE) between the temperature errors given by the correction equation and those given by the measurements are 0.07 °C and 0.08 °C, respectively.

  4. Comparison of classical methods for blade design and the influence of tip correction on rotor performance

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Okulov, Valery; Mikkelsen, Robert Flemming

    2016-01-01

    The classical blade-element/momentum (BE/M) method, which is used together with different types of corrections (e.g. the Prandtl or Glauert tip correction), is today the most basic tool in the design of wind turbine rotors. However, there are other classical techniques based on a combination of the blade-element approach and lifting-line (BE/LL) methods, which are less used by the wind turbine community. The BE/LL method involves different interpretations for rotors with finite or infinite numbers of blades and different assumptions with respect to the optimum circulation distribution. In the present study we compare the performance and the resulting design of the BE/M method by Glauert [1] and the BE/LL method by Betz [2] for finite as well as for infinite-bladed rotors, corrected for finiteness through the tip correction. In the first part of the paper, expressions are given for the optimum…
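
    The tip correction mentioned above has a closed form in Prandtl's classical model (a standard result, not specific to this paper); a minimal sketch:

        import numpy as np

        def prandtl_tip_loss(n_blades, r, R, phi):
            """Prandtl tip-loss factor F as used in BE/M codes: r is the
            local radius, R the tip radius and phi the local flow angle
            in radians; F multiplies the momentum-theory loading."""
            f = n_blades / 2.0 * (R - r) / (r * np.sin(phi))
            return 2.0 / np.pi * np.arccos(np.exp(-f))

        # Example: 3-bladed rotor, station at 95% span, flow angle 5 degrees.
        print(prandtl_tip_loss(3, 0.95, 1.0, np.radians(5.0)))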

  5. Subspace Correction Methods for Total Variation and ℓ1-Minimization

    KAUST Repository

    Fornasier, Massimo

    2009-01-01

    This paper is concerned with the numerical minimization of energy functionals in Hilbert spaces involving convex constraints coinciding with a seminorm for a subspace. The optimization is realized by alternating minimizations of the functional on a sequence of orthogonal subspaces. On each subspace an iterative proximity-map algorithm is implemented via oblique thresholding, which is the main new tool introduced in this work. We provide convergence conditions for the algorithm in order to compute minimizers of the target energy. Analogous results are derived for a parallel variant of the algorithm. Applications are presented in domain decomposition methods for degenerate elliptic PDEs arising in total variation minimization and in accelerated sparse recovery algorithms based on ℓ1-minimization. We include numerical examples which show efficient solutions to classical problems in signal and image processing. © 2009 Society for Industrial and Applied Mathematics.

  6. Simple method for correct enumeration of Staphylococcus aureus

    DEFF Research Database (Denmark)

    Haaber, J.; Cohn, M. T.; Petersen, A.

    2016-01-01

    When grown in liquid culture, the human pathogen Staphylococcus aureus is characterized by the aggregation of single cells into clusters of variable size. Here, we show that aggregation during growth in the laboratory standard medium tryptic soy broth (TSB) is common among clinical and laboratory S. aureus isolates, and that aggregation may introduce significant bias when applying standard enumeration methods to S. aureus growing in laboratory batch cultures. We provide a simple and efficient sonication procedure, which can be applied prior to optical density measurements to give an accurate estimate of cellular numbers in liquid cultures of S. aureus regardless of the aggregation level of the given strain. We further show that the sonication procedure is applicable for accurate determination of cell numbers using agar plate counting of aggregating strains.

  7. Determination of Matric Suction and Saturation Degree for Unsaturated Soils, Comparative Study - Numerical Method versus Analytical Method

    Science.gov (United States)

    Chiorean, Vasile-Florin

    2017-10-01

    Matric suction is a soil parameter which influences the behaviour of unsaturated soils in terms of both shear strength and permeability. Knowing the variation of matric suction in the unsaturated soil zone is necessary for solving geotechnical issues like the stability of unsaturated soil slopes or the bearing capacity of unsaturated foundation ground. The mathematical expression of the dependency between soil moisture content and matric suction (the soil water characteristic curve) is strongly nonlinear. This paper presents two methods to determine the variation of matric suction along the depth between the groundwater level and the soil surface. The first is an analytical approach describing one-dimensional steady-state unsaturated infiltration between the groundwater level and the soil surface. Three different situations were simulated in terms of boundary conditions: precipitation (inflow conditions on the ground surface), evaporation (outflow conditions on the ground surface), and perfect equilibrium (no flow on the ground surface). The numerical method is a finite element method used for steady-state, two-dimensional, unsaturated infiltration calculations; identical boundary conditions were simulated as in the analytical approach. For both methods, the equation proposed by van Genuchten-Mualem (1980) was adopted as the mathematical expression of the soil water characteristic curve, and the van Genuchten-Mualem model was also adopted for predicting unsaturated soil permeability. The fitting parameters of these models were adopted according to the RETC 6.02 software as a function of soil type. The analyses were performed with both methods for three major soil types: clay, silt and sand. For each soil type, analyses were carried out for the three boundary conditions applied at the soil surface: inflow, outflow, and no flow. The obtained results are presented in order to highlight the differences
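
    For concreteness, the van Genuchten-Mualem relations used by both approaches can be sketched as follows; the example parameter values are illustrative defaults for a silt-like soil, not values from the paper.

        import numpy as np

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            """Van Genuchten (1980) soil-water characteristic curve with the
            Mualem restriction m = 1 - 1/n; h is suction head (m), alpha in
            1/m."""
            m = 1.0 - 1.0 / n
            se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)  # effective saturation
            return theta_r + (theta_s - theta_r) * se

        def mualem_k(h, ks, alpha, n):
            """Unsaturated permeability from the van Genuchten-Mualem model."""
            m = 1.0 - 1.0 / n
            se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)
            return ks * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

        # Illustrative silt-like parameters (hypothetical values):
        print(van_genuchten_theta(1.0, 0.034, 0.46, 1.6, 1.37))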

  8. Research on environment correction algorithm in the minimum deviation angle method for refractive index measuring

    Science.gov (United States)

    Sun, Chuan; Wang, Shanshan; Zhou, Siyu; Zhu, Qiudong

    2017-02-01

    This paper studies an environmental correction algorithm for the minimum deviation angle method of refractive index measurement. The principal equation of the minimum deviation angle method, based on the refractive index of air and the absolute refractive index of glass specimens, is derived. The environmental factors that may affect the results in actual measurement are analyzed. According to the thermal characteristic equations of glass, the absolute refractive index of a given glass material depends on temperature. According to the Edlén equation, the refractive index of air depends on temperature, pressure, humidity, and so on. When the environmental factors are uncontrollable, the refractive index will change with them. The correction algorithm, which transfers measurement results obtained under non-standard environmental conditions to standard conditions, is refined, improving the correction accuracy. Taking H-ZK9B as an example, the impact of environmental factors on the refractive index is analyzed using the controlled-variable method, and the need for environmental-factor correction under different accuracy requirements is given. To verify the correction method, two sets of refractive index data for the same glass, measured under different environmental conditions, are corrected. The difference between the two corrected data sets is less than 1×10-6.
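
    The air-index part of such a correction is commonly evaluated with the modified Edlén equation; the sketch below uses the Birch & Downs (1993) constants for standard dry air and omits the humidity and CO2 terms for brevity, so it is an approximation rather than the authors' exact procedure.

        def edlen_air_index(t_celsius, p_pa, sigma_um_inv=1.0 / 0.6328):
            """Refractive index of dry air via the modified Edlen equation;
            sigma_um_inv is the vacuum wavenumber in 1/um (default: HeNe)."""
            s2 = sigma_um_inv ** 2
            # Dispersion of standard air (15 C, 101325 Pa, dry):
            n_s_minus_1 = 1e-8 * (8342.54 + 2406147.0 / (130.0 - s2)
                                  + 15998.0 / (38.9 - s2))
            # Temperature/pressure scaling:
            n_tp_minus_1 = (n_s_minus_1 * p_pa / 96095.43
                            * (1.0 + 1e-8 * (0.601 - 0.00972 * t_celsius) * p_pa)
                            / (1.0 + 0.0036610 * t_celsius))
            return 1.0 + n_tp_minus_1

        # Absolute index of glass from the relative (in-air) index:
        n_air = edlen_air_index(20.0, 101325.0)
        n_rel = 1.51680          # hypothetical measured relative index
        print(n_rel * n_air)     # absolute index n_abs = n_rel * n_air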

  9. ABSORBANCE CORRECTION METHOD FOR SIMULTANEOUS DETERMINATION OF NEBIVOLOL AND AMLODIPINE BESYLATE IN COMBINED TABLET DOSAGE FORM

    OpenAIRE

    Patel Satish A; Patel Paresh U; Patel Natavarlal J.

    2011-01-01

    The manuscript describes a validated absorbance correction method for the estimation of nebivolol and amlodipine besylate in a combined dosage form. The absorbance correction method is based on the additivity of absorbances. Two wavelengths were identified on the amlodipine besylate spectrum at which it shows the same absorbance: 262 and 332.5 nm. At 332.5 nm, amlodipine besylate shows some absorbance while nebivolol shows zero absorbance; both drugs absorb at 262 nm. The method…
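
    The arithmetic of the correction is simple enough to sketch directly: since amlodipine besylate absorbs equally at 262 and 332.5 nm and nebivolol does not absorb at 332.5 nm, the amlodipine contribution at 262 nm can be subtracted outright. Absorptivity values would come from calibration and are not given in the abstract, so the numbers below are hypothetical.

        def nebivolol_absorbance(a_262, a_332_5):
            """Absorbance-correction step: the total absorbance at 262 nm
            minus the amlodipine absorbance (equal to that at 332.5 nm)
            leaves the nebivolol contribution."""
            return a_262 - a_332_5

        def concentration(absorbance, absorptivity, path_cm=1.0):
            """Beer-Lambert: c = A / (a * l); `absorptivity` comes from a
            calibration curve."""
            return absorbance / (absorptivity * path_cm)

        print(nebivolol_absorbance(0.65, 0.20))  # hypothetical absorbances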

  10. A third-generation dispersion and third-generation hydrogen bonding corrected PM6 method

    DEFF Research Database (Denmark)

    Kromann, Jimmy Charnley; Christensen, Anders Steen; Svendsen, Casper Steinmann

    2014-01-01

    We present new dispersion and hydrogen bond corrections to the PM6 method, PM6-D3H+, and its implementation in the GAMESS program. The method combines the DFT-D3 dispersion correction by Grimme et al. with a modified version of the H+ hydrogen bond correction by Korth. Overall, the interaction energies … vibrational free energies. While the GAMESS implementation is up to 10 times slower for geometry optimizations of proteins in bulk solvent, compared to MOPAC, it is sufficiently fast to make geometry optimizations of small proteins practically feasible.

  11. The correction of eye blink artefacts in the EEG: a comparison of two prominent methods.

    Directory of Open Access Journals (Sweden)

    Sven Hoffmann

    Full Text Available BACKGROUND: The study investigated the residual impact of eye blinks on the electroencephalogram (EEG) after application of different correction procedures, namely a regression method (eye movement correction procedure, EMCP) and a component-based method (Independent Component Analysis, ICA). METHODOLOGY/PRINCIPAL FINDINGS: Real and simulated data were investigated with respect to blink-related potentials and the residual mutual information of the uncorrected vertical electrooculogram (EOG) and the corrected EEG, which is a measure of the residual EOG contribution to the EEG. The results reveal an occipital positivity that peaks at about 250 ms after the maximum blink excursion following application of either correction procedure. This positivity was not observable in the simulated data. Mutual information of the vertical EOG and EEG depended on the applied regression procedure. In addition, different correction results were obtained for real and simulated data. ICA yielded almost perfect correction in all conditions. However, under certain conditions EMCP yielded results comparable to the ICA approach. CONCLUSION: In conclusion, for EMCP the quality of correction depended on the EMCP variant used and the structure of the data, whereas ICA always yielded almost perfect correction. However, its disadvantage is the much more complex data processing, and that it requires a suitable amount of data.
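
    The regression branch of this comparison (the EMCP family) reduces, in its simplest form, to estimating per-channel propagation factors of the EOG into the EEG and subtracting the scaled EOG. The sketch below shows that core step only; real EMCP additionally separates blink and saccade periods and removes event-related activity before estimating the factors.

        import numpy as np

        def emcp_style_correction(eeg, veog):
            """eeg: (n_channels, n_samples), veog: (n_samples,). Estimate
            one propagation factor b per channel by least squares, then
            subtract b * veog from each channel."""
            veog = veog - veog.mean()
            eeg0 = eeg - eeg.mean(axis=1, keepdims=True)
            b = eeg0 @ veog / (veog @ veog)
            return eeg - np.outer(b, veog), b

        rng = np.random.default_rng(1)
        veog = rng.standard_normal(1000)
        eeg = 0.3 * veog + 0.1 * rng.standard_normal((4, 1000))
        corrected, b = emcp_style_correction(eeg, veog)
        print(b)  # close to 0.3 for all channels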

  12. A novel method for correcting scanline-observational bias of discontinuity orientation

    Science.gov (United States)

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-03-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies.
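
    For orientation, the classical correction that this paper generalizes is Terzaghi weighting, in which each discontinuity is weighted by the reciprocal of the cosine of the angle between the scanline and the pole of the discontinuity plane. A minimal sketch of that standard correction (not the authors' numerical solutions), with all angles in degrees:

        import numpy as np

        def terzaghi_weights(scanline_trend, scanline_plunge, dip_dirs, dips):
            def unit(trend, plunge):
                t, p = np.radians(trend), np.radians(plunge)
                return np.array([np.cos(p) * np.sin(t),
                                 np.cos(p) * np.cos(t), -np.sin(p)])
            line = unit(scanline_trend, scanline_plunge)
            # Pole of a plane: trend = dip direction + 180, plunge = 90 - dip.
            poles = np.stack([unit(dd + 180.0, 90.0 - d)
                              for dd, d in zip(dip_dirs, dips)])
            cos_delta = np.abs(poles @ line)
            cos_delta = np.maximum(cos_delta, 0.1)  # cap to avoid blow-up
            return 1.0 / cos_delta

        print(terzaghi_weights(40.0, 0.0, [130.0, 220.0], [60.0, 30.0]))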

  13. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedentedly low β-beat for a hadron collider, are described. The transverse coupling is another parameter which is important to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling, which was applied to the LHC, is described. It resulted in a decrease of the chromatic coupling.

  14. Non-uniformity correction for division of focal plane polarimeters with a calibration method.

    Science.gov (United States)

    Zhang, Junchao; Luo, Haibo; Hui, Bin; Chang, Zheng

    2016-09-10

    Division-of-focal-plane polarimeters are composed of nanometer-scale polarization elements overlaid upon a focal plane array (FPA) sensor. Manufacturing flaws in the polarization grating, together with the differing photo responses of the individual FPA detectors, introduce non-uniformity errors when the polarization image is reconstructed without correction. A new calibration method is proposed to mitigate non-uniformity errors in the visible waveband. We correct the non-uniformity in vector form: a correction matrix and an offset vector are calculated and used in the subsequent correction. The performance of the proposed method is compared with state-of-the-art techniques on simulated data and real scenes. The experimental results show that the proposed method effectively mitigates non-uniformity errors and achieves better visual results.
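
    A minimal sketch of a vector-form (matrix plus offset) calibration for a single 2x2 super-pixel follows, assuming a set of known uniform illuminations is available as training data; the paper's exact estimation procedure may differ.

        import numpy as np

        def calibrate_superpixel(raw_stack, true_stokes):
            """raw_stack: (n_scenes, 4) measured intensities of the
            0/45/90/135-degree pixels under n_scenes known illuminations;
            true_stokes: (n_scenes, 3) the corresponding S0, S1, S2.
            Returns A (3x4) and b (3,) with S = A @ i + b in the
            least-squares sense."""
            n = raw_stack.shape[0]
            X = np.hstack([raw_stack, np.ones((n, 1))])  # append bias column
            coef, *_ = np.linalg.lstsq(X, true_stokes, rcond=None)
            return coef[:4].T, coef[4]

        def apply_correction(intensities, A, b):
            return A @ intensities + b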

  15. A new gas/supercritical fluid (SCF) diffusivity measurement method for CO2 saturated polymer systems using a dielectric property

    Directory of Open Access Journals (Sweden)

    S. X. Yao

    2017-08-01

    Full Text Available In this research, theoretical CO2 diffusivity coefficients in amorphous polymers were calculated from dielectric constant changes during CO2 desorption. These values showed agreement with experimental diffusivity coefficients from a gravimetric method. Three amorphous polymer films made from Polystyrene (PS), Polycarbonate (PC), and Cyclic Olefin Polymer (COP) resins were saturated with supercritical CO2 at 5.5 MPa and 25 °C for 24 hours in a pressure chamber. The CO2-infused films were removed from the chamber for gas desorption experiments. The capacitance of the samples was recorded with an Inductance, Capacitance and Resistance (LCR) meter, and these values were used to calculate the change in dielectric constants. CO2 weight percentages measured by a scale were used to calculate experimental diffusivity and solubility coefficients. It was found that the trend of dielectric constant changes was similar to that of the CO2 weight percentage changes during gas desorption. A mathematical model was built to predict the CO2 weight percentages during desorption from the measured dielectric constants. Theoretical diffusivity coefficients from this work agree well with literature data.
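
    The gravimetric diffusivity referred to above is conventionally obtained by fitting Crank's one-dimensional Fickian desorption solution for a plane film to the measured mass-loss curve. The sketch below does this on synthetic data; the film thickness and all numeric values are hypothetical.

        import numpy as np
        from scipy.optimize import curve_fit

        def desorption_ratio(t, D, L=1.0e-4):
            """Mt/Minf for 1D Fickian desorption from a film of thickness L
            (m) with both faces exposed, as a truncated Crank series."""
            n = np.arange(50)[:, None]
            k = (2 * n + 1) ** 2 * np.pi ** 2
            series = (8.0 / k) * np.exp(-k * D * t[None, :] / L ** 2)
            return 1.0 - series.sum(axis=0)

        # Fit D to (synthetic) gravimetric desorption data:
        t = np.linspace(1.0, 2000.0, 40)
        data = (desorption_ratio(t, 2.0e-12)
                + 0.005 * np.random.default_rng(2).standard_normal(40))
        (D_fit,), _ = curve_fit(lambda tt, D: desorption_ratio(tt, D),
                                t, data, p0=[1.0e-12])
        print(D_fit)  # recovers ~2e-12 m^2/s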

  16. An Improved Micromechanical Framework for Saturated Concrete Repaired by the Electrochemical Deposition Method considering the Imperfect Bonding

    Directory of Open Access Journals (Sweden)

    Qing Chen

    2016-01-01

    Full Text Available The interfaces between the deposition products and the concrete are not always well bonded when the electrochemical deposition method (EDM) is adopted to repair deteriorated concrete. To theoretically describe the deposition healing process of saturated concrete with imperfect interfaces using micromechanics, an improved micromechanical framework with an interfacial transition zone (ITZ) is proposed based on our recent studies. In this extension, the imperfect bonding is characterized by the ITZ, whose effects are calculated by modifying the generalized self-consistent model. Meanwhile, new multilevel homogenization schemes are employed to predict the effective properties of the repaired concrete considering the ITZ effects. Moreover, modification procedures are presented to obtain the properties of repaired concrete with ITZs in the dry state. To demonstrate the feasibility of the proposed micromechanical model, its predictions are compared with those of existing models and with experimental data, including results from extreme states during the EDM healing process. Finally, the influences of the ITZ and the deposition product on the healing effectiveness of EDM are discussed based on the proposed micromechanical model.

  17. A simple, robust orthogonal background correction method for two-dimensional liquid chromatography.

    Science.gov (United States)

    Filgueira, Marcelo R; Castells, Cecilia B; Carr, Peter W

    2012-08-07

    Background correction is a very important step that must be performed before peak detection or any quantification procedure. When successful, this step greatly simplifies such procedures and enhances the accuracy of quantification. In the past, much effort has been invested to correct drifting baseline in one-dimensional chromatography. In fast online comprehensive two-dimensional liquid chromatography (LC×LC) coupled with a diode array detector (DAD), the change in the refractive index (RI) of the mobile phase in very fast gradients causes extremely serious baseline disturbances. The method reported here is based on the use of various existing baseline correction methods of one-dimensional (1D) liquid chromatography to correct the two-dimensional (2D) background in LC×LC. When such methods are applied orthogonally to the second dimension (²D), background correction is dramatically improved. The method gives an almost zero mean background level and it provides better background correction than does simple subtraction of a blank. Indeed, the method proposed does not require running a blank sample.

  18. Optimization and Experimentation of Dual-Mass MEMS Gyroscope Quadrature Error Correction Methods.

    Science.gov (United States)

    Cao, Huiliang; Li, Hongsheng; Kou, Zhiwei; Shi, Yunbo; Tang, Jun; Ma, Zongmin; Shen, Chong; Liu, Jun

    2016-01-07

    This paper focuses on an optimal quadrature error correction method for the dual-mass MEMS gyroscope, in order to reduce the long-term bias drift. It is known that the coupling stiffness and demodulation error are important elements causing bias drift. The coupling stiffness in dual-mass structures is analyzed. Experiment proves that the left and right masses' quadrature errors are different, so the quadrature correction system should be arranged independently for each. The process leading to quadrature error is described, and the Charge Injecting Correction (CIC), Quadrature Force Correction (QFC) and Coupling Stiffness Correction (CSC) methods are introduced. The correction objects of these three methods are the quadrature error signal, the quadrature force and the coupling stiffness, respectively. The three methods are investigated through control theory analysis, model simulation and circuit experiments, and the results support the theoretical analysis. The bias stability results based on CIC, QFC and CSC are 48 °/h, 9.9 °/h and 3.7 °/h, respectively, compared with 38 °/h before quadrature error correction. The CSC method is proved to be the better method for quadrature correction, and it improves the Angle Random Walk (ARW) value from 0.66 °/√h to 0.21 °/√h. The CSC system general test results show that it works well across the full temperature range, and the bias stabilities of the six groups' output data are 3.8 °/h, 3.6 °/h, 3.4 °/h, 3.1 °/h, 3.0 °/h and 4.2 °/h, respectively, which proves the system has excellent repeatability.

  19. On Multistep Stabilizing Correction Splitting Methods with Applications to the Heston Model

    NARCIS (Netherlands)

    W. Hundsdorfer (Willem); K. In't Hout

    2017-01-01

    In this note we consider splitting methods based on linear multistep methods and stabilizing corrections. To enhance the stability of the methods, we employ an idea of Bruno & Cubillos [5], who combine a high-order extrapolation formula for the explicit term with a formula of one order…

  20. RELIC: a novel dye-bias correction method for Illumina Methylation BeadChip.

    Science.gov (United States)

    Xu, Zongli; Langie, Sabine A S; De Boever, Patrick; Taylor, Jack A; Niu, Liang

    2017-01-03

    The Illumina Infinium HumanMethylation450 BeadChip and its successor, the Infinium MethylationEPIC BeadChip, have been extensively utilized in epigenome-wide association studies. Both arrays use two fluorescent dyes (Cy3-green/Cy5-red) to measure methylation level at CpG sites. However, performance differences between the dyes can result in biased estimates of methylation levels. Here we describe a novel method, called REgression on Logarithm of Internal Control probes (RELIC), to correct for dye bias on the whole array by utilizing the intensity values of paired internal control probes that monitor the two color channels. We evaluate the method in several datasets against other widely used dye-bias correction methods. Results on data quality improvement showed that RELIC correction statistically significantly outperforms alternative dye-bias correction methods. We incorporated the method into the R package ENmix, which is freely available from the Bioconductor website ( https://www.bioconductor.org/packages/release/bioc/html/ENmix.html ). RELIC is an efficient and robust method to correct for dye bias in Illumina Methylation BeadChip data. It outperforms alternative methods and is conveniently implemented in the R package ENmix to facilitate DNA methylation studies.
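
    The core of the RELIC idea can be sketched as a log-log regression between the paired control-probe intensities of the two channels, whose fitted relation is then inverted to put red-channel intensities on the green-channel scale; the actual ENmix implementation differs in detail.

        import numpy as np

        def relic_style_correction(green_ctrl, red_ctrl, red_signal):
            """green_ctrl/red_ctrl: intensities of paired internal control
            probes carrying the same signal in both channels; red_signal:
            measured red-channel intensities to be corrected."""
            slope, intercept = np.polyfit(np.log(green_ctrl),
                                          np.log(red_ctrl), 1)
            # Invert the fitted channel relation for the measured values:
            return np.exp((np.log(red_signal) - intercept) / slope)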

  1. A novel 3D absorption correction method for quantitative EDX-STEM tomography

    Energy Technology Data Exchange (ETDEWEB)

    Burdet, Pierre, E-mail: pierre.burdet@a3.epfl.ch [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Saghi, Z. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom); Filippin, A.N.; Borrás, A. [Nanotechnology on Surfaces Laboratory, Materials Science Institute of Seville (ICMS), CSIC-University of Seville, C/ Americo Vespucio 49, 41092 Seville (Spain); Midgley, P.A. [Department of Materials Science and Metallurgy, University of Cambridge, Charles Babbage Road 27, Cambridge CB3 0FS, Cambridgeshire (United Kingdom)

    2016-01-15

    This paper presents a novel 3D method to correct for absorption in energy dispersive X-ray (EDX) microanalysis of heterogeneous samples of unknown structure and composition. By using STEM-based tomography coupled with EDX, an initial 3D reconstruction is used to extract the location of generated X-rays as well as the X-ray path through the sample to the surface. The absorption correction needed to retrieve the generated X-ray intensity is then calculated voxel-by-voxel estimating the different compositions encountered by the X-ray. The method is applied to a core/shell nanowire containing carbon and oxygen, two elements generating highly absorbed low energy X-rays. Absorption is shown to cause major reconstruction artefacts, in the form of an incomplete recovery of the oxide and an erroneous presence of carbon in the shell. By applying the correction method, these artefacts are greatly reduced. The accuracy of the method is assessed using reference X-ray lines with low absorption. - Highlights: • A novel 3D absorption correction method is proposed for 3D EDX-STEM tomography. • The absorption of X-rays along the path to the surface is calculated voxel-by-voxel. • The method is applied on highly absorbed X-rays emitted from a core/shell nanowire. • Absorption is shown to cause major artefacts in the reconstruction. • Using the absorption correction method, the reconstruction artefacts are greatly reduced.
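
    The voxel-by-voxel correction amounts to dividing each detected intensity by the Beer-Lambert transmission accumulated along the X-ray's exit path to the surface. A minimal sketch, with attenuation coefficients, path and voxel size all hypothetical:

        import numpy as np

        def absorption_correction(detected, mu, path_voxels, voxel_size_cm):
            """Recover the generated X-ray intensity for one emission voxel.
            `mu` maps voxel indices to linear attenuation coefficients
            (1/cm) estimated from the initial reconstruction; `path_voxels`
            lists the voxels crossed on the way to the surface."""
            optical_depth = sum(mu[idx] * voxel_size_cm for idx in path_voxels)
            transmission = np.exp(-optical_depth)
            return detected / transmission

        # Hypothetical example: soft X-rays crossing 10 voxels of 20 nm.
        mu = {(i, 0, 0): 8.0e3 for i in range(10)}          # assumed values
        path = [(i, 0, 0) for i in range(10)]
        print(absorption_correction(1.0, mu, path, 20e-7))  # generated intensity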

  2. Mild Ptosis Correction with the Stitch Method During Incisional Double Fold Formation

    Directory of Open Access Journals (Sweden)

    Edward Ilho Lee

    2014-01-01

    Full Text Available Background: Numerous methods exist for simultaneous correction of mild blepharoptosis during double eyelid surgery. These methods are generally categorized into either incisional (open) or non-incisional (suture) methods. The incisional method is commonly used for the creation of the double eyelid crease in patients with excessive or thick skin. However, concurrent open ptosis correction is often marred by a lengthy period of intraoperative adjustment, causing more swelling, a longer recovery time, and an increased risk of postoperative complications. Methods: The authors have devised a new, minimally invasive technique to alleviate mild ptosis during incisional double eyelid surgery. The anterior lamella is approached through the incisional technique for the creation of a double eyelid, while the posterior lamella, including Muller's and levator muscles, is approached with the suture method for Muller's plication and ptosis correction. Results: The procedure described was utilized in 28 patients from June 2012 to August 2012. Postoperative asymmetry was noted in one patient who had severe preoperative conjunctival scarring. Otherwise, ptosis was corrected as planned in the rest of the cases, and all of the patients were satisfied with their postoperative appearance and experienced no complications. Conclusions: Our hybrid technique combines the benefits of both the incisional and suture methods, allowing for a predictable and easily reproducible correction of blepharoptosis with an aesthetically pleasing double eyelid.

  3. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    Science.gov (United States)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed in a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution, combined with the spatial responsivity data of the sphere, is used to calculate the spatial non-uniformity correction for the lamp. The method was validated by comparing it to a traditional goniophotometric approach in determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to 0.15%, with a mean magnitude of 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.

  4. Error Correction of EMTDC Line and Cable Series Impedance Calculations Compared to Traditional Methods

    DEFF Research Database (Denmark)

    Sørensen, Stefan; Nielsen, Hans Ove

    2002-01-01

    In this paper we present a comparison of different line and cable series impedance calculation methods. Correction of a discovered PSCAD/EMTDC v3.0.8 calculation error of the cable series impedance results in a deviation of under 0.1% from other methods, instead of the approximately 10% deviation given by the previous method. The correction is done by adjusting the earth return path impedance for the cable model, and will thereby form the basis for a future comparison with measured data from a real full-scale earth fault experiment on a mixed line and cable network.

  5. Orbit correction using an eigenvector method with constraints for synchrotron radiation sources

    Science.gov (United States)

    Harada, Kentaro; Obina, Takashi; Kobayashi, Yukinori; Nakamura, Norio; Takaki, Hiroyuki; Sakai, Hiroshi

    2009-06-01

    An eigenvector method with constraints (EVC) is proposed as a new orbit correction scheme for synchrotron light sources. EVC efficiently corrects the global orbit in a storage ring, and can simultaneously perform exact correction of local orbits without deterioration of the global orbit. To demonstrate the advantages of EVC over the ordinary eigenvector method (EV), we carried out experimental studies at the Photon Factory storage ring (PF-ring) and the Photon Factory Advanced Ring (PF-AR) at the High Energy Accelerator Research Organization (KEK). The performance of EVC was systematically examined at PF-ring and PF-AR. The experimental results agreed well with the simulated ones. Consequently, we confirmed that EVC easily realized orbit correction for both global and local orbits, and that it was very effective for the beam stabilization of synchrotron radiation (SR) sources.
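
    The uncorrected baseline for EVC, the ordinary eigenvector (SVD-based) orbit correction, can be sketched in a few lines; the constrained EVC variant additionally enforces exact local-orbit conditions, which is omitted here.

        import numpy as np

        def orbit_correction(response, orbit, n_sv=None):
            """response: (n_bpms, n_correctors) measured orbit response
            matrix; orbit: (n_bpms,) closed-orbit distortion. Returns
            corrector kicks that minimise the rms orbit."""
            u, s, vt = np.linalg.svd(response, full_matrices=False)
            if n_sv is not None:            # truncate noisy singular values
                u, s, vt = u[:, :n_sv], s[:n_sv], vt[:n_sv]
            return -vt.T @ ((u.T @ orbit) / s)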

  6. Orbit correction using an eigenvector method with constraints for synchrotron radiation sources

    Energy Technology Data Exchange (ETDEWEB)

    Harada, Kentaro [Photon Factory, High Energy Accelerator Research Organization, 1-1, Oho, Tsukuba, Ibaraki 305-0801 (Japan)], E-mail: kentaro.harada@kek.jp; Obina, Takashi; Kobayashi, Yukinori [Photon Factory, High Energy Accelerator Research Organization, 1-1, Oho, Tsukuba, Ibaraki 305-0801 (Japan); Nakamura, Norio; Takaki, Hiroyuki; Sakai, Hiroshi [Institute for Solid State Physics, University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa, Chiba 277-8581 (Japan)

    2009-06-11

    An eigenvector method with constraints (EVC) is proposed as a new orbit correction scheme for synchrotron light sources. EVC efficiently corrects the global orbit in a storage ring, and can simultaneously perform exact correction of local orbits without deterioration of the global orbit. To demonstrate the advantages of EVC over the ordinary eigenvector method (EV), we carried out experimental studies at the Photon Factory storage ring (PF-ring) and the Photon Factory Advanced Ring (PF-AR) at the High Energy Accelerator Research Organization (KEK). The performance of EVC was systematically examined at PF-ring and PF-AR. The experimental results agreed well with the simulated ones. Consequently, we confirmed that EVC easily realized orbit correction for both global and local orbits, and that it was very effective for the beam stabilization of synchrotron radiation (SR) sources.

  7. Evaluation of topology correction methods for the generation of the cortical surface

    Science.gov (United States)

    Li, Wen; Magnotta, Vincent A.

    2009-02-01

    The cerebral cortex is a highly convoluted anatomical structure. The folding pattern defined by sulci and gyri is complex and very heterogeneous across subjects. This heterogeneity has made the automated labeling of the cortex into its constituent components a challenge to the field of neuroimaging. One way to approach this problem is to conformally map the surface to another representation such as a plane or sphere. Conformal mapping requires the surface to be topologically correct. However, noise and partial volume artifacts in the MR images frequently cause holes or handles to exist in the surface that must be removed. Topology correction techniques that operate on the cortical surface, on the original image data, and hybrid methods have all been proposed. This paper presents an experimental assessment of two different topology correction methods. The first approach is based on modification of 3D voxel data. The second is a hybrid approach that determines the location of defects from the surface representation while repairing the surface by modifying the underlying image data. These methods have been applied to 10 brains, and a comparison is made between them. In addition, detailed statistics are given for the voxel-based correction method. Based on these 10 MRI datasets, we have found the hybrid method incapable of correcting the cortical surface appropriately when handles and holes exist in close proximity. In several cases, holes in the anatomical surface were labeled as handles, thus resulting in discontinuities in the folding pattern. The image-based approach in this study was found to correct the topology in all ten cases within a reasonable time. Furthermore, the distance between the original and corrected surfaces, the thickness of the brain cortex, curvatures, and surface areas are provided as assessments of the approach based on our datasets.

  8. [A quick atmospheric correction method for HJ-1 CCD with the deep blue algorithm].

    Science.gov (United States)

    Wang, Zhong-Ting; Wang, Hong-Mei; Li, Qing; Zhao, Shao-Hua; Li, Shen-Shen; Chen, Liang-Fu

    2014-03-01

    In the present work, taking account of the characteristics of the HJ-1 CCD camera, a new atmospheric correction method for HJ-1 CCD data was developed that can be used over vegetation, soil, and similar surfaces. The method retrieves aerosol optical depth (AOD) with the deep blue algorithm developed by Hsu et al., assisted by a MODerate-resolution Imaging Spectroradiometer (MODIS) surface reflectance database, applies bidirectional reflectance distribution function (BRDF) correction with a kernel-driven model, and calculates the viewing geometry from auxiliary data. When the CCD data are processed to correct the atmospheric influence, atmospheric correction of HJ-1 CCD is completed quickly using a look-up table (LUT) and bilinear interpolation, through grid calculation of atmospheric parameters and matrix operations in the Interactive Data Language (IDL). An experiment over the China North Plain on July 3rd, 2012 shows that, with our method, the atmospheric influence was corrected well and quickly (one CCD image of 1 GB can be corrected in eight minutes), and the reflectance after correction over vegetation and soil was close to the spectra of vegetation and soil. A comparison with the MODIS reflectance product shows that, owing to its higher resolution, the corrected reflectance image of HJ-1 is finer than that of MODIS, and the correlation coefficient of the reflectance over typical surfaces is greater than 0.9. Error analysis shows that misrecognition of the aerosol type leads to an absolute error of 0.05 in near-infrared surface reflectance, larger than that in the visible bands, and that a 0.02 error in the reflectance database leads to an absolute error of 0.01 in the green and red bands after atmospheric correction.

  9. Bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations

    Science.gov (United States)

    Kim, Kue Bum; Kwon, Hyun-Han; Han, Dawei

    2015-11-01

    In this paper, we present a comparative study of bias correction methods for regional climate model simulations considering the distributional parametric uncertainty underlying the observations and models. In traditional bias correction schemes, the statistics of the simulated model outputs are adjusted to those of the observation data. However, the model output and the observation data are only one case (i.e., realization) out of many possibilities, rather than being sampled from the entire population of a certain distribution, due to internal climate variability. This issue has not been considered in the bias correction schemes of existing climate change studies. Here, three approaches are employed to explore it, with the intention of providing a practical tool for bias correction of daily rainfall for use in hydrologic models: (1) a conventional method, (2) a non-informative Bayesian method, and (3) an informative Bayesian method using Weather Generator (WG) data. The results show plausible uncertainty ranges of precipitation after correcting for the bias of RCM precipitation. The informative Bayesian approach shows an uncertainty range approximately 25-45% narrower than the non-informative Bayesian method after bias correction for the baseline period. This indicates that the prior distribution derived from the WG may assist in reducing the uncertainty associated with parameters. The implications of our results are of great importance in hydrological impact assessments of climate change because they relate to actions for mitigation of and adaptation to climate change. Since this is a proof-of-concept study that mainly illustrates the logic of the analysis for uncertainty-based bias correction, future research exploring the impacts of uncertainty on climate impact assessments, and how to utilize uncertainty while planning mitigation and adaptation strategies, is still needed.
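
    As a point of reference for the conventional branch of the comparison, empirical quantile mapping, a widely used bias-correction scheme, can be sketched as follows; the paper's Bayesian variants additionally propagate the distributional parameter uncertainty that this simple version ignores.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_future):
            """Map each model value to the observed value at the same
            quantile, using empirical quantiles from a calibration period."""
            quantiles = np.linspace(0.0, 1.0, 101)
            model_q = np.quantile(model_hist, quantiles)
            obs_q = np.quantile(obs_hist, quantiles)
            return np.interp(model_future, model_q, obs_q)

        rng = np.random.default_rng(3)
        obs = rng.gamma(2.0, 4.0, 3000)   # observed daily rainfall (mm)
        mod = rng.gamma(2.0, 3.0, 3000)   # biased model rainfall
        print(quantile_map(mod, obs, mod[:5]))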

  10. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing

    NARCIS (Netherlands)

    Ramamurthy, S.; D'Orsi, C.J.; Sechopoulos, I.

    2016-01-01

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360 degrees with a

  11. A New Hybrid Method to Correct for Wind Tunnel Wall- and Support Interference On-line

    NARCIS (Netherlands)

    Horsten, B.J.C.; Veldhuis, L.L.M.

    2009-01-01

    Because support interference corrections are not properly understood, engineers mostly rely on expensive dummy measurements or CFD calculations. This paper presents a method based on uncorrected wind tunnel measurements and fast calculation techniques (it is a hybrid method) to calculate wall and support interference corrections on-line.

  12. A Geometric Correction Method of Plane Image Based on OpenCV

    Directory of Open Access Journals (Sweden)

    Li Xiaopeng

    2014-02-01

    Full Text Available Using OpenCV, a geometric correction method for plane images from a single grid image, with the camera position unknown, is presented. The method can remove perspective and lens distortions from an image. The method is simple, easy to implement, and efficient. Experiments indicate that this method has high precision and can be used in domains such as plane measurement.
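
    A minimal sketch of such a perspective correction with OpenCV follows; the corner coordinates are hypothetical stand-ins for points that would, in practice, be detected from the grid image (e.g. with cv2.findChessboardCorners), and lens distortion removal is only indicated in a comment.

        import cv2
        import numpy as np

        img = np.full((480, 640, 3), 255, np.uint8)  # stand-in for the photo
        src = np.float32([[120, 95], [510, 80], [540, 410], [90, 430]])
        dst = np.float32([[0, 0], [480, 0], [480, 360], [0, 360]])

        H = cv2.getPerspectiveTransform(src, dst)    # 3x3 homography
        corrected = cv2.warpPerspective(img, H, (480, 360))

        # Lens (radial) distortion would additionally be removed with
        # cv2.undistort once the camera matrix and distortion coefficients
        # are known, e.g. from cv2.calibrateCamera.
        print(corrected.shape)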

  13. NLO Corrections to Hard Process in Parton Shower MC - KrkNLO Method

    CERN Document Server

    Jadach, S; Sapeta, S; Siódmok, A; Skrzypek, M

    2015-01-01

    A new method of combining an NLO-corrected hard process with an LO parton shower Monte Carlo, nicknamed KrkNLO, was proposed recently. It is simpler than the two other well-established methods: MC@NLO and POWHEG. In this contribution, we present some results of extensive numerical tests of the new method for single Z-boson production at hadron colliders and numerical comparisons with the two other methods as well as with NNLO calculations.

  14. Improvement of hydrological flood forecasting through an event based output correction method

    Science.gov (United States)

    Klotz, Daniel; Nachtnebel, Hans Peter

    2014-05-01

    This contribution presents an output correction method for hydrological models. A conceptualisation of the method is presented and tested in an alpine basin in Salzburg, Austria. The aim is to develop a method which is not prone to the drawbacks of autoregressive models. Output correction methods are an attractive option for improving hydrological predictions. They are complementary to the main modelling process and do not interfere with the modelling process itself. In general, output correction models estimate the future error of a prediction and use the estimate to improve the given prediction. Different estimation techniques are available depending on the utilized information and the estimation procedure itself. Autoregressive error models are widely used for such corrections. Autoregressive models with exogenous inputs (ARX) allow the use of additional information for the error modelling, e.g. measurements from upper basins or predicted input signals. Autoregressive models do, however, exhibit deficiencies, since the errors of hydrological models generally do not behave in an autoregressive manner. The decay of the error usually differs from an autoregressive function, and the residuals exhibit different patterns under different circumstances. As an example, one might consider different error-propagation behaviours under high-flow and low-flow situations or snowmelt-driven conditions. This contribution presents a conceptualisation of an event-based correction model and focuses on flood events only. The correction model uses information about the history of the residuals and exogenous variables to give an error estimate. The structure and parameters of the correction models can be adapted to given event classes. An event class is a set of flood events that exhibit a similar pattern for the residuals or the hydrological conditions. In total, four different event classes were identified in this study. Each of them represents a different…

  15. High-power passively mode-locked Nd:YVO(4) laser using SWCNT saturable absorber fabricated by dip coating method.

    Science.gov (United States)

    Tang, Chun Yin; Chai, Yang; Long, Hui; Tao, Lili; Zeng, Long Hui; Tsang, Yuen Hong; Zhang, Ling; Lin, Xuechun

    2015-02-23

    Passive mode-locked lasers are typically achieved with a semiconductor saturable absorber mirror (SESAM), which is produced by an expensive and complicated metal organic chemical vapor deposition method. The carbon-based single wall carbon nanotube (SWCNT) saturable absorber is a promising alternative, capable of producing stable passive mode-locking in high-power laser cavities over a wide operational wavelength range. This study successfully demonstrates a high-power mode-locked laser system operating at 1 micron using SWCNT-based absorbers fabricated by a dip coating method. The proposed fabrication method is practical, simple and cost effective for fabricating SWCNT saturable absorbers. The demonstrated high-power Nd:YVO(4) mode-locked laser operating at 1064 nm has a maximum output power of up to 2.7 W, with a 167 MHz repetition rate and 3.1 ps pulse duration. The calculated output pulse energy and peak power are 16.1 nJ and 5.2 kW, respectively.
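    As a quick consistency check, the quoted pulse energy and peak power follow directly from the average power, repetition rate and pulse duration:

```latex
E_p = \frac{P_{\mathrm{avg}}}{f_{\mathrm{rep}}}
    = \frac{2.7\,\mathrm{W}}{167\,\mathrm{MHz}} \approx 16.2\,\mathrm{nJ},
\qquad
P_{\mathrm{peak}} \approx \frac{E_p}{\tau}
    = \frac{16.1\,\mathrm{nJ}}{3.1\,\mathrm{ps}} \approx 5.2\,\mathrm{kW},
```

    consistent with the reported 16.1 nJ and 5.2 kW.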

  16. Correction Impulse Method for Turbo Decoding over Middleton Class-A Impulsive Noise

    Directory of Open Access Journals (Sweden)

    TRIFINA, L.

    2016-11-01

    The correction impulse method (CIM) is very effective at achieving low error rates in turbo decoding. It was applied for transmission over Additive White Gaussian Noise (AWGN) channels, where the correction impulse value must be a real number greater than the minimum distance of the turbo code. The original version of CIM cannot be used for channels modeled as Middleton additive white Class-A impulsive noise (MAWCAIN), because of the nonlinearity of the channel reliability. Thus, in this paper we propose two ways to modify the method so that it improves the system performance for such channels. In the first, the value of the correction impulse is chosen to maximize the channel reliability. It depends on the signal-to-noise ratio (SNR), and the error rates are significantly improved compared to those obtained using the correction impulse value applied for AWGN channels. The second version is based on the least squares method and performs an approximation of the correction impulse. The approximated value depends on the SNR and the parameter A of the MAWCAIN model. The differences between the error rates obtained by the two proposed methods are negligible.

  17. Evaluation of bias-correction methods for ensemble streamflow volume forecasts

    Directory of Open Access Journals (Sweden)

    T. Hashino

    2007-01-01

    Ensemble prediction systems are used operationally to make probabilistic streamflow forecasts for seasonal time scales. However, hydrological models used for ensemble streamflow prediction often have simulation biases that degrade forecast quality and limit the operational usefulness of the forecasts. This study evaluates three bias-correction methods for ensemble streamflow volume forecasts. All three adjust the ensemble traces using a transformation derived with simulated and observed flows from a historical simulation. The quality of probabilistic forecasts issued when using the three bias-correction methods is evaluated using a distributions-oriented verification approach. Comparisons are made of retrospective forecasts of monthly flow volumes for a north-central United States basin (Des Moines River, Iowa), issued sequentially for each month over a 48-year record. The results show that all three bias-correction methods significantly improve forecast quality by eliminating unconditional biases and enhancing the potential skill. Still, subtle differences in the attributes of the bias-corrected forecasts have important implications for their use in operational decision-making. Diagnostic verification distinguishes these attributes in a context meaningful for decision-making, providing criteria to choose among bias-correction methods with comparable skill.
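    The kind of transformation described here can be illustrated with empirical quantile mapping, one common way to derive a correction from paired simulated and observed flows. This is a generic sketch, not a reproduction of the paper's three methods; the gamma-distributed flows below are synthetic.

```python
import numpy as np

def make_quantile_map(simulated_hist, observed_hist):
    """Return a function mapping a simulated flow to a bias-corrected value."""
    sim_sorted = np.sort(simulated_hist)
    obs_sorted = np.sort(observed_hist)
    def correct(x):
        # Empirical non-exceedance probability of x under the simulation...
        p = np.searchsorted(sim_sorted, x) / len(sim_sorted)
        # ...mapped to the observed flow with the same probability.
        return np.quantile(obs_sorted, np.clip(p, 0.0, 1.0))
    return correct

rng = np.random.default_rng(0)
sim = rng.gamma(2.0, 60.0, size=500)   # biased-high simulated volumes
obs = rng.gamma(2.0, 50.0, size=500)   # observed volumes
qmap = make_quantile_map(sim, obs)
ensemble = np.array([80.0, 150.0, 240.0])            # raw ensemble traces
print([round(float(qmap(x)), 1) for x in ensemble])  # corrected traces
```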

  18. [Atmospheric correction method for HJ-1 CCD imagery over waters based on radiative transfer model].

    Science.gov (United States)

    Xu, Hua; Gu, Xing-Fa; Li, Zheng-Qiang; Li, Li; Chen, Xing-Feng

    2011-10-01

    Atmospheric correction is a bottleneck in the quantitative application of data from the Chinese HJ-1 satellites to remote sensing of water color. According to the characteristics of the CCD sensors, the present paper uses an air-water coupled radiative transfer model to build a look-up table (LUT) of atmospheric correction parameters, and on that basis develops a pixel-by-pixel atmospheric correction method over waters that retrieves the water-leaving remote sensing reflectance with auxiliary meteorological input. The paper validates the HJ-1 CCD retrievals against MODIS and in-situ results. The accuracy in the blue and green bands was found to be good, whereas the accuracy in the red and NIR bands is much worse. It was also demonstrated that the aerosol model is a sensitive factor in the atmospheric correction accuracy.

  19. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is successively improved by solving derived boundary value problems on a series of adaptively and successively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  20. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method. When estimated UCRs were lower for the group in the numerator (for example, NHW), these ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
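    Schematically, the two corrections can be written as follows (the covariates on the right-hand side are illustrative; the paper's regression models may differ):

```latex
\text{ratio-based: } C_{\mathrm{corr}} = \frac{C_{\mathrm{analyte}}}{\mathrm{UCR}},
\qquad
\text{model-based: } \log C_{\mathrm{analyte}}
 = \beta_0 + \beta_1 \log \mathrm{UCR} + \beta_2\,\mathrm{age}
 + \beta_3\,\mathrm{sex} + \beta_4\,\mathrm{race} + \varepsilon .
```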

  1. COMPARATIVE EVALUATION OF SURGICAL CORRECTION METHODS OF VESICO-URETERIC REFLUX IN CHILDREN

    Directory of Open Access Journals (Sweden)

    S. P. Yatsyk

    2014-01-01

    Background: To compare different surgical correction methods for vesico-ureteric reflux in children, covering both open surgery and endoluminal (intraluminal) treatment options for this pathology. Patients and methods: 166 patients aged from 4 months to 13 years were examined and treated. All children underwent X-ray urological examination through cystography and ultrasound examination of the kidneys and urocyst. Cohen's operation or the STING procedure (endoscopic injection of bulking agents) was performed. Treatment results were assessed 6 months later through control cystography. Conclusion: Endoscopic correction of vesico-ureteric reflux is an effective and minimally invasive treatment method for this pathology. The maximum treatment effect after application of biodegradable bulking agents should be expected in the younger age group. Treatment rates of endoluminal correction of vesico-ureteric reflux with the use of a sterile viscous gel as a fixing agent are higher than with the use of bovine collagen.

  2. Assessment of calculation methods for calcium carbonate saturation in drinking water for DIN 38404-10 compliance

    NARCIS (Netherlands)

    De Moel, P.J.; Van der Helm, A.W.C.; Van Rijn, M.; Van Dijk, J.C.; Van der Meer, W.G.J.

    2013-01-01

    The new German standard on the calculation of calcite saturation in drinking water, DIN 38404-10, 2012 (DIN), marks a change in drinking water standardization from using simplified equations applicable for nomographs and simple calculators to using extensive chemical modeling requiring computer

  3. Output power PDF of a saturated semiconductor optical amplifier: Second-order noise contributions by path integral method

    DEFF Research Database (Denmark)

    Öhman, Filip; Mørk, Jesper; Tromborg, Bjarne

    2007-01-01

    We have developed a second-order small-signal model for describing the nonlinear redistribution of noise in a saturated semiconductor optical amplifier. In this paper, the details of the model are presented. A numerical example is used to compare the model to statistical simulations. We show...

  4. Reanalysis of Agelietti Procedure (A Method of Corrective Supracondylar Femoral Osteotomy

    Directory of Open Access Journals (Sweden)

    Sharat Agarwal

    2012-04-01

    Objective: Supracondylar femoral osteotomy is the time-tested method used for correcting angular (varus and valgus) deformities at the knee. Traditionally, a Coventry-type osteotomy, where a medially or laterally based wedge of bone is removed, or an open wedge osteotomy is made and the space filled with bone graft, is done to achieve the desired correction. This osteotomy is subsequently stabilized with Kirschner wires or plates and screws, and the limb is then externally supported in a brace or plaster cast. Here we present a series of 10 cases in which we analyzed the efficacy of the Aglietti procedure as a method of supracondylar femoral osteotomy for correcting valgus deformity at the knee. Methods: Ten valgus adolescent knees in 7 patients were operated on following the Aglietti procedure for correcting the angular deformity at the knee. The results were analyzed taking into consideration the operating time, blood loss during surgery (estimated by the number of surgical mops used), stability of the osteotomy in the post-operative period, and the ultimate range of motion (ROM) obtained 6 months after surgery. Results: The average age of the patients was 12.6 years (n=7), with females predominating (n=5 against 2 males). The average operating time was 47.5 minutes. The surgical mops used measured 15x20 cm, and an average of 1.6 mops was used per patient. The average range of flexion achieved 6 months after surgery was 131.45 degrees (rounded to 131 degrees). Conclusion: In our case series we found the Aglietti procedure to be an effective method of correcting valgus deformity in adolescent knees. Supracondylar femoral osteotomies are not only for varus and valgus corrections; this osteotomy is also used for rotational correction and for flexion and extension correction, mainly in CP patients. But we used the Aglietti procedure for the correction of angular deformities (varus/valgus) in patients of

  5. Characteristic of methods for prevention and correction of moral of alienation of students

    Directory of Open Access Journals (Sweden)

    Z. K. Malieva

    2014-01-01

    Moral alienation is a complex integrative phenomenon characterized by an individual's rejection of the universal spiritual and moral values of society. The last opportunity to find a purposeful, competent solution to the problem of an individual's moral alienation lies in the space of professional education. The subject of this article is to identify methods for the prevention and correction of moral alienation of students that can be used by teachers both in extracurricular activities and in conducting classes in humanitarian disciplines. The purpose of the work is to study methods and techniques that enhance the effectiveness of the prevention and correction of moral alienation of students, and to identify their characteristics and application in the educational activities of teachers. The paper concretizes a definition of methods to prevent and correct the moral alienation of students, which represent a system of interrelated actions of educator and students aimed at: redefining negative values, rules and norms of behavior; and overcoming negative mental states, negative attitudes, interests and aptitudes of educatees. The article distinguishes and characterizes the most effective methods for the prevention and correction of moral alienation of students: conviction; the method of "Socrates"; understanding; semiotic analysis; suggestion; and the method of "explosion." It also presents the rules and necessary conditions for the application of these methods in the educational process. It is ascertained that the choice of effective preventive and corrective methods and techniques is determined by the content of the intrapersonal, psychological sources of moral alienation associated with the following: negative attitudes due to previous experience; orientation to one or another negative value; inadequate self-esteem, having a negative impact on the development and functioning of the individual's psyche and behavior; and mental states. The conclusions of the

  6. A general method for cupping artifact correction of cone-beam breast computed tomography images.

    Science.gov (United States)

    Qu, Xiaolei; Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Shaw, Chris C

    2016-07-01

    Cone-beam breast computed tomography (CBBCT), a promising breast cancer diagnostic technique, has been under investigation for the past decade. However, owing to scattered radiation and beam hardening, CT numbers are not uniform on CBBCT images. This is known as the cupping artifact, and it presents an obstacle for threshold-based volume segmentation. In this study, we proposed a general post-reconstruction method for cupping artifact correction. There were four steps in the proposed method. First, three types of local-region histogram peaks were calculated: adipose peaks with low CT numbers, glandular peaks with high CT numbers, and unidentified peaks. Second, a linear discriminant analysis classifier, trained on the identified adipose and glandular peaks, was employed to classify the unidentified peaks as adipose or glandular. Third, the adipose background signal profile was fitted to the adipose peaks using the least squares method. Finally, the adipose background signal profile was subtracted from the original image to obtain the cupping-corrected image. In the experimental study, the standard deviation of adipose tissue CT numbers was clearly reduced and the CT numbers were more uniform after cupping correction by the proposed method; in the simulation study, root-mean-square errors were significantly reduced for both symmetric and asymmetric cupping artifacts, indicating that the proposed method is effective for both. A general method without a circular-symmetry assumption was thus proposed to correct cupping artifacts in CBBCT breast images. It may be properly applied to images of real patient breasts with natural pendant geometry.
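    The four steps translate naturally into a short script. The sketch below is schematic: the peak coordinates, the three-feature LDA input and the quadratic background surface are assumptions chosen for illustration, and the local-histogram peak detection of step one is taken as already done.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_background(xy, values, shape):
    """Least-squares fit of a 2-D quadratic background surface."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    gy, gx = np.mgrid[0:shape[0], 0:shape[1]]
    G = np.column_stack([np.ones(gx.size), gx.ravel(), gy.ravel(),
                         (gx * gy).ravel(), (gx**2).ravel(), (gy**2).ravel()])
    return (G @ coef).reshape(shape)

# Step 1 (assumed done): local-region histogram peaks as (x, y, CT number).
adipose = np.array([[10, 10, -80.], [40, 12, -60.], [15, 45, -65.],
                    [55, 50, -55.], [30, 58, -62.]])
glandular = np.array([[30, 30, 60.], [50, 40, 80.], [20, 25, 70.]])
unknown = np.array([[25, 20, -70.], [45, 35, 75.]])

# Step 2: classify unidentified peaks with an LDA trained on known peaks.
X = np.vstack([adipose, glandular])
labels = np.array([0] * len(adipose) + [1] * len(glandular))
lda = LinearDiscriminantAnalysis().fit(X, labels)
new_adipose = unknown[lda.predict(unknown) == 0]

# Steps 3-4: fit the adipose background profile and subtract it.
peaks = np.vstack([adipose, new_adipose])
image = np.zeros((64, 64))  # placeholder reconstructed CBBCT slice
background = fit_background(peaks[:, :2], peaks[:, 2], image.shape)
corrected = image - background
```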

  7. A simple and robust method for artifacts correction on X-ray microtomography images

    Science.gov (United States)

    Timofey, Sizonenko; Marina, Karsanina; Dina, Gilyazetdinova; Irina, Bayuk; Kirill, Gerke

    2017-04-01

    X-ray microtomography images of rock material often exhibit several kinds of distortion, for reasons such as X-ray attenuation, beam hardening, and irregularity in the distribution of liquid/solid phases. Further distortions can arise from image processing and from stitching together images from different measurements. Beam hardening is a well-known and well-studied distortion which is relatively easy to describe, fit and correct using a number of equations. However, this is not the case for other grey-scale intensity distortions: shading caused by an irregular distribution of liquid phases, incorrect scanner operation or parameter choices, and numerous artefacts from mathematical reconstruction from projections, including stitching of separate scans, cannot be described by a single mathematical model. To correct grey-scale intensities on large 3D images we developed a software package. The traditional method for removing beam hardening [1] has been modified to locate the center of distortion. The main contribution of this work is the development of a method for arbitrary image correction. This method is based on fitting the distortion with Bezier curves using the image histogram. The distortion along the image is represented by a number of Bezier curves and one base line that characterizes the natural distribution of grey values along the image; all of these curves are set manually by the operator. We have tested our approach on different X-ray microtomography images of porous media. The arbitrary correction removes all principal distortions. After correction, the images were binarized and pore networks subsequently extracted; an equal distribution of pore-network elements along the image was the criterion used to verify the proposed grey-scale intensity correction technique. [1] Iassonov, P. and Tuller, M., 2010. Application of segmentation for correction of intensity bias in X-ray computed tomography images. Vadose Zone Journal, 9(1), pp.187-191.

  8. A New Non-Linearity Correction Method for the JWST Near-Infrared Camera

    Science.gov (United States)

    Canipe, Alicia Michelle; Robberto, Massimo; Hilbert, Bryan

    2017-06-01

    JWST infrared detectors have an intrinsic non-linearity due to the change in PN junction capacitance as charge accumulates in the individual pixel capacitors. Correction of this non-linearity is a fundamental step in the JWST Science Calibration Pipeline. I evaluate a proposed method to calculate a more accurate non-linearity correction for the Near-Infrared Camera (NIRCam) using a function of the ideal linear signal count rate. This algorithm allows the reconstruction of the true linear signal to within 0.2% over ~97% of the full dynamic range, a substantial improvement over the current correction strategy adopted, for example, for the Wide Field Camera 3 infrared channel on Hubble. Using this method, I demonstrate that the coefficients derived to correct a regular ramp (i.e., a sequence of non-destructive samples) are also adequate to reconstruct the true signal in the case of grouped (averaged) samples, characteristic of JWST observations. The robustness of the method is tested using both real data and simulated ramps with different count rates. The new algorithm consistently provides highly accurate non-linearity corrections and can successfully be applied to all 10 NIRCam detectors.
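    The essence of the approach, fitting the measured signal as a function of the ideal linear signal and inverting that fit on science data, can be sketched as follows. The ramp, the toy non-linearity and the polynomial order are fabricated for the example and are not NIRCam calibration values.

```python
import numpy as np

# Calibration: assume the true linear ramp is known (e.g., extrapolated
# from the initial, effectively linear slope of a long ramp).
t = np.arange(1, 101, dtype=float)            # sample times (arbitrary)
linear = 300.0 * t                             # ideal linear signal (e-)
measured = linear * (1.0 - 4e-7 * linear)      # toy capacitance non-linearity

# Fit "measured as a polynomial of the linear signal", the natural form
# when the correction is expressed as a function of the ideal count rate.
coef = np.polyfit(linear, measured, deg=3)

# Correct science data by inverting the fitted curve on a dense grid.
grid = np.linspace(0.0, linear.max(), 4096)
meas_on_grid = np.polyval(coef, grid)
def linearize(counts):
    return np.interp(counts, meas_on_grid, grid)

print(linearize(measured[[9, 49, 99]]))   # ~ [3000, 15000, 30000]
```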

  9. Correction of lower limb deformities in children with renal osteodystrophy by the Ilizarov method.

    Science.gov (United States)

    Bar-On, Elhanan; Horesh, Zvi; Katz, Kalman; Weigl, Daniel Martin; Becker, Tali; Cleper, Rosana; Krause, Irit; Davidovits, Miriam

    2008-01-01

    Children with renal osteodystrophy (ROD) may develop severe angular deformities of the limbs. Various methods, both medical and surgical, have been described for correction of these deformities, but a literature search showed only 1 child previously treated by the Ilizarov method. The purpose of this study was to characterize the deformities found in our group of patients and to describe our experience in treating these patients with the Ilizarov method. Correction of angular deformity by the Ilizarov method was performed on 8 limb segments in 5 patients with ROD. Mean age was 14.9 years. Two patients were on hemodialysis, and 3 had functioning kidney grafts. Surgery was deferred until stabilization of metabolic parameters. There were 1 varus and 7 valgus deformities. Preoperative coronal deformity averaged 29 degrees (18-38 degrees). The Ilizarov apparatus was used in all cases. Correction time averaged 23 days (20-28 days). The time from completion of correction to frame removal averaged 71 days (48-113 days). There were no changes in metabolic parameters or frequency of hemodialysis throughout the treatment. Restoration of a normal mechanical axis was achieved in 4 of the 5 patients; one case failed due to intraarticular instability. There were no major complications. Minor complications included pin tract infections, which responded to antibiotic treatment, and premature consolidation in 1 case. Follow-up averaged 6.5 years (1-10 years). The alignment obtained at surgery was maintained in all 4 patients, and they are functional and symptom-free. The patient in whom the surgery failed remains wheelchair-bound. The Ilizarov method was found to be safe and effective for correction of malalignment due to ROD. Optimization of metabolic parameters is essential before surgery and throughout correction. The procedure is contraindicated in patients with significant intraarticular knee pathology.

  10. Comparing bias correction methods for high-resolution COSMO-CLM daily precipitation fields

    Science.gov (United States)

    Gutjahr, O.; Heinemann, G.

    2012-04-01

    Regional climate models (RCMs) are approaching the 1 km scale. This is necessary, since impact models, like hydrological or species distribution models, forced with the output of RCMs need input data at this high resolution in order to capture adequately the behaviour of the system at small scales and the extreme statistics. However, RCMs are still subject to systematic biases when compared to observations. Precipitation especially is often affected by large and non-linear biases. Since extreme values are critical to any impact model, special care must be taken with the tails of the distributions. Within the "Global-Change" project of the Research Initiative Rhineland-Palatinate (http://www.uni-trier.de/index.php?id=40193&L=2), a new parametric bias correction method has been developed which includes an extension for extreme values. Daily precipitation fields from COSMO-CLM (version 4.8.11) model output for the time periods 1991-2000 and 2091-2100 were then bias corrected. This new method is compared to existing parametric and non-parametric methods in order to answer the question whether an extension with an extreme value distribution for the tail is necessary. Additionally, the effect of the bias correction on the climate signal is investigated, which should be the same after the corrections. As observations, 128 precipitation stations (DWD/LUWG) were used. Both parametric bias correction methods are able to correct the precipitation fields and are thus valid replacements for the empirical method, but the extension with an extreme value distribution is an improvement, especially concerning estimated return values, which were underestimated in the uncorrected model and did not show any similarity to observations. Without an extension for extreme values, the pattern of the climate change signal deviates largely from the original and reveals another source of uncertainty. The comparison of the methods demonstrates the importance of special treatment of the
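    A generic parametric quantile mapping with an extreme-value tail, in the spirit of the comparison above, can be built from a gamma distribution for the bulk and a generalized Pareto distribution above a threshold. The construction below is one plausible variant, not the project's exact formulation; the precipitation samples and the 95% threshold are synthetic choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
obs = rng.gamma(0.9, 8.0, size=3000)      # observed daily precipitation
mod = rng.gamma(0.9, 11.0, size=3000)     # biased model precipitation

def fit_mixed(x, q_thresh=0.95):
    """Gamma for the bulk, generalized Pareto above the q_thresh quantile."""
    u = np.quantile(x, q_thresh)
    gam = stats.gamma.fit(x, floc=0)
    gpd = stats.genpareto.fit(x[x > u] - u, floc=0)
    return gam, gpd, u, q_thresh

def cdf_mixed(x, params):
    gam, gpd, u, q = params
    x = np.asarray(x, dtype=float)
    out = stats.gamma.cdf(x, *gam)
    tail = x > u
    out[tail] = q + (1 - q) * stats.genpareto.cdf(x[tail] - u, *gpd)
    return out

def ppf_mixed(p, params):
    gam, gpd, u, q = params
    p = np.asarray(p, dtype=float)
    out = stats.gamma.ppf(p, *gam)
    tail = p > q
    out[tail] = u + stats.genpareto.ppf((p[tail] - q) / (1 - q), *gpd)
    return out

# Map each model value through its own CDF into the observed distribution.
corrected = ppf_mixed(cdf_mixed(mod, fit_mixed(mod)), fit_mixed(obs))
```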

  11. Correction of the heat loss method for calculating clothing real evaporative resistance.

    Science.gov (United States)

    Wang, Faming; Zhang, Chengjiao; Lu, Yehu

    2015-08-01

    In the so-called isothermal condition (i.e., Tair [air temperature] = Tmanikin [manikin temperature] = Tr [radiant temperature]), the actual energy used for moisture evaporation detected by most sweating manikins is underestimated owing to the uncontrolled fabric 'skin' temperature Tsk,f (i.e., Tsk,f falling below the manikin surface temperature), which introduces errors into the calculated clothing real evaporative resistance. In this study, a correction for the real evaporative heat loss from the wet fabric 'skin'-clothing system was proposed and experimentally validated on a 'Newton' sweating manikin. The real evaporative resistance of five clothing ensembles and of the nude fabric 'skin', calculated by the corrected heat loss method, is also reported and compared with that from the mass loss method. Results revealed that, depending on the type of clothing tested, different amounts of heat were drawn from the ambient environment. In general, a greater amount of heat was drawn from the ambient environment by the wet fabric 'skin'-clothing system in lower thermal insulation clothing than in higher insulation clothing. There were no significant differences between clothing real evaporative resistances calculated by the corrected heat loss method and those by the mass loss method. It was therefore concluded that the correction method proposed in this study has been successfully validated. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Comparatively Studied Color Correction Methods for Color Calibration of Automated Microscopy Complex of Biomedical Specimens

    Directory of Open Access Journals (Sweden)

    T. A. Kravtsova

    2016-01-01

    The paper considers the task of generating the requirements for, and creating, a calibration target for automated microscopy systems (AMS) for biomedical specimens, to provide invariance of algorithms and software to the hardware configuration. The required number of color fields of the calibration target and their color coordinates are mostly determined by the color correction method, for which the coefficients of the equations are estimated during the calibration process. The paper analyses existing color calibration techniques for digital imaging systems using an optical microscope and shows that there is a lack of published comparative studies demonstrating a particularly useful color correction method for microscopic images. A comparative study of ten image color correction methods in RGB space, using polynomials and combinations of color coordinates of different orders, was carried out. The method of conditioned least squares was applied to estimate the coefficients in the color correction equations, using captured images of 217 color fields of the calibration target Kodak Q60-E3; the regularization parameter in this method was chosen experimentally. It was demonstrated that the best color correction quality is provided by the method that uses a combination of color coordinates of the 3rd order. The influence of the number and the set of color fields included in the calibration target on color correction quality for microscopic images was also studied. Six training sets containing 30, 35, 40, 50, 60 and 80 color fields, and a test set of 47 color fields not included in any of the training sets, were formed. It was found that the training set of 60 color fields minimizes the color correction error for both operating modes of the digital camera: with "default" color settings and with automatic white balance. At the same time it was established that the use of color fields from the widely used Kodak Q60-E3 target does not
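    The core of such polynomial color correction is an (optionally regularized) least-squares fit from captured to reference color coordinates. The sketch below uses a 2nd-order term set for brevity (the study found 3rd-order combinations best) and a 60-field training set, with synthetic colors standing in for real calibration data; `lam` plays the role of the experimentally chosen regularization parameter.

```python
import numpy as np

def poly_features(rgb):
    """Polynomial feature expansion of RGB values, 2nd order for brevity."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * g, r * b, g * b, r**2, g**2, b**2])

def fit_color_correction(captured, reference, lam=1e-3):
    """Ridge-regularized least squares mapping features -> reference RGB."""
    F = poly_features(captured)
    A = F.T @ F + lam * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ reference)

def apply_color_correction(rgb, M):
    return poly_features(rgb) @ M

rng = np.random.default_rng(2)
captured = rng.uniform(0, 1, size=(60, 3))          # 60 calibration fields
reference = np.clip(captured ** 1.1 + 0.02, 0, 1)   # toy target coordinates
M = fit_color_correction(captured, reference)
print(apply_color_correction(captured[:2], M))
```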

  13. Assessment of Atmospheric Correction Methods for Sentinel-2 MSI Images Applied to Amazon Floodplain Lakes

    Directory of Open Access Journals (Sweden)

    Vitor Souza Martins

    2017-03-01

    Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (R_W). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on a linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (R_W < 1%), and a better match of the R_W shape with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, R_W was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in R_W (RMSE < 0.006). Finally, an extensive validation of the methods is required for

  14. A novel energy conversion based method for velocity correction in molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Hanhui [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Collaborative Innovation Center of Advanced Aero-Engine, Hangzhou 310027 (China); Liu, Ningning [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Ku, Xiaoke, E-mail: xiaokeku@zju.edu.cn [School of Aeronautics and Astronautics, Zhejiang University, Hangzhou 310027 (China); Fan, Jianren [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2017-05-01

    Molecular dynamics (MD) simulation has become an important tool for studying micro- or nano-scale dynamics and the statistical properties of fluids and solids. In MD simulations, there are mainly two approaches: equilibrium and non-equilibrium molecular dynamics (EMD and NEMD). In this paper, a new energy conversion based correction (ECBC) method for MD is developed. Unlike traditional systematic corrections based on macroscopic parameters, the ECBC method is developed strictly from the physical interaction processes between pairs of molecules or atoms. The ECBC method can be applied directly to both EMD and NEMD. When using MD with this method, the difference between EMD and NEMD is eliminated, and no macroscopic parameters such as externally imposed potentials or coefficients are needed. This method removes many limitations of MD and greatly extends its application scope.

  15. A systematic evaluation of contemporary impurity correction methods in ITS-90 aluminium fixed point cells

    Science.gov (United States)

    da Silva, Rodrigo; Pearce, Jonathan V.; Machin, Graham

    2017-06-01

    The fixed points of the International Temperature Scale of 1990 (ITS-90) are the basis of the calibration of standard platinum resistance thermometers (SPRTs). Impurities in the fixed point material at the level of parts per million can give rise to an elevation or depression of the fixed point temperature of the order of millikelvins, which often represents the most significant contribution to the uncertainty of SPRT calibrations. A number of methods for correcting for the effect of impurities have been advocated, but it is becoming increasingly evident that no single method can be used in isolation. In this investigation, a suite of five aluminium fixed point cells (defined ITS-90 freezing temperature 660.323 °C) was constructed, each cell using metal sourced from a different supplier, so that the five cells have very different levels and types of impurities. For each cell, chemical assays based on the glow discharge mass spectrometry (GDMS) technique were obtained from three separate laboratories. In addition, a series of high-quality, long-duration freezing curves was obtained for each cell using three different high-quality SPRTs, all measured under nominally identical conditions. The set of GDMS analyses and freezing curves was then used to compare the different proposed impurity correction methods. It was found that the most consistent corrections were obtained with a hybrid correction method based on the sum of individual estimates (SIE) and the overall maximum estimate (OME), namely the SIE/Modified-OME method. Also highly consistent was the correction technique based on fitting a Scheil solidification model to the measured freezing curves, provided certain well-defined constraints are applied. Importantly, the most consistent methods are those which do not depend significantly on the chemical assay.

  16. Awareness and Attitude toward Refractive Error Correction Methods: A Population Based Study in Mashhad

    Directory of Open Access Journals (Sweden)

    Saber Moghaddam Ranjbar AK

    2013-10-01

    Objectives: This study was designed to determine the level of awareness of, and attitude toward, refractive correction methods in a randomly selected population in Mashhad, Iran. Materials and Methods: A random cluster sampling method was applied to choose 193 subjects aged 12 years and above from the Mashhad population. A structured questionnaire with open-ended and closed-ended questions was designed to gather the participants' demographic data, such as gender, age, educational status and occupation, as well as their awareness of and attitude toward refractive correction methods (spectacles, contact lenses and refractive surgery). Results: Overall, 39% of the participants had a clear perception of the terms 'ophthalmologist' and 'optometrist'. 80.3%, 87% and 71% of respondents were unaware of contact lenses as an alternative to spectacles, of cosmetic contact lenses, and of contact lenses with both refractive correction and cosmetic properties, respectively. 82.5% of participants were not aware of the possibility of refractive surgery for improving their eyesight and decreasing their dependency on spectacles. Awareness of the adverse effects of contact lenses and refractive surgery was only 16% and 8%, respectively. Conclusion: Awareness and perception of refractive correction methods were low among the participants of this study. Although ophthalmologists were the first source of consultation on sight impairments among respondents, a predominant percentage of subjects were not even aware of obvious differences between an ophthalmologist and an optometrist. These findings emphasize the necessity for proper public education on ophthalmic care and the available services, especially the new correction methods, for improvement of quality of life.

  17. An FFT-based Method for Attenuation Correction in Fluorescence Confocal Microscopy

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Bakker, M.

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct for these

  18. An FFT-based method for attenuation correction in fluorescence confocal microscopy

    NARCIS (Netherlands)

    J.B.T.M. Roerdink (Jos); M. Bakker (Miente)

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct

  19. Photoproduction of W Bosons at HERA: Reweighting Method for Implementing QCD Corrections in Monte Carlo Programs

    CERN Document Server

    Diener, Kai-Peer O.; Schwanenberger, Christian; Spira, Michael

    2003-01-01

    A procedure of implementing QCD corrections in Monte Carlo programs by a reweighting method is described for the photoproduction of W bosons at HERA. Tables for W boson production in LO and NLO are given in bins of the transverse momentum of the W boson and its rapidity.
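    The reweighting itself is simple once the binned LO and NLO cross sections are available: each LO Monte Carlo event receives the NLO/LO ratio of its (pT, rapidity) bin as a weight. The sketch below uses invented bin edges and cross-section values; the actual tables would come from the publication.

```python
import numpy as np

pt_edges = np.array([0., 10., 25., 50., 100.])   # GeV, illustrative binning
y_edges = np.array([-2., -1., 0., 1., 2.])

# Hypothetical binned cross sections, shape (n_pt_bins, n_y_bins).
sigma_lo = np.full((4, 4), 1.0)
sigma_nlo = sigma_lo * np.array([[1.3], [1.2], [1.15], [1.1]])

def nlo_weight(pt, y):
    """NLO/LO weight for events with W transverse momentum pt and rapidity y."""
    i = np.clip(np.digitize(pt, pt_edges) - 1, 0, 3)
    j = np.clip(np.digitize(y, y_edges) - 1, 0, 3)
    return sigma_nlo[i, j] / sigma_lo[i, j]

# Apply to LO Monte Carlo events:
events_pt = np.array([5.0, 30.0, 80.0])
events_y = np.array([0.5, -1.5, 0.1])
print(nlo_weight(events_pt, events_y))   # -> [1.3, 1.15, 1.1]
```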

  20. Error Correction Method for Wind Speed Measured with Doppler Wind LIDAR at Low Altitude

    Science.gov (United States)

    Liu, Bingyi; Feng, Changzhong; Liu, Zhishen

    2014-11-01

    For the purpose of obtaining global vertical wind profiles, the Atmospheric Dynamics Mission Aeolus of the European Space Agency (ESA), carrying the first spaceborne Doppler lidar, ALADIN (Atmospheric LAser Doppler INstrument), is going to be launched in 2015. DLR (German Aerospace Center) developed the A2D (ALADIN Airborne Demonstrator) for prelaunch validation. A ground-based wind lidar for wind profile and wind field scanning measurements, developed by the Ocean University of China, is going to be used for ground-based validation after the launch of Aeolus. In order to provide validation data with higher accuracy, an error correction method is investigated to improve the accuracy of low-altitude wind data measured with a Doppler lidar based on an iodine absorption filter. The error due to nonlinear wind sensitivity is corrected, and the method for merging the atmospheric return signal is improved. The correction method is validated by synchronous wind measurements with lidar and radiosonde. The results show that the accuracy of wind data measured with Doppler lidar at low altitude can be improved by the proposed error correction method.

  1. A correction function method for the wave equation with interface jump conditions

    Science.gov (United States)

    Abraham, David S.; Marques, Alexandre Noll; Nave, Jean-Christophe

    2018-01-01

    In this paper a novel method to solve the constant coefficient wave equation, subject to interface jump conditions, is presented. In general, such problems pose issues for standard finite difference solvers, as the inherent discontinuity in the solution results in erroneous derivative information wherever the stencils straddle the given interface. Here, however, the recently proposed Correction Function Method (CFM) is used, in which correction terms are computed from the interface conditions, and added to affected nodes to compensate for the discontinuity. In contrast to existing methods, these corrections are not simply defined at affected nodes, but rather generalized to a continuous function within a small region surrounding the interface. As a result, the correction function may be defined in terms of its own governing partial differential equation (PDE) which may be solved, in principle, to arbitrary order of accuracy. The resulting scheme is not only arbitrarily high order, but also robust, having already seen application to Poisson problems and the heat equation. By extending the CFM to this new class of PDEs, the treatment of wave interface discontinuities in homogeneous media becomes possible. This allows, for example, for the straightforward treatment of infinitesimal source terms and sharp boundaries, free of staircasing errors. Additionally, new modifications to the CFM are derived, allowing compatibility with explicit multi-step methods, such as Runge-Kutta (RK4), without a reduction in accuracy. These results are then verified through numerous numerical experiments in one and two spatial dimensions.

  2. A brain MRI bias field correction method created in the Gaussian multi-scale space

    Science.gov (United States)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before corrupted MR images are submitted to image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolution of the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by a weighted sum of the image details in each layer of the space. Finally, the bias-field-corrected MR image is retrieved after a gamma correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
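    The multi-scale construction can be sketched in a few lines: details are the differences between the image and Gaussian-smoothed copies at several scales, summed with weights and followed by a gamma correction. The scales, weights and gamma below are arbitrary illustrative choices, not the paper's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bias_correct(img, sigmas=(2, 4, 8, 16), weights=(0.4, 0.3, 0.2, 0.1),
                 gamma=0.8):
    """Weighted multi-scale detail extraction followed by gamma correction."""
    img = img.astype(float)
    detail = np.zeros_like(img)
    for s, w in zip(sigmas, weights):
        # Detail at scale s: image minus its Gaussian-smoothed copy.
        detail += w * (img - gaussian_filter(img, sigma=s))
    detail -= detail.min()
    detail /= detail.max() + 1e-12            # normalize to [0, 1]
    return detail ** gamma                     # contrast/brightness boost

corrected = bias_correct(np.random.default_rng(3).random((128, 128)))
```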

  3. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years, using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to

  4. A pertinent analytic method to correctly measure contributions to growth in gross domestic product

    Directory of Open Access Journals (Sweden)

    Brunet Antoine

    2009-01-01

    In this paper, Antoine Brunet questions the OECD method of calculating contributions to GDP growth. He argues that this method leads users to seriously misjudge the contribution of the external trade balance to GDP growth, and shows there is an alternative, the AB method, which is mathematically as correct as the OECD one. This method is much more pertinent and allows users to distinguish between two kinds of countries: on the one hand, mercantilist countries and, on the other, non-mercantilist countries.

  5. A power supply error correction method for single-ended digital audio class D amplifiers

    Science.gov (United States)

    Yu, Zeqi; Wang, Fengqin; Fan, Yangyu

    2016-12-01

    In single-ended digital audio class D amplifiers (CDAs), the errors caused by power supply noise in the power stages seriously degrade the output performance. In this article, a novel power supply error correction method is proposed. This method introduces the power supply noise of the power stage into the digital signal processing block and builds a power supply error corrector between the interpolation filter and the uniform-sampling pulse width modulation (UPWM) lineariser to pre-correct the power supply error in the single-ended digital audio CDA. The theoretical analysis and implementation of the method are also presented. To verify its effectiveness, a two-channel single-ended digital audio CDA with different power supply error correction methods was designed, simulated, implemented and tested. The simulation and test results show that the method can greatly reduce the error caused by power supply noise at low hardware cost, and that a CDA using the proposed method can achieve a total harmonic distortion + noise (THD + N) of 0.058% for a -3 dBFS, 1 kHz input when a 55 V linear unregulated direct current (DC) power supply (with -51 dBFS, 100 Hz power supply noise) is used in the power stages.
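    The underlying principle can be illustrated simply: a class D output stage's amplitude scales with its supply voltage, so dividing each digital sample by the normalized measured supply cancels the ripple before modulation. This is a simplification of the corrector described above (which sits between the interpolation filter and the UPWM lineariser), with invented signal and ripple values.

```python
import numpy as np

V_NOM = 55.0                      # nominal supply voltage (V)

def precorrect(samples, v_measured):
    """Scale each sample by V_nom / V_meas so supply ripple cancels."""
    return samples * (V_NOM / v_measured)

n = np.arange(8)
signal = 0.5 * np.sin(2 * np.pi * n / 8)          # audio samples
supply = V_NOM + 0.5 * np.sin(2 * np.pi * n / 4)  # rippling supply
pwm_in = precorrect(signal, supply)
# The output is proportional to pwm_in * supply / V_NOM, i.e. the
# original signal with the ripple removed.
print(np.allclose(pwm_in * supply / V_NOM, signal))
```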

  6. Generation of transport lattice code KARMA library with doppler broadening rejection correction method

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin; Cho, Jin Young; Park, Sang Yoon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Hong, Ser Gi [Kyung Hee Univ., Yongin (Korea, Republic of); Kim, Kang Seog [Oak Ridge National Laboratory, Tennessee (United States)

    2012-10-15

    In order to solve the exact neutron transport equations, temperature-dependent neutron cross sections, including the scattering kernel, are needed. However, current cross section generation systems such as NJOY do not generate temperature-dependent scattering kernels. In Monte Carlo (MC) codes, the target-nucleus velocity sampling method is used to determine the energy and direction of the outgoing neutron under the approximate constant cross section model. Recently, the Doppler-broadening rejection correction (DBRC) and weight correction methods were proposed for exact sampling. In this study, the KARMA (Kernel Analyzer by Ray tracing Method for fuel Assembly) library system incorporating McCARD calculations with the DBRC method is established, and the effect of the improved Doppler treatment is examined.
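    The DBRC idea, as commonly described, is an extra rejection step: after the standard constant-cross-section target-velocity sampling, the candidate is accepted with probability proportional to the 0 K cross section at the relative speed. The sketch below is a heavily simplified 1-D toy with an invented resonance shape, intended only to show the acceptance logic.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigma_0k(v_rel):
    """Toy 0 K resonance cross section as a function of relative speed."""
    return 1.0 + 50.0 / (1.0 + ((v_rel - 3.0) / 0.2) ** 2)

def sample_target_speed_dbrc(v_n, kT_over_m=1.0):
    """Sample a (1-D) target speed with the extra DBRC rejection step."""
    v_th = np.sqrt(kT_over_m)
    # Maximum 0 K cross section over the reachable relative-speed range.
    sigma_max = sigma_0k(np.linspace(0.0, abs(v_n) + 4 * v_th, 1000)).max()
    while True:
        v_t = rng.normal(0.0, v_th)          # constant-XS (Maxwellian) draw
        if rng.random() < sigma_0k(abs(v_n - v_t)) / sigma_max:
            return v_t                        # DBRC acceptance

print(sample_target_speed_dbrc(v_n=3.0))
```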

  7. The sequential value correction method for the two-dimensional irregular cutting stock problem

    OpenAIRE

    Verkhoturov M.A.; Sergeyeva O.Y.

    2000-01-01

    This paper addresses the two-dimensional irregular cutting stock problem (ICSP), in which the pieces to be cut out may be of any shape. The sequential value correction method has been developed to solve this problem. This method is based on dual values (variables), an essential concept of linear programming, and we suggest a technique of value calculation for such pieces. The algorithms are included. We also describe a computing experiment whose results are the evidence of the...

  8. A scatter correction method for dual-energy digital mammography: Monte Carlo simulation.

    Science.gov (United States)

    Ai, Kai; Gao, Yanhua; Yu, Gang

    2014-01-01

    To develop a novel scatter correction method, requiring no additional patient dose, for dual-energy digital mammography (DEDM), in order to reduce the impact of scatter and enhance microcalcification detectability in the dual-energy X-ray subtraction image. Noting that scatter radiation is a low-spatial-frequency component and that calcifications are sparsely distributed in digital mammograms, we developed a new scatter correction strategy. First, an adaptive sampling scheme is presented to find possible non-calcification (zero-calcification) pixels. Then the maximum likelihood expectation maximization (MLEM) algorithm is applied to evaluate an initial scatter surface. The accurate scatter radiation at the sampling pixels is obtained by solving the dual-energy computational formula under the zero-calcification and scatter surface constraints. After scatter correction, the scatter-to-primary ratio (SPR) of a wedge phantom is reduced from ~36.0% to ~3.1% for the low-energy (LE) image and from ~29.6% to ~0.6% for the high-energy (HE) image. For a step phantom, the SPR is reduced from ~42.1% and ~30.3% to ~3.9% and ~0.9% for the LE and HE images, respectively. The calcification contrast-to-noise ratio is improved by two orders of magnitude in the calcification images. The proposed method shows excellent performance in scatter reduction and calcification detection. Compared with hardware-based scatter correction strategies, our method needs no extra exposure and is easy to implement.

  9. Medical imaging correction: a comparative study of five contrast and brightness matching methods.

    Science.gov (United States)

    Matsopoulos, G K

    2012-06-01

    Contrast and brightness matching are often required in many medical imaging applications, especially when comparing medical data acquired over different time periods, due to dissimilarities in the acquisition process. Numerous methods have been proposed in this field, ranging from simple correction filters to more complicated recursive techniques. This paper presents a comprehensive comparison of five methods for matching the contrast and brightness of medical image pairs, namely Contrast Stretching, Ruttimann's Robust Film Correction, Boxcar Filtering, Least-Squares Approximation and Histogram Registration. The five methods were applied to a total of 100 image pairs, divided into five sets, in order to evaluate their performance on images with different levels of contrast, brightness and combined contrast and brightness variations. Qualitative evaluation was performed by visual assessment of the corrected images as well as of digitally subtracted images, in order to estimate the deviations relative to the reference data. Quantitative evaluation was performed by pair-wise statistical evaluation on all image pairs in terms of specific figures of merit based on widely used metrics. Following the qualitative and quantitative analysis, it was deduced that the Histogram Registration method systematically outperformed the other four methods in most cases. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
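    The winning method is histogram-based; a closely related, readily available operation is histogram matching, sketched here with scikit-image (this is not the paper's exact Histogram Registration algorithm, and the images are synthetic).

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(5)
reference = rng.normal(120, 25, size=(256, 256)).clip(0, 255)
follow_up = rng.normal(90, 40, size=(256, 256)).clip(0, 255)  # drifted scan

# Match the follow-up image's intensity distribution to the reference,
# aligning contrast and brightness before subtraction or comparison.
matched = match_histograms(follow_up, reference)
print(matched.mean(), reference.mean())   # now approximately equal
```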

  10. Correcting for cryptic relatedness by a regression-based genomic control method

    Directory of Open Access Journals (Sweden)

    Yang Yaning

    2009-12-01

    Background: The genomic control (GC) method is a useful tool to correct for cryptic relatedness in population-based association studies. It was originally proposed for correcting the variance inflation of the Cochran-Armitage additive trend test using information from unlinked null markers, and was later generalized to other tests with the additional requirement that the null markers be matched with the candidate marker in allele frequencies. However, matching allele frequencies limits the number of available null markers and thus limits the applicability of the GC method. On the other hand, errors in genotype/allele frequencies may cause further bias and variance inflation and thereby aggravate the GC correction. Results: In this paper, we propose a regression-based GC method using null markers that are not necessarily matched in allele frequencies with the candidate marker. Variation in the allele frequencies of the null markers is adjusted by a regression method. Conclusion: The proposed method can be readily applied to the Cochran-Armitage trend tests other than the additive trend test, to the Pearson chi-square test, and to other robust efficiency tests. Simulation results show that the proposed method is effective in controlling the type I error in the presence of population substructure.
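    As background, classical genomic control estimates an inflation factor lambda from the null-marker statistics and deflates the candidate statistic accordingly; the sketch below shows that baseline with invented statistics. The paper's regression-based adjustment of null-marker allele frequencies is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

# 1-df trend-test statistics at unlinked null markers (invented values).
null_stats = np.array([0.8, 1.3, 0.2, 2.1, 0.5, 1.7, 0.9, 3.0])

# Inflation factor: median null statistic over the chi2(1) median (~0.456).
lam = max(1.0, np.median(null_stats) / chi2.ppf(0.5, df=1))

candidate_stat = 6.5
corrected_stat = candidate_stat / lam        # GC-deflated statistic
p_value = chi2.sf(corrected_stat, df=1)
print(lam, corrected_stat, p_value)
```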

  11. Comparison of FLAASH and QUAC atmospheric correction methods for Resourcesat-2 LISS-IV data

    Science.gov (United States)

    Saini, V.; Tiwari, R. K.; Gupta, R. P.

    2016-05-01

    The LISS-IV sensor aboard Resourcesat-2 is a modern, relatively high resolution multispectral sensor with immense potential for the generation of good-quality land use/land cover maps. It provides data at high (10-bit) radiometric resolution and 5.8 m spatial resolution, in three bands in the visible-near infrared region. This is of particular importance to the global community, as the data are provided at highly competitive prices. However, no literature describing the atmospheric correction of Resourcesat-2 LISS-IV data could be found. Further, without atmospheric correction, the full radiometric potential of any remote sensing data remains underutilized. The FLAASH and QUAC modules of the ENVI software are widely used by researchers for atmospheric correction of popular remote sensing data such as Landsat, SPOT, IKONOS, LISS-I, LISS-III, etc. This article outlines a methodology for atmospheric correction of Resourcesat-2 LISS-IV data. In addition, reflectance from the different atmospheric correction modules (FLAASH and QUAC) is compared with TOA and standard data to determine the most suitable method of reflectance estimation for this sensor.

  12. An improved temporal correction method for mobile measurement of outdoor thermal climates

    Science.gov (United States)

    Liu, Lin; Lin, Yaoyu; Wang, Dan; Liu, Jing

    2017-07-01

    Accurate temporal corrections for the spatial meteorological parameters obtained through mobile measurements are essential in the synchronous analysis of local urban climates. This paper discusses current temporal correction models and proposes an improved model by considering correlation coefficients that are influenced by the underlying surface conditions, and the distance between the stationary weather stations and the mobile location points during a mobile measurement. Together with four adjacent, simultaneously recording stationary weather stations, long-term mobile temperature and humidity measurements were taken along a 17-km transect covering 18 mobile location points through the University Town of Shenzhen. Using the multiple air temperature and relative humidity values of the mobile location points and stationary weather stations, the function equations for determining the correlation coefficients were obtained for application in the proposed temporal correction model. Further, three kinds of validation methods were applied to compare temporal correction models. Validation results showed that the temporal correction model proposed in this study was significantly more accurate and reliable compared to the other models.

  13. Three-dimensional accuracy of different correction methods for cast implant bars

    Science.gov (United States)

    Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom

    2014-01-01

    PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for the correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and each group of ten specimens was then corrected by gas-air torch soldering, laser welding, or an additional casting technique. Three-dimensional evaluation, including horizontal, vertical, and twisting measurements, was based on measurement and comparison of (1) gap distances at the right abutment replica-gold cylinder interface on the buccal, distal, and lingual sides, (2) changes in bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step, for the three groups, with contact and non-contact coordinate measuring machines. One-way analysis of variance (ANOVA) and paired t-tests were performed at the 5% significance level. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant differences among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistical significance among the three techniques in horizontal, vertical and axial errors, but the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error. The laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and might be a technique-sensitive method. PMID:24605205

  14. A software-based x-ray scatter correction method for breast tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Jia Feng, Steve Si; Sechopoulos, Ioannis [Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, and Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States); Department of Radiology and Imaging Sciences, Hematology and Medical Oncology and Winship Cancer Institute, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States)

    2011-12-15

    Purpose: To develop a software-based scatter correction method for digital breast tomosynthesis (DBT) imaging and investigate its impact on the image quality of tomosynthesis reconstructions of both phantoms and patients. Methods: A Monte Carlo (MC) simulation of x-ray scatter, with geometry matching that of the cranio-caudal (CC) view of a DBT clinical prototype, was developed using the Geant4 toolkit and used to generate maps of the scatter-to-primary ratio (SPR) of a number of homogeneous standard-shaped breasts of varying sizes. Dimension-matched SPR maps were then deformed and registered to DBT acquisition projections, allowing for the estimation of the primary x-ray signal acquired by the imaging system. Noise filtering of the estimated projections was then performed to reduce the impact of the quantum noise of the x-ray scatter. Three-dimensional (3D) reconstruction was then performed using the maximum likelihood-expectation maximization (MLEM) method. This process was tested on acquisitions of a heterogeneous 50/50 adipose/glandular tomosynthesis phantom with embedded masses, fibers, and microcalcifications and on acquisitions of patients. The image quality of the reconstructions of the scatter-corrected and uncorrected projections was analyzed by studying the signal-difference-to-noise ratio (SDNR), the integral of the signal in each mass lesion (integrated mass signal, IMS), and the modulation transfer function (MTF). Results: The reconstructions of the scatter-corrected projections demonstrated superior image quality. The SDNR of masses embedded in a 5 cm thick tomosynthesis phantom improved 60%-66%, while the SDNR of the smallest mass in an 8 cm thick phantom improved by 59% (p < 0.01). The IMS of the masses in the 5 cm thick phantom also improved by 15%-29%, while the IMS of the masses in the 8 cm thick phantom improved by 26%-62% (p < 0.01). Some embedded microcalcifications in the tomosynthesis phantoms were visible only in the scatter-corrected reconstructions …
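
    The core of the primary-signal estimation can be written in one line: if the measured projection is the sum of primary and scatter, and the MC simulation supplies the scatter-to-primary ratio, the primary follows by division. A minimal sketch with hypothetical values:

```python
import numpy as np

def estimate_primary(measured, spr_map):
    """Estimate the primary signal from a measured projection given a
    scatter-to-primary ratio (SPR) map.

    With measured = primary + scatter and SPR = scatter / primary,
    primary = measured / (1 + SPR).
    """
    return measured / (1.0 + spr_map)

# Hypothetical 2x2 projection patch with an SPR of ~0.6 near the chest wall:
measured = np.array([[1200.0, 1150.0], [1180.0, 1120.0]])
spr = np.array([[0.62, 0.58], [0.60, 0.55]])
primary = estimate_primary(measured, spr)
```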

  15. Method of glass selection for color correction in optical system design.

    Science.gov (United States)

    de Albuquerque, Bráulio Fonseca Carneiro; Sasian, Jose; de Sousa, Fabiano Luis; Montes, Amauri Silva

    2012-06-18

    A method of glass selection for the design of optical systems with reduced chromatic aberration is presented. The method is based on the unification of two previously published methods, adding new contributions and using a multi-objective approach. It makes it possible to select sets of compatible glasses suitable for the design of super-apochromatic optical systems. As an example, we present the selection of compatible glasses and the effective designs for all-refractive optical systems corrected in five spectral bands, with central wavelengths ranging from 485 nm to 1600 nm.

  16. A combined statistical bias correction and stochastic downscaling method for precipitation

    Science.gov (United States)

    Volosciuk, Claudia; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin

    2017-03-01

    Much of our knowledge about future changes in precipitation relies on global (GCMs) and/or regional climate models (RCMs) that have resolutions much coarser than the typical spatial scales of precipitation, particularly of extremes. The major problems with these projections are both climate model biases and the gap between gridbox and point scale. Wong et al. (2014) developed a model to jointly bias correct and downscale precipitation at daily scales. This approach, however, relied on pairwise correspondence between predictor and predictand for calibration and, thus, on nudged simulations, which are rarely available. Here we present an extension of this approach that separates the downscaling from the bias correction and is in principle applicable to free-running GCMs/RCMs. In a first step, we bias correct RCM-simulated precipitation against gridded observations at the same scale using a parametric quantile mapping (QMgrid) approach. In a second step, we bridge the scale gap: we predict local variance employing a regression-based model with coarse-scale precipitation as a predictor. The regression model is calibrated between gridded and point-scale (station) observations. For this concept we present one specific implementation, although the optimal model may differ for each studied location. To correct the whole distribution including the extreme tail, the first step applies a mixture of a gamma distribution for the precipitation mass and a generalized Pareto distribution for the extreme tail. For the second step a vector generalized linear gamma model is employed. For evaluation we adopt the perfect-predictor experimental setup of VALUE. We also compare our method to classical QM as it is usually applied, i.e., between the RCM and the point scale (QMpoint). Precipitation is in most cases improved by (parts of) our method across different European climates. The method generally performs better in summer than in winter, and in winter best in the …
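
    A minimal sketch of the first step, parametric quantile mapping with a gamma body and a generalized Pareto tail, is given below. The function names, the 95% tail threshold and the numerical inversion grid are illustrative choices, not the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def fit_mixture(x, q=0.95):
    """Gamma body below the q-quantile, GPD tail above it (wet-day amounts)."""
    u = np.quantile(x, q)
    gam = stats.gamma.fit(x[x <= u], floc=0)
    gpd = stats.genpareto.fit(x[x > u] - u, floc=0)
    return u, gam, gpd, q

def mixture_cdf(x, fit):
    """CDF of the mixture: renormalized gamma below u, GPD tail above u."""
    u, gam, gpd, q = fit
    body = q * stats.gamma.cdf(x, *gam) / stats.gamma.cdf(u, *gam)
    tail = q + (1 - q) * stats.genpareto.cdf(x - u, *gpd)
    return np.where(x <= u, body, tail)

def quantile_map(model_values, fit_model, fit_obs):
    """Map model values to the observed distribution via CDF inversion,
    inverting the observed mixture numerically on a dense grid."""
    grid = np.linspace(0.01, 10 * np.quantile(model_values, 0.99), 20000)
    p = mixture_cdf(model_values, fit_model)
    return np.interp(p, mixture_cdf(grid, fit_obs), grid)

# Usage (hypothetical wet-day series rcm_precip, obs_precip):
# fit_m, fit_o = fit_mixture(rcm_precip), fit_mixture(obs_precip)
# corrected = quantile_map(rcm_precip, fit_m, fit_o)
```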

  17. A fast beam hardening correction method incorporated in a filtered back-projection based MAP algorithm

    Science.gov (United States)

    Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning

    2017-03-01

    The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori reconstruction (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp-Davis-Kress based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artifacts.

  18. Brine Distribution after Vacuum Saturation

    DEFF Research Database (Denmark)

    Hedegaard, Kathrine; Andersen, Bertel Lohmann

    1999-01-01

    Experiments with the vacuum saturation method for brine in plugs of chalk showed that a homogeneous distribution of brine cannot be ensured at saturations below 20% by volume. Instead of a homogeneous volume distribution, the brine becomes concentrated close to the surfaces of the plugs …

  19. Monitor hemoglobin concentration and oxygen saturation in living mouse tail using photoacoustic CT scanner

    Science.gov (United States)

    Liu, Bo; Kruger, Robert; Reinecke, Daniel; Stantz, Keith M.

    2010-02-01

    Purpose: The purpose of this study is to use a PCT spectroscopy scanner to monitor hemoglobin concentration and oxygen saturation changes in a living mouse by imaging the artery and veins in the mouse tail. Materials and Methods: One mouse tail was scanned using the PCT small-animal scanner at the isosbestic wavelength (796 nm) to obtain its hemoglobin concentration. Immediately after the scan, the mouse was euthanized and its blood was extracted from the heart. The true hemoglobin concentration was measured using a co-oximeter. A reconstruction correction algorithm was developed to compensate for the acoustic signal loss due to the bone structure in the mouse tail. After the correction, the hemoglobin concentration was calculated from the PCT images and compared with the co-oximeter result. Next, one mouse was immobilized in the PCT scanner. Gas with different concentrations of oxygen was given to the mouse to change its oxygen saturation. PCT tail-vessel spectroscopy scans were performed 15 minutes after the introduction of each gas. The oxygen saturation values were then calculated to monitor the oxygen saturation change of the mouse. Results: The systematic error of the hemoglobin concentration measurement was less than 5% based on preliminary analysis. The same correction technique was used for the oxygen saturation calculation. After correction, the oxygen saturation level change matches the oxygen volume ratio change of the introduced gas. Conclusion: This living mouse tail experiment has shown that NIR PCT spectroscopy can be used to monitor the oxygen saturation status of living small animals.
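
    The two-wavelength spectroscopy underlying the saturation estimate reduces to a 2x2 linear system in the oxy- and deoxyhemoglobin concentrations. The sketch below uses hypothetical extinction coefficients; at the isosbestic wavelength the two coefficients are equal, which is what makes the total hemoglobin measurement insensitive to saturation.

```python
import numpy as np

def oxygen_saturation(mu_a, eps):
    """Solve for sO2 from absorption coefficients at two wavelengths.

    mu_a : length-2 vector of measured absorption coefficients
    eps  : 2x2 matrix of molar extinction coefficients,
           rows = wavelengths, columns = (HbO2, Hb)
    """
    c_hbo2, c_hb = np.linalg.solve(eps, mu_a)
    return c_hbo2 / (c_hbo2 + c_hb)

# Hypothetical extinction coefficients at 758 nm and ~796 nm (isosbestic):
eps = np.array([[600.0, 1600.0],    # 758 nm: HbO2, Hb
                [800.0,  800.0]])   # 796 nm: equal values at isosbestic point
mu_a = np.array([1.9, 1.6])
so2 = oxygen_saturation(mu_a, eps)  # ~0.65 for these inputs
```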

  20. Semi-implicit spectral deferred correction methods for highly nonlinear partial differential equations

    Science.gov (United States)

    Guo, Ruihan; Xia, Yinhua; Xu, Yan

    2017-06-01

    The goal of this paper is to develop a novel semi-implicit spectral deferred correction (SDC) time-marching method. The method can be used in a large class of problems, especially highly nonlinear ordinary differential equations (ODEs) whose stiff and non-stiff components cannot easily be separated, and it is more general and efficient than traditional semi-implicit SDC methods. The proposed semi-implicit SDC method is based on low-order time integration methods, corrected iteratively; the order of accuracy increases with each additional iteration. We also analyze its local truncation error. This SDC method is intended to be combined with the method of lines, which provides a flexible framework to develop high-order semi-implicit time-marching methods for nonlinear partial differential equations (PDEs). In this paper we mainly focus on applications to nonlinear PDEs with higher-order spatial derivatives, e.g. the convection-diffusion equation, the surface diffusion and Willmore flow of graphs, the Cahn-Hilliard equation, the Cahn-Hilliard-Brinkman system and the phase field crystal equation. Coupled with the local discontinuous Galerkin (LDG) spatial discretization, the fully discrete schemes are high-order accurate in both space and time and numerically stable with the time step proportional to the spatial mesh size. Numerical experiments are carried out to illustrate the accuracy and capability of the proposed semi-implicit SDC method.
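
    For readers unfamiliar with SDC, the sketch below shows a fully explicit variant (not the authors' semi-implicit scheme): a forward-Euler predictor on a set of nodes followed by correction sweeps, each sweep raising the formal order by one up to the order of the underlying quadrature.

```python
import numpy as np

def integration_matrix(nodes):
    """S[m, j] = integral of the j-th Lagrange basis poly over [t_m, t_{m+1}]."""
    M = len(nodes) - 1
    S = np.zeros((M, M + 1))
    for j in range(M + 1):
        others = np.delete(nodes, j)
        coeffs = np.poly(others) / np.prod(nodes[j] - others)  # Lagrange l_j
        antider = np.polyint(coeffs)
        for m in range(M):
            S[m, j] = np.polyval(antider, nodes[m + 1]) - np.polyval(antider, nodes[m])
    return S

def sdc_step(f, t0, u0, dt, M=4, sweeps=4):
    """One explicit SDC step for u' = f(t, u): forward-Euler predictor on
    M+1 nodes, then `sweeps` correction sweeps."""
    nodes = t0 + dt * np.linspace(0.0, 1.0, M + 1)
    S = integration_matrix(nodes)
    u = np.empty(M + 1)
    u[0] = u0
    for m in range(M):                      # predictor: forward Euler
        u[m + 1] = u[m] + (nodes[m + 1] - nodes[m]) * f(nodes[m], u[m])
    for _ in range(sweeps):                 # correction sweeps
        fk = np.array([f(nodes[m], u[m]) for m in range(M + 1)])
        v = np.empty_like(u)
        v[0] = u0
        for m in range(M):
            h = nodes[m + 1] - nodes[m]
            v[m + 1] = v[m] + h * (f(nodes[m], v[m]) - fk[m]) + S[m] @ fk
        u = v
    return u[-1]

# Usage: u' = -u, one step of size 0.5 from u(0) = 1 (exact: exp(-0.5)).
approx = sdc_step(lambda t, u: -u, 0.0, 1.0, 0.5)
```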

  1. Simplified Transient Hot-Wire Method for Effective Thermal Conductivity Measurement in Geo Materials: Microstructure and Saturation Effect

    Directory of Open Access Journals (Sweden)

    B. Merckx

    2012-01-01

    The thermal conductivity measurement by a simplified transient hot-wire technique is applied to geomaterials in order to show the relationships that can exist between effective thermal conductivity, texture, and moisture of the materials. After a validation of the "one hot-wire" technique in water, toluene, and glass-bead assemblages, investigations were performed (1) in glass-bead assemblages of different diameters in dried, water-saturated, and acetone-saturated states in order to observe the role of grain size and saturation on the effective thermal conductivity, (2) in a compacted earth brick at different moisture states, and (3) in a lime-hemp concrete during 110 days following its manufacture. The lime-hemp concrete allows measurements during the setting, desiccation and carbonation steps. The recorded ΔT versus ln(t) diagrams allow the calculation of one effective thermal conductivity in continuous and homogeneous fluids and two effective thermal conductivities in heterogeneous solids. The first one, measured in the short-time acquisitions (<1 s), mainly depends on the contact between the wire and the grains and thus on the microtexture and hydrated state of the material. The second one, measured for longer acquisition times, characterizes the mean effective thermal conductivity of the material.
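
    The long-time analysis rests on the classical hot-wire relation ΔT = (q / 4πk) ln(t) + C, so the conductivity follows from the slope of ΔT against ln(t). A minimal sketch with hypothetical readings:

```python
import numpy as np

def conductivity_from_hot_wire(t, delta_T, q_per_length):
    """Effective thermal conductivity from the linear part of a transient
    hot-wire record: delta_T = (q / (4*pi*k)) * ln(t) + C,
    hence k = q / (4*pi*slope).

    t            : times in the linear regime (s)
    delta_T      : wire temperature rise (K)
    q_per_length : heating power per unit wire length (W/m)
    """
    slope, _ = np.polyfit(np.log(t), delta_T, 1)
    return q_per_length / (4.0 * np.pi * slope)

# Hypothetical record in the long-time regime (q = 15 W/m):
t = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
dT = np.array([1.10, 1.65, 2.20, 2.75, 3.30])
k = conductivity_from_hot_wire(t, dT, 15.0)  # ~1.5 W/(m K)
```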

  2. A robust method using propensity score stratification for correcting verification bias for binary tests.

    Science.gov (United States)

    He, Hua; McDermott, Michael P

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified.
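
    A sketch of the stratified correction for one test-result group is shown below, assuming MAR and a propensity score already estimated (e.g. by a logistic regression of verification on the test result and covariates, which is omitted here). Stratum estimates of disease probability from verified subjects are combined with weights proportional to stratum size.

```python
import numpy as np

def corrected_disease_prob(ps, verified, disease, n_strata=5):
    """Bias-corrected P(disease) within one test-result group.

    ps       : estimated verification propensity for every subject
    verified : boolean array, True where disease status was verified
    disease  : disease status (used only where verified is True)
    """
    edges = np.quantile(ps, np.linspace(0.0, 1.0, n_strata + 1))
    strata = np.clip(np.searchsorted(edges[1:-1], ps), 0, n_strata - 1)
    p = 0.0
    for s in range(n_strata):
        in_s = strata == s
        ver = in_s & verified
        if ver.any():  # strata without verified subjects are skipped here
            p += in_s.sum() / len(ps) * disease[ver].mean()
    return p

# Sensitivity then follows from Bayes' rule:
# sens = p_pos * P(T+) / (p_pos * P(T+) + p_neg * P(T-)),
# with p_pos, p_neg the corrected disease probabilities per test group.
```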

  3. A non-parametric method for correction of global radiation observations

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Perers, Bengt

    2013-01-01

    This paper presents a method for correction and alignment of global radiation observations based on information obtained from calculated global radiation; in the present study, one-hour forecasts of global radiation from a numerical weather prediction (NWP) model are used. Systematic errors are detected by comparing the observed and the calculated radiation in order to find systematic deviations between them. The method is applied to correct global radiation observations from a climate station located at a district heating plant in Denmark. The results are compared to observations recorded at the Danish Technical University. The method can be useful for optimized use of solar radiation observations for forecasting, monitoring, and modeling of energy production and load which are affected by solar radiation.

  4. Extreme Wind Calculation Applying Spectral Correction Method – Test and Validation

    DEFF Research Database (Denmark)

    Rathmann, Ole Steen; Hansen, Brian Ohrbeck; Larsén, Xiaoli Guo

    2016-01-01

    We present a test and validation of extreme wind calculation applying the Spectral Correction (SC) method as implemented in the DTU Wind Condition Software. This method can make do with a short-term (~1 year) local measured wind data series in combination with a long-term (10-20 years) reference modelled wind data series, such as CFSR and CFDDA reanalysis data, for the site in question. The validation of the accuracy was performed by comparing with estimates by the traditional Annual Maxima (AM) method and the Peak Over Threshold (POT) method, applied to measurements, for six sites: four located in Denmark, one in the Netherlands and one in the USA, comprising both on-shore and off-shore sites. The SC method was applied to 1-year measured wind data while the AM and POT methods were applied to long-term measured wind data. Further, the consistency of the SC method …

  5. New simpler method of matching NLO corrections with parton shower Monte Carlo

    CERN Document Server

    Jadach, Stanislaw; Sapeta, Sebastian; Siodmok, Andrzej Konrad; Skrzypek, Maciej

    2016-01-01

    Next steps in the development of the KrkNLO method for implementing NLO QCD corrections to hard processes in parton shower Monte Carlo programs are presented. This new method is a simpler alternative to other well-known approaches, such as MC@NLO and POWHEG. The KrkNLO method owes its simplicity to the use of parton distribution functions (PDFs) in a new, so-called Monte Carlo (MC) factorization scheme, which was recently fully defined for the first time. Preliminary numerical results for the Higgs-boson production process are also presented.

  6. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at 0.11° (12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX comprises regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as the objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating with the genetic algorithm and Powell optimization (GAP) method, which works by selecting and recombining high-performing parameter sets. Once HBV is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations and biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009, to characterize the simulation realism under the current climate, and ii) 2070-2099, to identify the magnitude of the projected change of …

  7. THE EFFECT OF DIFFERENT CORRECTIVE FEEDBACK METHODS ON THE OUTCOME AND SELF CONFIDENCE OF YOUNG ATHLETES

    Directory of Open Access Journals (Sweden)

    George Tzetzis

    2008-09-01

    This experiment investigated the effects of three corrective feedback methods, using different combinations of correction or error cues and positive feedback, for learning two badminton skills of different difficulty (forehand clear - low difficulty; backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned to four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA; 4 groups x 2 task difficulties x 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but those of groups B and D did not. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate to improve outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers in being more efficient and effective.

  8. BIOFEEDBACK: A NEW METHOD FOR CORRECTION OF MOTOR DISORDERS IN PATIENTS WITH MULTIPLE SCLEROSIS

    Directory of Open Access Journals (Sweden)

    Ya. S. Pekker

    2014-01-01

    Motor disorders are among the major disabling factors in multiple sclerosis (MS), and their rehabilitation is one of the most important medical and social problems. Currently, much attention is given to the development of correction methods for motor disorders that draw on the natural resources of the human body; one of these methods is adaptive control with biofeedback (BFB). The aim of our study was the correction of motor disorders in MS patients using biofeedback training. We developed training scenarios for a computer-based EMG biofeedback rehabilitation program aimed at the correction of motor disorders in patients with MS. The method was tested in the neurological clinic of SSMU. The study included 9 patients with a definite diagnosis of MS and a clinical picture of combined pyramidal and cerebellar symptoms. The effectiveness of the rehabilitation procedures was assessed using specialized scales (the Kurtzke functional systems rating scale, the SF-36 quality-of-life questionnaire, the Sickness Impact Profile (SIP) and the Fatigue Severity Scale (FSS)). In the studied group of patients, the fatigue score (FSS) decreased, while motor control (SIP2) and the physical and mental components of health (SF-36) increased. There was a tendency toward a reduced neurological deficit, reflected in lower scores for pyramidal dysfunction on the Kurtzke scale. Analysis of the dynamics of the EMG biofeedback training course for the trained muscles indicates an increase in the recorded EMG signal from session to session, and a tendency toward increased strength and coordination of the trained muscles was demonstrated. The positive results of biofeedback therapy in patients with MS suggest that this method can be recommended as part of complex rehabilitation measures to correct motor and psycho-emotional disorders.

  9. Methods of bronchial tree reconstruction and camera distortion corrections for virtual endoscopic environments.

    Science.gov (United States)

    Socha, Mirosław; Duplaga, Mariusz; Turcza, Paweł

    2004-01-01

    The use of three-dimensional visualization of anatomical structures in diagnostics and medical training is growing. The main components of virtual respiratory tract environments include reconstruction and simulation algorithms as well as correction methods for endoscope camera distortions in the case of virtually enhanced navigation systems. Reconstruction methods usually rely on initial computed tomography (CT) image segmentation to trace contours of the tracheobronchial tree, which in turn are used in the visualization process. The main segmentation methods, including relatively simple approaches such as adaptive region-growing algorithms and more complex methods, e.g. hybrid algorithms based on region growing and mathematical morphology, are described in this paper. The errors and difficulties in the process of tracheobronchial tree reconstruction depend on the occurrence of distortions during CT image acquisition, which are usually related to the inability to exactly fulfil the sampling theorem's conditions. Other forms of distortion and noise, such as additive white Gaussian noise, may also appear. The impact of these distortions on the segmentation and reconstruction may be diminished through appropriately selected image prefiltering, which is also demonstrated in this paper. Methods of surface rendering (ray-casting and ray-tracing techniques) and volume rendering are shown, with special focus on aspects of hardware and software implementations. Finally, methods for camera distortion correction and simulation are presented. The mathematical camera models, the scope of their applications and the types of distortions are also indicated.
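
    As an illustration of the simplest segmentation approach mentioned above, a plain intensity-threshold region growing over a 6-connected CT volume might look like the following sketch (the adaptive threshold update of the actual algorithms is omitted):

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, low, high):
    """Simple region growing: collect 6-connected voxels whose intensity
    lies in [low, high], starting from a seed inside the airway."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not grown[n] and low <= volume[n] <= high:
                grown[n] = True
                queue.append(n)
    return grown

# Hypothetical usage on a CT volume in Hounsfield units, air around -1000:
# airway_mask = region_grow(ct_volume, seed=(40, 256, 256), low=-1024, high=-900)
```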

  10. Novel D-shaped fiber fabrication method for saturable absorber application in the generation of ultra-short pulses

    Science.gov (United States)

    Ahmad, H.; Safaei, R.; Rezayi, M.; Amiri, I. S.

    2017-08-01

    A cost-efficient, time-saving and effective technique for the fabrication of D-shaped fibers is presented, providing a platform with a strong evanescent field to be used as a saturable absorber (SA). The technique offers flexibility by removing only the required portion of the fiber, and the small polished length provides a unique opportunity to deposit the SA on the fiber surface by simply submerging it in the SA solution, without high losses. A compact fiber laser utilizing a graphene oxide coating on the fabricated D-shaped fiber as an SA, capable of generating ultrashort pulses, is designed and verified. We report the generation of ultrafast pulses as short as 227 fs with a 34.7 MHz repetition rate, having a 3 dB bandwidth of 14 nm at the 1570 nm center wavelength.

  11. Facts about saturated fats

    Science.gov (United States)

    ... fat dairy with low-fat or nonfat milk, yogurt, and cheese. Eat more fruits, vegetables, whole grains, and other foods with low or no saturated fat. Alternative Names: Cholesterol - saturated fat; Atherosclerosis - saturated fat; Hardening of the ...

  12. Saturated fat (image)

    Science.gov (United States)

    Saturated fat can raise blood cholesterol and can put you at risk for heart disease and stroke. You should ... limit any foods that are high in saturated fat. Sources of saturated fat include whole-milk dairy ...

  13. Bias correction of regional climate model simulations for hydrological climate-change impact studies: Review and evaluation of different methods

    Science.gov (United States)

    Teutschbein, Claudia; Seibert, Jan

    2012-08-01

    Despite the increasing use of regional climate model (RCM) simulations in hydrological climate-change impact studies, their application is challenging due to the risk of considerable biases. To deal with these biases, several bias correction methods have been developed recently, ranging from simple scaling to rather sophisticated approaches. This paper provides a review of available bias correction methods and demonstrates how they can be used to correct for deviations in an ensemble of 11 different RCM-simulated temperature and precipitation series. The performance of all methods was assessed in several ways. First, differently corrected RCM data were compared to observed climate data. The second evaluation was based on the combined influence of corrected RCM-simulated temperature and precipitation on hydrological simulations of monthly mean streamflow as well as spring and autumn flood peaks for five catchments in Sweden under current (1961-1990) climate conditions. Finally, the impact on hydrological simulations based on projected future (2021-2050) climate conditions was compared for the different bias correction methods. Improvement over uncorrected RCM climate variables was achieved with all bias correction approaches. While all methods were able to correct the mean values, there were clear differences in their ability to correct other statistical properties such as the standard deviation or percentiles. Simulated streamflow characteristics were sensitive to the quality of the driving input data: simulations driven with bias-corrected RCM variables fitted observed values better than simulations forced with uncorrected RCM climate variables and had narrower variability bounds.
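
    As a concrete reference point, the simplest of the reviewed approaches, linear scaling, corrects precipitation with multiplicative monthly factors chosen so that corrected monthly means match the observations (temperature is handled analogously with additive offsets). A minimal precipitation sketch with hypothetical arrays:

```python
import numpy as np

def monthly_scaling_factors(obs, rcm, months):
    """Linear scaling for precipitation: one multiplicative factor per month,
    chosen so corrected RCM monthly means match the observed monthly means."""
    return {m: obs[months == m].mean() / rcm[months == m].mean()
            for m in np.unique(months)}

def apply_scaling(rcm, months, factors):
    """Apply the per-month factor to every daily value."""
    return np.array([p * factors[m] for p, m in zip(rcm, months)])

# Hypothetical daily series over two months (1 = Jan, 2 = Feb):
months = np.array([1, 1, 1, 2, 2, 2])
obs = np.array([2.0, 0.0, 4.0, 1.0, 3.0, 2.0])
rcm = np.array([3.0, 1.0, 5.0, 2.0, 4.0, 3.0])
corrected = apply_scaling(rcm, months, monthly_scaling_factors(obs, rcm, months))
```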

  14. An empirical method of RH correction for satellite estimation of ground-level PM concentrations

    Science.gov (United States)

    Wang, Zifeng; Chen, Liangfu; Tao, Jinhua; Liu, Yang; Hu, Xuefei; Tao, Minghui

    2014-10-01

    A hygroscopic growth model suitable for local aerosol characteristics and their temporal variations is necessary for accurate satellite retrieval of ground-level particulate matter (PM). This study develops an empirical method to correct the relative humidity (RH) impact on the aerosol extinction coefficient and to further derive PM concentrations from satellite observations. Not relying on detailed information about aerosol chemical and microphysical properties, the method simply uses in-situ observations of visibility (VIS), RH and PM concentrations to characterize aerosol hygroscopicity, and thus makes the RH correction capable of supporting satellite PM estimations with large spatial and temporal coverage. In this method, the aerosol average mass extinction efficiency (αext) is used to describe the general hygroscopic growth behavior of the total aerosol population. The association between αext and RH is obtained through empirical model fitting and is then applied to carry out the RH correction. Nearly one year of in-situ measurements of VIS, RH and PM10 in the Beijing urban area were collected for this study, and the RH correction was made for each of the months with sufficient data samples. The correlations between aerosol extinction coefficients and PM10 concentrations are significantly improved, with the monthly correlation R2 increasing from 0.26-0.63 to 0.49-0.82, and the whole dataset's R2 increasing from 0.36 to 0.68. PM10 concentrations were retrieved through the RH correction and validated for each season individually. Good agreement between the retrieved and observed PM10 concentrations was found in all seasons, with R2 ranging from 0.54 in spring to 0.73 in fall, and mean relative errors ranging from -2.5% in winter to -10.8% in spring. Based on the satellite AOD and model-simulated aerosol profiles, surface PM10 over the Beijing area was retrieved through the RH correction. The satellite-retrieved PM10 and those observed at ground sites agree well.

  15. Absorbance correction method for estimation of telmisartan and metoprolol succinate in combined tablet dosage forms.

    Science.gov (United States)

    Patel, Komal; Patel, Amit; Dave, Jayant; Patel, Chaganbhai

    2012-07-01

    The present manuscript describes a simple, sensitive, rapid, accurate, precise and economical spectrophotometric method for the simultaneous determination of telmisartan and metoprolol succinate in a combined tablet dosage form. The method is based on absorbance correction equations for the analysis of both drugs using methanol as solvent. Telmisartan has an absorbance maximum at 296 nm and metoprolol succinate has an absorbance maximum at 223 nm in methanol. Linearity was obtained in the concentration ranges of 2-16 μg/ml and 3-24 μg/ml for telmisartan and metoprolol succinate, respectively. The concentrations of the drugs were determined by using the absorbance correction method at both wavelengths. The method was successfully applied to the pharmaceutical dosage form because no interference from the tablet excipients was found. The suitability of this method for the quantitative determination of telmisartan and metoprolol succinate was proved by validation. The proposed method was found to be simple and sensitive for the quality control of telmisartan and metoprolol succinate in pharmaceutical dosage forms. The results of the analysis were validated statistically and by recovery studies. Recoveries were found in the ranges of 98.08-100.55% for telmisartan and 98.41-101.87% for metoprolol succinate.
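
    The absorbance correction reduces to solving two simultaneous Beer-Lambert equations, one at each drug's absorbance maximum. The sketch below uses hypothetical absorptivity values in place of the calibration data reported in the paper.

```python
import numpy as np

def two_component_concentrations(A, absorptivity):
    """Solve the two simultaneous Beer-Lambert equations
        A(223 nm) = a11*C_tel + a12*C_met
        A(296 nm) = a21*C_tel + a22*C_met
    for the two drug concentrations (1 cm path length assumed)."""
    return np.linalg.solve(absorptivity, A)

# Hypothetical absorptivities (absorbance per ug/ml) from calibration standards:
E = np.array([[0.031, 0.042],   # 223 nm: telmisartan, metoprolol
              [0.058, 0.002]])  # 296 nm: telmisartan, metoprolol
A = np.array([0.52, 0.47])      # measured absorbances of the mixture
c_tel, c_met = two_component_concentrations(A, E)  # ~7.9 and ~6.6 ug/ml here
```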

  16. CT metal artifact reduction method correcting for beam hardening and missing projections

    Science.gov (United States)

    Verburg, Joost M.; Seco, Joao

    2012-05-01

    We present and validate a computed tomography (CT) metal artifact reduction method that is effective for a wide spectrum of clinical implant materials. Projections through low-Z implants such as titanium were corrected using a novel physics correction algorithm that reduces beam hardening errors. In the case of high-Z implants (dental fillings, gold, platinum), projections through the implant were considered missing and regularized iterative reconstruction was performed. Both algorithms were combined if multiple implant materials were present. For comparison, a conventional projection interpolation method was implemented. In a blinded and randomized evaluation, ten radiation oncologists ranked the quality of patient scans on which the different methods were applied. For scans that included low-Z implants, the proposed method was ranked as the best method in 90% of the reviews. It was ranked superior to the original reconstruction (p = 0.0008), conventional projection interpolation (p < 0.0001) and regularized limited data reconstruction (p = 0.0002). All reviewers ranked the method first for scans with high-Z implants, and better as compared to the original reconstruction (p < 0.0001) and projection interpolation (p = 0.004). We conclude that effective reduction of CT metal artifacts can be achieved by combining algorithms tailored to specific types of implant materials.
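
    The conventional comparison method, projection interpolation, can be sketched in a few lines: sinogram samples behind the implant are discarded and bridged by linear interpolation along each detector row. Note this is the baseline the proposed physics correction is ranked against, not the proposed method itself.

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Bridge projection samples behind high-Z implants by linear
    interpolation along each detector row of the sinogram.

    sinogram   : 2-D array, rows = projection angles, columns = detector bins
    metal_mask : boolean array of the same shape, True where the ray
                 passes through metal
    """
    out = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
    return out
```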

  18. Correction of 157-nm lens based on phase ring aberration extraction method

    Science.gov (United States)

    Meute, Jeff; Rich, Georgia K.; Conley, Will; Smith, Bruce W.; Zavyalova, Lena V.; Cashmore, Julian S.; Ashworth, Dominic; Webb, James E.; Rich, Lisa

    2004-05-01

    Early manufacture and use of 157 nm high-NA lenses has presented significant challenges, including intrinsic birefringence correction, control of optical surface contamination, and the use of relatively unproven materials, coatings, and metrology. Many of these issues were addressed during the manufacture and use of International SEMATECH's 0.85 NA lens. Most significantly, we were the first to employ 157 nm phase measurement interferometry (PMI) and birefringence modeling software for lens optimization. These efforts yielded significant wavefront improvement and produced one of the best wavefront-corrected 157 nm lenses to date. After applying the best practices to the manufacture of the lens, we still had to overcome the difficulties of integrating the lens into the tool platform at International SEMATECH instead of at the supplier facility. After lens integration, alignment, and field optimization were complete, conventional lithography and phase ring aberration extraction techniques were used to characterize system performance. These techniques suggested a wavefront error of approximately 0.05 waves RMS, much larger than the 0.03 waves RMS predicted by 157 nm PMI. In-situ wavefront correction was planned for in the early stages of this project to mitigate risks introduced by the use of development materials and techniques and the field integration of the lens. In this publication, we document the development and use of a phase ring aberration extraction method for characterizing imaging performance and a technique for correcting aberrations with the addition of an optical compensation plate. Imaging results before and after the lens correction are presented and differences between actual and predicted results are discussed.

  19. Investigation of partial volume correction methods for brain FDG PET studies

    Science.gov (United States)

    Yang, J.; Huang, S. C.; Mega, M.; Lin, K. P.; Toga, A. W.; Small, G. W.; Phelps, M. E.

    1996-12-01

    The use of positron emission tomography (PET) in quantitative fluorodeoxyglucose (FDG) studies of aging and dementia has been limited by partial volume effects. A general method for the correction of partial volume effects (PVE) in PET involves the following common procedures: segmentation of MRI brain images into gray matter (GM), white matter (WM), cerebrospinal fluid (CSF), and muscle (MS) components; MRI-PET registration; and generation of simulated PET images. Afterward, two different approaches can be taken. The first approach derives a pixel-by-pixel correction map as the ratio of the measured image to the simulated image [with realistic full-width at half-maximum (FWHM)]; the correction map is applied to the MRI segmentation image, and regions of interest (ROIs) can then be applied to give results free of partial volume effects. The second approach uses the ROI values of the simulated "pure" image (with negligible FWHM) and those of the simulated and the measured PET images to correct for the PVE. By varying the ratio of radiotracer concentrations for the different tissue components, the in-plane FWHMs of a three-dimensional point spread function, and the ROI size, the authors evaluated the performance of these two approaches in terms of their accuracy and sensitivity to different simulation configurations. The results showed that both approaches are more robust than the approach developed by Muller-Gartner et al. (1992), and the second approach is more accurate and more robust than the first. In conclusion, the authors recommend that the second approach be used on FDG PET images to correct for partial volume effects and to determine whether an apparent change in GM radiotracer concentration is truly due to metabolic changes.
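
    The second (recommended) approach amounts to a simple ratio correction at the ROI level. A minimal sketch with hypothetical ROI values:

```python
def pve_corrected_roi(roi_measured, roi_sim_realistic, roi_sim_pure):
    """Scale the simulated 'pure' ROI value (negligible FWHM) by the ratio of
    the measured ROI value to the realistic-FWHM simulated one."""
    return roi_sim_pure * roi_measured / roi_sim_realistic

# Hypothetical gray-matter ROI: measured 0.62, simulated 0.70 (realistic FWHM),
# simulated 1.00 (negligible FWHM) -> PVE-corrected estimate ~0.886.
corrected = pve_corrected_roi(0.62, 0.70, 1.00)
```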

  1. The sequential value correction method for the two-dimensional irregular cutting stock problem

    Directory of Open Access Journals (Sweden)

    M.A. Verkhoturov

    2000-12-01

    This paper addresses the two-dimensional irregular cutting stock problem (ICSP), where the pieces to be cut out may be of any shape. The sequential value correction method has been developed to solve this problem. The method is based on dual values (variables), an essential concept of linear programming, and we suggest a technique for calculating the values of such pieces. The algorithms are included, and we describe a computing experiment whose results demonstrate the good performance of the developed algorithms.

  2. KURTOSIS CORRECTION METHOD FOR VARIABLE CONTROL CHARTS - A COMPARISON IN LAPLACE DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Metlapalli Chaitanya Priya

    2010-12-01

    A variable quality characteristic is assumed to follow the well-known Laplace distribution. Control chart constants for the process mean and process dispersion, based on a number of subgroup statistics including the subgroup mean and range, are evaluated from first principles. Limits obtained through the kurtosis correction method are borrowed from Tadikamalla and Popescu (2003). The performance of these sets of control limits is compared through a simulation study and the relative preferences are arrived at. The methods are illustrated by an example.

  3. A numerical method for determining the radial wave motion correction in plane wave couplers

    DEFF Research Database (Denmark)

    Cutanda Henriquez, Vicente; Barrera Figueroa, Salvador; Torras Rosell, Antoni

    2016-01-01

    Microphones are used for realising the unit of sound pressure level, the pascal (Pa). Electro-acoustic reciprocity is the preferred method for the absolute determination of the sensitivity. This method can be applied in different sound fields: uniform pressure, free field or diffuse field. Pressure […] solution is an analytical expression that estimates the difference between the ideal plane-wave sound field and a more complex lossless sound field created by a non-planar movement of the microphone's membranes. Alternatively, a correction may be calculated numerically by introducing a full model …

  4. A Novel Bias Correction Method for Soil Moisture and Ocean Salinity (SMOS) Soil Moisture: Retrieval Ensembles

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2015-12-01

    Bias correction is a very important pre-processing step in satellite data assimilation analysis, as data assimilation itself cannot circumvent satellite biases. We introduce a retrieval algorithm-specific and spatially heterogeneous Instantaneous Field of View (IFOV) bias correction method for Soil Moisture and Ocean Salinity (SMOS) soil moisture. To the best of our knowledge, this is the first paper to present a probabilistic representation of SMOS soil moisture using retrieval ensembles. We illustrate that retrieval ensembles effectively mitigated the overestimation problem of SMOS soil moisture arising from brightness temperature errors over West Africa in a computationally efficient way (ensemble size: 12, no time-integration). In contrast, the existing method of Cumulative Distribution Function (CDF) matching considerably increased the SMOS biases, due to the limitations of relying on imperfect reference data. Validation at two semi-arid sites, Benin (a moderately wet and vegetated area) and Niger (dry and sandy bare soils), showed that the SMOS errors arising from rain and vegetation attenuation were appropriately corrected by the ensemble approaches. In Benin, the Root Mean Square Errors (RMSEs) decreased from 0.1248 m3/m3 for CDF matching to 0.0678 m3/m3 for the proposed ensemble approach. In Niger, the RMSEs decreased from 0.14 m3/m3 for CDF matching to 0.045 m3/m3 for the ensemble approach.

  5. A Corrective Strategy to Alleviate Overloading in Transmission Lines Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Manoj Kumar Maharana

    2010-06-01

    This paper presents novel corrective control actions to alleviate overloads in transmission lines by the Particle Swarm Optimization (PSO) method. Generator rescheduling and/or load shedding is performed locally to restore the system from an abnormal to a normal operating state. The appropriate identification of generators and load buses at which to perform the corrective control actions is an important task for the operators. A new Direct Acyclic Graph (DAG) technique for the selection of participating generators and buses with respect to a contingency is presented. The effectiveness of the proposed approach is demonstrated with the help of the IEEE 30-bus system. The results show that the proposed approach is computationally fast, reliable and efficient in restoring the system to the normal state after a contingency with minimal control actions.
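
    A generic PSO loop of the kind used for such corrective actions is sketched below; the cost function (overload penalty plus rescheduling and load-shedding costs) and the variable bounds are problem-specific and are left as hypothetical inputs.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for a box-constrained cost function.
    In the paper's setting, the decision vector would hold generator
    rescheduling amounts and load-shedding fractions."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)            # keep particles inside the box
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Hypothetical 3-variable usage:
lo, hi = np.zeros(3), np.ones(3)
best_x, best_cost = pso_minimize(lambda u: ((u - 0.3) ** 2).sum(), (lo, hi))
```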

  6. Computational method for the correction of proximity effect in electron-beam lithography (Poster Paper)

    Science.gov (United States)

    Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas

    1992-07-01

    Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate these problems by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
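
    A minimal sketch of such an iterative pattern-domain scheme is shown below: the dose map is repeatedly nudged by the residual between the target exposure and the PSF-blurred dose, the fixed-step analogue of a steepest-descent update. The double-Gaussian PSF and the grid are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

def correct_dose(target, psf, alpha=1.0, iterations=50):
    """Iteratively solve psf * dose = target (deconvolution in the pattern
    domain) with fixed-step residual updates; doses stay non-negative."""
    dose = target.copy()
    for _ in range(iterations):
        residual = target - fftconvolve(dose, psf, mode="same")
        dose = np.clip(dose + alpha * residual, 0.0, None)
    return dose

# Hypothetical double-Gaussian proximity PSF on a coarse grid:
x, y = np.meshgrid(*(np.arange(-16, 17),) * 2)
psf = np.exp(-(x**2 + y**2) / 4.0) + 0.1 * np.exp(-(x**2 + y**2) / 200.0)
psf /= psf.sum()
pattern = np.zeros((64, 64))
pattern[24:40, 28:36] = 1.0                 # a single rectangular feature
dose = correct_dose(pattern, psf)
```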

  7. X-ray scatter correction method for dedicated breast computed tomography: improvements and initial patient testing.

    Science.gov (United States)

    Ramamurthy, Senthil; D'Orsi, Carl J; Sechopoulos, Ioannis

    2016-02-07

    A previously proposed x-ray scatter correction method for dedicated breast computed tomography was further developed and implemented so as to allow for initial patient testing. The method involves the acquisition of a complete second set of breast CT projections covering 360° with a perforated tungsten plate in the path of the x-ray beam. To make patient testing feasible, a wirelessly controlled electronic positioner for the tungsten plate was designed and added to a breast CT system. Other improvements to the algorithm were implemented, including automated exclusion of non-valid primary estimate points and the use of a different approximation method to estimate the full scatter signal. To evaluate the effectiveness of the algorithm, the resulting image quality was assessed with a breast phantom and with nine patient images. The improvements in the algorithm avoided the introduction of artifacts, especially at object borders, which was an issue in the previous implementation in some cases. Contrast, in terms of both signal difference and signal difference-to-noise ratio, was improved with the proposed method, as opposed to with the correction algorithm incorporated in the system, which does not recover contrast. Patient image evaluation also showed enhanced contrast, better cupping correction, and more consistent voxel values for the different tissues. The algorithm also reduces artifacts present in reconstructions of non-regularly shaped breasts. With the implemented hardware and software improvements, the proposed method can be reliably used during patient breast CT imaging, resulting in improved image quality, no introduction of artifacts, and in some cases a reduction of artifacts already present. The impact of the algorithm on actual clinical performance for detection, diagnosis and other clinical tasks in breast imaging remains to be evaluated.

  8. A motion correction method for indoor robot based on lidar feature extraction and matching

    Science.gov (United States)

    Gou, Jiansong; Guo, Yu; Wei, Yang; Li, Zheng; Zhao, Yeming; Wang, Lirong; Chen, Xiaohe

    2018-01-01

    For robots used for indoor environment detection, positioning and navigation with a Light Detection and Ranging system (Lidar), the accuracy of map building, positioning and navigation is largely restricted by the motion accuracy. Due to manufacturing and transmission errors in the mechanical structure, sensors being easily affected by the environment, and other factors, a robot's cumulative motion error is inevitable. This paper presents a series of methods to overcome these problems: point-set partition and feature extraction methods for processing Lidar scan points, and a feature matching method to correct the motion process, with less computation, a more reasonable and rigorous threshold, a wider scope of application, and higher efficiency and accuracy. While extracting environment features and building indoor maps, these methods analyze the motion error of the robot and correct it, improving the accuracy of the motion and the map without any additional hardware. Experiments prove that the rotation error and translation error of the robot platform used in the experiments can be reduced by 50% and 70%, respectively. The methods evidently improve the motion accuracy with strong effectiveness and practicality.

  9. Nonuniform Illumination Correction Algorithm for Underwater Images Using Maximum Likelihood Estimation Method

    Directory of Open Access Journals (Sweden)

    Sonali Sachin Sankpal

    2016-01-01

    Scattering and absorption of light are the main reasons for limited visibility in water; the suspended particles and dissolved chemical compounds in water are responsible for this scattering and absorption. The limited visibility results in degradation of underwater images. Visibility can be increased by using an artificial light source in the underwater imaging system, but the artificial light illuminates the scene in a nonuniform fashion, producing a bright spot at the center with dark surrounding regions. In some cases the imaging system itself creates a dark region in the image by casting a shadow on the objects. The problem of nonuniform illumination is neglected in most underwater image enhancement techniques, and very few methods show results on color images. This paper suggests a method for nonuniform illumination correction for underwater images. The method assumes that natural underwater images are Rayleigh distributed and uses maximum likelihood estimation of the scale parameter to map the image distribution to a Rayleigh distribution. The method is compared with traditional methods for nonuniform illumination correction using no-reference image quality metrics such as average luminance, average information entropy, normalized neighborhood function, average contrast, and comprehensive assessment function.
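
    The MLE of the Rayleigh scale parameter has the closed form sigma^2 = sum(x^2) / (2N), which makes a blockwise correction straightforward to sketch. The block-based rescaling below is a simplified stand-in for the mapping described in the paper.

```python
import numpy as np

def rayleigh_sigma(x):
    """Maximum likelihood estimate of the Rayleigh scale parameter."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x**2) / 2.0)

def equalize_illumination(img, block=32):
    """Rescale each block so its local Rayleigh scale matches the global one,
    flattening bright centers and dark borders (simplified block variant)."""
    out = img.astype(float).copy()
    global_sigma = rayleigh_sigma(img)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            patch = out[i:i + block, j:j + block]
            s = rayleigh_sigma(patch)
            if s > 0:
                patch *= global_sigma / s   # in-place view update
    return np.clip(out, 0, 255)
```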

  10. A new method of body habitus correction for total body potassium measurements

    Energy Technology Data Exchange (ETDEWEB)

    O' Hehir, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Green, S [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom); Beddoe, A H [University Hospital Birmingham Foundation NHS Trust, Birmingham (United Kingdom)

    2006-09-07

    This paper describes an accurate and time-efficient method for the determination of total body potassium via a combination of measurements in the Birmingham whole body counter and the use of the Monte Carlo n-particle (MCNP) simulation code. In developing this method, MCNP has also been used to derive values for some components of the total measurement uncertainty which are difficult to quantify experimentally. A method is proposed for MCNP-assessed body habitus corrections based on a simple generic anthropomorphic model, scaled for individual height and weight. The use of this model increases patient comfort by reducing the need for comprehensive anthropomorphic measurements. The analysis shows that the total uncertainty in potassium weight determination by this whole body counting methodology for water-filled phantoms with a known amount of potassium is 2.7% (SD). The uncertainty in the method of body habitus correction (applicable also to phantom-based methods) is 1.5% (SD). It is concluded that this new strategy provides a sufficiently accurate model for routine clinical use.

  11. Application of an iterative sea effect correction method to MT soundings carried out in Jeju Island, Korea

    Science.gov (United States)

    Yang, J.; Lee, H.; Yoo, H.; Huh, S.

    2009-12-01

    When magnetotelluric (MT) data are obtained in the vicinity of the coast, the surrounding seas make it difficult to interpret subsurface structures, in particular the deep parts of the subsurface. We apply an iterative method to remove the sea effects. The iterative method was originally developed to remove the distortion due to topographic changes from MT data recorded on the seafloor. The iterative sea-effect correction is performed in two steps: one corrects the sea effect, and the other inverts the sea-effect-corrected responses. The two steps are carried out alternately until a criterion for either the inversion or the sea-effect correction is satisfied. Since the 3-D surrounding sea bathymetry is only incorporated into the forward modeling for the sea-effect correction, the method can be more robust than methods that incorporate the 3-D sea bathymetry into the model space for inversion. Synthetic examples show that the sea-effect correction method yields an inverted model comparable to the true model. By applying the method to real field data acquired on Jeju Island, Korea, we also demonstrate that it effectively removes the sea effects from 1-D and 2-D real field data, which enhances the inversion results. From these results, we conclude that the iterative sea-effect correction method is promising for recovering the true response of the subsurface more precisely. [Figure: (a) observed (uncorrected) and (b) sea-effect-corrected sounding curves of XY- and YX-mode with determinant averages (DET) at site JSL12; (c) 1-D resistivity models obtained by Occam inversion of the determinant average (DET) at each iteration stage for site JSL12; (d) RMS misfit between Z and Zo at each iteration stage. The inverted model at the initial (0th) stage was obtained without sea-effect correction.]
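
    The alternating structure of the method can be sketched as a short loop, assuming callables for the 1-D inversion and for forward modelling with and without the sea; all names here are hypothetical placeholders, not the authors' code.

```python
def iterative_sea_effect_correction(z_obs, invert_1d, forward_with_sea,
                                    forward_no_sea, n_iter=5):
    """Schematic alternation of sea-effect correction and inversion.

    z_obs            : observed MT responses at a coastal site
    invert_1d        : callable, inverts (corrected) responses into a model
    forward_with_sea : callable, forward response including the 3-D sea
    forward_no_sea   : callable, forward response of the same model, no sea
    """
    z = z_obs
    for _ in range(n_iter):
        model = invert_1d(z)                      # invert corrected responses
        distortion = forward_with_sea(model) / forward_no_sea(model)
        z = z_obs / distortion                    # strip the sea effect
    return invert_1d(z), z
```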

  12. Analysis of efficient preconditioned defect correction methods for nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter

    2014-01-01

    Robust computational procedures for the solution of non-hydrostatic, free-surface, irrotational and inviscid water waves in three space dimensions can be based on iterative preconditioned defect correction (PDC) methods. Such methods can be made efficient and scalable to enable prediction of free-surface wave transformation and accurate wave kinematics in both deep and shallow waters in large marine areas, or for predicting the outcome of experiments in large numerical wave tanks. We revisit the classical governing equations, which are the fully nonlinear and dispersive potential flow equations. We present a new detailed fundamental analysis using finite-amplitude wave solutions for iterative solvers. We demonstrate that the PDC method, in combination with a high-order discretization method, enables efficient and scalable solution of the linear system of equations arising in potential flow...
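
    The core PDC iteration itself is compact. The following minimal NumPy sketch uses a generic diagonally dominant test matrix and a Jacobi preconditioner, which are assumptions for illustration rather than the wave-model operators of the record above; it shows only the defect-correction loop.

        import numpy as np

        # Minimal preconditioned defect correction (PDC) sketch: solve A x = b by
        # repeatedly forming the defect r = b - A x and applying an inexpensive
        # preconditioner M (here Jacobi, the diagonal of A) to correct x.
        rng = np.random.default_rng(0)
        n = 200
        A = rng.standard_normal((n, n))
        A += n * np.eye(n)                 # make A strongly diagonally dominant
        b = rng.standard_normal(n)

        M_inv = 1.0 / np.diag(A)           # Jacobi preconditioner
        x = np.zeros(n)
        for k in range(200):
            r = b - A @ x                  # defect (residual)
            x = x + M_inv * r              # preconditioned correction
            if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
                break

        print(k, np.linalg.norm(b - A @ x))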

  13. Comparing bias correction methods in downscaling meteorological variables for hydrologic impact study in an arid area in China

    Science.gov (United States)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2014-11-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River Basin, Northwest China, and are expected to be vulnerable to climate change. Regional climate models (RCMs) have been shown to provide more reliable results for regional impact studies of climate change (e.g., on water resources) than GCMs. However, due to their often considerable biases, it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis of input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods applied to the output of an RCM over the Kaidu River Basin, one of the headwaters of the Tarim River Basin. The precipitation correction methods are Linear Scaling (LS), LOCal Intensity scaling (LOCI), Power Transformation (PT), Distribution Mapping (DM) and Quantile Mapping (QM); the temperature correction methods are LS, VARIance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, and their impacts on streamflow were then compared by driving a distributed hydrologic model. The results show: (1) streamflow is sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, which results in biases in the simulated streamflows, and all bias correction methods effectively improved these simulations; (3) for precipitation, the PT and QM methods performed equally best in correcting the frequency-based indices (e.g., SD, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all bias correction methods performed equally well; (5) for simulated streamflow
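
    A minimal sketch of the simplest of the listed methods, linear scaling, follows; the function name and the monthly-mean formulation are generic illustrations, not the authors' code. Precipitation is corrected with a multiplicative monthly factor and temperature with an additive monthly offset.

        import numpy as np

        def linear_scaling(rcm_hist, obs, rcm_fut, mon_hist, mon_fut, multiplicative):
            """Monthly linear scaling (LS): multiplicative factors for precipitation,
            additive offsets for temperature. Generic sketch with invented names."""
            out = np.empty_like(rcm_fut, dtype=float)
            for m in range(1, 13):
                h = rcm_hist[mon_hist == m]          # model climatology for month m
                o = obs[mon_hist == m]               # observed climatology for month m
                sel = mon_fut == m
                if multiplicative:                   # precipitation
                    out[sel] = rcm_fut[sel] * (o.mean() / max(h.mean(), 1e-9))
                else:                                # temperature
                    out[sel] = rcm_fut[sel] + (o.mean() - h.mean())
            return out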

  14. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    Science.gov (United States)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and are expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. The precipitation correction methods applied are linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while the temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and their use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all
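
    Quantile mapping, one of the two best performers here for frequency-based indices, can be sketched as an empirical CDF transfer. The version below is a generic textbook implementation with invented names, not the authors' code.

        import numpy as np

        def quantile_mapping(x_raw, x_obs, x_mod_hist):
            """Map each raw model value to the observed value at the same
            non-exceedance probability within the model's own climatology."""
            mod_sorted = np.sort(x_mod_hist)
            p = np.searchsorted(mod_sorted, x_raw, side="right") / mod_sorted.size
            return np.quantile(x_obs, np.clip(p, 0.0, 1.0))

        # toy check: a model that is biased high and too variable
        rng = np.random.default_rng(1)
        obs = rng.gamma(2.0, 3.0, 5000)
        mod = obs * 1.4 + 2.0
        print(obs.mean(), mod.mean(), quantile_mapping(mod, obs, mod).mean())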

  15. Influence of Soret, Hall and Joule heating effects on mixed convection flow in a saturated porous medium in a vertical channel by the Adomian Decomposition Method

    Science.gov (United States)

    Reddy, Ch. Ram; Kaladhar, K.; Srinivasacharya, D.; Pradeepa, T.

    2016-02-01

    This paper analyzes the laminar, incompressible mixed convective transport inside a vertical channel filled with an electrically conducting fluid-saturated porous medium. In addition, the model incorporates the combined effects of Soret, Hall current and Joule heating. The nonlinear governing equations and their boundary conditions are first cast into a dimensionless form using suitable similarity transformations and then solved using the Adomian Decomposition Method (ADM). In order to explore the influence of the various parameters on the fluid flow properties, a quantitative analysis is presented graphically and in tabular form.

  16. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  17. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

    A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates, and two different error analyses of the model weight prediction, are also discussed in the appendices of the paper.
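
    The two-step structure of the fit (weight first, then center of gravity using the weight estimate) can be illustrated in a reduced two-dimensional setting. The sketch below is a toy version with pitch attitude only and invented numbers; the actual method works with the full set of balance loads and attitude angles.

        import numpy as np

        rng = np.random.default_rng(2)
        theta = np.deg2rad(np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0]))  # wind-off pitch angles
        W_true, x_true, z_true = 250.0, 0.40, 0.05      # hypothetical weight and CG offsets

        NF = W_true * np.cos(theta) + rng.normal(0, 0.1, theta.size)   # normal force
        AF = W_true * np.sin(theta) + rng.normal(0, 0.1, theta.size)   # axial force
        PM = W_true * (x_true * np.cos(theta) + z_true * np.sin(theta)) \
             + rng.normal(0, 0.05, theta.size)                         # pitching moment

        # fit 1: least squares estimate of the model weight from the force components
        a = np.concatenate([np.cos(theta), np.sin(theta)])
        y = np.concatenate([NF, AF])
        W_hat = (a @ y) / (a @ a)

        # fit 2: least squares estimate of the CG coordinates, using W_hat as input
        B = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
        (x_hat, z_hat), *_ = np.linalg.lstsq(B, PM, rcond=None)
        print(W_hat, x_hat, z_hat)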

  18. ENmix: a novel background correction method for Illumina HumanMethylation450 BeadChip.

    Science.gov (United States)

    Xu, Zongli; Niu, Liang; Li, Leping; Taylor, Jack A

    2016-02-18

    The Illumina HumanMethylation450 BeadChip is increasingly utilized in epigenome-wide association studies; however, this array-based measurement of DNA methylation is subject to measurement variation. Appropriate data preprocessing to remove background noise is important for detecting the small changes that may be associated with disease. We developed a novel background correction method, ENmix, that uses a mixture of exponential and truncated normal distributions to flexibly model signal intensity and a truncated normal distribution to model background noise. Depending on data availability, we employ three approaches to estimate the background normal distribution parameters, using (i) internal chip negative controls, (ii) out-of-band Infinium I probe intensities or (iii) combined methylated and unmethylated intensities. We evaluated ENmix against other available methods for both reproducibility among duplicate samples and accuracy of methylation measurement among laboratory control samples. ENmix outperformed the other background correction methods on both measures and substantially reduced the probe-design-type bias between Infinium I and II probes. In a reanalysis of existing EWAS data we show that ENmix can identify additional CpGs and results in smaller P-value estimates for previously validated CpGs. We have incorporated the method into the R package ENmix, which is freely available from the Bioconductor website. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Fast pressure-correction method for incompressible Navier-Stokes equations in curvilinear coordinates

    Science.gov (United States)

    Aithal, Abhiram; Ferrante, Antonino

    2017-11-01

    In order to perform direct numerical simulations (DNS) of turbulent flows over curved surfaces and axisymmetric bodies, we have developed a numerical methodology to solve the incompressible Navier-Stokes (NS) equations in curvilinear coordinates for orthogonal meshes. The orthogonal meshes are generated by solving a coupled system of non-linear Poisson equations. The NS equations in orthogonal curvilinear coordinates are discretized in space on a staggered mesh using a second-order central-difference scheme and are solved with an FFT-based pressure-correction method. The momentum equation is integrated in time using the second-order Adams-Bashforth scheme. The velocity field is advanced in time by applying the pressure correction to the approximate velocity such that it satisfies the divergence-free condition. The novelty of the method lies in solving the variable-coefficient Poisson equation for pressure using an FFT-based Poisson solver rather than slower multigrid methods. We present verification and validation results for the new numerical method and DNS results for transitional flow over a curved axisymmetric body.
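
    The FFT-based Poisson solve at the heart of the pressure correction is easiest to show in its simplest setting: a constant-coefficient Poisson equation on a periodic uniform grid. The paper's solver treats the variable-coefficient equation on curvilinear meshes, which this hedged sketch does not attempt.

        import numpy as np

        def fft_poisson_2d_periodic(f, dx):
            """Solve lap(p) = f on a periodic, uniform 2-D grid via FFT."""
            ny, nx = f.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
            KX, KY = np.meshgrid(kx, ky)
            k2 = KX**2 + KY**2
            f_hat = np.fft.fft2(f)
            p_hat = np.zeros_like(f_hat)
            p_hat[k2 > 0] = -f_hat[k2 > 0] / k2[k2 > 0]   # zero-mean solution
            return np.real(np.fft.ifft2(p_hat))

        # verification against p = sin(x) sin(y), for which lap(p) = -2 p
        n = 64; dx = 2 * np.pi / n
        x = np.arange(n) * dx
        X, Y = np.meshgrid(x, x)
        p = fft_poisson_2d_periodic(-2 * np.sin(X) * np.sin(Y), dx)
        print(np.max(np.abs(p - np.sin(X) * np.sin(Y))))   # ~ machine precision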

  20. Methods for the correction of vascular artifacts in PET O-15 water brain-mapping studies

    Energy Technology Data Exchange (ETDEWEB)

    Chen, K.; Reiman, E.M. [Univ. of Arizona, Tucson, AZ (United States); Good Samaritan Regional Medical Center, Phoenix, AZ (United States). PET Center]; Lawson, M.; Yun, L.S.; Bandy, D. [Good Samaritan Regional Medical Center, Phoenix, AZ (United States). PET Center]

    1996-12-01

    While positron emission tomographic (PET) measurements of regional cerebral blood flow (rCBF) can be used to map brain regions that are involved in normal and pathological human behaviors, measurements in the anteromedial temporal lobe can be confounded by the combined effects of radiotracer activity in neighboring arteries and partial-volume averaging. The authors now describe two simple methods to address this vascular artifact. One method utilizes the early frames of a dynamic PET study, while the other utilizes a coregistered magnetic resonance image (MRI) to characterize the vascular region of interest (VROI). Both methods subsequently assign a common value to each pixel in the VROI for the control scan and the activation scan. To study the vascular artifact and to demonstrate the ability of the proposed methods to correct it, four dynamic PET scans were performed in a single subject during the same behavioral state. For each of the four scans, a vascular scan containing vascular activity was computed as the summation of the images acquired 0-60 s after radiotracer administration, and a control scan containing minimal vascular activity was computed as the summation of the images acquired 20-80 s after radiotracer administration. t-score maps calculated from the four pairs of vascular and control scans were used to characterize regional blood flow differences related to vascular activity before and after application of each vascular artifact correction method. Both methods eliminated the observed differences in vascular activity, as well as the vascular artifact observed in the anteromedial temporal lobes. Using PET data from a study of normal human emotion, these methods permitted us to identify rCBF increases in the anteromedial temporal lobe free from the potentially confounding, combined effects of vascular activity and partial-volume averaging.
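
    The key operation, assigning a common value to the vascular region of interest in both scans, is straightforward to express in code. In the sketch below, using the mean over the VROI of both scans as the common value is an assumption made for illustration; the record does not specify which common value is used.

        import numpy as np

        def neutralize_vroi(control, activation, vroi_mask):
            """Assign one common value to every voxel of the vascular ROI (VROI)
            in both the control and the activation scan, so that vascular
            activity can no longer produce differences between them there."""
            control, activation = control.copy(), activation.copy()
            # choice of common value is an assumption (mean over both scans)
            common = 0.5 * (control[vroi_mask].mean() + activation[vroi_mask].mean())
            control[vroi_mask] = common
            activation[vroi_mask] = common
            return control, activation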

  1. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    Science.gov (United States)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and of the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed for evaluation of the different bias correction methods. The analysis suggests that the proposed method effectively corrects the daily bias in rainfall compared with using monthly factors. The methods that adjusted the wet-day frequencies, such as local intensity scaling, modified power transformation and distribution mapping, performed better than the methods that did not. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological
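
    A minimal version of the sliding-window idea could look as follows; the function name and the +/-15-day half-window default are assumptions for illustration, and a multi-year daily series is assumed so every calendar day has data in its window. The paper applies the same windowing to several different correction methods, not just this multiplicative factor.

        import numpy as np

        def daily_ls_factors(p_obs, p_mod, doy, half_window=15):
            """Daily multiplicative correction factors from a sliding window:
            for each calendar day d, pool all data within +/- half_window days
            (across all years) instead of using one factor per month."""
            factors = np.empty(366)
            for d in range(1, 367):
                # circular day-of-year distance handles the year boundary
                dist = np.minimum(np.abs(doy - d), 366 - np.abs(doy - d))
                in_win = dist <= half_window
                factors[d - 1] = p_obs[in_win].mean() / max(p_mod[in_win].mean(), 1e-9)
            return factors

        # usage: corrected = p_mod_future * factors[doy_future - 1]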

  2. Improved grey world color correction method based on weighted gain coefficients

    Science.gov (United States)

    Pan, Bin; Jiang, Zhiguo; Zhang, Haopeng; Luo, Xiaoyan; Wu, Junfeng

    2014-10-01

    The grey world algorithm is a simple but widely used global white balance method for color-cast images. However, the algorithm only assumes that the mean values of the R, G, and B components tend to be equal, which may lead to false alarms in normal images with large areas of single-color background, for example, images with ocean backgrounds. Another defect is that the grey world algorithm may cause luminance variations in the channels having no cast. We note that, though different in mean values, the standard deviations of the three channels are expected to converge in color-cast images, which is not the case for these false alarms. Based on this discrepancy, through a mathematical manipulation of both the mean values and the standard deviations of the three channels, a novel color correction model is proposed by weighting the gain coefficients in the grey world model. All three weighted gain coefficients in the proposed model tend to 1 for images containing large single-color regions, so as to avoid false alarms. For color-cast images, the channel exhibiting the color cast is given a weighted gain coefficient much less than 1 to correct the cast, while the other two channels are assigned weighted gain coefficients approximately equal to 1, ensuring that the proposed model has little negative effect on channels with no color cast. Experiments show that our model gives better color correction performance.
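
    A hedged sketch of the idea follows. The weighting function here is one plausible construction from the cue described in the abstract (channel standard deviations that diverge suggest a large single-color region rather than a cast); it is not the authors' formula, and the names are invented.

        import numpy as np

        def weighted_grey_world(img):
            """Illustrative weighted grey-world correction on an 8-bit RGB image."""
            pix = img.reshape(-1, 3).astype(np.float64)
            means, stds = pix.mean(axis=0), pix.std(axis=0)
            gains = means.mean() / np.maximum(means, 1e-9)   # classic grey-world gains
            spread = stds.std() / max(stds.mean(), 1e-9)     # divergence of the stds
            w = np.exp(-5.0 * spread)                        # w -> 0 for likely false alarms
            gains = 1.0 + w * (gains - 1.0)                  # weighted gains pulled toward 1
            return np.clip(img * gains, 0, 255).astype(np.uint8)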

  3. Simulations of Dissipative Circular Restricted Three-body Problems Using the Velocity-scaling Correction Method

    Science.gov (United States)

    Wang, Shoucheng; Huang, Guoqing; Wu, Xin

    2018-02-01

    In this paper, we survey the effect of dissipative forces, including radiation pressure, Poynting–Robertson drag, and solar wind drag, on the motion of dust grains with negligible mass, which are subjected to the gravities of the Sun and Jupiter moving in circular orbits. The effect of the dissipative parameter on the locations of the five Lagrangian equilibrium points is estimated analytically. The instability of the triangular equilibrium point L4 caused by the drag forces is also shown analytically. In this case, the Jacobi constant varies with time, whereas its integral invariant relation still permits the application of the conventional fourth-order Runge–Kutta algorithm combined with the velocity-scaling manifold correction scheme. Consequently, the velocity-only correction method significantly suppresses the artificial dissipation and the rapid growth of trajectory errors that occur in the uncorrected integration. The stability time of an orbit, regardless of whether it is chaotic in the conservative problem, is appreciably longer in the corrected case than in the uncorrected case when the dissipative forces are included. Even with the artificial dissipation ruled out, the physical drag dissipation still leads to an escape of grains. Numerical evidence also demonstrates that more orbits near the triangular equilibrium point L4 escape as the integration time increases.
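
    The velocity-only scaling is easiest to state for the conservative problem, where the Jacobi constant is exactly conserved. The sketch below corrects an RK4 integration of the planar circular restricted three-body problem by rescaling the velocity after each step. Handling the dissipative case, where the paper evolves the "constant" through its integral invariant relation, is not attempted here; the Sun-Jupiter parameters and initial state are approximate illustrative choices.

        import numpy as np

        MU = 9.537e-4  # approximate Sun-Jupiter mass ratio

        def omega_eff(x, y):
            """Effective potential of the planar circular restricted problem."""
            r1 = np.hypot(x + MU, y)        # distance to the Sun
            r2 = np.hypot(x - 1 + MU, y)    # distance to Jupiter
            return 0.5 * (x * x + y * y) + (1 - MU) / r1 + MU / r2

        def deriv(s):
            x, y, vx, vy = s
            r1 = np.hypot(x + MU, y); r2 = np.hypot(x - 1 + MU, y)
            ax = x + 2 * vy - (1 - MU) * (x + MU) / r1**3 - MU * (x - 1 + MU) / r2**3
            ay = y - 2 * vx - (1 - MU) * y / r1**3 - MU * y / r2**3
            return np.array([vx, vy, ax, ay])

        def rk4(s, h):
            k1 = deriv(s); k2 = deriv(s + 0.5 * h * k1)
            k3 = deriv(s + 0.5 * h * k2); k4 = deriv(s + h * k3)
            return s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        s = np.array([0.48, 0.86, 0.0, 0.05])                 # state near L4
        C0 = 2 * omega_eff(s[0], s[1]) - s[2]**2 - s[3]**2    # Jacobi constant
        h = 1e-3
        for _ in range(50000):
            s = rk4(s, h)
            v2 = s[2]**2 + s[3]**2
            v2_target = 2 * omega_eff(s[0], s[1]) - C0        # speed^2 demanded by C0
            if v2 > 0 and v2_target > 0:                      # skip ill-conditioned turning points
                s[2:] *= np.sqrt(v2_target / v2)              # velocity-only correction
        print(2 * omega_eff(s[0], s[1]) - s[2]**2 - s[3]**2 - C0)  # drift kept near zero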

  4. A method for multiplex gene synthesis employing error correction based on expression.

    Directory of Open Access Journals (Sweden)

    Timothy H-C Hsiau

    Our ability to engineer organisms with new biosynthetic pathways and genetic circuits is limited by the availability of protein characterization data and the cost of synthetic DNA. With new tools for reading and writing DNA, there are opportunities for scalable assays that more efficiently and cost-effectively mine for biochemical protein characteristics. To that end, we have developed the Multiplex Library Synthesis and Expression Correction (MuLSEC) method for rapid assembly, error correction, and expression characterization of many genes as a pooled library. This methodology enables gene synthesis from microarray-synthesized oligonucleotide pools with a one-pot technique, eliminating the need for robotic liquid handling. Post-assembly, the gene library is subjected to an ampicillin-based quality control selection, which serves as both an error correction step and a selection for proteins that are properly expressed and folded in E. coli. Next-generation sequencing of the post-selection DNA enables quantitative analysis of gene expression characteristics. We demonstrate the feasibility of this approach by building and testing over 90 genes for empirical evidence of soluble expression. This technique reduces the problem of part characterization to multiplex oligonucleotide synthesis and deep sequencing, two technologies under extensive development with projected cost reductions.

  5. Estimation methods and correction factors for body weight in Mangalarga Marchador horses

    Directory of Open Access Journals (Sweden)

    Felipe Amorim Caetano de Souza

    The objective was to evaluate the accuracy of six body weight (BW) estimation methods in Mangalarga Marchador (MM) horses (n = 318): method A - tape placement at three different positions around the thoracic girth; B - Crevat and Quetelec's formula; C - Hall's formula; D - Hintz and Griffiths' table; E - Santos' table; and F - Cintra's formula. Gender, age, and gestational stage were considered in additional analyses. The estimated average BW was compared to the actual scale weight by the paired t-test, the mean predicted error, and the coefficient of determination. In the general population, methods A (position 3), B, and C estimated BW values that differed from the scale weight. Method A, at positions 1 and 2, was more accurate in predicting the scale weight than all other methods. For pregnant mares, the tape at positions 1 and 2 in method A did not differ from the scale weight. Method A at positions 1 and 2 and the table (method E) may be used to estimate the BW of males and females of different ages and/or gestational stages. To use methods B and C, correction factors are necessary to precisely estimate body weights in this breed.

  6. An Enhanced VOF Method Coupled with Heat Transfer and Phase Change to Characterise Bubble Detachment in Saturated Pool Boiling

    Directory of Open Access Journals (Sweden)

    Anastasios Georgoulas

    2017-02-01

    The present numerical investigation identifies quantitative effects of fundamental controlling parameters on the detachment characteristics of isolated bubbles in cases of pool boiling in the nucleate boiling regime. For this purpose, an improved Volume of Fluid (VOF) approach, developed previously in the general framework of the OpenFOAM Computational Fluid Dynamics (CFD) toolbox, is further coupled with heat transfer and phase change. The predictions of the model are quantitatively verified against an existing analytical solution and experimental data in the literature. Following the model validation, four different series of parametric numerical experiments are performed, exploring the effect of the initial thermal boundary layer (ITBL) thickness for the case of saturated pool boiling of R113, as well as the effects of the surface wettability, wall superheat and gravity level for the cases of the R113, R22 and R134a refrigerants. It is confirmed that the ITBL is a very important parameter in the bubble growth and detachment process. Furthermore, for all of the examined working fluids the bubble detachment characteristics seem to be significantly affected by the triple-line contact angle (i.e., the wettability of the heated plate) for equilibrium contact angles higher than 45°. As expected, the simulations revealed that the heated wall superheat is very influential on the bubble growth and detachment process. Finally, besides the novelty of the numerical approach, a last finding is that the effect of gravity-level variation on the bubble detachment time and volume diminishes as the ambient pressure increases.

  7. Geometric correction method for 3D in-line X-ray phase contrast image reconstruction.

    Science.gov (United States)

    Wu, Geming; Wu, Mingshu; Dong, Linan; Luo, Shuqian

    2014-07-29

    Mechanical imperfections or misalignment of the X-ray phase contrast imaging (XPCI) components cause the projection data to be misplaced, and thus result in blurred reconstructed computed tomography (CT) slice images or images with edge artifacts. The features of the biological microstructures under investigation are thereby destroyed, and the spatial resolution of the XPCI image is decreased. This makes data correction an essential pre-processing step for CT reconstruction in XPCI. To remove the unexpected blurs and edge artifacts, a mathematical model for in-line XPCI is built in this paper from the primary geometric parameters, which comprise a rotation angle and a shift. Optimal geometric parameters are obtained by solving a maximization problem with an iterative two-step scheme: a composite geometric transformation followed by a linear regression process. After applying the geometric transformation with the optimal parameters to the projection data, the standard filtered back-projection algorithm is used to reconstruct the CT slice images. Numerical experiments were carried out on both synthetic and real in-line XPCI datasets. The experimental results demonstrate that, compared to existing correction methods, the proposed method improves CT image quality by removing both blurring and edge artifacts at the same time. The method provides an effective projection data correction scheme, is easy to implement, and can be extended to other XPCI techniques.

  8. METHOD OF RADIOMETRIC DISTORTION CORRECTION OF MULTISPECTRAL DATA FOR EARTH REMOTE SENSING

    Directory of Open Access Journals (Sweden)

    A. N. Grigoriev

    2015-07-01

    The paper deals with technologies for ground-based secondary processing of heterogeneous multispectral data. The sources of heterogeneity include uneven illumination of objects on the Earth's surface caused by different properties of the relief. A procedure for restoring the images of the spectral channels by compensating terrain-induced distortion is developed. The aim of this paper is to improve the quality of the restored images for areas with large and medium landforms. Methods. The research is based on elements of digital image processing theory, statistical processing of observation results, and the theory of multidimensional arrays. Main Results. The author has introduced operations on multidimensional arrays: concatenation and elementwise division. An extended model description for the input data about the area is given; the model contains all data necessary for image restoration. A correction method for radiometric distortions of multispectral Earth remote sensing data has been developed. The method consists of two phases: construction of empirical dependences of spectral reflectance on the relief properties, and restoration of the spectral images according to the semiempirical data. Practical Relevance. The research novelty lies in the development of an applied theory of multidimensional arrays with respect to the processing of multispectral data together with data on the topography and terrain objects. The results are usable for the development of radiometric data correction tools. Processing is performed on the basis of a digital terrain model, without ground work to study the reflective properties of the objects.

  9. An energy minimization method for the correction of cupping artifacts in cone-beam CT.

    Science.gov (United States)

    Xie, Shipeng; Zhuang, Wenqin; Li, Haibo

    2016-07-08

    The purpose of this study was to reduce cupping artifacts and improve the quantitative accuracy of images in cone-beam CT (CBCT). An energy minimization method (EMM) is proposed to reduce cupping artifacts in the reconstructed CBCT image. The cupping artifacts are iteratively optimized using efficient matrix computations, which are verified to be numerically stable by matrix analysis. Moreover, the energy in our formulation is convex in each of its variables, which makes the proposed energy minimization algorithm robust. The cupping artifacts are estimated as a result of minimizing this energy. The results indicate that the proposed algorithm is effective in reducing the cupping artifacts while preserving the quality of the reconstructed image. The proposed method operates on the reconstructed image without requiring any additional physical equipment; it is easily implemented and provides cupping correction from a single scan acquisition. The experimental results demonstrate that this method can successfully reduce the magnitude of cupping artifacts. The correction algorithm reported here may improve the uniformity of reconstructed images, thus assisting the development of volume visualization and threshold-based visualization techniques for reconstructed images. © 2016 The Authors.

  10. Automated computational aberration correction method for OCT and OCM (Conference Presentation)

    Science.gov (United States)

    Liu, Yuan-Zhi; Pande, Paritosh; South, Fredrick A.; Boppart, Stephen A.

    2017-02-01

    Aberrations in an optical system cause a reduction in imaging resolution and poor image contrast, and limit the imaging depth when imaging biological samples. Computational adaptive optics (CAO) provides an inexpensive and simpler alternative to the traditionally used hardware-based adaptive optics (HAO) techniques. In this paper, we present an automated computational aberration correction method for broadband interferometric imaging techniques, e.g., optical coherence tomography (OCT) and optical coherence microscopy (OCM). In the proposed method, the process of aberration correction is modeled as a filtering operation on the aberrant image using a phase filter in the Fourier domain. The phase filter is expressed as a linear combination of Zernike polynomials with unknown coefficients, which are estimated through an iterative optimization scheme based on maximizing an image sharpness metric. The resilient backpropagation (Rprop) algorithm, originally proposed as an alternative to the gradient-descent-based backpropagation algorithm for training the weights in a multilayer feedforward neural network, is employed to optimize the Zernike polynomial coefficients because of its simplicity and its robust performance with respect to the choice of parameters. Stochastic selection of the number and type of Zernike modes is introduced at each optimization step to explore different trajectories and enable the search for multiple optima in the multivariate search space. The method was validated on various tissue samples and shows robust performance for samples with different scattering properties, e.g., a phantom with subresolution particles, ex vivo rabbit adipose tissue, and the in vivo photoreceptor layer of the human retina.

  11. Methods of InSAR atmosphere correction for volcano activity monitoring

    Science.gov (United States)

    Gong, W.; Meyer, F.; Webley, P.W.; Lu, Z.

    2011-01-01

    When a Synthetic Aperture Radar (SAR) signal propagates through the atmosphere on its path to and from the sensor, it is inevitably affected by atmospheric effects. In particular, the applicability and accuracy of Interferometric SAR (InSAR) techniques for volcano monitoring is limited by atmospheric path delays. Therefore, atmospheric correction of interferograms is required to improve the performance of InSAR for detecting volcanic activity, especially in order to advance its ability to detect subtle pre-eruptive changes in deformation dynamics. In this paper, we focus on InSAR tropospheric mitigation methods and their performance in volcano deformation monitoring. Our study areas include Okmok volcano and Unimak Island, located in the eastern Aleutians, AK. We explore two methods to mitigate atmospheric artifacts, namely numerical weather model simulation and atmospheric filtering using Persistent Scatterer processing. We assess the capabilities of the proposed methods, and investigate their limitations and advantages when they are applied to determine volcanic processes. © 2011 IEEE.

  12. Efficient time-sampling method in Coulomb-corrected strong-field approximation.

    Science.gov (United States)

    Xiao, Xiang-Ru; Wang, Mu-Xue; Xiong, Wei-Hao; Peng, Liang-You

    2016-11-01

    One of the main goals of strong-field physics is to understand the complex structures formed in the momentum plane of the photoelectron. For this purpose, different semiclassical methods have been developed to seek an intuitive picture of the underlying mechanism. The most popular ones are the quantum trajectory Monte Carlo (QTMC) method and the Coulomb-corrected strong-field approximation (CCSFA), both of which take the classical action into consideration and can describe the interference effect. The CCSFA is more widely applicable in a large range of laser parameters due to its nonadiabatic nature in treating the initial tunneling dynamics. However, the CCSFA is much more time consuming than the QTMC method because of the numerical solution to the saddle-point equations. In the present work, we present a time-sampling method to overcome this disadvantage. Our method is as efficient as the fast QTMC method and as accurate as the original treatment in CCSFA. The performance of our method is verified by comparing the results of these methods with that of the exact solution to the time-dependent Schrödinger equation.

  13. On methods for correcting for the look-elsewhere effect in searches for new physics

    Science.gov (United States)

    Algeri, S.; van Dyk, D. A.; Conrad, J.; Anderson, B.

    2016-12-01

    The search for new significant peaks over an energy spectrum often involves a statistical multiple hypothesis testing problem. Separate tests of hypothesis are conducted at different locations over a fine grid, producing an ensemble of local p-values, the smallest of which is reported as evidence for the new resonance. Unfortunately, controlling the false detection rate (type I error rate) of such procedures may lead to excessively stringent acceptance criteria. In the recent physics literature, two promising statistical tools have been proposed to overcome these limitations. In 2005, a method to "find needles in haystacks" was introduced by Pilla et al. [1], and a second method was later proposed by Gross and Vitells [2] in the context of the "look-elsewhere effect" and trial factors. We show that, although the two methods exhibit similar performance for large sample sizes, for relatively small sample sizes the method of Pilla et al. leads to an artificial inflation of statistical power that stems from an increase in the false detection rate. The method, on the other hand, becomes particularly useful in multidimensional searches, where the Monte Carlo simulations required by Gross and Vitells are often unfeasible. We apply the methods to realistic simulations of Fermi Large Area Telescope data, in particular the search for dark matter annihilation lines. Further, we discuss the counter-intuitive scenario where the look-elsewhere corrections are more conservative than much more computationally efficient corrections for multiple hypothesis testing. Finally, we provide general guidelines for navigating the trade-offs between statistical and computational efficiency when selecting a statistical procedure for signal detection.
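
    For intuition, the crudest look-elsewhere correction treats the grid of tests as independent trials. The sketch below is a back-of-the-envelope device only; real search bins are correlated, which is precisely why the methods of Pilla et al. and Gross and Vitells cited above exist.

        # Independent-trials approximation: convert the smallest local p-value
        # into a global one. Illustrative only; neighbouring tests in a real
        # spectrum scan are correlated, so this over- or under-corrects.
        def global_p(p_local_min, n_trials):
            return 1.0 - (1.0 - p_local_min) ** n_trials

        print(global_p(1e-4, 500))   # ~0.049: an exciting local p-value becomes ~5% globally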

  14. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    Science.gov (United States)

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity - which is affected by climatic factors - affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess the consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at the species level. Correction for temperature bias had substantial effects on the seasonal trends of carabid catches. Synthesis and applications. The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature
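
    The correction implied by the fitted model is a simple exponential rescaling to a common reference temperature. In the sketch below, the rate r = 0.0863 per °C of maximum temperature is taken from the abstract, while the 15 °C reference temperature and the function name are arbitrary illustrative choices.

        import numpy as np

        def correct_pitfall_catch(catch, t_max, r=0.0863, t_ref=15.0):
            """Rescale a pitfall catch to a common reference temperature,
            assuming a constant proportional change r in catch rate per °C
            (exponential Q10-type response)."""
            return catch * np.exp(-r * (t_max - t_ref))

        # example: a similar true activity-density sampled on a warm vs. a cool day
        print(correct_pitfall_catch(30, t_max=25.0))   # ~12.7 after correction
        print(correct_pitfall_catch(13, t_max=15.0))   # 13.0 (already at reference)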

  15. A method to correct sampling ghosts in historic near-infrared Fourier transform spectrometer (FTS) measurements

    Directory of Open Access Journals (Sweden)

    S. Dohe

    2013-08-01

    The Total Carbon Column Observing Network (TCCON) has been established to provide ground-based remote sensing measurements of the column-averaged dry air mole fractions (DMF) of key greenhouse gases. To ensure network-wide consistency, biases between Fourier transform spectrometers at different sites have to be well controlled. Errors in interferogram sampling can introduce significant biases in retrievals. In this study we investigate a two-step scheme to correct these errors. In the first step the laser sampling error (LSE) is estimated by determining the sampling shift which minimises the magnitude of the signal intensity in selected, fully absorbed regions of the solar spectrum. The LSE is estimated for every day with measurements which meet certain selection criteria to derive the site-specific time series of the LSEs. In the second step, this sequence of LSEs is used to resample all the interferograms acquired at the site, and hence correct the sampling errors. Measurements acquired at the Izaña and Lauder TCCON sites are used to demonstrate the method. At both sites the sampling error histories show changes in LSE due to instrument interventions (e.g. realignment). Estimated LSEs are in good agreement with sampling errors inferred from the ratio of primary and ghost spectral signatures in optically bandpass-limited tungsten lamp spectra acquired at Lauder. The original time series of Xair and XCO2 (XY: column-averaged DMF of the target gas Y) at both sites show discrepancies of 0.2–0.5% due to changes in the LSE associated with instrument interventions or changes in the measurement sample rate. After resampling, discrepancies are reduced to 0.1% or less at Lauder and 0.2% at Izaña. In the latter case, coincident changes in interferometer alignment may also have contributed to the residual difference. In the future the proposed method will be used to correct historical spectra at all TCCON sites.

  16. A novel coding method for gene mutation correction during protein translation process.

    Science.gov (United States)

    Zhang, Lei; Tian, Fengchun; Wang, Shiyuan; Liu, Xiao

    2012-03-07

    In gene expression, gene mutations often have a negative effect on protein translation in prokaryotic organisms. Taking the influence of gene mutations into account, a novel method based on error-correction coding theory is proposed in this paper for modeling and detection of translation initiation. In the proposed method, combined with a one-dimensional codebook from block coding, a decoding method based on the minimum Hamming distance is designed for the analysis of translation efficiency. The results show that the proposed method can effectively recognize biologically significant regions such as the Shine-Dalgarno region within the mRNA leader sequences. A global analysis of single-base and multiple-base mutations of the Shine-Dalgarno sequences is also established. Compared with other published experimental methods for mutation analysis, translation initiation is not disrupted by multiple-base mutations under the proposed method, which shows its effectiveness in improving translation efficiency and its biological relevance for the genetic regulatory system. Copyright © 2011 Elsevier Ltd. All rights reserved.
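
    Minimum-Hamming-distance decoding itself is a few lines of code. The codebook below is a made-up stand-in loosely resembling Shine-Dalgarno-like sequence words, purely to show the decoding step; it is not the codebook of the paper.

        def hamming_decode(received, codebook):
            """Return the codeword closest to `received` in Hamming distance."""
            return min(codebook, key=lambda cw: sum(a != b for a, b in zip(received, cw)))

        # hypothetical codebook of 'initiation-competent' words, for illustration only
        codebook = ["AGGAGG", "AAGGAG", "GGAGGU"]
        print(hamming_decode("AGGACG", codebook))   # -> AGGAGG (one mutation corrected)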

  17. Climate change effects on irrigation demands and minimum stream discharge: impact of bias-correction method

    Directory of Open Access Journals (Sweden)

    J. Rasmussen

    2012-12-01

    Climate change is expected to result in a warmer global climate, with increased inter-annual variability. In this study, the possible impacts of these changes on irrigation and low stream flow are investigated using a distributed hydrological model of a sandy catchment in western Denmark. The IPCC climate scenario A1B was chosen as the basis for the study, and meteorological forcings (precipitation, reference evapotranspiration and temperature) derived from the ECHAM5-RACMO regional climate model for the period 2071–2100 were applied to the model. Two bias correction methods, delta change and Distribution-Based Scaling, were used to evaluate the importance of the bias correction method. Using the annual irrigation amounts, the 5-percentile stream flow, the median minimum stream flow and the mean stream flow as indicators, the irrigation and stream flow predicted by the two methods were compared. The study found that irrigation is significantly underestimated when using the delta change method, due to the inability of this method to account for changes in the inter-annual variability of precipitation and reference ET and the resulting effects on irrigation demands. However, this underestimation of irrigation did not result in a significantly higher summer stream flow, because the summer stream flow in the studied catchment is controlled by the winter and spring recharge rather than the summer precipitation. Additionally, future increases in CO2 are found to have a significant effect on both irrigation and low flow, due to reduced transpiration from plants.
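
    The delta change method that proves problematic here is easily sketched, which also makes its limitation visible: the observed series is only shifted or scaled month by month, so its inter-annual variability is carried over unchanged. The names and the monthly formulation below are generic, not the authors' code.

        import numpy as np

        def delta_change(obs, ctrl, scen, mon_obs, mon_clim, multiplicative=True):
            """Perturb the observed series with the model's monthly climate-change
            signal: ratios for precipitation, differences for temperature."""
            out = np.empty_like(obs, dtype=float)
            for m in range(1, 13):
                c = ctrl[mon_clim == m].mean()     # control-period model mean
                s = scen[mon_clim == m].mean()     # scenario-period model mean
                sel = mon_obs == m
                out[sel] = obs[sel] * (s / c) if multiplicative else obs[sel] + (s - c)
            return out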

  18. Acceleration of incremental-pressure-correction incompressible flow computations using a coarse-grid projection method

    Science.gov (United States)

    Kashefi, Ali; Staples, Anne

    2016-11-01

    Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening, and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.

  19. [Nonsurgical correction of congenital auricular deformities: a new method of neonatal molding and splinting].

    Science.gov (United States)

    Zambudio, G; Guirao, M J; Sánchez, J M; Girón, O; Ruiz, J I; Gutiérrez, M A

    2007-07-01

    The utility of the nonsurgical correction of congenital auricular deformities by ear molding and splinting has been previously established. Occasionally, its application can be difficult, and subsequent cooperation of the parents is necessary. We report a new method of splinting that simplifies the procedure. This was a prospective case series. Twenty ears in 15 patients between 7 and 60 days of age (average 22 days) were treated: 12 prominent ears, 4 Stahl's ears, 2 lop ears, 1 crinkled ear, and 1 case of an increased antihelix fold. Cotton impregnated with 2-octyl-cyanoacrylate is placed as a splint for 6 weeks. Bilateral application took less than 5 minutes, and there were no spills into the external auditory canal. The splint came off at 2 weeks, and a second application was necessary in all cases. There was no dermatitis and there were no skin ulcers. The treatment was successful in 11 cases, with partial improvement in 3, poor correction in 4, and recurrence in 2. Splint therapy is an easy nonsurgical method for the treatment of congenital auricular deformities that, applied during the first weeks of life, provides good aesthetic results in more than 50% of patients.

  20. Method of Calculating the Correction Factors for Cable Dimensioning in Smart Grids

    Science.gov (United States)

    Simutkin, M.; Tuzikova, V.; Tlusty, J.; Tulsky, V.; Muller, Z.

    2017-04-01

    One of the main causes of the overloading of electrical equipment by higher-harmonic currents is the rapidly increasing number of non-linear power consumers. Non-sinusoidal voltages and currents affect the operation of electrical equipment, reducing its lifetime, increasing the voltage and power losses in the network and reducing its capacity. The existing standards, which limit the emitted amounts of higher-harmonic currents, cannot by themselves keep interference in the power grid at a safe level. The article presents a method for determining a correction factor for the long-term allowable current of a cable that accounts for this influence. Using mathematical models in the software Elcut, the thermal processes in the cable under non-sinusoidal current flow were described. The theoretical principles, methods and mathematical models developed in the article allow the correction factor accounting for higher harmonics in the current spectrum to be calculated for network equipment under any type of non-linear load.

  1. Efficient genomic correction methods in human iPS cells using CRISPR-Cas9 system.

    Science.gov (United States)

    Li, Hongmei Lisa; Gee, Peter; Ishida, Kentaro; Hotta, Akitsu

    2016-05-15

    Precise gene correction using the CRISPR-Cas9 system in human iPS cells holds great promise for various applications, such as the study of gene functions, disease modeling, and gene therapy. In this review article, we summarize methods for effective editing of genomic sequences of iPS cells based on our experiences correcting dystrophin gene mutations with the CRISPR-Cas9 system. Designing specific sgRNAs as well as having efficient transfection methods and proper detection assays to assess genomic cleavage activities are critical for successful genome editing in iPS cells. In addition, because iPS cells are fragile by nature when dissociated into single cells, a step-by-step confirmation during the cell recovery process is recommended to obtain an adequate number of genome-edited iPS cell clones. We hope that the techniques described here will be useful for researchers from diverse backgrounds who would like to perform genome editing in iPS cells. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Alteration of trace element concentrations in plants by adhering particles - Methods of correction.

    Science.gov (United States)

    Pospiech, Solveig; Fahlbusch, Wiebke; Sauer, Benedikt; Pasold, Tino; Ruppert, Hans

    2017-09-01

    Trace element concentrations in plants may be influenced by airborne dust or adhering soil particles. Neglecting adhering particles on plant tissue leads to misinterpretation of trace element concentrations in research fields such as phytomining, phytoremediation, bio-monitoring, uptake of micronutrients and provenance studies. When washing or brushing the samples prior to analysis is insufficient or impossible, for example for fragile or pre-processed samples, a mathematical correction should be applied. In this study three methods are presented that allow the influence of adhering particles to be subtracted in order to obtain the element concentrations resulting from uptake alone. All mathematical models are based on trace elements with negligible soil-to-plant transfer. A prerequisite for the correction methods is trace element analysis with good accuracy and high precision, e.g., through complete acid digestion. In a data set of 1040 plant samples grown in open-field and pot trials, most plants show a small but detectable amount of adhering particles. While the concentrations of nutrients are nearly unaffected, trace element concentrations such as those of Al, Cd, Co, Cr, Fe, Mn, Ni, Pb, REEs, Ti and U may be significantly altered. Different sampling techniques, such as cutting height, can also significantly alter the concentrations measured in the samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
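
    A generic single-tracer correction along these lines (not necessarily any of the three specific methods of the paper) uses an element such as Ti, with negligible soil-to-plant transfer, to infer the adhering-soil mass fraction and subtract its contribution. The numbers in the example are hypothetical.

        def correct_adhering_soil(c_meas, c_soil, ti_plant, ti_soil):
            """Correct a measured plant concentration for adhering soil,
            using a tracer element with negligible soil-to-plant transfer.
            f is the inferred mass fraction of adhering soil in the sample."""
            f = ti_plant / ti_soil                    # adhering-soil mass fraction
            return (c_meas - f * c_soil) / (1.0 - f)  # concentration from uptake only

        # example: 1% adhering soil can dominate a low plant Cr concentration
        print(correct_adhering_soil(c_meas=2.0, c_soil=120.0, ti_plant=45.0, ti_soil=4500.0))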

  3. The use of saturation in qualitative research.

    Science.gov (United States)

    Walker, Janiece L

    2012-01-01

    Understanding qualitative research is an important component of cardiovascular nurses' practice and allows them to understand the experiences, stories, and perceptions of patients with cardiovascular conditions. In understanding qualitative research methods, it is essential that the cardiovascular nurse understands the process of saturation within qualitative methods. Saturation is a tool used for ensuring that adequate and quality data are collected to support the study. Saturation is frequently reported in qualitative research and may be considered a gold standard, but its use within qualitative methods has varied. Hence, the purpose of this column is to provide insight for the cardiovascular nurse regarding the use of saturation, by reviewing recommendations on the qualitative research methods for which it is appropriate and on how to know when saturation is achieved. In understanding saturation, the cardiovascular nurse can be a better consumer of qualitative research.

  4. A novel method for megavoltage scatter correction in cone-beam CT acquired concurrent with rotational irradiation

    NARCIS (Netherlands)

    van Herk, Marcel; Ploeger, Lennert; Sonke, Jan-Jakob

    2011-01-01

    Acquisition of cone-beam CT (CBCT) concurrent with VMAT results in scatter of the megavoltage (MV) beam onto the kilovoltage (kV) detector deteriorating CBCT image quality. The aim of this paper is to develop a method to estimate and correct for MV scatter reaching the kV panel. The correction

  5. Deviation correction method for close-range photometric stereo with nonuniform illumination

    Science.gov (United States)

    Fan, Hao; Qi, Lin; Wang, Nan; Dong, Junyu; Chen, Yijun; Yu, Hui

    2017-10-01

    Classical photometric stereo requires uniform collimated light, but point light sources are usually employed in practical setups. This introduces errors into the recovered surface shape. We found that when the light sources are evenly placed around the object with the same slant angle, the main component of the error is a low-frequency deformation, which can be approximately described by a quadratic function. We propose a postprocessing method to correct the deviation caused by the nonuniform illumination. The method refines the surface shape with prior information from a calibration using a flat plane or the object itself. We further introduce an optimization scheme that improves the reconstruction accuracy when three-dimensional information is available at some locations. Experiments were conducted on surfaces captured with our device and on surfaces from a public dataset. The results demonstrate the effectiveness of the proposed approach.
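
    The calibration-based variant can be sketched directly: fit a two-dimensional quadratic to the deviation measured on a flat reference plane and subtract it from the object's depth map. Only the quadratic basis follows the abstract; the function name and the assumption that the plane's deviation map is given directly are illustrative.

        import numpy as np

        def remove_quadratic_deviation(depth, plane_deviation):
            """Fit the low-frequency deformation observed on a calibration plane
            with a 2-D quadratic and subtract it from the object depth map."""
            h, w = depth.shape
            X, Y = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
            A = np.column_stack([np.ones(h * w), X.ravel(), Y.ravel(),
                                 X.ravel()**2, (X * Y).ravel(), Y.ravel()**2])
            coef, *_ = np.linalg.lstsq(A, plane_deviation.ravel(), rcond=None)
            return depth - (A @ coef).reshape(h, w)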

  6. Simultaneous modeling and optimization of nonlinear simulated moving bed chromatography by the prediction-correction method.

    Science.gov (United States)

    Bentley, Jason; Sloan, Charlotte; Kawajiri, Yoshiaki

    2013-03-08

    This work demonstrates a systematic prediction-correction (PC) method for simultaneously modeling and optimizing nonlinear simulated moving bed (SMB) chromatography. The PC method uses model-based optimization, SMB startup data, isotherm model selection, and parameter estimation to iteratively refine model parameters and find optimal operating conditions in a matter of hours to ensure high purity constraints and achieve optimal productivity. The PC algorithm proceeds until the SMB process is optimized without manual tuning. In case studies, it is shown that a nonlinear isotherm model and parameter values are determined reliably using SMB startup data. In one case study, a nonlinear SMB system is optimized after only two changes of operating conditions following the PC algorithm. The refined isotherm models are validated by frontal analysis and perturbation analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Absolute measurement method for correction of low-spatial frequency surface figures of aspherics

    Science.gov (United States)

    Lin, Wei-Cheng; Chang, Shenq-Tsong; Ho, Cheng-Fang; Kuo, Ching-Hsiang; Chung, Chien-Kai; Hsu, Wei-Yao; Tseng, Shih-Feng; Sung, Cheng-Kuo

    2017-05-01

    An absolute measurement method involving a computer-generated hologram is proposed to facilitate the identification of manufacturing form errors and mounting- and gravity-induced deformations of a 300-mm aspheric mirror. In this method, the frequency and magnitude of the curves plotted from each Zernike coefficient, obtained by rotating the mirror to various orientations about the optical axis, are used to distinguish the non-rotationally symmetric aberrations. In addition, the random ball test was used to calibrate the rotationally symmetric aberration (spherical aberration). The measured absolute surface figure revealed that a highly accurate aspheric surface with a peak-to-valley value of 1/8 wave at 632.8 nm was realized after the surface figure was corrected using the reconstructed error map.

  8. Correction factors for source strength determination in HDR brachytherapy using the in-phantom method.

    Science.gov (United States)

    Ubrich, Frank; Wulff, Jörg; Engenhart-Cabillic, Rita; Zink, Klemens

    2014-05-01

    For the purpose of clinical source strength determination for HDR brachytherapy sources, the German Society for Medical Physics (DGMP) recommends in its report 13 the use of a solid state phantom (Krieger phantom) with a thimble ionization chamber. In this work, the calibration chain for the determination of the reference air-kerma rate Ka,100 and the reference dose rate to water Dw,1 by ionization chamber measurement in the Krieger phantom was modeled via Monte Carlo simulations. These calculations were used to determine global correction factors k(tot), which allow a user to directly convert the reading of an ionization chamber calibrated in terms of absorbed dose to water into the desired quantity Ka,100 or Dw,1. The factor k(tot) was determined for four available (192)Ir sources and one (60)Co source with three different thimble ionization chambers. Finally, ionization chamber measurements on three μSelectron V2 HDR sources within the Krieger phantom were performed and Ka,100 was determined according to three different methods: 1) using a calibration factor in terms of absorbed dose to water with the global correction factor k(tot) according to DGMP report 13; 2) using a global correction factor calculated via Monte Carlo; 3) using a direct reference air-kerma rate calibration factor determined by the national metrology institute PTB. The comparison of the Monte Carlo based k(tot) with the values from DGMP report 13 showed that the DGMP data were systematically smaller by about 2-2.5%. The experimentally determined k(tot), based on the direct Ka,100 calibration, were also systematically smaller by about 1.5%. Despite these systematic deviations, the agreement of the different methods was in almost all cases within the 1σ level of confidence of their respective uncertainties, assuming a Gaussian distribution. The application of the Monte Carlo based k(tot) for the determination of Ka,100 for three μSelectron V2 sources

  9. X-ray scatter correction method for dedicated breast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sechopoulos, Ioannis [Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University School of Medicine, 1701 Upper Gate Drive NE, Suite 5018, Atlanta, Georgia 30322 (United States)

    2012-05-15

    Purpose: To improve image quality and accuracy in dedicated breast computed tomography (BCT) by removing the x-ray scatter signal included in the BCT projections. Methods: The previously characterized magnitude and distribution of x-ray scatter in BCT results in both cupping artifacts and reduction of contrast and accuracy in the reconstructions. In this study, an image processing method is proposed that estimates and subtracts the low-frequency x-ray scatter signal included in each BCT projection postacquisition and prereconstruction. The estimation of this signal is performed using simple additional hardware, one additional BCT projection acquisition with negligible radiation dose, and simple image processing software algorithms. The high frequency quantum noise due to the scatter signal is reduced using a noise filter postreconstruction. The dosimetric consequences and validity of the assumptions of this algorithm were determined using Monte Carlo simulations. The feasibility of this method was determined by imaging a breast phantom on a BCT clinical prototype and comparing the corrected reconstructions to the unprocessed reconstructions and to reconstructions obtained from fan-beam acquisitions as a reference standard. One-dimensional profiles of the reconstructions and objective image quality metrics were used to determine the impact of the algorithm. Results: The proposed additional acquisition results in negligible additional radiation dose to the imaged breast (≈0.4% of the standard BCT acquisition). The processed phantom reconstruction showed substantially reduced cupping artifacts, increased contrast between adipose and glandular tissue equivalents, higher voxel value accuracy, and no discernible blurring of high frequency features. Conclusions: The proposed scatter correction method for dedicated breast CT is feasible and can result in highly improved image quality. Further optimization and testing, especially with patient images, is necessary to
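
    The abstract does not detail the estimation step; a hypothetical sketch of the post-acquisition part, assuming the scatter field has been sampled at sparse detector positions (e.g., via the additional hardware) and is smooth enough to interpolate and low-pass filter before subtraction:

        import numpy as np
        from scipy.interpolate import griddata
        from scipy.ndimage import gaussian_filter

        def subtract_scatter(projection, sample_rc, sample_vals, sigma=8.0):
            """Subtract a low-frequency scatter field from one projection.

            sample_rc   : (N, 2) array of (row, col) positions where scatter was sampled
            sample_vals : (N,) scatter intensities measured at those positions
            """
            h, w = projection.shape
            rr, cc = np.mgrid[0:h, 0:w]
            # Interpolate the sparse samples onto the full detector grid ...
            scatter = griddata(sample_rc, sample_vals, (rr, cc),
                               method="linear", fill_value=sample_vals.mean())
            # ... and keep only the low-frequency component, since scatter is smooth.
            scatter = gaussian_filter(scatter, sigma)
            return np.clip(projection - scatter, 0.0, None)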

  10. Comparison of different cell type correction methods for genome-scale epigenetics studies.

    Science.gov (United States)

    Kaushal, Akhilesh; Zhang, Hongmei; Karmaus, Wilfried J J; Ray, Meredith; Torres, Mylin A; Smith, Alicia K; Wang, Shu-Li

    2017-04-14

    Whole blood is frequently utilized in genome-wide association studies of DNA methylation patterns in relation to environmental exposures or clinical outcomes. These associations can be confounded by cellular heterogeneity. Algorithms have been developed to measure or adjust for this heterogeneity, and some have been compared in the literature. However, with new methods available, it is unknown whether their findings are consistent and, if not, which method(s) perform better. Methods: We compared eight cell-type correction methods, including the method in the minfi R package, the method by Houseman et al., the removing unwanted variation (RUV) approach, the methods in the FaST-LMM-EWASher, ReFACTor, RefFreeEWAS, and RefFreeCellMix R programs, along with one approach utilizing surrogate variables (SVAs). We first evaluated the association of DNA methylation at each CpG across the whole genome with prenatal arsenic exposure levels and with cancer status, adjusted for the estimated cell-type information obtained from the different methods. We then compared the CpGs showing statistical significance across the approaches. For the methods implemented in minfi and proposed by Houseman et al., we utilized homogeneous data in which the composition of some blood cells was available and compared it with the estimated cell compositions. Finally, for methods that do not explicitly estimate cell compositions, we evaluated their performance using simulated DNA methylation data with a set of latent variables representing "cell types". Results from the SVA-based method overall showed the highest agreement with all other methods except FaST-LMM-EWASher. Using the homogeneous data, minfi provided better estimates of cell types than the method originally proposed by Houseman et al. Further simulation studies on the reference-free methods revealed that SVA provided good sensitivities and specificities, while RefFreeCellMix in general produced high sensitivities but specificities tended to be low when

  11. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Ye [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Tang, Xiao-Bin, E-mail: tangxiaobin@nuaa.edu.cn [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Chen, Da [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China); Jiangsu Key Laboratory of Nuclear Energy Equipment Materials Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016 (China)

    2015-10-11

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and at a fixed location. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr{sub 3}) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas of the two detectors, taking the detection spectrum of the HPGe detector as the reference. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R{sup 2}=0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible. - Highlights: • The airborne radioactivity monitoring equipment NH-UAV was developed to measure radionuclides after a nuclear accident. • A spectrum correction algorithm was proposed to obtain precise information on the detected radioactivity within a small area. • The spectrum correction method was verified as feasible. • The corresponding spectrum correction coefficients increase first and then stay constant.
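
    A small numpy illustration of the ratio processing described above; the peak areas, energies and fitted coefficients are made-up placeholders, not values from the paper:

        import numpy as np

        # Hypothetical net peak areas for the same gamma lines seen by both detectors.
        energies_keV = np.array([662.0, 1173.0, 1332.0, 1460.0])
        area_hpge  = np.array([10500., 8200., 7600., 5100.])   # reference spectrum
        area_labr3 = np.array([ 9800., 7300., 6600., 4300.])   # spectrum to correct

        # Ratio processing: one correction coefficient per line, then a linear fit
        # of coefficient versus energy (the paper reports R^2 = 0.9765 for such a fit).
        coeff = area_hpge / area_labr3
        slope, intercept = np.polyfit(energies_keV, coeff, 1)

        def correction_coefficient(E):
            """Correction coefficient for the LaBr3 spectrum at energy E (keV)."""
            return slope * E + intercept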

  12. A physics-based correction method for homogenizing historical subdaily time series from Switzerland

    Science.gov (United States)

    Kocen, R.; Brönnimann, S.; Breda, L.; Spadin, R.; Begert, M.; Füllemann, C.

    2010-09-01

    Homogeneous long-term climatological time series provide useful information on climate back to the preindustrial era. A high temporal resolution of climate data is desirable for addressing trends and variability in the mean climate and in climatic extremes. For Switzerland, three long (~250 yr) historical time series (Basel, Geneva, Gr. St. Bernhard) that were hitherto available only in the form of monthly means have recently been digitized (in cooperation with MeteoSwiss) at a subdaily scale. The digitized time series contain subdaily data (from 2 to 6 measurements per day) on temperature, precipitation/snow height, pressure and humidity, and subdaily descriptions of wind direction, wind speed and cloud cover. Long-term climatological records often contain inhomogeneities due to non-climatic changes such as station relocations, changes in instrumentation and instrument exposure, changes in observing schedules/practices, and environmental changes in the proximity of the observation site. Such disturbances can distort or hide the true climatic signal and could seriously affect the correct assessment and analysis of climate trends, variability and climatic extremes. It is therefore crucial to detect and eliminate artificial shifts and trends in the climate data, to the extent possible, prior to its application. Detailed information on the station history and instruments (metadata) can be of fundamental importance in the process of homogenization, supporting the determination of the exact time of inhomogeneities and the interpretation of statistical test results. While similar methods can be used for the detection of inhomogeneities in subdaily or monthly mean data, quite different correction methods can be chosen. The wealth of information in a high temporal resolution, in combination with multivariate data series, allows more physics-based correction methods. For instance, a detected radiation error in temperature can be corrected with an error model that

  13. Introduction of a new critical p value correction method for statistical significance analysis of metabonomics data.

    Science.gov (United States)

    Wang, Bo; Shi, Zhanquan; Weber, Georg F; Kennedy, Michael A

    2013-10-01

    Nuclear magnetic resonance (NMR) spectroscopy-based metabonomics is of growing importance for the discovery of human disease biomarkers. Identification and validation of disease biomarkers using statistical significance analysis (SSA) is critical for translation to clinical practice. SSA is performed by assessing a null hypothesis test using a derivative of the Student's t test, e.g., Welch's t test. Choosing how to correct the significance level for rejecting null hypotheses in the case of multiple testing, so as to maintain a constant family-wise type I error rate, is a common problem in such tests. The multiple testing problem arises because the likelihood of falsely rejecting the null hypothesis, i.e., a false positive, grows as the number of tests applied to the same data set increases. Several methods have been introduced to address this problem. Bonferroni correction (BC) assumes all variables are independent and therefore sacrifices sensitivity for detecting true positives in partially dependent data sets. False discovery rate (FDR) methods are more sensitive than BC but uniformly ascribe the highest stringency to the lowest p value variables. Here, we introduce standard deviation step down (SDSD), which is more sensitive and more appropriate than BC for partially dependent data sets. The sensitivity and type I error rate of SDSD can be adjusted based on the degree of variable dependency. SDSD generates fundamentally different profiles of critical p values compared with FDR methods, potentially leading to reduced type II error rates. SDSD is increasingly sensitive for more concentrated metabolites. SDSD is demonstrated using NMR-based metabonomics data collected on three different breast cancer cell line extracts.
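
    The abstract does not specify SDSD in enough detail to reproduce it; for orientation, the two baselines it is compared against, Bonferroni correction and one standard FDR procedure (Benjamini-Hochberg), can be sketched as follows:

        import numpy as np

        def bonferroni(pvals, alpha=0.05):
            """Reject H0 wherever p < alpha / m (controls the family-wise error rate)."""
            return pvals < alpha / len(pvals)

        def benjamini_hochberg(pvals, q=0.05):
            """Benjamini-Hochberg FDR: reject the k smallest p-values, where k is the
            largest index with p_(k) <= (k/m) * q."""
            m = len(pvals)
            order = np.argsort(pvals)
            thresh = (np.arange(1, m + 1) / m) * q
            passed = pvals[order] <= thresh
            k = passed.nonzero()[0].max() + 1 if passed.any() else 0
            reject = np.zeros(m, dtype=bool)
            reject[order[:k]] = True
            return reject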

  14. Algebraic correction methods for computational assessment of clone overlaps in DNA fingerprint mapping

    Directory of Open Access Journals (Sweden)

    Wendl Michael C

    2007-04-01

    Full Text Available Abstract. Background: The Sulston score is a well-established, though approximate metric for probabilistically evaluating postulated clone overlaps in DNA fingerprint mapping. It is known to systematically over-predict match probabilities by various orders of magnitude, depending upon project-specific parameters. Although the exact probability distribution is also available for the comparison problem, it is rather difficult to compute and cannot be used directly in most cases. A methodology providing both improved accuracy and computational economy is required. Results: We propose a straightforward algebraic correction procedure, which takes the Sulston score as a provisional value and applies a power-law equation to obtain an improved result. Numerical comparisons indicate dramatically increased accuracy over the range of parameters typical of traditional agarose fingerprint mapping. Issues with extrapolating the method into parameter ranges characteristic of newer capillary electrophoresis-based projects are also discussed. Conclusion: Although only marginally more expensive to compute than the raw Sulston score, the correction provides a vastly improved probabilistic description of hypothesized clone overlaps. This will clearly be important in overlap assessment and perhaps for other tasks as well, for example in using the ranking of overlap probabilities to assist in clone ordering.
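
    A sketch of the shape of such a correction, assuming the power law acts directly on the provisional score; the coefficients A and B below are purely illustrative, since the fitted, project-specific values are not given in the abstract:

        # Hypothetical power-law coefficients; the paper fits these to
        # project-specific parameters (they are NOT taken from the article).
        A, B = 0.8, 1.3

        def corrected_overlap_probability(sulston_score):
            """Apply a power-law correction to the provisional Sulston score."""
            return A * sulston_score ** B

        # In log space the correction is linear, which is how such fits are usually done:
        #   log(p_corr) = log(A) + B * log(p_sulston)
        p = corrected_overlap_probability(1e-12)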

  15. “Section to Point” Correction Method for Wind Power Forecasting Based on Cloud Theory

    Directory of Open Access Journals (Sweden)

    Dunnan Liu

    2015-01-01

    Full Text Available As an intermittent energy source, wind power is random and uncontrollable, so it is of great significance to improve the accuracy of wind power forecasting. Currently, most models for wind power forecasting are based on wind speed forecasting. However, this leads to a dilemma known as "garbage in, garbage out": it is difficult to improve the forecasting accuracy without improving the accuracy of input data such as the wind speed. In this paper, a new model based on cloud theory is proposed. It establishes a more accurate relational model between wind power and wind speed, which contains many catastrophe points. Then, combining the trend over adjacent time periods with the laws found in historical data, the forecast value is corrected according to the theory of "section to point" correction. This significantly improves the stability of the forecasting accuracy and reduces large forecasting errors at particular points. Finally, by analyzing generation power and historical wind speed data from Inner Mongolia, China, it is shown that the proposed method can effectively improve the forecasting accuracy.

  16. Comparison between MRI-based attenuation correction methods for brain PET in dementia patients

    Energy Technology Data Exchange (ETDEWEB)

    Cabello, Jorge; Lukas, Mathias; Pyka, Thomas; Nekolla, Stephan G.; Ziegler, Sibylle I. [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Rota Kops, Elena; Shah, N. Jon [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Ribeiro, Andre [Forschungszentrum Juelich GmbH, Institute of Neuroscience and Medicine 4, Medical Imaging Physics, Juelich (Germany); Institute of Biophysics and Biomedical Engineering, Lisbon (Portugal); Yakushev, Igor [Technische Universitaet Muenchen, Nuklearmedizinische Klinik und Poliklinik, Klinikum rechts der Isar, Munich (Germany); Institute TUM Neuroimaging Center (TUM-NIC), Munich (Germany)

    2016-11-15

    The combination of Positron Emission Tomography (PET) with magnetic resonance imaging (MRI) in hybrid PET/MRI scanners offers a number of advantages for investigating brain structure and function. A critical step of PET data reconstruction is attenuation correction (AC). Accounting for bone in attenuation maps (μ-maps) was shown to be important in brain PET studies. While there are a number of MRI-based AC methods, no systematic comparison between them has been performed so far. The aim of this work was to study the performance of several recent methods presented in the literature. For this comparison, we focused on [{sup 18}F]-Fluorodeoxyglucose PET/MRI in neurodegenerative dementing disorders, which are known to exhibit reduced levels of glucose metabolism in certain brain regions. Four novel methods were used to calculate μ-maps from MRI data of 15 patients with Alzheimer's dementia (AD): two atlas-based methods, a segmentation method, and a hybrid template/segmentation method. Additionally, the Dixon-based and a UTE-based method, offered by a vendor, were included in the comparison. Performance was assessed at three levels: tissue identification accuracy in the μ-map, quantitative accuracy of reconstructed PET data in specific brain regions, and precision of diagnostic images at identifying hypometabolic areas. Quantitative regional errors of -20-10 % were obtained using the vendor's AC methods, whereas the novel methods produced errors within a margin of ±5 %. The precision at identifying areas with abnormally low levels of glucose uptake, potentially regions affected by AD, was 62.9 and 79.5 % for the two vendor AC methods, the former ignoring bone and the latter including bone information. The precision increased to 87.5-93.3 % on average for the four new methods, which exhibited similar performance. We confirm that the AC methods based on the Dixon and UTE sequences provided by the vendor are

  17. A level set method for cupping artifact correction in cone-beam CT

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Shipeng; Li, Haibo; Ge, Qi [College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003 (China); Li, Chunming, E-mail: li-chunming@hotmail.com [School of Electronic Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, Sichuan 611731 (China)

    2015-08-15

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.

  18. Study on the Stiffness Correction Method of Novel Antivibration Bearing for Urban Rail Transit Viaduct

    Directory of Open Access Journals (Sweden)

    Weiping Xie

    2017-01-01

    Full Text Available A novel antivibration bearing is developed to reduce train-induced vibrations for urban rail transit viaducts. It adopts four high-damping thick rubber blocks stacked slantingly to reduce vibration and provide large lateral stiffness. However, the existing stiffness calculation method for laminated rubber bearings, aimed at horizontal seismic isolation, is unsuitable for a thick rubber bearing designed for vertical vibration reduction. First, a stiffness correction method is proposed based on the characteristics of the novel bearing. Second, to validate the design method, mechanical property tests are performed on a specimen of the novel bearing with a design frequency of 8 Hz and a bearing capacity of 3500 kN. Third, the damping effects of the novel bearing are investigated through impulse vibration tests on scaled models. Results show that the mechanical properties of the novel bearing satisfy the engineering demand, and the stiffness calculated by the proposed method agrees well with the test results. The overall insertion loss of the novel bearing is 13.49 dB, which is 5.32 dB larger than that of a steel bearing, showing that the novel bearing is very promising for field use to mitigate train-induced vibrations.

  19. A correction method of encoder bias in satellite laser ranging system

    Directory of Open Access Journals (Sweden)

    Wang Peiyuan

    2013-08-01

    Full Text Available In a satellite laser ranging telescope system, well-aligned encoders on the elevation and azimuth axes are essential for tracking objects. However, it is very difficult and time-consuming to correct the bias between the absolute-position indices of the encoders and the astronomical coordinates, especially in the absence of a finder scope, as in our system. To solve this problem, a method is presented based on the phenomenon that all stars move anti-clockwise around Polaris in the northern hemisphere. Tests of the proposed adjustment procedure in a satellite laser ranging (SLR) system demonstrated its effectiveness and the time saved by using the approach, which greatly facilitates the optimization of a tracking system.

  20. A density-adaptive SPH method with kernel gradient correction for modeling explosive welding

    Science.gov (United States)

    Liu, M. B.; Zhang, Z. L.; Feng, D. L.

    2017-09-01

    Explosive welding involves processes such as the detonation of an explosive, the impact of metal structures and strong fluid-structure interaction, and the whole process of explosive welding had not previously been well modeled. In this paper, a novel smoothed particle hydrodynamics (SPH) model is developed to simulate explosive welding. In the SPH model, a kernel gradient correction algorithm is used to achieve better computational accuracy. A density-adapting technique that can effectively treat large density ratios is also proposed. The developed SPH model is first validated by simulating a benchmark problem of one-dimensional TNT detonation and an impact welding problem. The SPH model is then successfully applied to simulate the whole process of explosive welding. It is demonstrated that the presented SPH method can capture the typical physics in explosive welding, including the explosion wave, welding surface morphology, jet flow and acceleration of the flyer plate. The welding angle obtained from the SPH simulation agrees well with that from a kinematic analysis.
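
    The abstract does not state which kernel gradient correction is used; one widely used form (due to Bonet and Lok) renormalizes the kernel gradient with a locally computed matrix so that linear fields are differentiated exactly. In LaTeX notation:

        % Corrected kernel gradient at particle i (Bonet-Lok form; the paper's
        % exact variant may differ). m_j, rho_j, x_j are the neighbours' mass,
        % density and position; W_{ij} is the smoothing kernel.
        \tilde{\nabla} W_{ij} \;=\; \mathbf{L}_i \,\nabla W_{ij},
        \qquad
        \mathbf{L}_i \;=\; \Bigl(\, \sum_j \frac{m_j}{\rho_j}\, \nabla W_{ij} \otimes (\mathbf{x}_j - \mathbf{x}_i) \Bigr)^{\!-1}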

  1. Determination of cascade summing correction for HPGe spectrometers by the Monte Carlo method

    CERN Document Server

    Takeda, M N

    2001-01-01

    The present work describes a methodology for calculating the cascade summing correction to be applied to experimental efficiencies obtained with HPGe spectrometers. The detection efficiencies were calculated numerically by the Monte Carlo method for point sources. Another Monte Carlo algorithm was developed to follow the path through the decay scheme, from the initial state at the precursor radionuclide decay level down to the ground state of the daughter radionuclide. Each step in the decay scheme is selected by random numbers, taking into account the transition probabilities and internal conversion coefficients. The selected transitions are tagged according to the type of interaction that occurred, giving rise to total or partial energy absorption events inside the detector crystal. Once the final state has been reached, the selected transitions are examined to identify each pair of transitions that occurred simultaneously. With this procedure it was possible to calculate...
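
    A toy Python sketch of the decay-scheme random walk; the two-level scheme, branching ratios and gamma energies below are invented for illustration, and internal conversion is omitted:

        import random

        # Hypothetical decay scheme: each level maps to a list of
        # (branching_probability, gamma_energy_keV, next_level).
        SCHEME = {
            "L2": [(0.9, 1173.0, "L1"), (0.1, 2505.0, "GS")],
            "L1": [(1.0, 1332.0, "GS")],
        }

        def cascade(start="L2"):
            """Random walk from the feeding level down to the ground state; returns
            the energies of the gammas emitted in (simultaneous) cascade."""
            gammas, level = [], start
            while level != "GS":
                r, acc = random.random(), 0.0
                for prob, energy, nxt in SCHEME[level]:
                    acc += prob
                    if r <= acc:
                        gammas.append(energy)
                        level = nxt
                        break
            return gammas

        # Tallying how often two gammas are emitted together gives the raw ingredient
        # of the summing correction; detection is then sampled from the MC efficiencies.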

  2. A new method for joint susceptibility artefact correction and super-resolution for dMRI

    Science.gov (United States)

    Ruthotto, Lars; Mohammadi, Siawoosh; Weiskopf, Nikolaus

    2014-03-01

    Diffusion magnetic resonance imaging (dMRI) has become increasingly relevant in clinical research and neuroscience. It is commonly carried out using the ultra-fast MRI acquisition technique Echo-Planar Imaging (EPI). While offering a crucial reduction of acquisition times, two limitations of EPI are distortions due to varying magnetic susceptibilities of the object being imaged and its limited spatial resolution. In recent years, progress has been made both in susceptibility artefact correction and in increasing spatial resolution using image processing and reconstruction methods. However, so far the interplay between the two problems has not been studied, and super-resolution techniques could only be applied along one axis, the slice-select direction, limiting the potential gain in spatial resolution. In this work we describe a new method for joint susceptibility artefact correction and super-resolution in EPI-MRI that can be used to increase resolution in all three spatial dimensions and, in particular, in-plane resolution. The key idea is to reconstruct a distortion-free, high-resolution image from a number of low-resolution EPI data sets that are deformed in different directions. Numerical results on dMRI data of a human brain indicate that this technique has the potential to provide, for the first time, in-vivo dMRI at mesoscopic spatial resolution (i.e., 500 μm), a spatial resolution that could bridge the gap between white-matter information from ex-vivo histology (≈1 μm) and in-vivo dMRI (≈2000 μm).

  3. Image analysis method for the measurement of water saturation in a two-dimensional experimental flow tank

    Science.gov (United States)

    Belfort, Benjamin; Weill, Sylvain; Lehmann, François

    2017-07-01

    A novel, non-invasive imaging technique is proposed that determines 2D maps of water content in unsaturated porous media. The method directly relates digitally measured intensities to the water content of the porous medium and requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no separate calibration experiment is needed, because the calibration curve relating water content to reflected light intensities is established during the main monitoring phase of each experiment, and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm × 14 cm × 6 cm (L × W × D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using a statistical framework to perform an error analysis, together with numerical simulations with a state-of-the-art computational code that solves the Richards' equation. Comparison of the cumulative mass leaving and entering the flow tank and of the water content maps produced by the photographic measurement technique and the numerical simulations demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Finally, the photometric procedure has been developed expressly with a view to its extension to heterogeneous media. Other processes may be investigated through different laboratory experiments, which will serve as benchmarks for the validation of numerical codes.
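
    A hypothetical minimal version of the intensity-to-water-content mapping, assuming reference frames at residual and full saturation stand in for the paper's experiment-specific calibration curve; the function and parameter names are ours:

        import numpy as np
        from scipy.ndimage import median_filter

        def intensity_to_saturation(img, img_dry, img_wet, filter_size=3):
            """Map reflected-light intensities to an effective saturation in [0, 1].

            img_dry / img_wet are reference frames of the same tank at residual and
            full saturation; the linear mapping is a simplification of the paper's
            experiment-specific calibration curve.
            """
            img = median_filter(img.astype(float), size=filter_size)   # noise filtering
            sat = (img - img_dry) / (img_wet - img_dry + 1e-12)        # normalization
            return np.clip(sat, 0.0, 1.0)                              # scaling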

  4. Configuration Interaction-Corrected Tamm-Dancoff Approximation: A Time-Dependent Density Functional Method with the Correct Dimensionality of Conical Intersections.

    Science.gov (United States)

    Li, Shaohong L; Marenich, Aleksandr V; Xu, Xuefei; Truhlar, Donald G

    2014-01-16

    Linear response (LR) Kohn-Sham (KS) time-dependent density functional theory (TDDFT), or KS-LR, has been widely used to study electronically excited states of molecules and is the method of choice for large and complex systems. The Tamm-Dancoff approximation to TDDFT (TDDFT-TDA or KS-TDA) gives results similar to KS-LR and alleviates the instability problem of TDDFT near state intersections. However, KS-LR and KS-TDA share a debilitating feature: conical intersections of the reference state and a response state occur in F - 1 instead of the correct F - 2 dimensions, where F is the number of internal degrees of freedom. Here, we propose a new method, named the configuration interaction-corrected Tamm-Dancoff approximation (CIC-TDA), that eliminates this problem. It calculates the coupling between the reference state and an intersecting response state by interpreting the KS reference-state Slater determinant and linear response as if they were wave functions. Both formal analysis and test results show that CIC-TDA gives results similar to KS-TDA far from a conical intersection, but the intersection occurs with the correct dimensionality. We anticipate that this will allow more realistic application of TDDFT to photochemistry.

  5. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivating examples include the analysis of the association between changes in cardiovascular risk factors and the subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared the bias, standard deviation and mean squared error (MSE) of the regression calibration (RC) and simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
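
    For readers unfamiliar with SIMEX, a self-contained numpy sketch of the idea on a linear toy model (not the survival setting of the paper): extra measurement error is added at increasing levels λ, and the naive estimate is extrapolated back to λ = -1:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic example: true slope is 1.0; W observes X with noise sigma_u.
        n, sigma_u = 2000, 0.8
        x = rng.normal(size=n)
        w = x + rng.normal(scale=sigma_u, size=n)
        y = 1.0 * x + rng.normal(scale=0.5, size=n)

        def naive_slope(w, y):
            return np.polyfit(w, y, 1)[0]

        # SIMEX: add extra error at levels lam, then extrapolate to lam = -1.
        lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        slopes = [np.mean([naive_slope(w + rng.normal(scale=np.sqrt(l) * sigma_u, size=n), y)
                           for _ in range(50)]) for l in lams]
        quad = np.polyfit(lams, slopes, 2)           # quadratic extrapolant
        slope_simex = np.polyval(quad, -1.0)         # ~1.0, vs naive ~1/(1+sigma_u^2) here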

  6. A meshless scheme for incompressible fluid flow using a velocity-pressure correction method

    KAUST Repository

    Bourantas, Georgios

    2013-12-01

    A meshless point collocation method is proposed for the numerical solution of the steady-state, incompressible Navier-Stokes (NS) equations in their primitive u-v-p formulation. The flow equations are solved in their strong form using either a collocated or a semi-staggered "grid" configuration. The developed numerical scheme approximates the unknown field functions using the Moving Least Squares approximation. A velocity correction, along with a pressure correction scheme, is applied in the context of the meshless point collocation method. The proposed meshless point collocation (MPC) scheme has the following characteristics: (i) it is a truly meshless method; (ii) there is no need for pressure boundary conditions, since no pressure constitutive equation is solved; (iii) it combines simplicity and accuracy; (iv) results can be obtained using collocated or semi-staggered "grids"; (v) there is no need to use a curvilinear coordinate system; and (vi) it can solve steady and unsteady flows. The lid-driven cavity flow problem, for Reynolds numbers up to 5000, has been considered using both staggered and collocated grid configurations. Next, the backward-facing step (BFS) flow problem was considered for Reynolds numbers up to 800 using a staggered grid. As a final example, the case of laminar flow in a two-dimensional tube with an obstacle was examined. © 2013 Elsevier Ltd.

  7. Adjoint complement to viscous finite-volume pressure-correction methods

    Science.gov (United States)

    Stück, Arthur; Rung, Thomas

    2013-09-01

    A hybrid-adjoint Navier-Stokes method for the pressure-based computation of hydrodynamic objective functional derivatives with respect to the shape is systematically derived in three steps. The underlying adjoint partial differential equations and boundary conditions for the frozen-turbulence Reynolds-averaged Navier-Stokes equations are considered in the first step. In step two, the adjoint discretisation is developed from the primal, unstructured finite-volume discretisation, such that adjoint-consistent approximations to the adjoint partial differential equations are obtained following a so-called hybrid-adjoint approach. A unified, discrete boundary description is outlined that supports high- and low-Reynolds-number turbulent wall-boundary treatments for both the adjoint boundary condition and the boundary-based gradient formula. The third component in the development of the industrial adjoint CFD method is the adjoint counterpart to the primal pressure-correction algorithm. The approach is verified against the direct-differentiation method, and an application to internal flow problems is presented.

  8. Monte Carlo-based diffusion tensor tractography with a geometrically corrected voxel-centre connecting method

    Science.gov (United States)

    Bodammer, N. C.; Kaufmann, J.; Kanowski, M.; Tempelmann, C.

    2009-02-01

    Diffusion tensor tractography (DTT) allows one to explore axonal connectivity patterns in neuronal tissue by linking local predominant diffusion directions determined by diffusion tensor imaging (DTI). The majority of existing tractography approaches use continuous coordinates for calculating single trajectories through the diffusion tensor field. The tractography algorithm we propose is characterized by (1) a trajectory propagation rule that uses voxel centres as vertices and (2) orientation probabilities for the calculated steps in a trajectory that are obtained from the diffusion tensors of either two or three voxels. These voxels include the last voxel of each previous step and one or two candidate successor voxels. The precision and the accuracy of the suggested method are explored with synthetic data. Results clearly favour probabilities based on two consecutive successor voxels. Evidence is also provided that in any voxel-centre-based tractography approach, there is a need for a probability correction that takes into account the geometry of the acquisition grid. Finally, we provide examples in which the proposed fibre-tracking method is applied to the human optical radiation, the cortico-spinal tracts and to connections between Broca's and Wernicke's area to demonstrate the performance of the proposed method on measured data.

  9. A level set method for cupping artifact correction in cone-beam CT.

    Science.gov (United States)

    Xie, Shipeng; Li, Chunming; Li, Haibo; Ge, Qi

    2015-08-01

    To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.

  10. Measurement and correction method of the system time offset of multi-mode satellite navigation

    Science.gov (United States)

    Zhu, Lin; Zhang, Huijun; Li, Xiaohui; Xu, Longxia

    2013-01-01

    Multi-mode satellite navigation is an important development direction of Global Navigation Satellite Systems (GNSS). Because each satellite navigation system maintains an independent and stable operating system time scale, one of the key issues that must be solved to implement multi-mode navigation is to determine the system time offset between two satellite navigation systems. The National Time Service Center (NTSC) keeps China's standard time (UTC(NTSC)), which is an available resource for monitoring the system time offset of satellite navigation systems by receiving the signal-in-space with a geodetic time receiver. The monitoring principle and the main measurement errors are discussed. The correction method for the measured system time offsets is studied using the IGS precise orbit ephemeris. To test the soundness of the measurement method, Circular T bulletin data published by the Bureau International des Poids et Mesures (BIPM) are compared with the monitored data and the corrected data. Data processing results are given and show that this monitoring method is practical and can be applied to multi-mode navigation.

  11. Regression calibration method for correcting measurement-error bias in nutritional epidemiology.

    Science.gov (United States)

    Spiegelman, D; McDermott, A; Rosner, B

    1997-04-01

    Regression calibration is a statistical method for adjusting point and interval estimates of effect, obtained from regression models commonly used in epidemiology, for bias due to measurement error in assessing nutrients or other variables. Previous work developed regression calibration for use in estimating odds ratios from logistic regression. We extend this here to estimating incidence rate ratios from Cox proportional hazards models and regression slopes from linear-regression models. Regression calibration is appropriate when a gold standard is available in a validation study and a linear measurement error model with constant variance applies, or when replicate measurements are available in a reliability study and linear random within-person error can be assumed. In this paper, the method is illustrated by correction of rate ratios describing the relations between the incidence of breast cancer and dietary intakes of vitamin A, alcohol, and total energy in the Nurses' Health Study. An example using linear regression is based on estimation of the relation between ultradistal radius bone density and dietary intakes of caffeine, calcium, and total energy in the Massachusetts Women's Health Study. Software implementing these methods uses SAS macros.
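
    A compact numpy illustration of regression calibration with a validation study, using a linear outcome model as a stand-in for the logistic or Cox models discussed above; all numbers are synthetic:

        import numpy as np

        rng = np.random.default_rng(1)

        # Main study: error-prone exposure W; validation study: both W and gold standard X.
        n_main, n_val, sigma_u = 5000, 300, 1.0
        x_val = rng.normal(size=n_val)
        w_val = x_val + rng.normal(scale=sigma_u, size=n_val)

        # Step 1 (validation data): calibration model E[X | W] = a + b * W.
        b, a = np.polyfit(w_val, x_val, 1)

        # Step 2 (main study): replace W by its calibrated expectation before
        # fitting the outcome model.
        x_main = rng.normal(size=n_main)
        w_main = x_main + rng.normal(scale=sigma_u, size=n_main)
        y_main = 2.0 * x_main + rng.normal(size=n_main)

        x_hat = a + b * w_main
        beta_naive = np.polyfit(w_main, y_main, 1)[0]   # attenuated (~1.0 here)
        beta_rc    = np.polyfit(x_hat,  y_main, 1)[0]   # close to the true 2.0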

  12. Electronic Transport as a Driver for Self-Interaction-Corrected Methods

    KAUST Repository

    Pertsova, Anna

    2015-01-01

    © 2015 Elsevier Inc. While spintronics often investigates striking collective spin effects in large systems, a very important research direction deals with spin-dependent phenomena in nanostructures, reaching the extreme of a single spin confined in a quantum dot, in a molecule, or localized on an impurity or dopant. The issue considered in this chapter involves taking this extreme to the nanoscale and the quest to use first-principles methods to predict and control the behavior of a few "spins" (down to one spin) when they are placed in an interesting environment. Of particular interest are environments where addressing these systems with external fields and/or electric or spin currents is possible. The realization of such systems, including those that consist of a core of a few transition-metal (TM) atoms carrying a spin, connected and exchange-coupled through bridging oxo-ligands, has been due to the work of many experimental researchers at the interface of atomic, molecular and condensed matter physics. This chapter addresses the computational problems associated with understanding the behavior of nano- and molecular-scale spin systems and reports on how the computational complexity increases when such systems are used as elements of electron-transport devices. Especially for cases where these elements are attached to substrates with electronegativities very different from the molecule's, for Coulomb blockade systems, or for cases where the spin ordering within the molecules is weakly antiferromagnetic, the delocalization error in DFT is particularly problematic and requires solutions, such as self-interaction corrections, to move forward. We highlight the intersecting fields of spin-ordered nanoscale molecular magnets, electron transport, and Coulomb blockade, and point out cases where self-interaction-corrected methodologies can improve our predictive power in this emerging field.

  13. A method for detecting and correcting feature misidentification on expression microarrays

    Directory of Open Access Journals (Sweden)

    Brown Patrick O

    2004-09-01

    Full Text Available Abstract. Background: Much of the microarray data published at Stanford is based on mouse and human arrays produced under controlled and monitored conditions at the Brown and Botstein laboratories and at the Stanford Functional Genomics Facility (SFGF). Nevertheless, as large datasets based on the Stanford Human array began to accumulate, a small but significant number of discrepancies were detected that required a serious attempt to track down the original source of error. Thanks to a controlled process environment, sufficient data were available to accurately track the entire process leading up to the final expression data. In this paper, we describe our statistical methods for detecting the inconsistencies in microarray data that arise from process errors, and discuss our technique for locating and fixing these errors. Results: To date, the Brown and Botstein laboratories and the Stanford Functional Genomics Facility have together produced 40,000 large-scale (10-50,000 feature) cDNA microarrays. By applying the heuristic described here, we have been able to check most of these arrays for misidentified features, and have been able to confidently apply fixes to the data where needed. Of the 265 million features checked in our database, problems were detected and corrected on 1.3 million. Conclusion: Process errors in any genome-scale high-throughput production regime can lead to subsequent errors in data analysis. We show the value of tracking multi-step high-throughput operations by using this knowledge to detect and correct misidentified data on gene expression microarrays.

  14. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    Science.gov (United States)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations in the data and leads to reasonable resistivities and sheet resistances. Factor F is also compared to other correction factors, i.e., F_ASTM and F_JIS.
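
    The abstract does not reproduce the formula; for reference, the standard collinear four-point-probe result for a thin, laterally infinite sheet, which the geometric factor F then corrects for a finite disk sample, reads (in LaTeX notation, with t the sample thickness and V/I the measured voltage-to-current ratio):

        % Resistivity of a thin disk sample from a collinear four-point-probe
        % measurement; F accounts for the finite sample geometry.
        \rho \;=\; F \cdot \frac{\pi\, t}{\ln 2} \cdot \frac{V}{I}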

  15. Optical methods for correction of oxygen-transport characteristics of blood and their biomedical applications

    Science.gov (United States)

    Zalesskaya, G. A.; Akulich, N. V.; Marochkov, A. V.; Laskina, O. V.; Mit'kovskaya, N. P.

    2010-07-01

    We have carried out a comprehensive analysis of the spectral characteristics of blood and blood components, gas-exchange and oximetry parameters for venous and arterial blood, central hemodynamic parameters, and the results of a complete blood count and chemistry panel before and after extracorporeal UV irradiation of the blood (UBI, ultraviolet blood irradiation) or intravenous exposure of blood to low-intensity emission from a He-Ne laser (LBI, laser blood irradiation). We have demonstrated the possibility of correcting the oxygen-transport characteristics of blood by laser optical methods based on photodissociation of blood oxyhemoglobin. We have shown that the therapeutic effects initiated both by UBI and by LBI rest on a single mechanism: a change in the balance between the production of active oxygen species and their inhibition by antioxidants. The data obtained are of interest not only for studying the primary (molecular) mechanisms of action of photohemotherapy and its effect on processes occurring in the living body, but can also provide a basis for designing next-generation laser optical instruments and for developing methods, not yet available, for assessing the therapeutic efficacy of photohemotherapy.

  16. CMF Signal Processing Method Based on Feedback Corrected ANF and Hilbert Transformation

    Directory of Open Access Journals (Sweden)

    Tu Yaqing

    2014-02-01

    Full Text Available In this paper, we focus on CMF signal processing and aim to resolve two problems: the sharp decline in precision that occurs when adaptive notch filters (ANFs) track the signal frequency over a long time, and the dependence of the phase difference calculation on frequency in the sliding Goertzel algorithm (SGA) and in the recursive DTFT algorithm with negative frequency contribution. A novel method is proposed based on a feedback-corrected ANF and the Hilbert transformation. We design an index to evaluate whether the ANF has lost the signal frequency, based on the correlation between the output and input signals; if the signal frequency is lost, the ANF parameters are adjusted accordingly. At the same time, a singular value decomposition (SVD) algorithm is introduced to reduce noise. The phase difference between the two signals is then detected through trigonometry and the Hilbert transformation. With the frequency and phase difference obtained, the time interval between the two signals is calculated, and the mass flow rate is derived accordingly. Simulation and experimental results show that the proposed method always preserves a constantly high precision of frequency tracking and a better performance of phase difference measurement compared with the SGA or the recursive DTFT algorithm with negative frequency contribution.
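
    A runnable sketch of the Hilbert-transform phase-difference step only (the feedback-corrected ANF and SVD denoising are not reproduced), using scipy's analytic signal; signal parameters are illustrative:

        import numpy as np
        from scipy.signal import hilbert

        def phase_difference(x, y):
            """Mean phase difference (rad) between two equally sampled
            sinusoid-like signals, via their analytic signals."""
            phi = np.unwrap(np.angle(hilbert(x)) - np.angle(hilbert(y)))
            k = len(phi) // 10            # discard ends: Hilbert edge effects
            return phi[k:-k].mean()

        # 100 Hz signals, 2 ms time shift -> expected phase difference 2*pi*100*0.002.
        fs, f, tau = 10_000.0, 100.0, 0.002
        t = np.arange(0, 1.0, 1.0 / fs)
        x = np.sin(2 * np.pi * f * t)
        y = np.sin(2 * np.pi * f * (t - tau))
        dt = phase_difference(x, y) / (2 * np.pi * f)   # recovers ~tau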

  17. The method of regions and next-to-soft corrections in Drell–Yan production

    Directory of Open Access Journals (Sweden)

    D. Bonocore

    2015-03-01

    Full Text Available We perform a case study of the behaviour of gluon radiation beyond the soft approximation, using as an example the Drell–Yan production cross section at NNLO. We draw a careful distinction between the eikonal expansion, which is in powers of the soft gluon energies, and the expansion in powers of the threshold variable 1−z, which involves important hard-collinear effects. Focusing on the contribution to the NNLO Drell–Yan K-factor arising from real–virtual interference, we use the method of regions to classify all relevant contributions up to next-to-leading power in the threshold expansion. With this method, we reproduce the exact two-loop result to the required accuracy, including z-independent non-logarithmic contributions, and we precisely identify the origin of the soft-collinear interference which breaks simple soft-gluon factorisation at next-to-eikonal level. Our results pave the way for the development of a general factorisation formula for next-to-leading-power threshold logarithms, and clarify the nature of loop corrections to a set of recently proposed next-to-soft theorems.

  18. An efficient method for correcting the edge artifact due to smoothing.

    Science.gov (United States)

    Maisog, J M; Chmielowska, J

    1998-01-01

    Spatial smoothing is a common pre-processing step in the analysis of functional brain imaging data. It can increase sensitivity to signals of specific shapes and sizes (Rosenfeld and Kak [1982]: Digital Picture Processing, vol. 2. Orlando, Fla.: Academic; Worsley et al. [1996]: Hum Brain Mapping 4:74-90). Also, some amount of spatial smoothness is required if methods from the theory of Gaussian random fields are to be used (Holmes [1994]: Statistical Issues in Functional Brain Mapping. PhD thesis, University of Glasgow). Smoothing is most often implemented as a convolution of the imaging data with a smoothing kernel, and convolution is most efficiently performed using the Convolution Theorem and the Fast Fourier Transform (Cooley and Tukey [1965]: Math Comput 19:297-301; Priestly [1981]: Spectral Analysis and Time Series. San Diego: Academic; Press et al. [1992]: Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge: Cambridge University Press). An undesirable side effect of smoothing is an artifact along the edges of the brain, where brain voxels become smoothed with non-brain voxels. This results in a dark rim which might be mistaken for hypoactivity. In this short methodological paper, we present a method for correcting functional brain images for the edge artifact due to smoothing, while retaining the use of the Convolution Theorem and the Fast Fourier Transform for efficient calculation of convolutions.
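
    The paper's exact algorithm is not spelled out in the abstract; the standard remedy for this artifact, which the sketch below implements, is normalized convolution: smooth the masked data and the mask with the same kernel and divide, which restores the edge voxels that plain smoothing averages with zero-valued background:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def smooth_with_edge_correction(data, mask, sigma):
            """Gaussian smoothing restricted to a brain mask (normalized convolution).

            Dividing by the smoothed mask undoes the dark rim that plain smoothing
            produces where brain voxels mix with zero-valued background.
            """
            mask = mask.astype(float)
            num = gaussian_filter(data * mask, sigma)
            den = gaussian_filter(mask, sigma)
            out = np.zeros_like(num)
            inside = den > 1e-6
            out[inside] = num[inside] / den[inside]
            return out * (mask > 0)   # report values only inside the mask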

  19. Scaled Second Order Perturbation Corrections to Configuration Interaction Singles: Efficient and Reliable Excitation Energy Methods

    Energy Technology Data Exchange (ETDEWEB)

    Rhee, Young Min; Head-Gordon, Martin

    2007-02-01

    Two modifications of the perturbative doubles correction to configuration interaction with single substitutions (CIS(D)) are suggested, which are excited state analogs of ground state scaled second order Moeller-Plesset (MP2) methods. The first approach employs two parameters to scale the two spin components of the direct term of CIS(D), starting from the two-parameter spin-component scaled (SCS) MP2 ground state, and is termed SCS-CIS(D). An efficient resolution-of-the-identity (RI) implementation of this approach is described. The second approach employs a single parameter to scale only the opposite-spin direct term of CIS(D), starting from the one-parameter scaled opposite spin (SOS) MP2 ground state, and is called SOS-CIS(D). By utilizing auxiliary basis expansions and a Laplace transform, a fourth order algorithm for SOS-CIS(D) is described and implemented. The parameters describing SCS-CIS(D) and SOS-CIS(D) are optimized based on a training set including valence excitations of various organic molecules and Rydberg transitions of water and ammonia, and they significantly improve upon CIS(D) itself. The accuracy of the two methods is found to be comparable. This arises from a strong correlation between the same-spin and opposite-spin portions of the excitation energy terms. The methods are successfully applied to the zincbacteriochlorin-bacteriochlorin charge transfer transition, for which time-dependent density functional theory, with presently available exchange-correlation functionals, is known to fail. The methods are also successfully applied to describe various electronic transitions outside of the training set. The efficiency of SOS-CIS(D) and the auxiliary basis implementation of CIS(D) and SCS-CIS(D) are confirmed with a series of timing tests.

  20. Effects of ESP forecast bias-correction on deterministic optimization method biases.

    Science.gov (United States)

    Arsenault, R.; Côté, P.; Latraverse, M.

    2016-12-01

    Rio Tinto is a multinational metal and natural resources producer with energy-intensive aluminium smelters in Quebec, Canada. Rio Tinto also owns and operates power houses on the Péribonka and Saguenay Rivers in Quebec. The system, which is run by Rio Tinto's Quebec Power Operations Division, consists of 6 generating stations and 3 major reservoirs. One of the significant issues that had to be resolved for effective operation of this system was to determine the volume of weekly water releases for all generating stations. Several challenges had to be dealt with before a suitable solution could be found. In the past two years, we developed a rolling-horizon test bed that mimics the day-to-day system operation. With this test bed, we compared four different algorithms (three stochastic and one deterministic) and found that using an anticipative deterministic approach to calculate the release decisions for the first period is an inadequate strategy. The results also showed that methods based on scenarios prove superior to methods based on probability distributions. The test bed was also used to assess the quality of the Ensemble Streamflow Prediction (ESP) forecasts. It was found that under-dispersion in ESPs impacted the quality of the results and that the optimization methods did not all react in the same way. We show that the bias introduced by the use of a deterministic optimization method can hinder the efforts placed in bias-correcting ESP forecasts. We also show that by biasing the ESP members (or, alternatively, by selecting a decision scenario other than the median one) it was possible to negate the deterministic bias and consistently improve the overall power generation.

  1. Calibration of EBT2 film by the PDD method with scanner non-uniformity correction

    Science.gov (United States)

    Chang, Liyun; Chui, Chen-Shou; Ding, Hueisch-Jy; Hwang, Ing-Ming; Ho, Sheng-Yow

    2012-09-01

    The EBT2 film together with a flatbed scanner is a convenient dosimetry QA tool for verification of clinical radiotherapy treatments. However, it suffers from a relatively high degree of uncertainty and a tedious film calibration process for every new lot of films, including cutting the films into several small pieces, exposing them to different doses, restoring them back and selecting the proper region of interest (ROI) for each piece for curve fitting. In this work, we present a percentage depth dose (PDD) method that can accurately calibrate the EBT2 film together with the scanner non-uniformity correction and provide an easy way to perform film dosimetry. All films were scanned before and after irradiation in one of two homemade 2 mm thick acrylic frames (one portrait and the other landscape), located at a fixed position on the scan bed of an Epson 10 000XL scanner. After the pre-irradiation scan, the film was placed parallel to the beam central axis and sandwiched between six polystyrene plates (5 cm thick each), followed by irradiation with a 20 × 20 cm² 6 MV photon beam. Two different beam-on times were used on two different films to deliver doses to the film ranging from 32 to 320 cGy. After the post-irradiation scan, the net optical densities for a total of 235 points on the beam central axis of the films were auto-extracted and compared with the corresponding depth doses, calculated from the measurement of a 0.6 cc Farmer chamber and the related PDD table, to perform the curve fitting. The portrait film orientation was selected for routine calibration, since the central beam axis on the film is then parallel to the scanning direction, where the non-uniformity correction is not needed (Ferreira et al 2009 Phys. Med. Biol. 54 1073-85). To perform the scanner non-uniformity calibration, the cross-beam profiles of the film were analysed by referencing the measured profiles from a Profiler™. Finally, to verify our method, the films were

  2. Correction factors for the INER-improved free-air ionization chambers calculated with the Monte Carlo method.

    Science.gov (United States)

    Lin, Uei-Tyng; Chu, Chien-Hau

    2006-05-01

    The Monte Carlo method was used to simulate the correction factors for electron loss and scattered photons for two improved cylindrical free-air ionization chambers (FACs) constructed at the Institute of Nuclear Energy Research (INER, Taiwan). The method is based on weighting correction factors for mono-energetic photons with X-ray spectra. The newly obtained correction factors for the medium-energy free-air chamber were compared with the current values, which were based on a least-squares fit to experimental data published in the NBS Handbook 64 [Wyckoff, H.O., Attix, F.H., 1969. Design of free-air ionization chambers. National Bureau Standards Handbook, No. 64. US Government Printing Office, Washington, DC, pp. 1-16; Chen, W.L., Su, S.H., Su, L.L., Hwang, W.S., 1999. Improved free-air ionization chamber for the measurement of X-rays. Metrologia 36, 19-24]. The comparison showed that the agreement between the Monte Carlo method and the experimental data is within 0.22%. In addition, mono-energetic correction factors for the low-energy free-air chamber were calculated. Average correction factors were then derived for measured and theoretical X-ray spectra at 30-50 kVp. Although the measured and calculated spectra differ slightly, the resulting differences in the derived correction factors are less than 0.02%.
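
    The spectrum weighting this abstract describes reduces to a discrete weighted average, k̄ = Σ w(E)·k(E) / Σ w(E). A minimal sketch follows, assuming plain fluence weighting; the bin weights and factors are illustrative.

```python
import numpy as np

def spectrum_weighted_factor(weights, k_mono):
    """Average mono-energetic correction factors k(E) over an X-ray
    spectrum with per-energy-bin weights w(E)."""
    w = np.asarray(weights, dtype=float)
    k = np.asarray(k_mono, dtype=float)
    return np.sum(w * k) / np.sum(w)

# illustrative 40 kVp-style spectrum: relative fluence per energy bin
# and the corresponding mono-energetic electron-loss factors
phi = [0.10, 0.30, 0.35, 0.20, 0.05]
k_e = [1.004, 1.003, 1.002, 1.002, 1.001]
print(round(spectrum_weighted_factor(phi, k_e), 4))
```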

  3. Methods for Motion Correction Evaluation Using 18F-FDG Human Brain Scans on a High-Resolution PET Scanner

    DEFF Research Database (Denmark)

    Keller, Sune H.; Sibomana, Merence; Olesen, Oline Vinter

    2012-01-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning, using an optical motion tracking system. Methods: Two scans with minor motion and 5 with major motion (as reported ...) ... and measures improved after MC with AIR, whereas EMT MC performed less well. Conclusion: The 3 presented QC methods produced similar results and are useful for evaluating tracer-independent external-tracking motion-correction methods for human brain scans.

  4. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    Science.gov (United States)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.
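
    For context, the correction enters the measurement as a multiplier on the ideal-geometry result. The sketch below uses the familiar collinear-array prefactors (2πs for a bulk sample, π/ln 2 for a thin sheet) rather than the paper's square-array factors, and the F value shown is a placeholder.

```python
import math

def resistivity_bulk(V, I, s_m, F=1.0):
    """Bulk resistivity (ohm*m) from a four-probe measurement with
    probe spacing s; 2*pi*s holds for an ideal semi-infinite sample
    and F corrects for the finite parallelepiped geometry."""
    return 2.0 * math.pi * s_m * (V / I) * F

def sheet_resistance(V, I, F=1.0):
    """Sheet resistance (ohm/sq) of a thin film; pi/ln(2) is the
    ideal infinite-sheet collinear-array factor."""
    return (math.pi / math.log(2.0)) * (V / I) * F

# illustrative ITO-film measurement: 1 mA forced, 2.1 mV measured
print(round(sheet_resistance(V=2.1e-3, I=1.0e-3, F=0.92), 2))
```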

  5. Determination of saturation functions and wettability for chalk based on measured fluid saturations

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, D.; Bech, N.; Moeller Nielsen, C.

    1998-08-01

    The end effect of displacement experiments on low-permeability porous media is used for determination of relative permeability functions and capillary pressure functions. Saturation functions for a drainage process are determined from a primary drainage experiment. A reversal of the flooding direction creates an intrinsic imbibition process in the sample, which enables determination of imbibition saturation functions. The saturation functions are determined by a parameter estimation technique. Scanning effects are modelled by the method of Killough. Saturation profiles are determined by NMR. (au)

  6. Correction of the slope-intercept method for the measurement of glomerular filtration rate.

    Science.gov (United States)

    Blake, Glen M; Barnfield, Mark C; Burniston, Maria T; Fleming, John S; Cosgriff, Philip S; Siddique, Musib

    2014-12-01

    Glomerular filtration rate (GFR) is frequently assessed using the slope-intercept method by fitting a single exponential to plasma samples obtained 2-5 h after injection. The body surface area (BSA)-corrected one-pool clearance (CO,BSA) overestimates true GFR (CT,BSA) because it fails to sample the full plasma curve, and values of CT,BSA are usually estimated from CO,BSA using the Brøchner-Mortensen (BM) equation. An improved equation, CT,BSA=CO,BSA/(1+fBSA×CO,BSA), with fBSA a fixed constant, was proposed by Fleming, but subsequently Jødal and Brøchner-Mortensen (JBM) reported that fBSA varies with BSA. We report data for a large group of individuals who underwent GFR investigations with sampling of the full plasma curve. The aims were to validate the JBM equation with independent data and assess whether replacing the BM equation with a BSA-dependent correction based on Fleming's equation can increase the accuracy of the slope-intercept method. Plasma data were analysed for 142 children and adults aged 0.6-56 years who underwent technetium-99m-diethylenetriaminepentaacetic acid GFR investigations with blood samples taken between 5 min and 8 h after injection. Values of CO,BSA were calculated using the 2, 3 and 4 h data. Values of CT,BSA were calculated by integrating the plasma curve between 5 min and 4 h and extrapolating the terminal exponential. Individual values of fBSA were calculated using the relationship fBSA=1/CT,BSA-1/CO,BSA. Nonlinear regression was used to fit the function fBSA=f1×BSA^n and find the best-fit values for f1 and n. Scatter and Bland-Altman plots were drawn comparing the various formulae for correcting slope-intercept GFR. The trend for fBSA to decrease with increasing BSA was highly significant (Spearman's test: RS=-0.31; P=0.0002). When the data were fitted by nonlinear regression, the best-fit values (95% confidence interval) of the model parameters were n=-0.13 (from -0.21 to -0.04) and f1=0.00191 (from 0.00183 to 0
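
    A minimal sketch of the BSA-dependent correction, using the best-fit constants quoted above (f1 = 0.00191, n = -0.13) and assuming clearances in mL/min normalized to 1.73 m²:

```python
def corrected_gfr(c_one_pool, bsa_m2, f1=0.00191, n=-0.13):
    """Fleming-type correction of the slope-intercept (one-pool)
    clearance: CT = CO / (1 + f*CO), with f = f1 * BSA**n."""
    f = f1 * bsa_m2 ** n
    return c_one_pool / (1.0 + f * c_one_pool)

# illustrative: one-pool clearance of 120 mL/min/1.73 m^2, BSA 1.73 m^2
print(round(corrected_gfr(120.0, 1.73), 1))
```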

  7. Saturated Switching Systems

    CERN Document Server

    Benzaouia, Abdellah

    2012-01-01

    Saturated Switching Systems treats the problem of actuator saturation, inherent in all dynamical systems, by using two approaches: positive invariance, in which the controller is designed to work within a region of non-saturating linear behaviour; and the saturation technique, which allows saturation but guarantees asymptotic stability. The results obtained are extended from the linear systems in which they were first developed to switching systems with uncertainties, 2D switching systems, switching systems with Markovian jumping and switching systems of the Takagi-Sugeno type. The text represents a thoroughly referenced distillation of results obtained in this field during the last decade. The selected tool for analysis and design of stabilizing controllers is based on multiple Lyapunov functions and linear matrix inequalities. All the results are illustrated with numerical examples and figures, many of them modelled using MATLAB®. Saturated Switching Systems will be of interest to academic researchers in con...

  8. Cross-talk correction method for knee kinematics in gait analysis using principal component analysis (PCA): a new proposal.

    Directory of Open Access Journals (Sweden)

    Audrey Baudet

    Full Text Available In 3D gait analysis, the knee joint is usually described in the Eulerian way. It consists in breaking down the motion between the articulating bones of the knee into three rotations around three axes: flexion/extension, abduction/adduction and internal/external rotation. However, the definition of these axes is prone to error, such as the "cross-talk" effect, due to difficult positioning of anatomical landmarks. This paper proposes a correction method, principal component analysis (PCA), based on an objective kinematic criterion for standardization, in order to improve knee joint kinematic analysis. The method was applied to the 3D gait data of two different groups (twenty healthy subjects and four with knee osteoarthritis). Then, this method was evaluated with respect to three main criteria: (1) the deletion of knee joint angle cross-talk; (2) the reduction of variance in the varus/valgus kinematic profile; (3) the posture trial varus/valgus deformation matching the X-ray value for patients with knee osteoarthritis. The effect of the correction method was tested statistically on variabilities and cross-talk during gait. Cross-talk was lower (p<0.05) after correction (the correlation between the flexion/extension and varus/valgus kinematic profiles being annihilated). Additionally, the variance in the kinematic profile for knee varus/valgus and knee flexion/extension was found to be lower and higher (p<0.05), respectively, after correction for both the left and right side. Moreover, after correction, the posture trial varus/valgus angles were much closer to the X-ray grading. The results show that the PCA correction applied to the knee joint eliminates the cross-talk effect, and does not alter the radiological varus/valgus deformation for patients with knee osteoarthritis. These findings suggest that the proposed correction method produces new rotational axes that better fit true knee motion.
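
    A minimal sketch of the idea, assuming the correction amounts to rotating the angle time series onto their principal axes (the study's exact implementation details, such as axis ordering and offsets, are not reproduced here):

```python
import numpy as np

def pca_crosstalk_correction(angles):
    """Re-express knee angle time series (n_frames x 3: flexion,
    varus/valgus, int/ext rotation) on their principal axes. Since
    flexion dominates gait, the first component acts as the corrected
    flexion axis, removing its leakage ("cross-talk") from the other
    channels. Sign conventions and mean re-addition are omitted."""
    X = angles - angles.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt.T  # PCA scores: mutually uncorrelated angle profiles

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 101)
flex = 60.0 * np.sin(np.pi * t) ** 2              # flexion-like curve
varus = 0.15 * flex + rng.normal(0, 0.5, t.size)  # flexion cross-talk
rot = rng.normal(0, 1.0, t.size)
corr = pca_crosstalk_correction(np.column_stack([flex, varus, rot]))
print(np.corrcoef(corr[:, 0], corr[:, 1])[0, 1])  # ~0 after correction
```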

  9. Computational methods for the construction, editing, and error correction of DNA molecules and their libraries.

    Science.gov (United States)

    Raz, Ofir; Ben Yehezkel, Tuval

    2015-01-01

    The field of synthetic biology is fueled by steady advances in our ability to produce designer genetic material on demand. This relatively new technological capability stems from advancements in DNA construction biochemistry as well as supporting computational technologies, such as tools for specifying large DNA libraries and for planning and optimizing their actual physical construction. In particular, the design, planning, and construction of user-specified, combinatorial DNA libraries are of increasing interest. Here we present some of the computational tools we have built over the past decade to support the multidisciplinary task of constructing DNA molecules and their libraries. These technologies encompass computational methods for (1) planning and optimizing the construction of DNA molecules and libraries, (2) utilizing existing natural or synthetic fragments, (3) identifying shared fragments, (4) planning primers and overlaps, (5) minimizing the number of assembly steps required, and (6) correcting erroneous constructs. Other computational technologies that are important in the overall process of DNA construction, such as (1) computational tools for efficient specification and intuitive visualization of large DNA libraries (which aid in debugging library design pre-construction) and (2) automated liquid-handling robotic programming [Linshiz et al., Mol Syst Biol 4:191, 2008; Shabi et al., Syst Synth Biol 4:227-236, 2010], which aid in the construction process itself, have been omitted due to length limitations.

  10. A comparison of robust Kalman filtering methods for artifact correction in heart rate variability analysis.

    Directory of Open Access Journals (Sweden)

    Carlos D. Zuluaga-Ríos

    2015-01-01

    Full Text Available Heart rate variability (HRV) has received considerable attention for many years, since it provides a quantitative marker for examining the sinus rhythm modulated by the autonomic nervous system (ANS). The ANS plays an important role in clinical and physiological fields. HRV analysis can be performed by computing several time- and frequency-domain measurements. However, the computation of such measurements can be affected by the presence of artifacts or ectopic beats in the electrocardiogram (ECG) recording. This is particularly true for ECG recordings from Holter monitors. The aim of this work was to study the performance of several robust Kalman filters for artifact correction in inter-beat (RR) interval time series. For our experiments, two data sets were used: the first data set included 10 RR interval time series from a realistic RR interval time series generator. The second database contains 10 sets of RR interval series from five healthy patients and five patients suffering from congestive heart failure. The standard deviation of the RR interval was computed over the filtered signals. Results were compared with state-of-the-art processing software, showing similar values and behavior. In addition, the proposed methods offer satisfactory results in contrast to standard Kalman filtering.
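
    A minimal sketch of Kalman filtering applied to an RR series, with innovation clipping as a crude stand-in for the robust weighting schemes the study compares (model and parameters are illustrative):

```python
import numpy as np

def kalman_rr(rr_ms, q=1.0, r=400.0):
    """Scalar Kalman filter over an RR-interval series with a
    random-walk state model. Real robust variants down-weight
    outlying innovations; clipping is used here as a crude stand-in
    for the robust schemes the study compares."""
    x, p = rr_ms[0], r
    filtered = []
    for z in rr_ms:
        p = p + q                              # predict (random walk)
        k = p / (p + r)                        # Kalman gain
        innovation = np.clip(z - x, -150.0, 150.0)
        x = x + k * innovation                 # robustified update
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

rr = np.full(20, 800.0)
rr[10] = 1600.0                                # artifact/ectopic beat
print(round(kalman_rr(rr)[10], 1))             # stays near 800 ms
```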

  11. Estimating Chlorophyll Fluorescence Parameters Using the Joint Fraunhofer Line Depth and Laser-Induced Saturation Pulse (FLD-LISP) Method in Different Plant Species

    Directory of Open Access Journals (Sweden)

    Parinaz Rahimzadeh-Bajgiran

    2017-06-01

    Full Text Available A comprehensive evaluation of the recently developed Fraunhofer line depth (FLD) and laser-induced saturation pulse (FLD-LISP) method was conducted to measure the chlorophyll fluorescence (ChlF) parameters of the quantum yield of photosystem II (ΦPSII), non-photochemical quenching (NPQ), and the photosystem II-based electron transport rate (ETR) in three plant species including paprika (C3 plant), maize (C4 plant), and pachira (C3 plant). First, the relationships between photosynthetic photon flux density (PPFD) and ChlF parameters retrieved using the FLD-LISP and the pulse amplitude-modulated (PAM) methods were analyzed for all three species. Then the relationships between ChlF parameters measured using FLD-LISP and PAM were evaluated for the plants in different growth stages of leaves from mature to aging conditions. The relationships of ChlF parameters/PPFD were similar in both the FLD-LISP and PAM methods in all plant species. ΦPSII showed a linear relationship with PPFD in all three species, whereas NPQ was found to be linearly related to PPFD in paprika and maize, but not for pachira. The ETR/PPFD relationship was nonlinear, with increasing values observed for PPFDs lower than about 800 μmol m−2 s−1 for paprika, lower than about 1200 μmol m−2 s−1 for maize, and lower than about 800 μmol m−2 s−1 for pachira. The ΦPSII, NPQ, and ETR of the FLD-LISP and PAM methods were very well correlated (R² = 0.89, RMSE = 0.05; R² = 0.86, RMSE = 0.44; and R² = 0.88, RMSE = 24.69, respectively) for all plants. Therefore, the FLD-LISP method can be recommended as a robust technique for the estimation of ChlF parameters.

  12. Boundary interference assessment and correction for open jet wind tunnels using panel methods

    Science.gov (United States)

    Mokhtar, Wael Ahmed

    The presence of nearby boundaries in a wind tunnel can lead to aerodynamic measurements on a model in the wind tunnel that differ from those that would be made when the boundaries of the moving fluid were infinitely far away. The differences, referred to as boundary interference or wall interference, can be quite large, such as when testing aircraft models developing high lift forces, or whose wingspan is a large fraction of the wind tunnel width, or high drag models whose frontal area is a large fraction of the tunnel cross section. Correction techniques for closed test section (solid walled) wind tunnels are fairly well developed, but relatively little recent work has addressed the case of open jet tunnels specifically for aeronautical applications. A method to assess the boundary interferences for open jet test sections is introduced. The main objective is to overcome some of the limitations in the classical and currently used methods for aeronautical and automotive wind tunnels, particularly where the levels of interference are large and distortion of the jet boundary becomes significant. The starting point is to take advantage of two well-developed approaches used in closed wall test sections, namely the boundary measurement approach and adaptive wall wind tunnels. A low-order panel code is developed because it offers a relatively efficient approach from the computational point of view, within the required accuracy. It also gives the method more flexibility to deal with more complex model geometries and test section cross sections. The method is first compared to the method of images. Several lifting and non-lifting model representations are used for both two- and three-dimensional studies. Then the method is applied to results of a test of a full-scale Wright Flyer replica inside the Langley Full Scale Tunnel. The study is extended to include the effect of model representation and the test section boundaries (closed, open and 3/4 open) on the interference

  13. Correction method of wavefront aberration on signal quality in holographic memory

    Science.gov (United States)

    Kimura, Eri; Nakajima, Akihito; Akieda, Kensuke; Ohori, Tomohiro; Katakura, Kiyoto; Kondo, Yo; Yamamoto, Manabu

    2011-02-01

    One of the problems that affects the practical use of holographic memory is deterioration of the reproduced images due to aberration in the optical system. The medium must be interchangeable, and hence it is necessary to clarify the influence of aberration in the optical system on the signal quality and to perform aberration correction for drive compatibility. In this study, aberration is introduced into the reference light beam during image reproduction, and the deterioration of the reproduced image signal is examined. In addition, as a basic study of aberration correction, a correction technique using two-dimensional signal processing is studied.

  14. A recovery coefficient method for partial volume correction of PET images

    National Research Council Canada - National Science Library

    Srinivas, Shyam M; Dhurairaj, Thiruvenkatasamy; Basu, Sandip; Bural, Gonca; Surti, Suleman; Alavi, Abass

    2009-01-01

    Correction of the "partial volume effect" has been an area of great interest in the recent times in quantitative PET imaging and has been mainly studied with count recovery models based upon phantoms...

  15. A statistical method for correcting salinity observations from autonomous profiling floats: An ARGO perspective

    Digital Repository Service at National Institute of Oceanography (India)

    Durand, F.; Reverdin, G.

    The Profiling Autonomous Lagrangian Circulation Explorer (PALACE) float is used to implement the Array for Real-Time Geostrophic Oceanography (ARGO). This study presents a statistical approach to correct salinity measurement errors of an ARGO...

  16. Airway area by acoustic reflection: a corrected derivation for the two-microphone method.

    Science.gov (United States)

    Poort, K L; Fredberg, J J

    1999-12-01

    A corrected derivation is provided for the relationship between the impulse response of a wave tube termination and pressure signals measured at two different locations within the tube. This derivation yields exactly the same final result as was reported previously by Louis et al. (1993), despite the omission of the active source term in that earlier derivation. This technique has become the basis of an important medical diagnostic technology. This report revises and corrects the earlier theory upon which that technology rests.

  17. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    Directory of Open Access Journals (Sweden)

    Haris Akram Bhatti

    2016-06-01

    Full Text Available With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km/30 min resolution are aggregated to daily values to match in-situ observations for the period 2003-2010. The study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of the bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by the Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.
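
    A minimal sketch of the best-performing scheme as described above, assuming non-overlapping 7-day sequential windows and a multiplicative factor of sum(gauge)/sum(satellite) per window:

```python
import numpy as np

def sequential_window_bias(gauge, cmorph, window=7):
    """Multiplicative bias correction over non-overlapping sequential
    windows: each window's factor is sum(gauge)/sum(satellite), applied
    to the satellite estimates of that window."""
    gauge, cmorph = np.asarray(gauge, float), np.asarray(cmorph, float)
    corrected = cmorph.copy()
    for start in range(0, len(cmorph), window):
        sl = slice(start, start + window)
        sat_sum = cmorph[sl].sum()
        if sat_sum > 0:
            corrected[sl] *= gauge[sl].sum() / sat_sum
    return corrected

rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 5.0, 28)                      # daily rainfall, mm
cmorph = 0.7 * gauge + rng.normal(0, 1, 28).clip(0)  # biased estimate
print(sequential_window_bias(gauge, cmorph).sum(), gauge.sum())
```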

  18. Enhancement of image quality with a fast iterative scatter and beam hardening correction method for kV CBCT

    Energy Technology Data Exchange (ETDEWEB)

    Reitz, Irmtraud; Hesse, Bernd-Michael; Nill, Simeon; Tuecking, Thomas; Oelfke, Uwe [DKFZ, Heidelberg (Germany)

    2009-07-01

    The problem of the enormous amount of scattered radiation in kV CBCT (kilo voltage cone beam computer tomography) is addressed. Scatter causes undesirable streak- and cup-artifacts and results in a quantitative inaccuracy of reconstructed CT numbers, so that an accurate dose calculation might be impossible. Image contrast is also significantly reduced. Therefore we checked whether an appropriate implementation of the fast iterative scatter correction algorithm we have developed for MV (mega voltage) CBCT reduces the scatter contribution in a kV CBCT as well. This scatter correction method is based on a superposition of pre-calculated Monte Carlo generated pencil beam scatter kernels. The algorithm requires only a system calibration by measuring homogeneous slab phantoms with known water-equivalent thicknesses. In this study we compare scatter corrected CBCT images of several phantoms to the fan beam CT images acquired with a reduced cone angle (a slice-thickness of 14 mm in the isocenter) at the same system. Additional measurements at a different CBCT system were made (different energy spectrum and phantom-to-detector distance) and a first order approach of a fast beam hardening correction will be introduced. The observed image quality of the scatter corrected CBCT images is comparable concerning resolution, noise and contrast-to-noise ratio to the images acquired in fan beam geometry. Compared to the CBCT without any corrections the contrast of the contrast-and-resolution phantom with scatter correction and additional beam hardening correction is improved by a factor of about 1.5. The reconstructed attenuation coefficients and the CT numbers of the scatter corrected CBCT images are close to the values of the images acquired in fan beam geometry for the most pronounced tissue types. Only for extreme dense tissue types like cortical bone we see a difference in CT numbers of 5.2%, which can be improved to 4.4% with the additional beam hardening correction. Cupping
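
    A minimal sketch of the kernel-superposition idea, assuming a single shift-invariant scatter kernel and iterative subtraction; the published method uses Monte Carlo pencil-beam kernels indexed by water-equivalent thickness, which is omitted here:

```python
import numpy as np
from scipy.signal import fftconvolve

def iterative_scatter_correction(projection, kernel, n_iter=5):
    """Iteratively estimate scatter as the convolution of the current
    primary estimate with a pencil-beam scatter kernel, and subtract
    it from the measured projection."""
    primary = projection.copy()
    scatter = np.zeros_like(projection)
    for _ in range(n_iter):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter

# illustrative: broad Gaussian scatter kernel on a 64x64 projection
x = np.arange(-16, 17)
g = np.exp(-(x / 8.0) ** 2)
kernel = 0.002 * np.outer(g, g)       # small scatter-to-primary weight
proj = np.ones((64, 64))
corrected, est = iterative_scatter_correction(proj, kernel)
print(round(float(corrected.mean()), 3))
```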

  1. A comparative study of k-spectrum-based error correction methods for next-generation sequencing data analysis.

    Science.gov (United States)

    Akogwu, Isaac; Wang, Nan; Zhang, Chaoyang; Gong, Ping

    2016-07-25

    Innumerable opportunities for new genomic research have been stimulated by advancement in high-throughput next-generation sequencing (NGS). However, the pitfall of NGS data abundance is the complication of distinguishing between true biological variants and sequencing-error artifacts during downstream analysis. Many error correction methods have been developed to correct erroneous NGS reads before further analysis, but independent evaluation of the impact of such dataset features as read length, genome size, and coverage depth on their performance is lacking. This comparative study aims to investigate the strengths, weaknesses and limitations of some of the newest k-spectrum-based methods and to provide recommendations for users in selecting suitable methods with respect to specific NGS datasets. Six k-spectrum-based methods, i.e., Reptile, Musket, Bless, Bloocoo, Lighter, and Trowel, were compared using six simulated sets of paired-end Illumina sequencing data. These NGS datasets varied in coverage depth (10× to 120×), read length (36 to 100 bp), and genome size (4.6 to 143 Mb). The Error Correction Evaluation Toolkit (ECET) was employed to derive a suite of metrics (i.e., true positives, false positives, false negatives, recall, precision, gain, and F-score) for assessing the correction quality of each method. Results from computational experiments indicate that Musket had the best overall performance across the spectra of examined variants reflected in the six datasets. The lowest accuracy of Musket (F-score = 0.81) occurred on a dataset with a medium read length (56 bp), a medium coverage (50×), and a small-sized genome (5.4 Mb). The other five methods underperformed on one or more datasets. Thus, care must be taken in choosing appropriate methods for error correction of specific NGS datasets. Based on our comparative study, we recommend Musket as the top choice because of its consistently superior performance across all six testing datasets.
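
    For reference, the ECET-style metrics named above can be computed from per-read counts as follows; the gain definition is the one commonly used in the error-correction literature:

```python
def correction_metrics(tp, fp, fn):
    """Error-correction evaluation metrics from counts of corrected
    errors (TP), newly introduced errors (FP) and missed errors (FN).
    Gain penalises corrections that introduce new errors."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    gain = (tp - fp) / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision,
            "gain": gain, "f_score": f_score}

# illustrative counts from one corrected dataset
print(correction_metrics(tp=9500, fp=300, fn=500))
```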

  2. Software Design of Mobile Antenna for Auto Satellite Tracking Using Modem Correction and Elevation Azimuth Method

    Directory of Open Access Journals (Sweden)

    Djamhari Sirat

    2010-10-01

    Full Text Available Pointing accuracy is critical in satellite communication. Because the distance from the earth's surface to the satellite is so large, a pointing error of even 1 degree prevents the antenna from sending data to the satellite. To overcome this, an auto-tracking satellite controller was built. The system uses a microcontroller as the controller, with GPS indicating the antenna's location, a digital compass providing the initial antenna pointing direction, rotary encoders as azimuth and elevation sensors, and a modem to read the Eb/No signal. The microcontroller uses serial communication to read these inputs, so the programming focuses on the UART and the serial communication software. The controller tracks the satellite in two phases. The first is the elevation-azimuth method: from the GPS and digital compass inputs and the satellite position (coordinates and altitude) stored in the microcontroller, the controller calculates the elevation and azimuth angles and moves the antenna accordingly. The second is modem correction: the controller uses only the modem as input and adjusts the antenna to obtain the largest Eb/No value. In operation, the controller improved the input level from -81.7 dB to -30.2 dB, with a final Eb/No of 5.7 dB.
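
    The elevation-azimuth stage relies on standard geostationary look-angle geometry. A minimal sketch under a spherical-earth assumption (textbook formulas, not the paper's firmware):

```python
import math

def geo_look_angles(lat_deg, lon_deg, sat_lon_deg):
    """Ideal look angles from a ground station to a geostationary
    satellite (spherical-earth textbook formulas). Returns
    (azimuth_deg from true north, elevation_deg)."""
    lat = math.radians(lat_deg)
    dlon = math.radians(sat_lon_deg - lon_deg)
    cos_g = math.cos(lat) * math.cos(dlon)       # central angle gamma
    sin_g = math.sqrt(1.0 - cos_g * cos_g)
    r_ratio = 6378.0 / 42164.0                   # Re / geo orbit radius
    el = math.degrees(math.atan2(cos_g - r_ratio, sin_g))
    az = math.degrees(math.atan2(math.sin(dlon),
                                 -math.sin(lat) * math.cos(dlon)))
    return az % 360.0, el

# illustrative: a station near Jakarta pointing at a satellite at 108E
print(geo_look_angles(-6.2, 106.8, 108.0))
```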

  3. Validation of phenol red versus gravimetric method for water reabsorption correction and study of gender differences in Doluisio's absorption technique.

    Science.gov (United States)

    Tuğcu-Demiröz, Fatmanur; Gonzalez-Alvarez, Isabel; Gonzalez-Alvarez, Marta; Bermejo, Marival

    2014-10-01

    The aim of the present study was to develop a method for water flux reabsorption measurement in Doluisio's Perfusion Technique based on the use of phenol red as a non-absorbable marker and to validate it by comparison with the gravimetric procedure. The compounds selected for the study were metoprolol, atenolol, cimetidine and cefadroxil, in order to include low-, intermediate- and high-permeability drugs absorbed by passive diffusion and by carrier-mediated mechanisms. The intestinal permeabilities (Peff) of the drugs were obtained in male and female Wistar rats and calculated using both methods of water flux correction. The absorption rate coefficients of all the assayed compounds did not show statistically significant differences between male and female rats; consequently, all the individual values were combined to compare between reabsorption methods. The absorption rate coefficients and permeability values did not show statistically significant differences between the two strategies of concentration correction. The apparent zero-order water absorption coefficients were also similar in both correction procedures. In conclusion, the gravimetric and phenol red methods for water reabsorption correction are accurate and interchangeable for permeability estimation in the closed-loop perfusion method. Copyright © 2014 Elsevier B.V. All rights reserved.
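
    A minimal sketch of the phenol red correction, assuming the marker's concentration ratio tracks the luminal volume change (illustrative data; the gravimetric alternative weighs the loops instead):

```python
import numpy as np

def marker_corrected_conc(c_meas, pr0, pr_t):
    """Rescale a sampled luminal concentration to the initial volume
    using a non-absorbable marker: V_t/V_0 = PR_0/PR_t, so the
    reabsorption-corrected concentration is C_meas * PR_0/PR_t."""
    return c_meas * (pr0 / np.asarray(pr_t, dtype=float))

# illustrative closed-loop samples: phenol red concentrates as water
# is reabsorbed, while the drug is also being absorbed
t_min = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
c_drug = np.array([95.0, 88.0, 83.0, 79.0, 76.0, 74.0])
c_pr = np.array([105.0, 110.0, 116.0, 121.0, 127.0, 133.0])

c_corr = marker_corrected_conc(c_drug, 100.0, c_pr)
slope, _ = np.polyfit(t_min, np.log(c_corr), 1)
print(round(-slope, 4))   # first-order absorption rate constant (1/min)
```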

  4. A new fast-rise kicker magnet system by a waveform correction method using auxiliary magnets and a three bump orbit correction method

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Eiji, E-mail: eiji.nakamura@kek.jp [High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); The Graduate University for Advanced Studies (SOKENDAI), Hayama, Kanagawa 240-0193 (Japan); Sakai, Yuya; Sakai, Izumi [Fukui University, 3-9-1 Bunkyo, Fukui 910-8507 (Japan); Takayama, Masakazu [Akita Prefectural University, 84-4 Aza Ebinokuchi, Tsuchiya, Yurihonjo, Akita 015-0055 (Japan); Yabukami, Shin [Tohoku Gakuin University, 1-13-1 Chuo, Tagajo, Miyagi 985-8537 (Japan); Ishi, Yoshihiro; Uesugi, Tomonori [Kyoto University Research Reactor Institute, 2 Asashiro-Nishi, Kumatori, Sennan, Osaka 590-0494 (Japan); Nakamura, Tsukasa [Pulse Electric Engineering Special Device Industry Co. Ltd., 201B, 5-4-19 Kashiwanoha, Kashiwa, Chiba 277-0882 (Japan); Nakao, Yoshiaki [Fujikura Ltd., 1-5-1 Kiba, Koto-ku, Tokyo 135-8512 (Japan); Inagaki, Shigeru [Kyushu University, 6-1 Kasuga-Koen, Kasuga, Fukuoka 816-8580 (Japan)

    2011-02-11

    A fast-rise trial has been performed using auxiliary magnets, which are excited by damped oscillation. With this simple structure it may be possible to achieve a faster rise time than with a delay-line type magnet. A favorable result was obtained with the method. The outline of this scheme and representative experimental results are described in this paper.

  5. Implementing a generic method for bias correction in statistical models using random effects, with spatial and population dynamics examples

    DEFF Research Database (Denmark)

    Thorson, James T.; Kristensen, Kasper

    2016-01-01

    Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques...... configurations of an age-structured population dynamics model. This simulation experiment shows that the epsilon-method and the existing bias-correction method perform equally well in data-rich contexts, but the epsilon-method is slightly less biased in data-poor contexts. We then apply the epsilon......-method to a spatial regression model when estimating an index of population abundance, and compare results with an alternative bias-correction algorithm that involves Markov-chain Monte Carlo sampling. This example shows that the epsilon-method leads to a biologically significant difference in estimates of average...

  6. Lipid Metabolic Disturbances in Severe Sepsis: Clinical Significance and New Methods of Correction

    Directory of Open Access Journals (Sweden)

    O. G. Malkova

    2009-01-01

    Full Text Available Objective: to reveal the basic regularities in the development of lipid metabolic disturbances in severe sepsis and to evaluate the efficiency of parenteral use of new balanced lipid emulsions in this cohort of patients. Subjects and methods. A prospective study was conducted in 88 patients with severe sepsis of different etiologies in the intensive care unit (ICU), Sverdlovsk Regional Clinical Hospital One. Among the lipid metabolic parameters, serum cholesterol, triglycerides (TG), high-density lipoproteins, low-density lipoproteins, and the atherogenicity index were measured. Out of the systemic inflammatory markers and the additional criteria for sepsis severity, the serum levels of C-reactive protein, nitric oxide, lactate, D-dimers, the anti-inflammatory cytokine IL-4 and the proinflammatory cytokine IL-8 were determined. Serum was taken on days 1, 3, 5, and 7 after admission to the ICU. The quantitative attributes were comparatively analyzed with the statistical program «Statistica 6.0». Results. The severity assessed by the APACHE II scale, the degree of multiple organ failure (MOF) evaluated by the SOFA scale, and lung lesion according to the Murray scale were found to be closely related to the baseline serum TG levels in patients with severe sepsis. The findings suggest that patients with high TG levels have much higher resuscitative mortality rates. The clinical evaluation of the efficiency of the new method for correction of lipid metabolic disturbances in severe sepsis has indicated that the patients receiving balanced (omega-3 fatty acid-enriched) fat emulsions as 20% Lipoplus solution had lower APACHE II scores within the first 7 days of intensive therapy and significantly more positive SOFA MOF changes. Specific changes were revealed in the presence of systemic inflammatory markers, such as C-reactive protein, IL-8, and IL-4, which confirms that balanced lipid emulsions are able to affect the system of pro- and anti

  7. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction.

    Science.gov (United States)

    Morel, Yann G; Favoretto, Fabio

    2017-07-21

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite derived bathymetry (SDB) and the water column corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water column corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  8. Topographic Correction to Landsat Imagery through Slope Classification by Applying the SCS + C Method in Mountainous Forest Areas

    Directory of Open Access Journals (Sweden)

    René Vázquez-Jiménez

    2017-09-01

    Full Text Available The aim of the topographic normalization of remotely sensed imagery is to reduce reflectance variability caused by steep terrain and thus improve further processing of images. A process of topographic correction was applied to Landsat imagery in a mountainous forest area in the south of Mexico. The method used was the Sun Canopy Sensor + C correction (SCS + C), where the C parameter was differently determined according to a classification of the topographic slopes of the studied area in nine classes for each band, instead of using a single C parameter for each band. A comparative, visual, and numerical analysis of the normalized reflectance was performed based on the corrected images. The results showed that the correction by slope classification improves the elimination of the effect of shadows and relief, especially in steep slope areas, modifying the normalized reflectance values according to the combination of slope, aspect, and solar geometry, obtaining reflectance values more suitable than the correction by non-slope classification. The application of the proposed method can be generalized, improving its performance in forest mountainous areas.
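
    For reference, the SCS + C correction has a closed form, L_corr = L·(cos α·cos θ_z + C)/(cos i + C), with cos i the local solar incidence angle. The sketch below applies it with a per-pixel C drawn from a slope-class table, as the study proposes; all C values and rasters are illustrative:

```python
import numpy as np

def scs_c(band, slope, aspect, sun_zenith, sun_azimuth, c):
    """SCS + C topographic correction:
    L_corr = L * (cos(slope)*cos(sz) + C) / (cos(i) + C),
    with cos(i) the local solar incidence angle (radians throughout)."""
    cos_i = (np.cos(sun_zenith) * np.cos(slope)
             + np.sin(sun_zenith) * np.sin(slope)
             * np.cos(sun_azimuth - aspect))
    return band * (np.cos(slope) * np.cos(sun_zenith) + c) / (cos_i + c)

# illustrative 2x2 scene; C chosen per pixel from a slope-class table
band = np.array([[0.18, 0.22], [0.15, 0.25]])
slope = np.radians([[5.0, 18.0], [32.0, 45.0]])
aspect = np.radians([[90.0, 180.0], [270.0, 135.0]])
c_by_class = np.array([0.3, 0.5, 0.8])                  # assumed C values
slope_class = np.digitize(np.degrees(slope), [15.0, 30.0])
print(scs_c(band, slope, aspect, np.radians(35.0),
            np.radians(150.0), c_by_class[slope_class]))
```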

  9. Development of a sampling method for the simultaneous monitoring of straight-chain alkanes, straight-chain saturated carbonyl compounds and monoterpenes in remote areas.

    Science.gov (United States)

    Detournay, Anaïs; Sauvage, Stéphane; Locoge, Nadine; Gaudion, Vincent; Leonardis, Thierry; Fronval, Isabelle; Kaluzny, Pascal; Galloo, Jean-Claude

    2011-04-01

    Studies have shown that biogenic compounds, long chain secondary compounds and long lifetime anthropogenic compounds are involved in the formation of organic aerosols in both polluted areas and remote places. This work aims at developing an active sampling method to monitor these compounds (i.e. 6 straight-chain saturated aldehydes from C6 to C11; 8 straight-chain alkanes from C9 to C16; 6 monoterpenes: α-pinene, β-pinene, camphene, limonene, α-terpinene, and γ-terpinene; and 5 aromatic compounds: toluene, ethylbenzene, meta-, para- and ortho-xylenes) in remote areas. Samples are collected onto multi-bed sorbent cartridges at 200 mL min⁻¹ flow rate, using the automatic sampler SyPAC (TERA-Environnement, Crolles, France). No breakthrough was observed for sampling volumes up to 120 L (standard mixture at ambient temperature, with a relative humidity of 75%). As ozone has been shown to alter the samples (losses of 90% of aldehydes and up to 95% of terpenes were observed), the addition of a conditioned manganese dioxide (MnO₂) scrubber to the system has been validated (full recovery of the affected compounds for a standard mixture at 50% relative humidity, RH). Samples are first thermodesorbed and then analysed by GC/FID/MS. This method allows suitable detection limits (from 2 ppt for camphene to 13 ppt for octanal, 36 L sampled), and reproducibility (from 1% for toluene to 22% for heptanal). It has been successfully used to determine the diurnal variation of the target compounds (six 3 h samples a day) during winter and summer measurement campaigns at a remote site in the south of France.

  10. Te Inclusions in CZT Detectors: New Method for Correcting Their Adverse Effects

    Energy Technology Data Exchange (ETDEWEB)

    Bolotnikov, A.E.; Babalola, S.; Camarda, G.S.; Cui, Y.; Egarievwe, S.U.; Hawrami, R.; Hossain, A.; Yang, G.; James, R.B.

    2009-10-25

    Both Te inclusions and point defects can trap the charge carriers generated by ionizing particles in CdZnTe (CZT) detectors. The amount of charge trapped by point defects is proportional to the carriers’ drift time and can be corrected electronically. In the case of Te inclusions, the charge loss depends upon their random locations with respect to the electron cloud. Consequently, inclusions introduce fluctuations in the charge signals, which cannot be easily corrected. In this paper, we describe direct measurements of the cumulative effect of Te inclusions and its influence on the response of CZT detectors of different thicknesses and different sizes and concentrations of Te inclusions. We also discuss a means of partially correcting their adverse effects.

  11. Reliability analysis of offshore jacket structures with wave load on deck using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Friis-Hansen, P.; Nielsen, J.S.

    2007-01-01

    failure/collapse of jacket type platforms with wave-in-deck loads using the so-called Model Correction Factor Method (MCFM). A simple representative model for the RSR measurement is developed and used in the MCFM technique. A realistic example is evaluated, and it is seen that it is possible to perform...

  12. QT correction methods in infants and children: effects of age and gender.

    Science.gov (United States)

    Benatar, Abraham; Feenstra, Arjen

    2015-03-01

    Accurate determination of the QTc interval in children is important, especially when using drugs which can prolong cardiac repolarization. Previous work suggests the most appropriate correction formula to be QTc = QT/RR^0.38. We set out to compute the best population-derived age- and gender-related QT correction formula factor in normal children. We evaluated a cohort of 1400 healthy children. From a resting 12-lead electrocardiogram, QT and RR intervals were measured. Subjects were divided into four age and gender groups: 0-1 years (n = 540); 1-5 years (n = 281); 5-10 years (n = 277), and > 10 years (n = 302). QT/RR intervals were plotted and fitted with two regression analyses: linear regression obtaining constant α (QTc = QT + α × (1 − RR)), and log-linear analysis deriving constant β (QTc = QT/RR^β). Furthermore, regression analysis of QTc/RR for the two formulas was performed, obtaining slope and R². Correction constant α decreased steadily with increasing age; genders remained on par until 10 years of age, followed by a more pronounced decrease in females (range 0.24-0.18). The β constant showed a similar trend, however with a more pronounced decline (range 0.45-0.31). Regression slopes of QTc/RR plots (all ages and both genders) were close to zero (both formulas). For the full range of pediatric subjects, the optimum population-based correction factors α and β decreased with increasing age and gender, digressing more so in adolescent girls. More specific correction factors, based on age and gender, are necessary in QT correction. © 2014 Wiley Periodicals, Inc.
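
    Both correction forms are one-liners; the sketch below uses constants inside the ranges reported above (illustrative, not the cohort's fitted values):

```python
def qtc_linear(qt_s, rr_s, alpha):
    """Linear correction: QTc = QT + alpha * (1 - RR); intervals in s."""
    return qt_s + alpha * (1.0 - rr_s)

def qtc_power(qt_s, rr_s, beta):
    """Log-linear correction: QTc = QT / RR**beta."""
    return qt_s / rr_s ** beta

# infant-like illustrative values (RR 0.5 s ~ 120 bpm), constants in
# the reported ranges (alpha 0.18-0.24, beta 0.31-0.45)
print(round(qtc_power(0.28, 0.50, beta=0.45), 3))
print(round(qtc_linear(0.28, 0.50, alpha=0.24), 3))
```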

  13. A method for mapping topsoil field-saturated hydraulic conductivity in the Cévennes-Vivarais region using infiltration tests conducted with different techniques

    Science.gov (United States)

    Braud, Isabelle; Desprats, Jean-François; Ayral, Pierre-Alain; Bouvier, Christophe; Vandervaere, Jean-Pierre

    2017-04-01

    Topsoil field-saturated hydraulic conductivity, Kfs, is a parameter that controls the partition of rainfall between infiltration and runoff. It is a key parameter in most distributed hydrological models. However, there is a mismatch between the scale of local in situ measurements and the scale at which the parameter is required in models. Therefore it is necessary to design methods to regionally map this parameter at the model scale. The paper proposes a method for mapping Kfs in the Cévennes-Vivarais region, south-east France, using more easily available GIS data: geology and land cover. The mapping is based on a data set gathering infiltration tests performed in or near the area over more than ten years. The data set is composed of infiltration tests performed using various techniques: Guelph permeameter, double ring and single ring infiltration tests, infiltrometers with multiple suctions. The different methods lead to different orders of magnitude for Kfs, rendering the pooling of all the data challenging. Therefore, a method is first proposed to pool the data from the different infiltration methods, leading to a homogenized set of Kfs, based on an equivalent double ring/tension disk infiltration value. Statistical tests showed significant differences in distributions among different geologies and land covers. Thus those variables were retained as proxies for mapping Kfs at the regional scale. This map was compared to a map based on the Rawls and Brakensiek (RB) pedo-transfer function (Manus et al., 2009, Vannier et al., 2016), showing very different patterns between the two maps. In addition, RB values did not fit observed values at the plot scale, highlighting that soil texture alone is not a good predictor of Kfs. References Manus, C., Anquetin, S., Braud, I., Vandervaere, J.P., Viallet, P., Creutin, J.D., Gaume, E., 2009. A modelling approach to assess the hydrological response of small Mediterranean catchments to the variability of soil characteristics in a

  14. Evaluation of a method for correction of scatter radiation in thorax cone beam CT; Evaluation d'une methode de correction du rayonnement diffuse en tomographie du thorax avec faisceau conique

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France); Esteve, F. [European Synchrotron Radiation Facility (ESRF), 38 - Grenoble (France)

    2004-07-01

    Purpose: Cone beam CT (CBCT) enables three-dimensional imaging with isotropic resolution. X-ray scatter estimation is a big challenge for quantitative CBCT imaging of the thorax: the scatter level is significantly higher on cone beam systems compared to collimated fan beam systems. The effects of this scattered radiation are cupping artefacts, streaks, and quantification inaccuracies. The conventional beam-stop approach to scatter estimation can be used for CBCT but leads to a significant increase in dose and acquisition time. An original scatter management process requiring no supplementary acquisition has been developed at CEA-LETI. Methods and Materials: This Analytical Plus Indexing-based method (API) of scatter correction in CBCT is based on scatter calibration through offline acquisitions with beam stops on lucite plates, combined with an analytical transformation derived from physical equations. This approach has been applied with success in bone densitometry and mammography. To evaluate this method in CBCT, acquisitions from a thorax phantom with and without beam stops have been performed. To compare different scatter correction approaches, the Feldkamp algorithm has been applied to raw data corrected for scatter by the API and beam-stop approaches. Results: The API method provides results in good agreement with the beam-stop array approach, suppressing the cupping artefact. The influence of the scatter correction method on the noise in the reconstructed images has also been evaluated. Conclusion: The results indicate that the API method is effective for quantitative CBCT imaging of the thorax. Compared to a beam-stop array method it needs a lower X-ray dose and shortens acquisition time. (authors)

  15. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    Science.gov (United States)

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer improved the LET response in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
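
    A minimal sketch of the dose-weighted correction, assuming a correction factor interpolated as a function of residual proton range; the calibration table is illustrative (the 1.35 endpoint roughly mirrors the 0.74 Bragg-peak response quoted above):

```python
import numpy as np

def corrected_mosfet_dose(raw_dose, residual_range_cm,
                          cal_ranges_cm, cal_factors):
    """LET correction for MOSFET proton dosimetry: multiply the raw
    reading by a factor interpolated as a function of residual range
    (computed elsewhere, e.g. with a pencil beam algorithm)."""
    cf = np.interp(residual_range_cm, cal_ranges_cm, cal_factors)
    return raw_dose * cf

# illustrative calibration: response drops near end of range (Bragg peak)
ranges = np.array([0.2, 0.5, 1.0, 3.0, 10.0])    # residual range, cm
factors = np.array([1.35, 1.20, 1.10, 1.03, 1.00])
print(corrected_mosfet_dose(1.0, 0.5, ranges, factors))
```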

  16. Analysis of an automated background correction method for cardiovascular MR phase contrast imaging in children and young adults

    Energy Technology Data Exchange (ETDEWEB)

    Rigsby, Cynthia K.; Hilpipre, Nicholas; Boylan, Emma E.; Popescu, Andrada R.; Deng, Jie [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Medical Imaging, Chicago, IL (United States); McNeal, Gary R. [Siemens Medical Solutions USA Inc., Customer Solutions Group, Cardiovascular MR R and D, Chicago, IL (United States); Zhang, Gang [Ann and Robert H. Lurie Children' s Hospital of Chicago Research Center, Biostatistics Research Core, Chicago, IL (United States); Choi, Grace [Ann and Robert H. Lurie Children' s Hospital of Chicago, Department of Pediatrics, Chicago, IL (United States); Greiser, Andreas [Siemens AG Healthcare Sector, Erlangen (Germany)

    2014-03-15

    Phase contrast magnetic resonance imaging (MRI) is a powerful tool for evaluating vessel blood flow. Inherent errors in acquisition, such as phase offset, eddy currents and gradient field effects, can cause significant inaccuracies in flow parameters. These errors can be rectified with the use of background correction software. To evaluate the performance of an automated phase contrast MRI background phase correction method in children and young adults undergoing cardiac MR imaging. We conducted a retrospective review of patients undergoing routine clinical cardiac MRI including phase contrast MRI for flow quantification in the aorta (Ao) and main pulmonary artery (MPA). When phase contrast MRI of the right and left pulmonary arteries was also performed, these data were included. We excluded patients with known shunts and metallic implants causing visible MRI artifact and those with more than mild to moderate aortic or pulmonary stenosis. Phase contrast MRI of the Ao, mid MPA, proximal right pulmonary artery (RPA) and left pulmonary artery (LPA) using 2-D gradient echo Fast Low Angle SHot (FLASH) imaging was acquired during normal respiration with retrospective cardiac gating. Standard phase image reconstruction and the automatic spatially dependent background-phase-corrected reconstruction were performed on each phase contrast MRI dataset. Non-background-corrected and background-phase-corrected net flow, forward flow, regurgitant volume, regurgitant fraction, and vessel cardiac output were recorded for each vessel. We compared standard non-background-corrected and background-phase-corrected mean flow values for the Ao and MPA. The ratio of pulmonary to systemic blood flow (Qp:Qs) was calculated for the standard non-background and background-phase-corrected data and these values were compared to each other and for proximity to 1. In a subset of patients who also underwent phase contrast MRI of the MPA, RPA, and LPA a comparison was made between standard non-background-corrected
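
    For reference, the Qp:Qs criterion used above is a simple ratio of net flows:

```python
def qp_qs(mpa_net_flow, ao_net_flow):
    """Pulmonary-to-systemic flow ratio from phase contrast net flows
    (e.g. L/min); values near 1 are expected without shunts, which is
    why proximity to 1 is used to judge the background correction."""
    return mpa_net_flow / ao_net_flow

print(round(qp_qs(4.1, 4.0), 2))   # illustrative corrected flows
```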

  17. Comparison between the Gauss' law method and the zero current method to calculate multi-species ionic diffusion in saturated uncharged porous materials

    DEFF Research Database (Denmark)

    Johannesson, Björn

    2010-01-01

    There exist, mainly, two different continuum approaches to calculate transient multi-species ionic diffusion. One of them is based on explicitly assuming a zero current in the diffusing mixture together with an introduction of a streaming electrical potential in the constitutive equations for the mass density flow of ionic species. The other is based on using Gauss' law, assuming no polarization, to obtain an equation for the determination of the electrical field, together with adding the electrical field in the constitution of the ionic mass density flows. Results obtained with the zero current method are compared with existing results from the solutions of the Gauss' law method. For the studied case the calculated concentrations of the ionic species, using the two different methods, differed very little.
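    In standard Nernst–Planck notation (an orienting summary, not a quotation from the paper), the flux of species i and the two closure conditions read:

        J_i = -D_i \left( \nabla c_i + \frac{z_i F c_i}{R T} \nabla \phi \right)

        zero-current method:  \sum_i z_i J_i = 0   (an algebraic constraint solved for \nabla \phi)

        Gauss' law method:    \nabla \cdot ( \varepsilon \nabla \phi ) = -F \sum_i z_i c_i   (a field equation solved for \phi)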

  18. Device and method for creating Gaussian aberration-corrected electron beams

    Science.gov (United States)

    McMorran, Benjamin; Linck, Martin

    2016-01-19

    Electron beam phase gratings have phase profiles that produce a diffracted beam having a Gaussian or other selected intensity profile. Phase profiles can also be selected to correct or compensate electron lens aberrations. Typically, a low diffraction order produces a suitable phase profile, and other orders are discarded.

  19. Correction on the influence of thermal contact resistance in thermal conductivity measurements using the guarded hot plate method

    Directory of Open Access Journals (Sweden)

    Stepanić Nenad

    2009-01-01

    This work considers the influence of the finite thermal contact resistances that exist in thermal conductivity measurements of homogeneous, poorly conducting materials using the guarded hot plate method. As an example of the correction method proposed in this work, different experimental results obtained from a standard reference material sample (with a conductivity of about 1 W/mK) are presented.

  20. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples.

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-05

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because the measurements of the spectra may be performed on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model, using the spectra of the samples measured on the two instruments, named the master and slave instruments, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, by using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured with different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not necessary in the method, it may be more useful in practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
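    The record does not give the exact form of the constrained optimization; one natural reading is a ridge-style compromise between fitting the few slave spectra and staying close to the master coefficients. A sketch under that assumption (all names hypothetical):

        import numpy as np

        def linear_model_correction(b_master, X_slave, y_slave, lam=1.0):
            """Transfer master-model coefficients to the slave instrument:
            minimize ||X_slave @ b - y_slave||^2 + lam * ||b - b_master||^2,
            which keeps the slave coefficients similar in profile to the
            master ones while fitting a few slave spectra."""
            p = X_slave.shape[1]
            A = X_slave.T @ X_slave + lam * np.eye(p)
            rhs = X_slave.T @ y_slave + lam * b_master
            return np.linalg.solve(A, rhs)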

  1. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    Energy Technology Data Exchange (ETDEWEB)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de [Department of Theoretical Physics, University of Regensburg, 93040 Regensburg (Germany); Aradi, B. [BCCMS, University of Bremen, 28359 Bremen (Germany)

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.
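    For orientation, long-range corrected functionals of this kind are conventionally built on a range separation of the Coulomb operator (a standard textbook form, not quoted from the paper):

        \frac{1}{r} = \frac{\operatorname{erfc}(\omega r)}{r} + \frac{\operatorname{erf}(\omega r)}{r}

    where the short-range term is treated with (semi-)local DFT exchange, the long-range term with exact exchange, and the range-separation parameter ω controls the switch-over between the two regimes.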

  2. Multi-step-ahead Method for Wind Speed Prediction Correction Based on Numerical Weather Prediction and Historical Measurement Data

    Science.gov (United States)

    Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing

    2017-11-01

    Increasing the accuracy of wind speed prediction lays a solid foundation for reliable wind power forecasting. Most traditional correction methods for wind speed prediction establish the mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, ignoring the time-dependent behavior of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that takes into account the carry-over effect of the wind speed at the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is carried out to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is used as an example to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method, and three benchmark wind speed prediction methods are used for comparison. The results show that the proposed model has the best performance over different time horizons.
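    A minimal sketch of the proposed mapping strategy (hypothetical arrays; the paper's feature set, kernel and per-bin training details may differ):

        import numpy as np
        from sklearn.svm import SVR

        def train_correction_model(nwp, hmd):
            """Correct the NWP value for slot t using both the NWP value for t
            and the measured speed at the previous slot t-1."""
            X = np.column_stack([nwp[1:], hmd[:-1]])   # (NWP_t, HMD_{t-1})
            y = hmd[1:]                                # target: measured speed at t
            return SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)

    In the paper one such model is established per wind speed bin; for multi-step-ahead use, the corrected speed at step t feeds the prediction at step t+1.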

  3. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    Science.gov (United States)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To mitigate these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed. Then a measurement correction method for the force measurement is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.

  4. Simultaneous correction of congenital vertical talus and talipes equinovarus using the Ponseti method.

    Science.gov (United States)

    David, Michael G

    2011-01-01

    Talipes equinovarus (clubfoot) and congenital vertical talus are commonly seen as isolated deformities in the newborn; however, the case described in this article entailed a classic talipes equinovarus on the left and a calcaneovalgus on the right. Both deformities were successfully corrected with manipulation therapy and, ultimately, surgical release of the tendo Achillis. Copyright © 2011 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  5. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method

    Science.gov (United States)

    Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
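    The underlying arithmetic of the correction is compact; a sketch (sign convention and variable names are assumptions, not taken from the paper):

        def excess_uptake(delta_m_apparent, rho_gas, v_displaced, blank=0.0):
            """Buoyancy-corrected surface excess (g): the apparent mass change
            from the balance plus the buoyancy on the displaced volume of the
            sample side, optionally minus a blank run measured under identical
            conditions (the blank-subtraction approach favored in the paper).

            rho_gas     : gas density at the relevant temperature/pressure (g/cm^3)
            v_displaced : volume of sample plus holder on the sample side (cm^3)
            """
            return delta_m_apparent - blank + rho_gas * v_displaced

    The temperature-partitioning bias discussed above enters through v_displaced and rho_gas, since different balance components sit at different temperatures.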

  6. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. As widely observed, baseline drift is generated by fluctuations in laser energy, inhomogeneity of sample surfaces and background noise, and it has aroused the interest of many researchers. Most prevalent algorithms require presetting key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, namely the sparsity of spectral peaks and the low-pass-filtered nature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The technique uses a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is used to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process so as to ensure convergence. To validate the proposed method, the concentrations of chromium (Cr), manganese (Mn) and nickel (Ni) in 23 certified high-alloy steel samples are assessed using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM) regression. Because no prior knowledge of sample composition or mathematical hypothesis is required, the method proposed in this paper achieves better accuracy in quantitative analysis than other methods and fully demonstrates its adaptive ability.
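    The paper's convex model is not reproduced in the record; the asymmetric-penalty idea can be illustrated with an Eilers-style penalized least-squares baseline, a close and widely used relative:

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def asymmetric_baseline(y, lam=1e5, p=0.01, n_iter=10):
            """Estimate a smooth baseline z: points above z (candidate peaks)
            get small weight p, points below get 1 - p, and lam penalizes
            baseline roughness through second differences."""
            m = len(y)
            D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(m - 2, m))
            w = np.ones(m)
            for _ in range(n_iter):
                W = sparse.diags(w)
                z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
                w = np.where(y > z, p, 1 - p)
            return z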

  7. Correction Methods for Organic Carbon Artifacts when Using Quartz-Fiber Filters in Large Particulate Matter Monitoring Networks: The Regression Method and Other Options

    Science.gov (United States)

    Sampling and handling artifacts can bias filter-based measurements of particulate organic carbon (OC). Several measurement-based methods for OC artifact reduction and/or estimation are currently used in research-grade field studies. OC frequently is not artifact-corrected in large monitoring networks.

  8. Study on fault diagnosis method for nuclear power plant based on hadamard error-correcting output code

    Science.gov (United States)

    Mu, Y.; Sheng, G. M.; Sun, P. N.

    2017-05-01

    The technology of real-time fault diagnosis for nuclear power plants (NPPs) has great significance for improving the safety and economy of reactors. Failure samples from nuclear power plants are difficult to obtain, and the support vector machine is an effective algorithm for such small-sample problems. An NPP is a very complex system, so many types of failure can in fact occur. The ECOC matrix is constructed from the Hadamard error-correcting code, and the decoding method is the Hamming distance method. The base models are established with the LIBSVM algorithm. The results show that this method can diagnose faults of an NPP effectively.
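    A compact sketch of the scheme described above (the code length and SVM settings are illustrative, not from the paper):

        import numpy as np
        from scipy.linalg import hadamard
        from sklearn.svm import SVC

        def train_ecoc(X, y, n_classes):
            """Hadamard ECOC: row k of the code matrix is the codeword of fault
            class k; one binary SVM is trained per (non-constant) column."""
            H = hadamard(2 ** int(np.ceil(np.log2(n_classes))))
            code = (H[:n_classes, 1:] > 0).astype(int)  # drop the all-ones column
            code = code[:, code.std(axis=0) > 0]        # drop constant columns
            models = [SVC(kernel="rbf").fit(X, code[y, j])
                      for j in range(code.shape[1])]
            return models, code

        def predict_ecoc(models, code, X):
            """Hamming decoding: pick the class whose codeword is nearest."""
            bits = np.column_stack([m.predict(X) for m in models])
            dists = np.abs(bits[:, None, :] - code[None, :, :]).sum(axis=2)
            return dists.argmin(axis=1)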

  9. On the Stability Criterion in a Saturated Atmosphere

    National Research Council Canada - National Science Library

    Richiardone, R; Giusti, F

    2001-01-01

    The expression of the moist buoyancy frequency indicated that it is not completely correct to use the moist adiabatic lapse rate as a static stability parameter of a saturated atmosphere, because...

  10. 4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction

    Directory of Open Access Journals (Sweden)

    Yann G. Morel

    2017-07-01

    All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image along with published data, and several new assumptions; (ii) in order to specify and operate the simplified radiative transfer equation (RTE); (iii) for the purpose of retrieving both the satellite-derived bathymetry (SDB) and the water-column-corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data, to specify the simplified RTE and produce both SDB and spectral water-column-corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a "near-nadir" view, exhibit a homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint.

  11. Application of the spectral correction method to reanalysis data in South Africa

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Kruger, Andries C.

    2014-01-01

    In connection with applying reanalysis data for extreme wind estimation, this study investigates the use of a simple approach that corrects the smoothing effect in numerical modeling by adding in missing spectral information for relatively high, mesoscale frequencies. This approach, called the spectral correction method, was applied to reanalysis data across the country at a temporal resolution of 1 h. However, the modeled data tend to underestimate the diurnal peaks in the coastal areas, with a resultant underestimation of the 1:50-year wind speed. Measurements, even of limited length, could improve the estimate. Lastly, the validity of using the spectral model ...

  12. A new method for gravity correction of dynamometer data and determining passive elastic moments at the joint

    Science.gov (United States)

    Anderson, Dennis E.; Nussbaum, Maury A.; Madigan, Michael L.

    2010-01-01

    Moments measured by a dynamometer in biomechanics testing often include the gravitational moment and the passive elastic moment in addition to the moment caused by muscle contraction. Gravitational moments result from the weight of body segments and dynamometer attachment, whereas passive elastic moments are caused by the passive elastic deformation of tissues crossing the joint being assessed. Gravitational moments are a major potential source of error in dynamometer measurements and must be corrected for, a procedure often called gravity correction. While several approaches to gravity correction have been presented in the literature, they generally assume that the gravitational moment can be adequately modeled as a simple sine or cosine function. With this approach, a single passive data point may be used to specify the model, assuming that passive elastic moments are negligible at that point. A new method is presented here for the gravity correction of dynamometer data. Gravitational moment is represented using a generalized sinusoid, which is fit to passive data obtained over the entire joint range of motion. The model also explicitly accounts for the presence of passive elastic moments. The model was tested for cases of hip flexion-extension, knee flexion-extension, and ankle plantar flexion-dorsiflexion, and provided good fits in all cases. PMID:20047749
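    A sketch of the fitting step (the exact generalized-sinusoid and passive elastic terms used by the authors may differ from the illustrative forms below):

        import numpy as np
        from scipy.optimize import curve_fit

        def passive_moment(theta, A, phi, k1, k2):
            """Gravitational moment modeled as a generalized sinusoid of joint
            angle plus a simple exponential passive elastic term."""
            return A * np.sin(theta + phi) + k1 * np.exp(k2 * theta)

        # theta: joint angles (rad) sampled over the full range of motion;
        # m_passive: dynamometer moments from a slow passive trial.
        # popt, _ = curve_fit(passive_moment, theta, m_passive, p0=[10, 0, 0.1, 1])
        # Gravity correction then subtracts only the fitted A*sin(theta + phi)
        # term from active trials, leaving the muscle and passive elastic moments.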

  13. Method of B0 mapping with magnitude-based correction for bipolar two-point Dixon cardiac MRI.

    Science.gov (United States)

    Liu, Junmin; Peters, Dana C; Drangova, Maria

    2017-11-01

    The conventional two-point (2pt) Dixon technique explicitly estimates the B0 map by performing phase unwrapping. When signal loss, phase singularity, artifacts, or spatially isolated regions corrupt the measured phase images, this unwrapping-based technique faces difficulty. This work aims to improve the reliability of B0 mapping by performing unwrapping error correction. To detect unwrapping-caused phase errors, we determined a magnitude-based fat/water mask and used it as a reference to identify pixels mismatched by the phase-based mask, which was derived from the B0-corrected phase term of the Hermitian product between echoes. Then, we corrected the identified phase errors on a region-by-region basis. We tested the developed method with nine patients' data, and the results were compared with a well-established region-growing technique. By adding the step to correct unwrapping-caused errors, we improved the robustness of B0 mapping, resulting in better fat-water separation when compared with the conventional 2pt and the phasor-based region-growing techniques. We showed the feasibility of B0 mapping with bipolar 2pt human cardiac data. The software is freely available to the scientific community. Magn Reson Med 78:1862-1869, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  15. Marked effects of intracranial volume correction methods on sex differences in neuroanatomical structures: a HUNT MRI study.

    Science.gov (United States)

    Pintzka, Carl W S; Hansen, Tor I; Evensmoen, Hallvard R; Håberg, Asta K

    2015-01-01

    To date, there is no consensus whether sexual dimorphism in the size of neuroanatomical structures exists, or if such differences are caused by choice of intracranial volume (ICV) correction method. When investigating volume differences in neuroanatomical structures, corrections for variation in ICV are used. Commonly applied methods are the ICV-proportions, ICV-residuals and ICV as a covariate of no interest, ANCOVA. However, these different methods give contradictory results with regard to presence of sex differences. Our aims were to investigate presence of sexual dimorphism in 18 neuroanatomical volumes unrelated to ICV-differences by using a large ICV-matched subsample of 304 men and women from the HUNT-MRI general population study, and further to demonstrate in the entire sample of 966 healthy subjects, which of the ICV-correction methods gave results similar to the ICV-matched subsample. In addition, sex-specific subsamples were created to investigate whether differences were an effect of head size or sex. Most sex differences were related to volume scaling with ICV, independent of sex. Sex differences were detected in a few structures; amygdala, cerebellar cortex, and 3rd ventricle were larger in men, but the effect sizes were small. The residuals and ANCOVA methods were most effective at removing the effects of ICV. The proportions method suffered from systematic errors due to lack of proportionality between ICV and neuroanatomical volumes, leading to systematic mis-assignment of structures as either larger or smaller than their actual size. Adding additional sexual dimorphic covariates to the ANCOVA gave opposite results of those obtained in the ICV-matched subsample or with the residuals method. The findings in the current study explain some of the considerable variation in the literature on sexual dimorphisms in neuroanatomical volumes. In conclusion, sex plays a minor role for neuroanatomical volume differences; most differences are related to ICV.
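    The residuals method singled out above is simple to state; a sketch (subjects in rows, structures in columns):

        import numpy as np

        def icv_residuals(volumes, icv):
            """Regress each structure volume on ICV across subjects and remove
            the ICV-predicted part, keeping the residual (head-size-independent)
            variation plus the group mean."""
            icv_c = icv - icv.mean()
            slope = (icv_c @ (volumes - volumes.mean(axis=0))) / (icv_c @ icv_c)
            return volumes - np.outer(icv_c, slope)

    By contrast, the proportions method divides each volume by ICV, which is only unbiased when volume is strictly proportional to ICV; the lack of proportionality is exactly the systematic mis-assignment reported above.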

  16. Correct-by-construction model composition: Application to the Invasive Software Composition method

    Directory of Open Access Journals (Sweden)

    Mounira Kezadri Hamiaz

    2014-04-01

    Composition technologies improve reuse in the development of large-scale complex systems. Safety-critical systems require intensive validation and verification activities. These activities should be compositional in order to reduce the amount of residual verification that must be conducted on the composite in addition to that conducted on each component. In order to ensure the correctness of compositional verification and assess the minimality of the residual verification, the contribution proposes to use formal specification and verification at the composition operator level. A first experiment was conducted in [15] using proof assistants to formalize the generic composition technology ISC and prove that type checking was compositional. This contribution extends our early work to handle full model conformance and study the mandatory residual verification. It shows that ISC operators are not fully compositional with respect to conformance and provides the minimal preconditions on the operators mandatory to ensure compositional conformance. The appropriate operators from ISC (especially bind) have been implemented in the COQ4MDE framework, which provides a full implementation of MOF in the COQ proof assistant. Expected properties and residual verifications are expressed as post- and preconditions, respectively, for the composition operators. The correctness of the compositional verification is proven in COQ.

  17. A modification to the standard ionospheric correction method used in GPS radio occultation

    Directory of Open Access Journals (Sweden)

    S. B. Healy

    2015-08-01

    A modification to the standard bending-angle correction used in GPS radio occultation (GPS-RO) is proposed. The modified approach should reduce systematic residual ionospheric errors in GPS radio occultation climatologies. A new second-order term is introduced in order to account for a known source of systematic error, which is generally neglected. The new term has the form κ(a) × (αL1(a) − αL2(a))², where a is the impact parameter and αL1, αL2 are the L1 and L2 bending angles, respectively. The variable κ is a weak function of the impact parameter, a, but it does depend on a priori ionospheric information. The theoretical basis of the new term is examined. The sensitivity of κ to the assumed ionospheric parameters is investigated in one-dimensional simulations, and it is shown that κ ≈ 10–20 rad⁻¹. We note that the current implicit assumption is κ = 0, and this is probably adequate for numerical weather prediction applications. However, the uncertainty in κ should be included in the uncertainty estimates for the geophysical climatologies produced from GPS-RO measurements. The limitations of the new ionospheric correction when applied to CHAMP (Challenging Minisatellite Payload) measurements are noted. These arise because of the assumption, made when deriving bending angles from the Doppler shift values, that the refractive index is unity at the satellite.
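    Combining the standard dual-frequency linear combination with the new term, the corrected bending angle can be written (with f1, f2 the two GPS carrier frequencies; a reconstruction consistent with the abstract rather than a quotation):

        \alpha_c(a) = \frac{f_1^2 \, \alpha_{L1}(a) - f_2^2 \, \alpha_{L2}(a)}{f_1^2 - f_2^2} + \kappa(a) \, \bigl( \alpha_{L1}(a) - \alpha_{L2}(a) \bigr)^2

    so that setting κ = 0 recovers the standard first-order ionospheric correction.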

  18. An Accurate CT Saturation Classification Using a Deep Learning Approach Based on Unsupervised Feature Extraction and Supervised Fine-Tuning Strategy

    Directory of Open Access Journals (Sweden)

    Muhammad Ali

    2017-11-01

    Current transformer (CT) saturation is one of the significant problems for protection engineers. If CT saturation is not tackled properly, it can have a disastrous effect on the stability of the power system and may even cause a complete blackout. To cope with CT saturation properly, accurate detection or classification must come first. Recently, deep learning (DL) methods have brought a subversive revolution to the field of artificial intelligence (AI). This paper presents a new DL classification method, based on unsupervised feature extraction and a supervised fine-tuning strategy, to classify the saturated and unsaturated regions in cases of CT saturation. In other words, if the protection system is subjected to CT saturation, the proposed method will correctly classify the different levels of saturation with high accuracy. Traditional AI methods are mostly based on supervised learning and rely heavily on human-crafted features. This paper contributes an unsupervised feature extraction, using autoencoders and deep neural networks (DNNs) to extract features automatically without prior knowledge of the optimal features. To validate the effectiveness of the proposed method, a variety of simulation tests are conducted, and the classification results are analyzed using standard classification metrics. Simulation results confirm that the proposed method classifies the different levels of CT saturation with remarkable accuracy and has unique feature extraction capabilities. Lastly, we provide a potential future research direction to conclude this paper.

  19. A neural network-based method for spectral distortion correction in photon counting x-ray CT

    Science.gov (United States)

    Touch, Mengheng; Clark, Darin P.; Barber, William; Badea, Cristian T.

    2016-08-01

    Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables both four-energy-bin acquisition and a full-spectrum mode in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and can be very noisy due to photon starvation in narrow energy bins. To address spectral distortions, we propose and demonstrate a novel artificial neural network (ANN)-based spectral distortion correction mechanism, which learns to undo the distortion in spectral CT, resulting in improved material decomposition accuracy. To address noise, post-reconstruction denoising based on bilateral filtration, which jointly enforces intensity gradient sparsity between spectral samples, is used to further improve the robustness of ANN training and material decomposition accuracy. Our ANN-based distortion correction method is calibrated using 3D-printed phantoms and a model of our spectral CT system. To enable realistic simulations and validation of our method, we first modeled the spectral distortions using experimental data acquired from 109Cd and 133Ba radioactive sources measured with our PCXD. Next, we trained an ANN to learn the relationship between the distorted spectral CT projections and the ideal, distortion-free projections in a calibration step. This required knowledge of the ground truth, distortion-free spectral CT projections, which were obtained by simulating a spectral CT scan of the digital version of a 3D-printed phantom. Once the training was completed, the trained ANN was used to perform the spectral distortion correction.

  20. Current ideas on the pathogenesis of uric acid nephrolithiasis and its correction methods in gouty patients

    Directory of Open Access Journals (Sweden)

    V G Barskova

    2011-12-01

    The paper describes current ideas on the pathogenesis and treatment of nephrolithiasis, a virtually constant attendant of gout. The prevalence of nephrolithiasis is reported to be increasing worldwide. Among all cases of nephrolithiasis, the frequency of uric acid nephrolithiasis ranges from 5 to 40%; that of nephrolithiasis in gout is, according to data from different authors, 7 to 10%. Hyperuricosuria, low urine volume, and low urine pH are considered to be the classical risk factors for uric acid nephrolithiasis. Uric acid nephrolithiasis, including that in gout, even if asymptomatic, is noted to require active therapy. The paper presents the basic principles of treatment for uric acid nephrolithiasis: to normalize urine pH; to eliminate or neutralize the sequels of hyperuricosuria; to correct comorbidities; and to increase urine volume.

  1. Supplemental transmission method for improved PET attenuation correction on an integrated MR/PET

    Energy Technology Data Exchange (ETDEWEB)

    Watson, Charles C., E-mail: charles.c.watson@siemens.com

    2014-01-11

    Although MR image segmentation, combined with information from the PET emission data, has achieved a clinically usable PET attenuation correction (AC) on whole-body MR/PET systems, more accurate PET AC remains one of the main instrumental challenges for quantitative imaging. Incorporating a full conventional PET transmission system in these machines would be difficult, but even a small amount of transmission data might usefully complement the MR-based estimate of the PET attenuation image. In this paper we explore one possible configuration for such a system that uses a small number of fixed line sources placed around the periphery of the patient tunnel. These line sources are implemented using targeted positron beams. The sparse transmission (sTX) data are collected simultaneously with the emission (EM) acquisition. These data, plus a blank scan, are combined with a partially known attenuation image estimate in a modified version of the maximum likelihood for attenuation and activity (MLAA) algorithm, to estimate values of the linear attenuation coefficients (LAC) in unknown regions of the image. This algorithm was tested in two simple phantom experiments. We find that the use of supplemental transmission data can significantly improve the accuracy of the estimated LAC in a truncated region, as well as the estimate of the emitter concentration within the phantom. In the experiments, the bias in the EM+sTX estimate of emitter concentrations was 3–5%, compared to 15–20% with the use of EM-only data. Highlights: • MR-based PET attenuation correction (AC) on MR/PET scanners remains problematic. • We propose a supplemental sparse transmission (sTX) system to improve MR-AC. • The sTX sources were implemented very practically using targeted positron beams. • A novel MLAA-like algorithm was developed to reconstruct these data. • We show that sTX leads to more accurate emission images in two phantom studies.

  2. Patient-Specific Method of Generating Parametric Maps of Patlak Ki without Blood Sampling or Metabolite Correction: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    George A. Sayre

    2011-01-01

    Currently, kinetic analyses using dynamic positron emission tomography (PET) see very limited use despite their potential for improving quantitative accuracy in several clinical and research applications. For targeted-volume applications, such as radiation treatment planning, treatment monitoring, and cerebral metabolic studies, the key to implementing these methods is the determination of an arterial input function, which can include time-consuming analysis of blood samples for metabolite correction. Targeted kinetic applications would become practical for the clinic if blood sampling and metabolite correction could be avoided. To this end, we developed a novel method (Patlak-P) of generating parametric maps that is identical to Patlak Ki (within a global scalar multiple) but does not require the determination of the arterial input function or metabolite correction. In this initial study, we show that Patlak-P (a) mimics Patlak Ki images in terms of visual assessment and target-to-background (TB) ratios of regions of elevated uptake, (b) has higher visual contrast and (generally) better image quality than SUV, and (c) may have an important role in improving radiotherapy planning, therapy monitoring, and neurometabolism studies.

  3. Research on testing instrument and method for correction of the uniformity of image intensifier fluorescence screen brightness

    Science.gov (United States)

    Qiu, YaFeng; Chang, BenKang; Qian, YunSheng; Fu, RongGuo

    2011-09-01

    Testing the parameters of the image intensifier screen is a precondition for researching and developing third-generation image intensifiers. Brightness images of the tested fluorescent screens appear bright in the middle and dark at the edges, so the performance of a screen cannot be evaluated directly. We analyzed the energy and density distributions of the electrons and derived a correction; after correction, the image on the computer is very uniform, so the uniformity of the fluorescent screen brightness can be judged directly. This also shows that the correction method is reasonable and yields an image close to the ideal. With the brightness uniformity correction in place, a testing instrument was developed. In a vacuum better than 1×10⁻⁴ Pa, an area-source electron gun emits electrons; accelerated by the electric field, the high-speed electrons bombard the screen and make it luminesce. Using testing equipment such as an imaging luminance meter, a fast storage photometer, an optical power meter, a current meter and photosensitive detectors, the screen brightness, uniformity, luminous efficiency and afterglow can be tested respectively. System performance is explained, the testing method is established, and test results are given.
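    The paper derives its correction from the electron energy and density distribution; purely as a generic illustration of dividing out a known centre-bright shading pattern (a flat-field-style correction assumed here, not the authors' exact procedure):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def uniformity_correct(image, sigma=50):
            """Estimate the smooth bright-centre/dark-edge shading with a heavy
            Gaussian blur and divide it out, so residual non-uniformity of the
            screen itself can be judged directly."""
            shading = gaussian_filter(image.astype(float), sigma)
            return image * (shading.mean() / np.maximum(shading, 1e-9))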

  4. Venous oxygen saturation.

    Science.gov (United States)

    Hartog, Christiane; Bloos, Frank

    2014-12-01

    Early detection and rapid treatment of tissue hypoxia are important goals. Venous oxygen saturation is an indirect index of global oxygen supply-to-demand ratio. Central venous oxygen saturation (ScvO2) measurement has become a surrogate for mixed venous oxygen saturation (SvO2). ScvO2 is measured by a catheter placed in the superior vena cava. After results from a single-center study suggested that maintaining ScvO2 values >70% might improve survival rates in septic patients, international practice guidelines included this target in a bundle strategy to treat early sepsis. However, a recent multicenter study with >1500 patients found that the use of central hemodynamic and ScvO2 monitoring did not improve long-term survival when compared to the clinical assessment of the adequacy of circulation. It seems that if sepsis is recognized early, a rapid initiation of antibiotics and adequate fluid resuscitation are more important than measuring venous oxygen saturation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Metamaterial saturable absorber mirror.

    Science.gov (United States)

    Dayal, Govind; Ramakrishna, S Anantha

    2013-02-01

    We propose a metamaterial saturable absorber mirror at mid-infrared wavelengths that shows a saturation of absorption with the intensity of incident light and switches to a reflecting state. The design consists of an array of circular metallic disks separated by a thin film of vanadium dioxide (VO2) from a continuous metallic film. The heating due to the absorption in the absorptive state causes the VO2 to transit from the low-temperature insulating phase to a metallic phase. The metamaterial switches from an absorptive state (R ≃ 0.1%) to a reflective state (R > 95%) at a specific threshold intensity of the incident radiation corresponding to the phase transition of VO2, resulting in the saturation of absorption in the metamaterial. Computer simulations show over 99.9% peak absorbance, a resonant bandwidth of about 0.8 μm at a wavelength of 10.22 μm, and a saturation intensity of 140 mW cm⁻² for undoped VO2 at room temperature. We also carried out numerical simulations to investigate the effects of localized heating and temperature distribution by solving the heat diffusion problem.

  6. Method to determine the position-dependant metal correction factor for dose-rate equivalent laser testing of semiconductor devices

    Science.gov (United States)

    Horn, Kevin M.

    2013-07-09

    A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.

  7. Primary outcomes of the congenital vertical talus correction using the Dobbs method of serial casting and limited surgery.

    Science.gov (United States)

    Aslani, Hossein; Sadigi, Ali; Tabrizi, Ali; Bazavar, Mohammadreza; Mousavi, Mehdi

    2012-08-01

    The traditional treatment for congenital vertical talus, which involves serial casting and extensive soft-tissue releases, has been associated with severe stiffness and other complications in adolescents and adults. Our hypothesis was that favorable results would be obtained using the Dobbs method of serial manipulation, casting, and limited surgery for vertical talus correction, even in older children and syndromic cases. The present study therefore aimed to evaluate the Dobbs method in such cases. We treated 15 feet of 10 patients (aged from 1 month to 9 years) using manipulation and serial casting (the reverse Ponseti method) followed by percutaneous Achilles tenotomy and limited open reduction of the talonavicular joint. All patients were evaluated both clinically and radiologically over a mean follow-up period of 2 years. After 2 years, all patients had plantigrade and flexible feet with good radiographic correction. The mean talocalcaneal angle before (70.5° ± 10.5) and after (31° ± 5.2) treatment and the talar axis-metatarsal base angle before (60° ± 11.4) and after (15° ± 6.7) treatment were significantly improved (P < 0.05). Serial manipulation and casting followed by limited surgery (the Dobbs method) was successful in treating idiopathic congenital vertical talus. Our results also showed that this method resulted in an excellent outcome in both idiopathic and syndromic congenital vertical talus, even in older children.

  8. Mean grain size detection of DP590 steel plate using a corrected method with electromagnetic acoustic resonance.

    Science.gov (United States)

    Wang, Bin; Wang, Xiaokai; Hua, Lin; Li, Juanjuan; Xiang, Qing

    2017-04-01

    Electromagnetic acoustic resonance (EMAR) is a promising method for determining the mean grain size of metal materials with high precision. The basic ultrasonic attenuation theory used for mean grain size detection by EMAR comes from single-phase theory. In this paper, EMAR testing was carried out based on the ultrasonic attenuation theory. The detection results show that a double-peak phenomenon occurs in the EMAR testing of DP590 steel plate. The dual-phase structure of DP590 steel is the cause of the double-peak phenomenon in the EMAR testing. In response to this phenomenon, a corrected EMAR method was put forward to detect the mean grain size of dual-phase steel. Compared with the traditional attenuation evaluation method and the uncorrected EMAR method, the corrected EMAR method shows great effectiveness and superiority for mean grain size detection in DP590 steel plate. Copyright © 2016. Published by Elsevier B.V.

  9. Spurious PIV vector detection and correction using a penalized least-squares method with adaptive order differentials

    Science.gov (United States)

    Tang, Chunxiao; Sun, Wenfei; He, Hayi; Li, Hongqiang; Li, Enbang

    2017-07-01

    Spurious vectors (also called "outliers") in particle image velocimetry (PIV) experiments can be classified into two categories according to their space distribution characteristics: scattered and clustered outliers. Most of the currently used validation and correction methods treat these two kinds of outliers together without discrimination. In this paper, we propose a new technique based on a penalized least-squares (PLS) method, which allows automatic classification of flows with different types of outliers. PIV vector fields containing scattered outliers are detected and corrected using higher-order differentials, while lower-order differentials are used for the flows with clustered outliers. The order of differentials is determined adaptively by generalized cross-validation and outlier classification. A simple calculation method of eigenvalues of different orders is also developed to expedite computation speed. The performance of the proposed method is demonstrated with four different velocity fields, and the results show that it works better than conventional methods, especially when the number of outliers is large.
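    A 1-D sketch of the penalized least-squares idea with a selectable differential order (the adaptive order selection via generalized cross-validation and outlier classification is omitted for brevity):

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def pls_smooth(u, lam=1.0, order=2):
            """Minimize ||u - z||^2 + lam * ||D^order z||^2 for one velocity
            component u; vectors deviating strongly from the smooth field z can
            then be flagged as outliers and replaced. Lower orders suit
            clustered outliers, higher orders scattered ones."""
            m = len(u)
            D = sparse.eye(m, format="csr")
            for _ in range(order):
                D = D[1:] - D[:-1]          # repeated first differences
            z = spsolve((sparse.eye(m) + lam * D.T @ D).tocsc(), u)
            return z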

  10. An online model correction method based on an inverse problem: Part I—Model error estimation by iteration

    Science.gov (United States)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-10-01

    Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using past data, which are presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.

  11. A distance correction method for improving the accuracy of particle coal online X-ray fluorescence analysis - Part 2: Method and experimental investigation

    Science.gov (United States)

    Zhang, Yan; Jia, Wen Bao; Gardner, Robin; Shan, Qing; Zhang, Xin Lei; Hou, Guojing; Chang, Hao Ping

    2017-12-01

    The distance from the X-ray fluorescence (XRF) spectrometer to the sample surface varies with coal particle size, degrading the accuracy of online XRF measurement. To improve the accuracy of online XRF analysis of particle coal, an iterative distance correction method was established, based on the relationship between XRF intensity and distance. To verify the effectiveness of this method, five coal samples with different particle sizes but identical composition were measured by the online XRF analyzer directly above the conveyor belt; meanwhile, the distances between the XRF spectrometer and the samples' surfaces were obtained with a laser rangefinder. The results show that the average distance decreases with decreasing particle size. By comparing the results before and after applying the distance correction method, we demonstrate that the measurement accuracy of online XRF analysis for particle coal can be significantly increased. The distance correction method can be used in the development of online XRF analysis techniques applicable to real-time industrial processes.
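    A sketch of the correction step (the exponential intensity-distance model and parameter names are assumptions for illustration; the paper fits its own empirical relationship):

        import numpy as np

        def correct_to_reference(intensity, d, d_ref, mu):
            """Scale an XRF intensity measured at stand-off distance d back to
            the reference distance d_ref, given a fitted decay constant mu of
            an assumed I(d) = I0 * exp(-mu * d) relationship."""
            return intensity * np.exp(mu * (d - d_ref))

    With the rangefinder supplying d for every spectrum, the corrected intensities are then iterated with the calibration model until the reported concentrations converge.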

  12. Determination of avermectins by the internal standard recovery correction - high performance liquid chromatography - quantitative Nuclear Magnetic Resonance method.

    Science.gov (United States)

    Zhang, Wei; Huang, Ting; Li, Hongmei; Dai, Xinhua; Quan, Can; He, Yajuan

    2017-09-01

    Quantitative Nuclear Magnetic Resonance (qNMR) is widely used to determine the purity of organic compounds. For compounds of lower purity, especially those with molecular weights above 500, qNMR risks errors in the purity estimate, because impurity peaks are likely to be incompletely separated from the peak of the major component. In this study, an offline ISRC-HPLC-qNMR (internal standard recovery correction - high performance liquid chromatography - qNMR) method was developed to overcome this problem. It is accurate, because it excludes the influence of impurities; it is low-cost, because it uses common mobile phases; and it extends the applicable scope of qNMR. In this method, a mixed solution of the sample and an internal standard is separated by HPLC with common mobile phases, and only the eluents of the analyte and the internal standard are collected in the same tube. After evaporation and re-dissolution, the solution is measured by qNMR. A recovery correction factor is determined by comparing the solutions before and after these procedures. After correction, the mass fraction of the analyte was constant, accurate and precise, even though the sample loss varied during these procedures, or even with poor HPLC resolution. Avermectin B1a with a purity of ~93% and a molecular weight of 873 was analyzed. Moreover, the homologues of avermectin B1a were determined by identification and quantitative analysis with tandem mass spectrometry and HPLC, and the results were consistent with those of the traditional mass balance method. The results show that the method could be widely used for organic compounds, and could further promote qNMR as a primary method in international metrological systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. INVESTIGATING THE EFFECTIVENESS OF KINESIO® TAPING SPACE CORRECTION METHOD IN HEALTHY ADULTS ON PATELLOFEMORAL JOINT AND SUBCUTANEOUS SPACE.

    Science.gov (United States)

    Lyman, Katie J; Keister, Kassiann; Gange, Kara; Mellinger, Christopher D; Hanson, Thomas A

    2017-04-01

    Limited quantitative, physiological evidence exists regarding the effectiveness of Kinesio® Taping methods, particularly with respect to their potential ability to affect the underlying physiological joint space and structures. To better understand the impact of these techniques, the underlying physiological processes must be investigated in addition to more subjective measures related to pain in unhealthy tissues. The purpose of this study was to determine whether the Kinesio® Taping Space Correction Method creates a significant difference in patellofemoral joint space, as quantified by diagnostic ultrasound. Pre-test/post-test prospective cohort study. Thirty-two participants with bilaterally healthy knees and no history of surgery took part in the study. For each participant, diagnostic ultrasound was used to collect three measurements: the patellofemoral joint space, the distance from the skin to the superficial patella, and the distance from the skin to the patellar tendon. The Kinesio® Taping Space Correction Method was then applied. After a ten-minute waiting period in a non-weight-bearing position, all three measurements were repeated. Each participant served as his or her own control. Paired t-tests showed a statistically significant difference (mean difference = 1.1 mm, t(31) = 2.823, p = 0.008, g = 0.465) between baseline and taped conditions in the space between the posterior surface of the patella and the medial femoral condyle. Neither the distance from the skin to the superficial patella nor the distance from the skin to the patellar tendon increased to a statistically significant degree. The application of the Kinesio® Taping Space Correction Method increases the patellofemoral joint space in healthy adults by increasing the distance between the patella and the medial femoral condyle, though it does not increase the distance from the skin to the superficial patella nor to the patellar tendon. Level of evidence: 3.

  14. Ilizarov technique and limited surgical methods for correction of post-traumatic talipes equinovarus in children.

    Science.gov (United States)

    Wang, Xiao Jian; Chang, Feng; Su, Yunxing; Chen, Bin; Song, Jie-Fu; Wei, Xiao-Chun; Wei, Lei

    2017-10-01

    The objective of this study was to evaluate the efficacy and safety of the Ilizarov distraction technique combined with limited surgical operations in the treatment of post-traumatic talipes equinovarus in children. Eighteen post-traumatic deformed feet in 15 patients who received Ilizarov frame application and limited soft-tissue release or osteotomy were selected for this study. After removal of the frame, an ankle-foot orthosis was used continuously for another 6-12 months. Pre- and post-operatively, the International Clubfoot Study Group (ICFSG) score was employed to evaluate gait and range of motion of the ankle joint. Radiographic assessment was also conducted. Patients were followed up for 22 (17-32) months. The Ilizarov frame was applied for a mean duration of 5.5 (4-9) months. When it was removed, gait was improved significantly in all patients. The correction time was 6-8 weeks for patients who underwent soft-tissue release and 8-12 weeks for those with bone osteotomy. At the last follow-up assessment, the differences between pre- and post-operative plantar-flexion angle, dorsiflexion, motion of the ankle joint and talocalcaneal angle were significant (all P < 0.05). The Ilizarov technique combined with limited surgery is an effective and safe treatment for post-traumatic talipes equinovarus in children. © 2017 Royal Australasian College of Surgeons.

  15. How about a Bayesian M/EEG imaging method correcting for incomplete spatio-temporal priors

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Attias, Hagai T.; Sekihara, Kensuke

    2013-01-01

    In this contribution we present a hierarchical Bayesian model, sAquavit, to tackle the highly ill-posed problem that follows with MEG and EEG source imaging. Our model facilitates spatio-temporal patterns through the use of both spatial and temporal basis functions. In contrast to most previous spatio-temporal inverse M/EEG models, the proposed model consists of two source terms, namely, a spatio-temporal pattern term limiting the source configuration to a spatio-temporal subspace and a source correcting term to pick up source activity not covered by the spatio-temporal prior belief. We have tested the model on both artificial data and real EEG data in order to demonstrate the efficacy of the model. The model was tested at different SNRs (-10.0, -5.2, -3.0, -1.0, 0, 0.8, 3.0 dB) using white noise. At all SNRs the sAquavit performs best in AUC measure, e.g. at SNR = 0 dB ...

  16. PAIN IN RHEUMATOID ARTHRITIS: SPECIFIC FEATURES OF ITS DEVELOPMENT AND METHODS OF CORRECTION

    Directory of Open Access Journals (Sweden)

    Yuri Aleksandrovich Olyunin

    2010-06-01

    The pain syndrome holds a central position in the clinical picture of rheumatoid arthritis (RA). Articular inflammation is an essential, but not the only, factor that determines the occurrence of pain. Extraarticular soft tissue pathology can play an important role in the formation of pain perceptions in RA. The pain that increases on movement with involvement of affected structures, as well as local tenderness on palpation and dysfunction of an altered segment, are the major clinical manifestations of extraarticular soft tissue involvement in RA. Swelling in the area of the corresponding tendons and synovial bursae can be seen when superficially located anatomic formations are involved. Magnetic resonance imaging and ultrasonography permit more accurate determination of the site and pattern of an involvement. The pain and functional impairments associated with extraarticular soft tissue pathology determine a need for additional therapy that can correct the existing disorders and improve the quality of life in patients. The major components of this treatment are a sparing regimen and systemic and local drug therapy. Diclofenac sodium is one of the most universal agents, allowing simultaneous control over various pathogenetic mechanisms of the disease. Local glucocorticoids may be used if the sparing regimen and nonsteroidal anti-inflammatory drugs fail to control the pain syndrome effectively.

  17. Morphometric substantiation of a fixation method choice at surgical correction of spondylolisthesis

    Directory of Open Access Journals (Sweden)

    Anisimova Е.А.

    2010-09-01

    The purpose was to reveal patterns of variability in the morphometric characteristics of the lumbar vertebrae and sacrum, in order to select more adequate standard sizes and introduction trajectories of corrective hardware in the surgical treatment of spondylolisthesis. The material comprised preparations of the lumbar vertebrae and sacrum from 60 skeletons, 110 CT scans of men and women of the first and second periods of mature age without visible spinal pathology, and 300 CT scans of patients with spondylolisthesis. Data on the age-related variability and sexual dimorphism of the lumbar vertebrae and sacrum were obtained. The results of surgical treatment of 288 patients with spondylolisthesis during 1995-2008 were analyzed; since 2003, 160 of these patients were managed with preoperative planning that took into account the morphometric characteristics of the vertebrae and sacrum. Hardware should be installed and oriented during reduction with regard to the features of the posterior structures and the anterior support complex of the lumbar vertebrae and sacrum; this allows adequate decompression of neurovascular structures in 85-90% of cases, with reliable correction and stabilization of the damaged lumbosacral segments.

  18. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

    We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
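
    To make the comparison concrete, the following sketch casts extrapolation as a fixed-order one-step method in the way the abstract describes: an Aitken-Neville tableau built over explicit Euler substeps with the harmonic sequence. This is a minimal illustration, not the ODEX or DOP853 production codes referenced above; the step count and test problem are arbitrary choices.

```python
import numpy as np

def euler_substeps(f, t, y, h, n):
    """Advance y over [t, t+h] with n explicit Euler substeps."""
    yk = np.asarray(y, dtype=float)
    for k in range(n):
        yk = yk + (h / n) * f(t + k * h / n, yk)
    return yk

def extrapolation_step(f, t, y, h, levels=4):
    """One step of Richardson extrapolation over explicit Euler using the
    harmonic sequence 1, 2, ..., levels; the result is a one-step method
    of order ~levels (Aitken-Neville recursion, first-order base method)."""
    ns = list(range(1, levels + 1))
    T = [euler_substeps(f, t, y, h, n) for n in ns]
    for j in range(1, levels):
        for i in range(levels - 1, j - 1, -1):
            r = ns[i] / ns[i - j]
            T[i] = T[i] + (T[i] - T[i - 1]) / (r - 1.0)
    return T[-1]

# usage: y' = -y integrated to t = 1; compare against exp(-1)
f = lambda t, y: -y
y, t, h = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    y, t = extrapolation_step(f, t, y, h), t + h
print(y[0], np.exp(-1.0))
```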

  19. Comparison of three 15N methods to correct for microbial contamination when assessing in situ protein degradability of fresh forages.

    Science.gov (United States)

    Kamoun, M; Ammar, H; Théwis, A; Beckers, Y; France, J; López, S

    2014-11-01

    The use of stable (15)N as a marker to determine microbial contamination in nylon bag incubation residues to estimate protein degradability was investigated. Three methods using (15)N were compared: (15)N-labeled forage (dilution method, LF), (15)N enrichment of rumen solids-associated bacteria (SAB), and (15)N enrichment of rumen liquid-associated bacteria (LAB). Herbage from forages differing in protein and fiber contents (early-cut Italian ryegrass, late-cut Italian ryegrass, and red clover) was freeze-dried and ground and then incubated in situ in the rumen of 3 steers for 3, 6, 12, 24, and 48 h using the nylon bag technique. The (15)N-labeled forages were obtained by fertilizing the plots where herbage was grown with (15)NH4(15)NO3. Unlabeled forages (obtained from plots fertilized with NH4NO3) were incubated at the same time that ((15)NH4)2SO4 was continuously infused into the rumen of the steers, and then pellets of labeled SAB and LAB were isolated by differential centrifugation of samples of ruminal contents. The proportion of bacterial N in the incubation residues increased from 0.09 and 0.45 g bacterial N/g total N at 3 h of incubation to 0.37 and 0.85 g bacterial N/g total N at 48 h of incubation for early-cut and late-cut ryegrass, respectively. There were differences (P < 0.05) among the correction methods. The uncorrected protein degradability of the most contaminated forage (late-cut ryegrass) was 0.51, whereas the corrected values were 0.85, 0.84, and 0.77 for the LF, SAB, and LAB methods, respectively. With early-cut ryegrass and red clover, the differences between uncorrected and corrected values ranged between 6% and 13%, with small differences among the labeling methods. Generally, methods using labeled forage or labeled SAB and LAB provided similar corrected degradability values. The accuracy in estimating the extent of degradation of protein in the rumen from in situ disappearance curves is improved when values are corrected for microbial contamination of the bag residue.
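
    The single-time-point form of this correction is simple: only the feed-origin N in the bag residue counts as undegraded. A minimal sketch follows (the paper's effective degradability additionally integrates over the whole disappearance curve; the numbers here are illustrative, chosen to echo the 0.51 uncorrected value above).

```python
def corrected_degradability(n_residue, n_incubated, bacterial_fraction):
    """Protein degradability with the bag residue corrected for microbial N.
    bacterial_fraction: g bacterial N per g total residue N (from 15N)."""
    feed_n_residue = n_residue * (1.0 - bacterial_fraction)
    return 1.0 - feed_n_residue / n_incubated

# uncorrected (bacterial_fraction=0) vs corrected at a single time point
print(corrected_degradability(0.49, 1.0, 0.0))   # ~0.51 raw
print(corrected_degradability(0.49, 1.0, 0.85))  # ~0.93 corrected
```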

  20. An Improved Dynamical Downscaling Method with GCM Bias Corrections and Its Validation with 30 Years of Climate Simulations

    KAUST Repository

    Xu, Zhongfeng

    2012-09-01

    An improved dynamical downscaling method (IDD) with general circulation model (GCM) bias corrections is developed and assessed over North America. A set of regional climate simulations is performed with the Weather Research and Forecasting Model (WRF) version 3.3 embedded in the National Center for Atmospheric Research's (NCAR's) Community Atmosphere Model (CAM). The GCM climatological means and the amplitudes of interannual variations are adjusted based on the National Centers for Environmental Prediction (NCEP)-NCAR global reanalysis products (NNRP) before using them to drive WRF. In this study, the WRF downscaling experiments are identical except for the initial and lateral boundary conditions, which are derived from the NNRP, the original GCM output, and the bias-corrected GCM output, respectively. The analysis finds that the IDD greatly improves the downscaled climate in both climatological means and extreme events relative to the traditional dynamical downscaling approach (TDD). The errors of downscaled climatological mean air temperature, geopotential height, wind vector, moisture, and precipitation are greatly reduced when the GCM bias corrections are applied. In the meantime, IDD also improves the downscaled extreme events, characterized by the reduced errors in 2-yr return levels of surface air temperature and precipitation. In comparison with TDD, IDD is also able to produce a more realistic probability distribution in summer daily maximum temperature over the central U.S.-Canada region as well as in summer and winter daily precipitation over the middle and eastern United States. © 2012 American Meteorological Society.
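
    The core of such a bias correction can be written compactly: replace the GCM climatological mean with the reanalysis mean and rescale the interannual anomalies to the reanalysis amplitude. The sketch below is a schematic of that idea under simple assumptions (yearly fields on a common grid and period), not the WRF/CAM pipeline itself.

```python
import numpy as np

def bias_correct(gcm, ref, axis=0):
    """Adjust GCM fields so their climatological mean and interannual
    standard deviation match a reanalysis reference over a common period.
    gcm, ref: arrays shaped (year, lat, lon)."""
    g_mean, r_mean = gcm.mean(axis=axis), ref.mean(axis=axis)
    g_std, r_std = gcm.std(axis=axis), ref.std(axis=axis)
    return r_mean + (gcm - g_mean) * (r_std / g_std)
```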

  1. A GENERALIZED NON-LINEAR METHOD FOR DISTORTION CORRECTION AND TOP-DOWN VIEW CONVERSION OF FISH EYE IMAGES

    Directory of Open Access Journals (Sweden)

    Vivek Singh Bawa

    2017-06-01

    Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the immediate surroundings of the vehicle is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. It also presents a generalized framework for generating the top-down view of images captured by cameras with fish-eye lenses mounted on vehicles, irrespective of pitch or tilt angle. The proposed approach comprises two major steps: correcting the fish-eye lens images to rectilinear images, and generating the top-view perspective of the corrected images. The images captured by the fish-eye lens possess barrel distortion, for which a nonlinear and non-iterative method is used. Thereafter, homography is used to obtain the top-down view of the corrected images. The method targets a wide, distortion-free field of view and a camera-perspective-independent top-down view of the vehicle's surroundings, at minimum computational cost, which is essential given the limited computational power available on vehicles.
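
    A rough stand-in for the two-step pipeline (undistortion, then homography to a ground plane) can be put together with standard OpenCV calls. Note the hedges: the paper uses its own non-iterative distortion model, whereas this sketch uses OpenCV's generic radial model, and the camera matrix, distortion coefficients, file name, and point correspondences below are all hypothetical.

```python
import cv2
import numpy as np

# hypothetical intrinsics and barrel-dominant distortion (k1, k2, p1, p2)
K = np.array([[400.0, 0.0, 640.0], [0.0, 400.0, 360.0], [0.0, 0.0, 1.0]])
D = np.array([-0.3, 0.1, 0.0, 0.0])

img = cv2.imread("fisheye_frame.png")        # hypothetical input frame
rectilinear = cv2.undistort(img, K, D)       # step 1: remove lens distortion

# step 2: homography from four ground-plane correspondences (image -> top view)
src = np.float32([[500, 700], [780, 700], [900, 400], [380, 400]])
dst = np.float32([[0, 400], [200, 400], [200, 0], [0, 0]])
H = cv2.getPerspectiveTransform(src, dst)
top_down = cv2.warpPerspective(rectilinear, H, (200, 400))
```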

  2. EVALUATION OF NEUTRON SCATTERING CORRECTION USING THE SEMI-EMPIRICAL METHOD AND THE SHADOW-CONE METHOD FOR THE NEUTRON FIELD OF THE KOREA ATOMIC ENERGY RESEARCH INSTITUTE.

    Science.gov (United States)

    Lee, Seung Kyu; Kim, Sang I; Lee, Jungil; Chang, Insu; Kim, Jang-Lyul; Kim, Hyoungtaek; Kim, Min Chae; Kim, Bong-Hwan

    2017-10-19

    When neutron survey metres are calibrated in neutron fields, the results for room- and air-scattered neutrons vary according to the distance from the source and the size, shape and construction of the neutron calibration room. ISO 8529-2 recommends four approaches for correcting these effects: the shadow-cone method, semi-empirical method, generalised fit method and reduced-fitting method. In this study, neutron scattering effects are evaluated and compared using the shadow-cone and semi-empirical methods for the neutron field of the Korea Atomic Energy Research Institute (KAERI). The neutron field is constructed using a 252Cf neutron source positioned in the centre of the neutron calibration room. To compare the neutron scattering effects obtained using the two correction methods, measurements and simulations are performed at twenty different positions using KAERI's Bonner sphere spectrometer (BBS) and the Monte Carlo N-Particle code, respectively. Neutron spectra are measured by a europium-activated lithium iodide [6LiI(Eu)] scintillator in combination with the BBS. The calibration factors obtained using the two methods show good agreement within 1.1%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
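
    In the shadow-cone technique the corrected (direct-only) reading is obtained by subtraction, since the cone blocks the direct beam and leaves only the scattered component. A one-line sketch, with hypothetical survey-meter readings:

```python
def shadow_cone_corrected(total_reading, cone_reading):
    """ISO 8529-2 shadow-cone idea: the reading with the cone in place
    approximates the room/air-scattered component, so
    direct = total - scattered."""
    return total_reading - cone_reading

print(shadow_cone_corrected(total_reading=52.0, cone_reading=7.5))  # uSv/h
```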

  3. Multi-perturbation stochastic parallel gradient descent method for wavefront correction.

    Science.gov (United States)

    Wu, Kenan; Sun, Yang; Huai, Ying; Jia, Shuqin; Chen, Xi; Jin, Yuqi

    2015-02-09

    The multi-perturbation stochastic parallel gradient descent (SPGD) method for adaptive optics is presented in this work. The method is based on a new architecture. The incoming beam with distorted wavefront is split into N sub-beams. Each sub-beam is modulated by a wavefront corrector and its performance metric is measured subsequently. Adaptive system based on the multi-perturbation SPGD can operate in two modes - the fast descent mode and the modal basis updating mode. Control methods of the two operation modes are given. Experiments were carried out to prove the effectiveness of the proposed method. Analysis as well as experimental results showed that the two operation modes of the multi-perturbation SPGD enhance the conventional SPGD in different ways. The fast descent mode provides faster convergence than the conventional SPGD. The modal basis updating mode can optimize the modal basis set for SPGD with global coupling.
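
    For reference, the conventional SPGD iteration that both operation modes build on fits in a few lines: apply a random bipolar perturbation, measure the metric on both sides, and step along the estimated gradient. The gain, perturbation amplitude, and toy metric below are arbitrary stand-ins for a real wavefront-sensor loop.

```python
import numpy as np

def spgd(measure_metric, u0, gain=0.5, sigma=0.1, iters=200):
    """Conventional SPGD loop: apply a random bipolar perturbation du to the
    corrector controls u, measure the metric change dJ, and step along the
    estimated gradient (maximizing the metric)."""
    u = np.array(u0, dtype=float)
    for _ in range(iters):
        du = sigma * np.random.choice([-1.0, 1.0], size=u.shape)
        dJ = measure_metric(u + du) - measure_metric(u - du)
        u += gain * dJ * du
    return u

# usage with a toy quadratic metric peaked at u_star
u_star = np.array([0.3, -0.2, 0.8])
metric = lambda u: -np.sum((u - u_star) ** 2)
print(spgd(metric, np.zeros(3)))
```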

  4. A new method of cornea modulation with excimer laser for simultaneous correction of presbyopia and ametropia.

    Science.gov (United States)

    Uthoff, Detlef; Pölzl, Markus; Hepper, Daniel; Holland, Detlef

    2012-11-01

    To investigate the outcomes of simultaneous correction of presbyopia and ametropia by a bi-aspheric cornea modulation technique, based on the creation of a central area that is hyperpositive for near vision, leaving the pericentral cornea for far vision, in hyperopic, emmetropic, and myopic presbyopic patients. Sixty eyes of 30 patients were treated with the PresbyMAX technique by one surgeon (D.U.) at the Eye Hospital Bellevue, Kiel, Germany. Twenty eyes with hyperopic presbyopia, 20 eyes with emmetropic presbyopia, and 20 eyes with myopic presbyopia underwent Femto-LASIK and were assessed up to 6 months postoperatively. All eyes underwent cornea treatment using the PresbyMAX® software, delivering a bi-aspheric multifocal ablation profile developed by SCHWIND eye-tech-solutions (Kleinostheim, Germany). All flaps were created by a Ziemer LDV femtosecond laser (Port, Switzerland). The mean binocular distance uncorrected visual acuity (DUCVA) improved in the hyperopic group from 0.28 ± 0.29 logMAR to -0.04 ± 0.07 logMAR; in the emmetropic group it changed from -0.05 ± 0.07 logMAR to 0.02 ± 0.11 logMAR, and in the myopic group it improved from 0.78 ± 0.27 logMAR to 0.09 ± 0.08 logMAR. The mean binocular near uncorrected visual acuity (NUCVA) increased in the hyperopic group from 0.86 ± 0.62 logRAD to 0.24 ± 0.23 logRAD, and in the emmetropic group from 0.48 ± 0.14 logRAD to 0.18 ± 0.11 logRAD. The myopic presbyopes showed a decrease of the mean binocular NUCVA from 0.04 ± 0.19 logRAD to 0.12 ± 0.18 logRAD. The mean postoperative spherical equivalent for distance refraction was -0.13 ± 0.61 D for the hyperopic presbyopia group, -0.43 ± 0.35 D for the emmetropic presbyopia group, and -0.68 ± 0.42 D for the myopic presbyopia group, whereas the software targeted -0.50 D in all groups. In presbyopic patients with refractive errors but without symptomatic cataracts, PresbyMAX® will decrease the presbyopic symptoms and correct far distance vision.

  5. Gluon saturation beyond (naive) leading logs

    Energy Technology Data Exchange (ETDEWEB)

    Beuf, Guillaume

    2014-12-15

    An improved version of the Balitsky–Kovchegov equation is presented, with a consistent treatment of kinematics. That improvement makes it possible to resum the most severe of the large higher-order corrections which plague the conventional versions of high-energy evolution equations with approximate kinematics. This result represents a further step towards bringing high-energy QCD scattering processes under control beyond strict leading logarithmic accuracy and with gluon saturation effects.

  6. Should methods of correction for multiple comparisons be applied in pharmacovigilance?

    Directory of Open Access Journals (Sweden)

    Lorenza Scotti

    2015-12-01

    Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods that adjust for the multiple testing problem are needed when at least some of the drug-outcome relationships under study are known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated from the conventional disproportionality measures was compared with the number remaining after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Among them, 1 was identified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered and warnings about some of them are known, multiple comparison methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
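
    The rFDR estimator itself is not reproduced here, but the baseline it refines, Benjamini-Hochberg FDR control over the disproportionality p-values, looks as follows (a generic sketch, not the SAFEGUARD analysis code):

```python
import numpy as np

def bh_fdr(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: find the largest k with
    p_(k) <= q*k/m and flag the k smallest p-values as discoveries."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = int(np.nonzero(passed)[0].max()) + 1 if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True
    return keep  # True = signal survives the FDR adjustment

print(bh_fdr([0.001, 0.008, 0.039, 0.041, 0.27, 0.9]))  # first two survive
```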

  7. cgCorrect: a method to correct for confounding cell-cell variation due to cell growth in single-cell transcriptomics

    Science.gov (United States)

    Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.

    2017-06-01

    Accessing gene expression at a single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured when using traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still poses unresolved challenges with respect to normalization, visualization and modeling of the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability for the cell-growth-corrected mRNA transcript number given the measured, cell size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell's volume during the cell cycle. cgCorrect can be used both for data normalization and to analyze the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on simulated data, on single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells, and on quantitative single-cell RNA-sequencing data obtained from mouse embryonic stem cells. We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analysis.
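
    In its deterministic limit the underlying assumption, expected transcript number proportional to cell volume over the cycle, reduces to a simple rescaling. The toy sketch below shows only that limit; cgCorrect itself works probabilistically on the count distribution, and the volume factors here are made up.

```python
import numpy as np

def growth_correct(counts, rel_volume):
    """Deterministic limit of the cell-growth correction: divide each cell's
    counts by its relative volume (expected transcripts ~ volume)."""
    return counts / rel_volume[:, None]

counts = np.random.poisson(20.0, size=(5, 3)).astype(float)  # cells x genes
rel_volume = np.array([1.0, 1.2, 1.5, 1.8, 2.0])             # hypothetical
print(growth_correct(counts, rel_volume))
```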

  8. A method for the dynamic correction of B0-related distortions in single-echo EPI at 7T.

    Science.gov (United States)

    Dymerska, Barbara; Poser, Benedikt A; Barth, Markus; Trattnig, Siegfried; Robinson, Simon D

    2016-07-07

    We propose a method to calculate field maps from the phase of each EPI in an fMRI time series. These field maps can be used to correct the corresponding magnitude images for distortion caused by inhomogeneity in the static magnetic field. In contrast to conventional static distortion correction, in which one 'snapshot' field map is applied to all subsequent fMRI time points, our method also captures dynamic changes to B0 which arise due to motion and respiration. The approach is based on the assumption that the non-B0-related contribution to the phase measured by each radio-frequency coil, which is dominated by the coil sensitivity, is stable over time and can therefore be removed to yield a field map from EPI. Our solution addresses imaging with multi-channel coils at ultra-high field (7T), where phase offsets vary rapidly in space, phase processing is non-trivial and distortions are comparatively large. We propose using a dual-echo gradient echo reference scan for the phase offset calculation, which yields estimates with a high signal-to-noise ratio. An extrapolation method is proposed which yields reliable estimates for phase offsets even where motion is large, and a tailored phase unwrapping procedure for EPI is suggested which gives robust results in regions with disconnected tissue or strong signal decay. Phase offsets are shown to be stable during long measurements (40 min) and for large head motions. The dynamic distortion correction proposed here is found to work accurately in the presence of large motion (up to 8.1°), whereas a conventional method based on a single field map fails to correct, or even introduces, distortions (up to 11.2 mm). Finally, we show that dynamic unwarping increases the temporal stability of EPI in the presence of motion. Our approach can be applied to any EPI measurements without the need for sequence modification. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
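
    The two quantitative ingredients of such a dynamic correction are easy to state: a per-time-point field map from the offset-corrected EPI phase, and the resulting voxel shift along the phase-encode direction. A schematic sketch under the stated assumptions (unwrapped phase, time-invariant coil offsets; the parameter names are illustrative):

```python
import numpy as np

def epi_fieldmap_hz(phase_epi, phase_offset, te):
    """Per-time-point field map (Hz) from unwrapped EPI phase, assuming the
    coil phase offset (from a dual-echo reference scan) is time-invariant."""
    return (phase_epi - phase_offset) / (2.0 * np.pi * te)

def pe_voxel_shift(fieldmap_hz, echo_spacing, n_pe_lines):
    """Displacement (in voxels) along the phase-encode direction:
    shift = delta_f * total EPI readout time."""
    return fieldmap_hz * echo_spacing * n_pe_lines
```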

  9. Conformational analysis of dimethylbis(methyldithiocarbonato)stannum(IV) revisited: Application of cluster method, dispersion and counterpoise corrections

    Energy Technology Data Exchange (ETDEWEB)

    Ang, Lee Sin, E-mail: anglee631@perlis.uitm.edu.my [Faculty of Applied Sciences, Universiti Teknologi MARA (Malaysia); Sulaiman, Shukri; Chua, Bing Chuan; Mohamed-Ibrahim, Mohamed Ismail [Computational Chemistry and Physics Laboratory, School of Distance Education, Universiti Sains Malaysia (Malaysia)

    2014-05-15

    This investigation extends previous determinations of the lowest energy conformation of dimethylbis(methyldithiocarbonato)stannum(IV) [Me₂Sn(S₂COMe)₂]. In the previous investigations, calculations were performed only on single molecules, hence crystal packing effects were neglected. In this study, we performed systematic investigations of this compound by employing the molecular orbital cluster method. The largest cluster is an 11-molecule system. Methods from ab initio and density functional theory (DFT) were used, with empirical dispersion energy included to account for the intra- and intermolecular energy, and the basis set superposition error (BSSE) corrected with a geometrical counterpoise scheme. The results showed that the neglect of crystal packing effects for 1- and 2-molecule clusters could not be rectified by the corrective energies, and that a many-molecule cluster is needed to obtain good agreement with the experimental results. Using the cluster method, our results agree with the SS:SO conformation found in the solid-state structure of Me₂Sn(S₂COMe)₂.

  10. Saturated and trans fats

    National Research Council Canada - National Science Library

    Shader, Richard I

    2014-01-01

    ... Original Pancake Mix plus ingredients suggested by the recipe: 2 g saturated fat (SF) and no trans fatty acids or trans fat (TFA); bacon, Oscar Mayer Lower Sodium Bacon: 2.5 g SF and no TFA; sausages, Jimmy Dean Original Pork Sausage Links: 8 g SF and no TFA; potatoes, Ore-Ida Mini Tater Tots: 2 g SF and no TFA; and nondairy creamer, Nestlé Coffee-...

  11. The Ehrenfest method with quantum corrections to simulate the relaxation of molecules in solution: equilibrium and dynamics.

    Science.gov (United States)

    Bastida, Adolfo; Cruz, Carlos; Zúñiga, José; Requena, Alberto; Miguel, Beatriz

    2007-01-07

    The use of the Ehrenfest method to simulate the relaxation of molecules in solution is explored. Using the cyanide ion dissolved in water as a test model, the independent trajectory (IT) and the bundle of trajectories (BT) approximations are shown to provide very different results for the time evolution of the vibrational populations of the solute. Neither of these approximations reproduces the Boltzmann equilibrium vibrational populations accurately. A modification of the Ehrenfest method based on the use of quantum correction factors is thus proposed to solve this problem. The simulations carried out using the modified Ehrenfest method provide IT and BT relaxation times which are closer to each other and which agree quite well with previous hybrid perturbative results.

  12. Aerodynamic optimization of wind turbine rotors using a blade element momentum method with corrections for wake rotation and expansion

    DEFF Research Database (Denmark)

    Døssing, Mads; Aagaard Madsen, Helge; Bak, Christian

    2012-01-01

    The blade element momentum (BEM) method is widely used for calculating the quasi-steady aerodynamics of horizontal axis wind turbines. Recently, the BEM method has been expanded to include corrections for wake expansion and the pressure due to wake rotation, and more accurate solutions can now be obtained; for comparison, calculations were carried out using standard BEM as well. Validation of the corrected model shows good agreement with the flow calculated using an advanced actuator disk method. The maximum power was found at a tip speed ratio of 7 using the corrected model, which is lower than the optimum tip speed ratio of 8 found for BEM. The difference is primarily caused by the positive effect of wake rotation, which locally causes the efficiency to exceed the Betz limit. Wake expansion has a negative effect, which is most important at high tip speed ratios. It was further found that, using the corrected model, it is possible to obtain a 5% reduction in flap bending moment when compared with BEM.

  13. Blind deconvolution combined with level set method for correcting cupping artifacts in cone beam CT

    Science.gov (United States)

    Xie, Shipeng; Zhuang, Wenqin; Li, Baosheng; Bai, Peirui; Shao, Wenze; Tong, Yubing

    2017-02-01

    To reduce cupping artifacts and enhance contrast resolution in cone-beam CT (CBCT), we introduce a new approach which combines blind deconvolution with a level set method. The proposed method operates on the reconstructed image without requiring any additional physical equipment and is easily implemented on a single-scan acquisition. The results demonstrate that the algorithm is practical and effective for reducing cupping artifacts and enhancing contrast resolution, preserves the quality of the reconstructed image, and is very robust.

  14. Comparison of population-based association study methods correcting for population stratification.

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    Population stratification can cause spurious associations in population-based association studies. Several statistical methods have been proposed to reduce the impact of population stratification on population-based association studies. We simulated a set of stratified populations based on the real haplotype data from the HapMap ENCODE project, and compared the relative power, type I error rates, accuracy and positive prediction value of four prevailing population-based association study methods: traditional case-control tests, structured association (SA), genomic control (GC) and principal components analysis (PCA), under various population stratification levels. Additionally, we evaluated the effects of sample sizes and frequencies of the disease susceptibility allele on the performance of the four analytical methods in the presence of population stratification. We found that the performance of PCA was very stable under various scenarios. Our comparison results suggest that SA and PCA have comparable performance if sufficient ancestry-informative markers are used in the SA analysis. GC appeared to be strongly conservative in significantly stratified populations. It may be better to apply GC in stratified populations with a low stratification level. Our study intends to provide a practical guideline for researchers to select proper study methods and make appropriate inference of the results in population-based association studies.
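
    Of the four methods, genomic control is the simplest to write down: estimate the inflation factor lambda from the median test statistic and deflate all statistics by it. A minimal sketch (1-df chi-square tests assumed):

```python
import numpy as np
from scipy import stats

def genomic_control(chi2_stats):
    """Genomic control: estimate the inflation factor lambda from the median
    1-df chi-square statistic and deflate all statistics by it."""
    lam = np.median(chi2_stats) / stats.chi2.ppf(0.5, df=1)  # ref median ~0.455
    lam = max(lam, 1.0)                                      # never inflate
    adjusted = np.asarray(chi2_stats) / lam
    return lam, adjusted, stats.chi2.sf(adjusted, df=1)
```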

  15. Corrected momentum exchange method for lattice Boltzmann simulations of suspension flow

    NARCIS (Netherlands)

    Lorenz, E.; Caiazzo, A.; Hoekstra, A.G.

    2009-01-01

    Standard methods for lattice Boltzmann simulations of suspended particles, based on the momentum exchange algorithm, might lack accuracy or violate Galilean invariance in some particular situations. Aiming at simulations of dense suspensions in high-shear flows, we motivate and investigate the necessary corrections.

  16. [Correction of autonomic reaction parameters in the cosmonaut's organism with the adaptive biocontrol method]

    Science.gov (United States)

    Kornilova, L. N.; Cowings, P. S.; Toscano, W. B.; Arlashchenko, N. I.; Korneev, D. Iu; Ponomarenko, A. V.; Salagovich, S. V.; Sarantseva, A. V.; Kozlovskaia, I. B.

    2000-01-01

    Presented are the results of testing the adaptive biocontrol method during preflight training of cosmonauts. Within the MIR-25 crew, a high level of controllability of autonomic reactions was characteristic of the Flight Commanders of MIR-23 and MIR-25 and the Flight Engineer of MIR-23, while the Flight Engineer of MIR-25 displayed a weak, intricate dependence of these reactions on the depth of relaxation or strain.

  17. A chemometric method for correcting FTIR spectra of biomaterials for interference from water in KBr discs

    Science.gov (United States)

    FTIR analysis of solid biomaterials by the familiar KBr disc technique is very often frustrated by water interference in the important protein (amide I) and carbohydrate (hydroxyl) regions of their spectra. A method was therefore devised that overcomes the difficulty and measures FTIR spectra of solid biomaterials.

  18. Methods and methodology for FTIR spectral correction of channel spectra and uncertainty, applied to ferrocene

    Science.gov (United States)

    Islam, M. T.; Trevorah, R. M.; Appadoo, D. R. T.; Best, S. P.; Chantler, C. T.

    2017-04-01

    We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high-quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr² analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal, and for quantifying the quality of the data irrespective of preprocessing for subsequent hypothesis testing, are applied to the FTIR spectra of ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beamline of the Australian Synchrotron at operating temperatures of 7 K through 353 K.

  19. Development of an absorption corrected near-field gamma-ray assay method for whole items

    Energy Technology Data Exchange (ETDEWEB)

    Mosby, W.R. [Argonne National Lab., Idaho Falls, ID (United States)

    1996-10-01

    Nuclear material safeguards and waste characterization operations often require quick gamma-ray measurements of reasonable accuracy and precision. This talk will describe the development of a gamma-ray measurement in which the gamma-ray absorption characteristics of the packaging and contents of an item, along with its geometry and the counting geometry, are used in determining an equivalent non-attenuating point source strength in terms of radionuclide activity or mass. Methods for determining the attenuation characteristics and geometry of various types of items and the sensitivity of the measurement results to errors in such characterization will be discussed. The talk will describe operational experience with the method at Argonne-West.

  20. Method for beam hardening correction in quantitative computed X-ray tomography

    Science.gov (United States)

    Yan, Chye Hwang (Inventor); Whalen, Robert T. (Inventor); Napel, Sandy (Inventor)

    2001-01-01

    Each voxel is assumed to contain exactly two distinct materials, with the volume fraction of each material being iteratively calculated. According to the method, the spectrum of the X-ray beam must be known, as must the attenuation spectra of the materials in the object, which must be monotonically decreasing with increasing X-ray photon energy. A volume fraction is then estimated for the voxel, and the spectrum is iteratively calculated.
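
    One way to realize the per-voxel idea is a monotone search: model the polychromatic attenuation of a two-material mix and solve for the volume fraction that reproduces the measured value. The sketch below uses bisection and entirely synthetic spectra; the patented iteration scheme itself may differ.

```python
import numpy as np

def effective_attenuation(frac, spectrum, mu_a, mu_b, thickness):
    """Polychromatic log-attenuation of a two-material mix (volume fraction
    frac of material A), with all spectra sampled on one energy grid."""
    mu_mix = frac * mu_a + (1.0 - frac) * mu_b
    transmitted = np.sum(spectrum * np.exp(-mu_mix * thickness))
    return -np.log(transmitted / np.sum(spectrum))

def solve_fraction(measured, spectrum, mu_a, mu_b, thickness, iters=40):
    """Bisection for the volume fraction reproducing the measured attenuation
    (monotone in frac when mu_a > mu_b everywhere)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if effective_attenuation(mid, spectrum, mu_a, mu_b, thickness) < measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E = np.linspace(20.0, 120.0, 101)              # keV grid (synthetic)
spectrum = np.exp(-(E - 60.0) ** 2 / 800.0)    # toy beam spectrum
mu_a = 1.0 * (E / 60.0) ** -3                  # toy monotone attenuations
mu_b = 0.2 * (E / 60.0) ** -3
print(solve_fraction(0.5, spectrum, mu_a, mu_b, thickness=1.0))
```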

  1. A new method for x-ray scatter correction: first assessment on a cone-beam CT experimental setup

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J [CEA-LETI MINATEC, Division of Micro Technologies for Biology and Healthcare, 38054 Grenoble Cedex 09 (France); Gerfault, L [CEA-LETI MINATEC, Division of Micro Technologies for Biology and Healthcare, 38054 Grenoble Cedex 09 (France); Esteve, F [INSERM U647-RSRM, ESRF, BP200, 38043 Grenoble Cedex 09 (France); Dinten, J-M [CEA-LETI MINATEC, Division of Micro Technologies for Biology and Healthcare, 38054 Grenoble Cedex 09 (France)

    2007-08-07

    Cone-beam computed tomography (CBCT) enables three-dimensional imaging with isotropic resolution and a shorter acquisition time compared to a helical CT scanner. Because a larger object volume is exposed for each projection, scatter levels are much higher than in collimated fan-beam systems, resulting in cupping artifacts, streaks and quantification inaccuracies. In this paper, a general method to correct for scatter in CBCT, without supplementary on-line acquisition, is presented. This method is based on scatter calibration through off-line acquisition, combined with an on-line analytical transformation based on physical equations to adapt the calibration to the object observed. The method was tested on a PMMA phantom and on an anthropomorphic thorax phantom. The results were validated by comparison to simulation for the PMMA phantom and by comparison to scans obtained on a commercial multi-slice CT scanner for the thorax phantom. Finally, the improvements achieved with the new method were compared to those obtained using a standard beam-stop method. The new method provided results that closely agreed with the simulation and with the conventional CT scanner, eliminating cupping artifacts and significantly improving quantification. Compared to the beam-stop method, the x-ray dose and acquisition time required were both reduced by a factor of 9 for the same scatter estimation accuracy.

  2. An evaluation method for tornado missile strike probability with stochastic correction

    Energy Technology Data Exchange (ETDEWEB)

    Eguchi, Yuzuru; Murakami, Takahiro; Hirakuchi, Hiromaru; Sugimoto, Soichiro; Hattori, Yasuo [Nuclear Risk Research Center (External Natural Event Research Team), Central Research Institute of Electric Power Industry, Abiko (Japan)

    2017-03-15

    An efficient method for evaluating the probability of a tornado missile strike without using the Monte Carlo method is proposed in this paper. A major part of the proposed probability evaluation is based on numerical results computed using an in-house code, the Tornado-borne missile analysis code, which enables us to evaluate the liftoff and flight behaviors of unconstrained objects on the ground driven by a tornado. Using the Tornado-borne missile analysis code, we can obtain a stochastic correlation between local wind speed and flight distance of each object, and this stochastic correlation is used to evaluate the conditional strike probability, QV(r), of a missile located at position r, where the local wind speed is V. Meanwhile, the annual exceedance probability of local wind speed, which can be computed using a tornado hazard analysis code, is used to derive the probability density function, p(V). We then obtain the annual probability of a tornado missile strike on a structure by the convolution-type integration of the product of QV(r) and p(V) over V. The evaluation method is applied to a simple problem to qualitatively confirm its validity, and to quantitatively verify the results for two extreme cases in which an object is located either in the immediate vicinity of or far away from the structure.
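
    The final convolution-type integral is straightforward to evaluate numerically once QV(r) and p(V) are available. A sketch with the trapezoidal rule and hypothetical smooth stand-ins for both functions:

```python
import numpy as np

def strike_probability(q_of_v, pdf_of_v, v_grid):
    """Annual strike probability P = integral of Q_V(r) * p(V) dV,
    evaluated with the trapezoidal rule on a wind-speed grid."""
    integrand = np.array([q_of_v(v) * pdf_of_v(v) for v in v_grid])
    return np.trapz(integrand, v_grid)

v = np.linspace(20.0, 120.0, 201)                      # m/s
q = lambda V: 1.0 / (1.0 + np.exp(-(V - 70.0) / 8.0))  # conditional strike prob.
p = lambda V: 1e-4 * np.exp(-V / 25.0)                 # annual density, 1/(m/s)
print(strike_probability(q, p, v))
```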

  3. Voxel spread function method for correction of magnetic field inhomogeneity effects in quantitative gradient-echo-based MRI.

    Science.gov (United States)

    Yablonskiy, Dmitriy A; Sukstanskii, Alexander L; Luo, Jie; Wang, Xiaoqi

    2013-11-01

    Macroscopic magnetic field inhomogeneities adversely affect different aspects of MRI images. In quantitative MRI, where the goal is to quantify biological tissue parameters, they bias and often corrupt such measurements. The goal of this article is to develop a method for correction of macroscopic field inhomogeneities that can be applied to a variety of quantitative gradient-echo-based MRI techniques. We have reanalyzed the basic theory of gradient echo MRI signal formation in the presence of background field inhomogeneities and derived equations that allow for correction of magnetic field inhomogeneity effects based on the phase and magnitude of gradient echo data. We verified our theory by mapping the effective transverse relaxation rate in computer-simulated, phantom, and in vivo human data collected with multi-gradient-echo sequences. The proposed technique takes voxel spread function effects into account and allowed us to obtain effective transverse relaxation rate maps that are virtually free from artifacts for all simulated, phantom and in vivo data, except in edge areas with very steep field gradients. The voxel spread function method, allowing quantification of tissue-specific effective transverse relaxation rate-related properties, has the potential to yield new MRI biomarkers serving as surrogates for tissue biological properties, similar to the longitudinal and transverse relaxation rate constants widely used in clinical and research MRI. Copyright © 2012 Wiley Periodicals, Inc.

  4. Comparing the Performance of Popular MEG/EEG Artifact Correction Methods in an Evoked-Response Study

    Directory of Open Access Journals (Sweden)

    Niels Trusbak Haumann

    2016-01-01

    We here compared results achieved by applying popular methods for reducing artifacts in magnetoencephalography (MEG) and electroencephalography (EEG) recordings of the auditory evoked mismatch negativity (MMN) responses in healthy adult subjects. We compared the Signal Space Separation (SSS) and temporal SSS (tSSS) methods for reducing noise from external and nearby sources. Our results showed that tSSS reduces the interference level more reliably than plain SSS, particularly for MEG gradiometers, also for healthy subjects not wearing strongly interfering magnetic material. Therefore, tSSS is recommended over SSS. Furthermore, we found that better artifact correction is achieved by applying Independent Component Analysis (ICA) than Signal Space Projection (SSP). Although SSP reduces the baseline noise level more than ICA, SSP also significantly reduces the signal, slightly more than it reduces the artifacts interfering with the signal. However, ICA also adds noise, or correction errors, to the waveform when the signal-to-noise ratio (SNR) in the original data is relatively low, in particular to EEG and to MEG magnetometer data. In conclusion, ICA is recommended over SSP, but one should be careful when applying ICA to reduce artifacts on neurophysiological data with relatively low SNR.

  5. A Method of Rescue Flight Path Plan Correction Based on the Fusion of Predicted Low-altitude Wind Data

    Directory of Open Access Journals (Sweden)

    Ming Zhang

    2016-10-01

    This study proposes a low-altitude wind prediction model for correcting the flight path plans of low-altitude aircraft. To address the large errors in numerical weather prediction (NWP) data and the inapplicability of high-altitude meteorological data to low-altitude conditions, the model fuses the low-altitude lattice prediction data and the observation data of a specified ground international exchange station through unscented Kalman filter (UKF)-based NWP interpretation technology to acquire the predicted low-altitude wind data. Subsequently, the model corrects the arrival times at the route points by combining the performance parameters of the aircraft according to the principle of velocity vector composition. Simulation experiments show that the RMSEs of wind speed and direction acquired with the UKF prediction method are reduced by 12.88% and 17.50%, respectively, compared with the values obtained with the traditional Kalman filter prediction method. The proposed prediction model thus improves the accuracy of flight path planning in terms of time and space.

  6. T2 corrected quantification method of L-p-boronophenylalanine using proton magnetic resonance spectroscopy for boron neutron capture therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Yohei [Department of Neurosurgery, Institute of Clinical Medicine, Graduated School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki (Japan); Isobe, Tomonori [Institute of Clinical Medicine, Graduated School of Comprehensive Human Sciences, University of Tsukuba (Japan); Yamamoto, Tetsuya [Department of Neurosurgery, Institute of Clinical Medicine, Graduated School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki (Japan)], E-mail: tetsu-ya@md.tsukuba.ac.jp; Shibata, Yasushi [Department of Neurosurgery, Institute of Clinical Medicine, Graduated School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki (Japan); Anno, Izumi [Department of Radiological Sciences, Ibaraki Prefectural University of Health Sciences (Japan); Nakai, Kei; Shirakawa, Makoto; Matsushita, Akira [Department of Neurosurgery, Institute of Clinical Medicine, Graduated School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki (Japan); Sato, Eisuke [School of Allied Health Sciences, Kitasato University (Japan); Matsumura, Akira [Department of Neurosurgery, Institute of Clinical Medicine, Graduated School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki (Japan)

    2009-07-15

    In the present study, we aimed to evaluate a T2-corrected quantification method for L-p-boronophenylalanine (BPA) concentration using proton magnetic resonance spectroscopy (MRS). We used five phantoms containing BPA (1.5, 3.0, 5.0, 7.5, and 10 mmol/kg = 15, 30, 50, 75, and 100 µg 10B/g), N-acetyl-aspartic acid (NAA: 3.0 mmol/kg), creatine (Cr: 5.0 mmol/kg), and choline (Cho: 3.0 mmol/kg). The signal intensities of BPA and internal water were corrected by T2 relaxation time. The absolute concentrations of BPA were calculated by proton MRS using the internal water signal as a standard. The major BPA peaks were detected between 7.1 and 7.6 ppm. The mean T2 relaxation time was 314.3 ± 10.8 ms for BPA and 885.1 ± 39.7 ms for internal water. The calculated BPA concentrations were almost the same as the actual BPA concentrations, with a correlation coefficient of 0.99. Our BPA quantification method is simple, non-invasive, and highly accurate. Therefore, our results indicate that proton MRS can be a potentially useful technique for in vivo BPA quantification in boron neutron capture therapy (BNCT).
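
    The arithmetic of such a T2-corrected, internal-water-referenced quantification is compact: scale each signal back to TE = 0 with exp(TE/T2), then take the proton-number-weighted ratio against water. In the sketch below the proton counts and water concentration are illustrative assumptions, not values from the paper.

```python
import numpy as np

def t2_corrected_concentration(s_bpa, s_water, te, t2_bpa, t2_water,
                               n_bpa=4.0, n_water=2.0, water_conc=55.5e3):
    """Internal-water-referenced quantification with T2 correction: scale each
    signal back to TE = 0 via exp(TE/T2), then take the proton-weighted ratio.
    n_bpa (aromatic protons observed near 7.1-7.6 ppm), n_water and water_conc
    (mmol/kg) are illustrative assumptions, not values from the paper."""
    s0_bpa = s_bpa * np.exp(te / t2_bpa)
    s0_water = s_water * np.exp(te / t2_water)
    return (s0_bpa / n_bpa) / (s0_water / n_water) * water_conc

# usage with made-up signal intensities, TE = 30 ms, T2s from the abstract
print(t2_corrected_concentration(1.2e-4, 1.0, te=30.0,
                                 t2_bpa=314.3, t2_water=885.1))
```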

  7. A practical cone-beam CT scatter correction method with optimized Monte Carlo simulations for image-guided radiation therapy

    Science.gov (United States)

    Xu, Yuan; Bai, Ti; Yan, Hao; Ouyang, Luo; Pompos, Arnold; Wang, Jing; Zhou, Linghong; Jiang, Steve B.; Jia, Xun

    2015-05-01

    Cone-beam CT (CBCT) has become the standard image guidance tool for patient setup in image-guided radiation therapy. However, due to its large illumination field, scattered photons severely degrade its image quality. While kernel-based scatter correction methods have been used routinely in the clinic, it is still desirable to develop Monte Carlo (MC) simulation-based methods due to their accuracy. However, the high computational burden of the MC method has prevented routine clinical application. This paper reports our recent development of a practical method of MC-based scatter estimation and removal for CBCT. In contrast with conventional MC approaches that estimate scatter signals using a scatter-contaminated CBCT image, our method used a planning CT image for MC simulation, which has the advantages of accurate image intensity and absence of image truncation. In our method, the planning CT was first rigidly registered with the CBCT. Scatter signals were then estimated via MC simulation. After scatter signals were removed from the raw CBCT projections, a corrected CBCT image was reconstructed. The entire workflow was implemented on a GPU platform for high computational efficiency. Strategies such as projection denoising, CT image downsampling, and interpolation along the angular direction were employed to further enhance the calculation speed. We studied the impact of key parameters in the workflow on the resulting accuracy and efficiency, based on which the optimal parameter values were determined. Our method was evaluated in numerical simulation, phantom, and real patient cases. In the simulation cases, our method reduced mean HU errors from 44 to 3 HU and from 78 to 9 HU in the full-fan and the half-fan cases, respectively. In both the phantom and the patient cases, image artifacts caused by scatter, such as ring artifacts around the bowtie area, were reduced. With all the techniques employed, we achieved computation time of less than 30 s including the

  8. INVESTIGATING THE EFFECTIVENESS OF KINESIO® TAPING SPACE CORRECTION METHOD IN HEALTHY ADULTS ON PATELLOFEMORAL JOINT AND SUBCUTANEOUS SPACE

    Science.gov (United States)

    Keister, Kassiann; Gange, Kara; Mellinger, Christopher D.; Hanson, Thomas A.

    2017-01-01

    Background: Limited quantitative, physiological evidence exists regarding the effectiveness of Kinesio® Taping methods, particularly with respect to the potential ability to impact underlying physiological joint space and structures. To better understand the impact of these techniques, the underlying physiological processes must be investigated in addition to the examination of more subjective measures related to pain in unhealthy tissues. Hypothesis/Purpose: The purpose of this study was to determine whether the Kinesio® Taping Space Correction Method created a significant difference in patellofemoral joint space, as quantified by diagnostic ultrasound. Study Design: Pre-test/post-test prospective cohort study. Methods: Thirty-two participants with bilaterally healthy knees and no past history of surgery took part in the study. For each participant, diagnostic ultrasound was utilized to collect three measurements: the patellofemoral joint space, the distance from the skin to the superficial patella, and the distance from the skin to the patellar tendon. The Kinesio® Taping Space Correction Method was then applied. After a ten-minute waiting period in a non-weight bearing position, all three measurements were repeated. Each participant served as his or her own control. Results: Paired t tests showed a statistically significant difference (mean difference = 1.1 mm, t(31) = 2.823, p = 0.008, g = .465) between baseline and taped conditions in the space between the posterior surface of the patella and the medial femoral condyle. Neither the distance from the skin to the superficial patella nor the distance from the skin to the patellar tendon increased to a statistically significant degree. Conclusions: The application of the Kinesio® Taping Space Correction Method increases the patellofemoral joint space in healthy adults by increasing the distance between the patella and the medial femoral condyle, though it does not increase the distance from the skin to the superficial patella nor to the patellar tendon.

  9. PERSISTENT AND INTERMITTENT HYPERHYDRATION IN PATIENTS ON PROGRAM HAEMODIALYSIS: METHODS OF EVALUATION AND CORRECTION

    Directory of Open Access Journals (Sweden)

    A. G. Strokov

    2015-01-01

    Hyperhydration, the sum of persistent (PH) and intermittent (IH) hyperhydration, is a strong predictor of mortality in patients on program haemodialysis (PHD). The aim of this research was to investigate a complex of methods for minimizing both PH and IH. Materials and methods. Bioimpedance multifrequency analysis (BIA), relative blood volume (RBV) monitoring and plasma conductivity evaluation with an ionic dialysance device were performed in candidates for kidney transplantation. Results. In 380 PHD patients, compared with 26 healthy persons, only expansion of the extracellular volume was observed, even in cases of huge (3.5-15 L) overload. PH of more than 15% of extracellular volume was observed in 41% of patients. The deviation of hydration status from the reference value was 3.7 ± 1.4 L at the first measurement and 1.9 ± 1.2 L at the last one in every patient. RBV decreased insignificantly (less than 2.5% per litre of ultrafiltration) during PHD sessions in patients with PH. This value increased after dry weight was achieved, and it appeared to be a surrogate for intravascular refilling capacity. Minimization of the sodium gradient between dialysate and plasma resulted in a decrease of IH. Conclusion. The elimination of both PH and IH in PHD patients is a paramount goal; it demands complex approaches and further investigation.

  10. Using Penelope to assess the correctness of NASA Ada software: A demonstration of formal methods as a counterpart to testing

    Science.gov (United States)

    Eichenlaub, Carl T.; Harper, C. Douglas; Hird, Geoffrey

    1993-01-01

    Life-critical applications warrant a higher level of software reliability than has yet been achieved. Since it is not certain that traditional methods alone can provide the required ultra reliability, new methods should be examined as supplements or replacements. This paper describes a mathematical counterpart to the traditional process of empirical testing. ORA's Penelope verification system is demonstrated as a tool for evaluating the correctness of Ada software. Grady Booch's Ada calendar utility package, obtained through NASA, was specified in the Larch/Ada language. Formal verification in the Penelope environment established that many of the package's subprograms met their specifications. In other subprograms, failed attempts at verification revealed several errors that had escaped detection by testing.

  11. Rapid in-focus corrections on quantitative amplitude and phase imaging using transport of intensity equation method.

    Science.gov (United States)

    Meng, X; Tian, X; Kong, Y; Sun, A; Yu, W; Qian, W; Song, X; Cui, H; Xue, L; Liu, C; Wang, S

    2017-06-01

    The transport of intensity equation (TIE) method can acquire sample phase distributions with high speed and accuracy, offering another perspective for cellular observations and measurements. However, incorrect focal-plane determination induces blurs and halos, decreasing resolution and accuracy in both the retrieved amplitude and phase information. In order to obtain highly accurate sample details, we propose a TIE-based in-focus correction technique for quantitative amplitude and phase imaging, which can locate the focal plane and then retrieve both in-focus intensity and phase distributions by combining numerical wavefront extraction and propagation with physical translation of the image recorder. Verified by both numerical simulations and practical measurements, the proposed method not only captures highly accurate in-focus sample information, but also provides a potential way toward fast autofocusing in microscopic systems. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
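
    For orientation, the standard FFT route to a TIE phase estimate is sketched below from two defocused intensities, under a slowly varying intensity so that laplacian(phi) = -(k/I0) dI/dz. The paper's in-focus correction adds focal-plane location and recorder translation on top of this kind of solver.

```python
import numpy as np

def tie_phase(i_minus, i_plus, dz, wavelength, pixel, eps=1e-3):
    """FFT Poisson solve of the TIE under slowly varying intensity:
    laplacian(phi) = -(k / I0) * dI/dz, with I0 the in-focus estimate."""
    k = 2.0 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2.0 * dz)
    i0 = np.maximum(0.5 * (i_plus + i_minus), eps)
    ny, nx = didz.shape
    fx = np.fft.fftfreq(nx, d=pixel)
    fy = np.fft.fftfreq(ny, d=pixel)
    q2 = (2.0 * np.pi) ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
    rhs_hat = np.fft.fft2(-k * didz / i0)
    denom = np.where(q2 > 0.0, -q2, 1.0)      # avoid divide-by-zero at DC
    phi_hat = np.where(q2 > 0.0, rhs_hat / denom, 0.0)
    return np.real(np.fft.ifft2(phi_hat))
```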

  12. Bartlett-type corrections and bootstrap adjustments of likelihood-based inference methods for network meta-analysis.

    Science.gov (United States)

    Noma, Hisashi; Nagashima, Kengo; Maruo, Kazushi; Gosho, Masahiko; Furukawa, Toshi A

    2017-12-18

    In network meta-analyses that synthesize direct and indirect comparison evidence concerning multiple treatments, multivariate random effects models have been routinely used for addressing between-studies heterogeneities. Although their standard inference methods depend on large sample approximations (eg, restricted maximum likelihood estimation) for the number of trials synthesized, the numbers of trials are often moderate or small. In these situations, standard estimators cannot be expected to behave in accordance with asymptotic theory; in particular, confidence intervals cannot be assumed to exhibit their nominal coverage probabilities (also, the type I error probabilities of the corresponding tests cannot be retained). The invalidity issue may seriously influence the overall conclusions of network meta-analyses. In this article, we develop several improved inference methods for network meta-analyses to resolve these problems. We first introduce 2 efficient likelihood-based inference methods, the likelihood ratio test-based and efficient score test-based methods, in a general framework of network meta-analysis. Then, to improve the small-sample inferences, we develop improved higher-order asymptotic methods using Bartlett-type corrections and bootstrap adjustment methods. The proposed methods adopt Monte Carlo approaches using parametric bootstraps to effectively circumvent complicated analytical calculations of case-by-case analyses and to permit flexible application to various statistical models for network meta-analyses. These methods can also be straightforwardly applied to multivariate meta-regression analyses and to tests for the evaluation of inconsistency. In numerical evaluations via simulations, the proposed methods generally performed well compared with the ordinary restricted maximum likelihood-based inference method. Applications to 2 network meta-analysis datasets are provided. Copyright © 2017 John Wiley & Sons, Ltd.
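
    The bootstrap Bartlett-type adjustment has a particularly compact form: rescale the likelihood ratio statistic so that its simulated null mean matches the chi-square reference mean. A generic sketch (the simulator callable is a placeholder for a model-specific parametric bootstrap):

```python
import numpy as np

def bartlett_adjusted_lr(lr_obs, simulate_lr_under_null, df, n_boot=2000):
    """Bootstrap Bartlett-type adjustment: rescale the likelihood ratio
    statistic so its simulated null mean matches the chi-square mean (= df)."""
    sims = np.array([simulate_lr_under_null() for _ in range(n_boot)])
    c = sims.mean() / df                  # empirical Bartlett factor
    return lr_obs / c

# toy check: for an exactly chi-square(2) statistic the factor is ~1
rng = np.random.default_rng(0)
print(bartlett_adjusted_lr(3.2, lambda: rng.chisquare(2), df=2))
```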

  13. IODINATION OF VEGETABLE OIL AS A METHOD FOR CORRECTING IODINE DEFFICIENCY

    Directory of Open Access Journals (Sweden)

    Rodica Sturza

    2006-06-01

    The aim of this work is to obtain iodized oil that satisfies the human body's iodine requirements. Sunflower oil is a product of major importance, so the production of oil fortified with iodine would be a cheap and accessible option. These studies indicate that lipids are an important vehicle for fortification with iodine. Eradication of iodine deficiency may be achieved not only by injection of iodinated oil, but also by its use as an ingredient in the formulation of different food compositions. This method, complementary to iodized salt, would increase the efficiency of the prophylactic measures undertaken, because it is based on the use of a vegetable raw material, sunflower oil; it is cost-efficient and does not require substantial investment.

  14. MICROECOLOGICAL AND FUNCTIONAL ABNORMALITIES OF INTESTINES IN CHILDREN WITH CHRONIC CONSTIPATION. METHODS OF CORRECTION

    Directory of Open Access Journals (Sweden)

    E.V. Komarova

    2010-01-01

    This work contains the author's own follow-up results. Fifty-one children with chronic constipation were studied. A complex assessment of microecological and functional abnormalities of the intestines in this pathology revealed a reduced level of short-chain fatty acids and anaerobic index, degree I and II dysbiosis, and digestion disorder in the small intestine. Changes in the metabolic activity of the intestinal microflora were in line with the functional state of the GIT. One method of therapy for chronic constipation is restoration of the normal microflora of the large intestine. Absorbents need to be used to act on the pathogenic microflora. Key words: chronic constipation, treatment, enterosorbents, dioctahedral smectite, children. (Pediatric Pharmacology. 2010; 7(6): 74-76.)

  15. Saturation in nuclei

    CERN Document Server

    Lappi, T

    2010-01-01

    This talk discusses some recent studies of gluon saturation in nuclei. We stress the connection between the initial condition in heavy ion collisions and observables in deep inelastic scattering (DIS). The dominant degree of freedom in the small x nuclear wavefunction is a nonperturbatively strong classical gluon field, which determines the initial condition for the glasma fields in the initial stages of a heavy ion collision. A correlator of Wilson lines from the same classical fields, known as the dipole cross section, can be used to compute many inclusive and exclusive observables in DIS.

  16. A New Robust Solver for Saturated-Unsaturated Richards' Equation

    Science.gov (United States)

    Barajas-Solano, D. A.; Tartakovsky, D. M.

    2012-12-01

    We present a novel approach for the numerical integration of the saturated-unsaturated Richards' equation, a degenerate parabolic partial differential equation that models flow in porous media. The method is based on the mixed (pore pressure-water content) form of the Richards' equation, written as a set of differential algebraic equations (DAEs) of index 1 for the fully saturated case and index 2 for the partially saturated case. A DAE-based approach allows us to overcome the numerical challenges posed by the degenerate nature of the Richards' equation. The resulting set of DAEs is solved using the stiffly accurate, single-step, 3-stage implicit Runge-Kutta method Radau IIA, chosen for its favorable accuracy and stability properties and its ease of implementation. For each time step, a nonlinear system of equations in the intermediate Runge-Kutta states of the pore pressure is solved, formulated so as to ensure that the next-step pore pressure and water content correspond to one another correctly. The implementation of our approach compares favorably to state-of-the-art DAE-based solvers in both one- and two-dimensional simulations. These solvers use multi-step backward difference formulas together with a pressure-based form of Richards' equation. To the best of our knowledge, our method is the first instance of a successful DAE-based solver that uses the mixed form of Richards' equation. We consider this a promising line of research, with future work to be done on the use of globally convergent methods for the solution of the occurring nonlinear systems of equations.
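
    As a much-simplified sketch of the mixed-form time step described above, the snippet below replaces the 3-stage Radau IIA scheme with one-stage backward Euler (so only a single nonlinear system per step) and uses van Genuchten-Mualem constitutive relations. All parameter values, boundary choices and names are illustrative assumptions, not the authors' code.

        import numpy as np
        from scipy.optimize import fsolve

        # Illustrative van Genuchten-Mualem parameters (placeholders).
        th_r, th_s, alpha, n, Ks = 0.10, 0.40, 1.5, 2.0, 1e-5
        m = 1.0 - 1.0 / n

        def theta(h):
            """Water content as a function of pressure head (h < 0 unsaturated)."""
            Se = np.where(h < 0, (1.0 + np.abs(alpha * h) ** n) ** (-m), 1.0)
            return th_r + (th_s - th_r) * Se

        def K(h):
            """Unsaturated hydraulic conductivity (Mualem model)."""
            Se = np.where(h < 0, (1.0 + np.abs(alpha * h) ** n) ** (-m), 1.0)
            return Ks * np.sqrt(Se) * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

        def residual(h_new, h_old, dz, dt):
            """Mass-conservative mixed-form residual for one implicit step."""
            h = np.concatenate(([h_new[0]], h_new, [h_new[-1]]))  # zero-gradient ghosts
            Kf = 0.5 * (K(h[:-1]) + K(h[1:]))                     # interface conductivity
            flux = -Kf * ((h[1:] - h[:-1]) / dz + 1.0)            # Darcy flux, z positive up
            return (theta(h_new) - theta(h_old)) / dt + (flux[1:] - flux[:-1]) / dz

        nz, dz, dt = 50, 0.02, 10.0
        h = np.full(nz, -10.0)                     # initially unsaturated column
        for _ in range(100):                       # march the column in time
            h = fsolve(residual, h, args=(h, dz, dt))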

  17. Finite element method analysis of the periodontal ligament in mandibular canine movement with transparent tooth correction treatment.

    Science.gov (United States)

    Cai, Yongqing; Yang, Xiaoxiang; He, Bingwei; Yao, Jun

    2015-09-04

    This study used the 3D finite element method to investigate the displacements of the canine and the stresses in its periodontal ligament (PDL) during canine translation, inclination, and rotation with transparent tooth correction treatment. Finite element models were developed to simulate dynamic orthodontic treatments of the translation, inclination, and rotation of the left mandibular canine with a transparent tooth correction system. Piecewise static simulations were performed to replicate the dynamic process of orthodontic treatment. The distribution and change trends of the canine's displacements and of the stresses in its PDL during the three types of tooth movement were obtained. Maximum displacements were observed at the crown and middle part in the translation case, at the crown in the inclination case, and at the crown and root in the rotation case. The relative maximum von Mises and principal stresses were mainly found at the cervix of the PDL in the translation and inclination cases. In the translation case, tensile stress was mainly observed on the mesial and distal surfaces near the lingual side and compressive stress was located at the bottom of the labial surface. In the inclination case, tensile stress was mainly observed at the labial cervix and lingual apex and compressive stress was located at the lingual cervix and labial apex. In the rotation case, von Mises stress was mainly located at the cervix and inside the lingual surface, tensile stress was located on the distal surface, and compressive stress was detected on the mesial surface. The stress and displacement values rapidly decreased in the first few steps and then reached a plateau. The type of canine movement significantly influences the distribution of the canine's displacement and of the stresses in its PDL, and changes in both were exponential over the course of transparent tooth correction treatment.

  18. Effectiveness of different methods of pyeloureteral segment correction according to diuretic ultrasonography data

    Directory of Open Access Journals (Sweden)

    D. Z. Vorobets

    2015-08-01

    Full Text Available Methods are proposed for estimating the effectiveness of open and laparoscopic pyeloplasty, as well as of palliative endourological methods (laser resection, balloon dilatation and endopyelotomy), by determining the anatomical and functional characteristics of the renal pelvis and pyeloureteral junction with ultrasound diagnostics during forced diuresis. Changes in the area of the renal pelvis, the rate of its post-furosemide enlargement, the rate of its drainage, and changes in the diameter of the pyeloureteral junction were studied. This methodical approach is non-invasive, informative and simple in application. It is shown that, for patients after open surgery, sample variances do not differ from those of the same patients before the operation for such parameters as the renal pelvis area before furosemide administration, the renal pelvis area 15 minutes after furosemide administration, the post-furosemide drainage rate, and the original diameter of the pyeloureteral junction. This may indicate the stability of surgical results. For example, a renal pelvis that was larger relative to kidney size before the operation remained correspondingly larger after it, and a pelvis that drained faster before the operation also drained faster afterwards. In contrast, the variances of the renal pelvis area measured 40 minutes after furosemide, the percent change in longitudinal pelvis area, the rate of post-furosemide increase in pelvis area, and the pyeloureteral junction diameter 15 minutes after furosemide administration differed significantly after open pyeloplasty from the corresponding values for the same patients before the operations. An even more substantial difference was observed in the same patients before and after Anderson-Hynes surgery for the relative rate of post-furosemide drainage of the pelvis and the increase in diameter of the pyeloureteral junction.

  19. Weighted bootstrapping: a correction method for assessing the robustness of phylogenetic trees

    Directory of Open Access Journals (Sweden)

    Makarenkov Vladimir

    2010-08-01

    Full Text Available Abstract Background Non-parametric bootstrapping is a widely used statistical procedure for assessing the confidence of model parameters based on the empirical distribution of the observed data [1] and, as such, it has become a common method for assessing tree confidence in phylogenetics [2]. Traditional non-parametric bootstrapping does not weight each tree inferred from resampled (i.e., pseudo-replicated) sequences. Hence, the quality of these trees is not taken into account when computing bootstrap scores associated with the clades of the original phylogeny. As a consequence, trees with different bootstrap support, or those providing a different fit to the corresponding pseudo-replicated sequences (the fit quality can be expressed through the LS, ML or parsimony score), traditionally contribute in the same way to the computation of the bootstrap support of the original phylogeny. Results In this article, we discuss the idea of applying weighted bootstrapping to phylogenetic reconstruction by weighting each phylogeny inferred from resampled sequences. Tree weights can be based either on the least-squares (LS) tree estimate or on the average secondary bootstrap score (SBS) associated with each resampled tree. Secondary bootstrapping consists of the estimation of bootstrap scores of the trees inferred from resampled data. The LS- and SBS-based bootstrapping procedures were designed to take into account the quality of each "pseudo-replicated" phylogeny in the final tree estimation. A simulation study was carried out to evaluate the performance of the five weighting strategies, which are as follows: LS- and SBS-based bootstrapping, LS- and SBS-based bootstrapping with data normalization, and the traditional unweighted bootstrapping. Conclusions The simulations conducted with two real data sets and the five weighting strategies suggest that SBS-based bootstrapping with data normalization usually exhibits larger bootstrap scores and a higher robustness.
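
    A minimal sketch of the weighting idea: each pseudo-replicate tree contributes to a clade's support in proportion to a normalized tree weight instead of contributing equally. Representing trees as sets of clades and deriving weights as reciprocal LS scores are simplifying assumptions of this sketch, and the names are hypothetical, not the paper's software.

        from collections import defaultdict

        def weighted_clade_support(replicates, reference_clades):
            """replicates: list of (clades, ls_score) pairs, where `clades` is a set
            of frozensets of taxon names and ls_score is the least-squares fit of
            that replicate tree (lower = better fit)."""
            raw = [1.0 / (1e-12 + score) for _, score in replicates]  # better fit -> larger weight
            weights = [r / sum(raw) for r in raw]
            support = defaultdict(float)
            for (clades, _), w in zip(replicates, weights):
                for clade in clades & reference_clades:
                    support[clade] += w   # weighted, not +1/n as in standard bootstrapping
            return dict(support)

        # Toy usage: three replicate trees over taxa {A, B, C, D}
        ab, cd = frozenset("AB"), frozenset("CD")
        reps = [({ab, cd}, 10.0), ({ab}, 12.0), ({cd}, 30.0)]
        print(weighted_clade_support(reps, {ab, cd}))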

  20. Novel, cyclic heat dissipation method for the correction of natural temperature gradients in sap flow measurements. Part 2. Laboratory validation.

    Science.gov (United States)

    Reyes-Acosta, J Leonardo; Vandegehuchte, Maurits W; Steppe, Kathy; Lubczynski, Maciek W

    2012-07-01

    Sap flow measurements conducted with thermal dissipation probes (TDPs) are vulnerable to natural temperature gradient (NTG) bias. Few studies, however, have attempted to explain the dynamics underlying the NTG formation and its influence on the sensors' signal. This study focused on understanding how the TDP signals are affected by negative and positive temperature influences from the NTG and tested the novel cyclic heat dissipation (CHD) method to filter out the NTG bias. A series of three experiments was performed in which gravity-driven water flow was enforced on freshly cut stem segments of Fagus sylvatica L., while an artificial temperature gradient (ATG) was induced. The first experiment sought to confirm the incidence of the ATG on the sensors. The second experiment established the mis-estimations caused by the biasing effect of the ATG on standard TDP measurements. The third experiment tested the accuracy of the CHD method in accounting for the ATG biasing effect, as compared with other cyclic correction methods. During the experiments, sap flow measured by TDP was assessed against gravimetric measurements. The results show that negative and positive ATGs were comparable in pattern with, but substantially larger than, field NTGs. Second, the ATG bias caused an overestimation of the standard TDP sap flux density of ∼17 cm³ cm⁻² h⁻¹ by 76%, and of the sap flux density of ∼2 cm³ cm⁻² h⁻¹ by over 800%. Finally, the proposed CHD method successfully reduced the maximum ATG bias to 25% at ∼11 cm³ cm⁻² h⁻¹ and to 40% at ∼1 cm³ cm⁻² h⁻¹. We concluded that: (i) the TDP method is susceptible to the NTG, especially at low flows; (ii) the CHD method successfully corrected the TDP signal and resulted in generally more accurate sap flux density estimates (mean absolute percentage error between 11 and 21%) than the standard constant-power TDP method and other cyclic power methods; and (iii) the ATG enforcing system is a suitable way of re-creating the NTG for future tests.

  1. Rehabilitation of walking disorders with freezing in patients with Parkinson disease: methods of outpatient correction

    Directory of Open Access Journals (Sweden)

    Krivonos О.V.

    2013-12-01

    Full Text Available The study assessed the effectiveness of rehabilitation approaches for patients with Parkinson disease and freezing of gait in an outpatient setting. Material and methods. The study included 26 patients with Parkinson disease (14 men and 12 women; average age 54.1±9.5 years; average disease duration 7.8±3.1 years; Hoehn and Yahr stage 3.1±0.8). The control group included 15 patients with Parkinson disease (9 men and 6 women) matched in age, disease duration and severity. All patients received stable antiparkinsonian therapy before and during the study. The study lasted 6 months. The rehabilitation program consisted of 10 sessions and used a sensor-equipped treadmill and Nordic walking. Patients continued Nordic walking training at home throughout the study. The results show the effectiveness of rehabilitation in reducing the severity of freezing, increasing walking speed, increasing step length and decreasing turning time compared to the control group. The effect was maintained by home rehabilitation training at 3 and 6 months after the study. Moreover, unlike the control group, patients in the main group did not require changes to their antiparkinsonian therapy after the study.

  2. A convenient method to prepare emulsified polyacrylate nanoparticles from powders [corrected] for drug delivery applications.

    Science.gov (United States)

    Garay-Jimenez, Julio C; Turos, Edward

    2011-08-01

    We describe a method to obtain purified, polyacrylate nanoparticles in a homogeneous powdered form that can be readily reconstituted in aqueous media for in vivo applications. Polyacrylate-based nanoparticles can be easily prepared by emulsion polymerization using a 7:3 mixture of butyl acrylate and styrene in water containing sodium dodecyl sulfate as a surfactant and potassium persulfate as a water-soluble radical initiator. The resulting emulsions contain nanoparticles measuring 40-50 nm in diameter with uniform morphology, and can be purified by centrifugation and dialysis to remove larger coagulants as well as residual surfactant and monomers associated with toxicity. These purified emulsions can be lyophilized in the presence of maltose (a non-toxic cryoprotectant) to provide a homogeneous dried powder, which can be reconstituted as an emulsion by addition of an aqueous diluent. Dynamic light scattering and microbiological experiments were carried out on the reconstituted nanoparticles. This procedure allows for ready preparation of nanoparticle emulsions for drug delivery applications. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. A comparison of methods for a priori bias correction in soil moisture data assimilation

    Science.gov (United States)

    Kumar, Sujay V.; Reichle, Rolf H.; Harrison, Kenneth W.; Peters-Lidard, Christa D.; Yatheendradas, Soni; Santanello, Joseph A.

    2012-03-01

    Data assimilation is increasingly being used to merge remotely sensed land surface variables such as soil moisture, snow, and skin temperature with estimates from land models. Its success, however, depends on unbiased model predictions and unbiased observations. Here a suite of continental-scale, synthetic soil moisture assimilation experiments is used to compare two approaches that address typical biases in soil moisture prior to data assimilation: (1) parameter estimation to calibrate the land model to the climatology of the soil moisture observations and (2) scaling of the observations to the model's soil moisture climatology. To enable this research, an optimization infrastructure was added to the NASA Land Information System (LIS) that includes gradient-based optimization methods and global, heuristic search algorithms. The land model calibration eliminates the bias but does not necessarily result in more realistic model parameters. Nevertheless, the experiments confirm that model calibration yields assimilation estimates of surface and root zone soil moisture that are as skillful as those obtained through scaling of the observations to the model's climatology. Analysis of innovation diagnostics underlines the importance of addressing bias in soil moisture assimilation and confirms that both approaches adequately address the issue.
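
    A minimal sketch of approach (2), implemented as plain quantile (CDF) matching of the observations to the model's climatology. This is a generic stand-in, not the LIS implementation, and all names and numbers are illustrative.

        import numpy as np

        def cdf_match(obs, model_clim, obs_clim):
            """Rescale observations so their climatological CDF matches the model's.

            obs        : observations to be scaled before assimilation
            obs_clim   : historical observations defining the observation climatology
            model_clim : historical model values defining the model climatology
            """
            # Empirical quantile of each obs value within the obs climatology ...
            q = np.searchsorted(np.sort(obs_clim), obs) / float(len(obs_clim))
            q = np.clip(q, 0.0, 1.0)
            # ... mapped onto the model climatology's quantile function.
            return np.quantile(model_clim, q)

        rng = np.random.default_rng(0)
        model_clim = rng.normal(0.25, 0.05, 1000)   # model soil moisture history
        obs_clim = rng.normal(0.35, 0.08, 1000)     # biased, noisier retrievals
        obs = rng.normal(0.35, 0.08, 10)
        print(cdf_match(obs, model_clim, obs_clim))  # bias-removed observations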

  4. Analytical recovery of protozoan enumeration methods: have drinking water QMRA models corrected or created bias?

    Science.gov (United States)

    Schmidt, P J; Emelko, M B; Thompson, M E

    2013-05-01

    Quantitative microbial risk assessment (QMRA) is a tool to evaluate the potential implications of pathogens in a water supply or other media and is of increasing interest to regulators. In the case of potentially pathogenic protozoa (e.g. Cryptosporidium oocysts and Giardia cysts), it is well known that the methods used to enumerate (oo)cysts in samples of water and other media can have low and highly variable analytical recovery. In these applications, QMRA has evolved from ignoring analytical recovery to addressing it in point-estimates of risk, and then to addressing variation of analytical recovery in Monte Carlo risk assessments. Often, variation of analytical recovery is addressed in exposure assessment by dividing concentration values that were obtained without consideration of analytical recovery by random beta-distributed recovery values. A simple mathematical proof is provided to demonstrate that this conventional approach to address non-constant analytical recovery in drinking water QMRA will lead to overestimation of mean pathogen concentrations. The bias, which can exceed an order of magnitude, is greatest when low analytical recovery values are common. A simulated dataset is analyzed using a diverse set of approaches to obtain distributions representing temporal variation in the oocyst concentration, and mean annual risk is then computed from each concentration distribution using a simple risk model. This illustrative example demonstrates that the bias associated with mishandling non-constant analytical recovery and non-detect samples can cause drinking water systems to be erroneously classified as surpassing risk thresholds. Copyright © 2013 Elsevier Ltd. All rights reserved.
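
    The bias the proof identifies is easy to reproduce numerically: dividing raw counts by independently drawn beta-distributed recovery values inflates the mean, because E[1/R] > 1/E[R] (Jensen's inequality). All numbers below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        true_conc = 10.0                            # oocysts per litre (arbitrary)
        recovery = rng.beta(2.0, 6.0, n)            # mean recovery 0.25
        counts = rng.poisson(true_conc * recovery)  # what enumeration reports

        # Conventional adjustment: divide each raw value by a random recovery draw.
        adjusted = counts / rng.beta(2.0, 6.0, n)
        print(adjusted.mean())        # far above 10: upward bias (E[1/R] = 7 here)
        print(counts.mean() / 0.25)   # dividing by the mean recovery instead: ~10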

  5. Dispersion-corrected energy decomposition analysis for intermolecular interactions based on the BLW and dDXDM methods.

    Science.gov (United States)

    Steinmann, Stephan N; Corminboeuf, Clemence; Wu, Wei; Mo, Yirong

    2011-06-02

    As the simplest variant of the valence bond (VB) theory, the block-localized wave function (BLW) method defines the intermediate electron-localized state self-consistently at the DFT level and can be used to explore the nature of intermolecular interactions in terms of several physically intuitive energy components. Yet, it is unclear how the dispersion interaction affects this kind of energy decomposition analysis (EDA), as standard density functional approximations neglect the long-range dispersion attraction. Three electron densities corresponding to the initial electron-localized state, the optimal electron-localized state, and the final electron-delocalized state are involved in the BLW-ED approach; a density-dependent dispersion correction, such as the recently proposed dDXDM approach, can thus uniquely probe the impact of the long-range dispersion effect on EDA results computed at the DFT level. In this paper, we incorporate the dDXDM dispersion corrections into the BLW-ED approach and investigate a range of representative systems such as hydrogen-bonding systems, acid-base pairs, and van der Waals complexes. Results show that both the polarization and charge-transfer energies are little affected by the inclusion of the long-range dispersion effect, which thus can be regarded as an independent energy component in EDA. © 2011 American Chemical Society
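
    For orientation, the resulting decomposition can be written schematically as below, with the dDXDM dispersion correction appended as an independent term; the remaining labels (monomer deformation, Heitler-London, polarization, charge transfer) follow the usual BLW-ED convention, and the paper's exact partitioning may differ.

        \Delta E_{\mathrm{int}}
          = \Delta E_{\mathrm{def}}
          + \Delta E_{\mathrm{HL}}
          + \Delta E_{\mathrm{pol}}
          + \Delta E_{\mathrm{CT}}
          + \Delta E_{\mathrm{disp}}^{\mathrm{dDXDM}}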

  6. Evaluation of ion chamber dependent correction factors for ionisation chamber dosimetry in proton beams using a Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Palmans, H. [Ghent Univ. (Belgium). Dept. of Biomedical Physics]; Verhaegen, F.

    1995-12-01

    In the last decade, several clinical proton beam therapy facilities have been developed. To satisfy the demand for uniformity in clinical (routine) proton beam dosimetry, two dosimetry protocols (ECHED and AAPM) have been published. Both protocols neglect the influence of ion chamber dependent parameters on dose determination in proton beams because of the scatter properties of these beams, although the problem has not yet been studied thoroughly. A comparison between water calorimetry and ionisation chamber dosimetry showed a discrepancy of 2.6% between the former method and ionometry following the ECHED protocol. Possibly, a small part of this difference can be attributed to chamber dependent correction factors. Indications for this possibility are found in ionometry measurements. To allow the simulation of the complex geometries with different media necessary for the study of those corrections, an existing proton Monte Carlo code (PTRAN, Berger) has been modified. The original code, which applies Molière's multiple scattering theory and Vavilov's energy straggling theory, calculates depth dose profiles, energy distributions and radial distributions for pencil beams in water. Comparisons with measurements and calculations reported in the literature are done to test the program's accuracy. Preliminary results of the influence of chamber design and chamber materials on dose-to-water determination are presented.

  7. Correction to “Control of fossil-fuel particulate black carbon and organic matter, possibly the most effective method of slowing global warming”

    National Research Council Canada - National Science Library

    Mark Z. Jacobson

    2005-01-01

    This document describes two updates and a correction that affect two figures (Figures 1 and 14) in “Control of fossil‐fuel particulate black carbon and organic matter, possibly the most effective method of slowing global warming...

  8. Matrix effect-corrected liquid chromatography/tandem mass-spectrometric method for determining acylcarnitines in human urine.

    Science.gov (United States)

    Abe, Kazuki; Suzuki, Hiroyuki; Maekawa, Masamitsu; Shimada, Miki; Yamaguchi, Hiroaki; Mano, Nariyasu

    2017-05-01

    Administration of pivalate-containing antibiotics decreases serum carnitine and increases urinary pivaloylcarnitine, resulting in hypocarnitinemia. Carnitine and acylcarnitines are important biomarkers in the diagnosis of carnitine deficiency, but the relationship between acylcarnitines and drug-induced hypocarnitinemia remains unclear. Quantification of acylcarnitines enables discovery of new biomarkers for prediction and diagnosis of drug-induced hypocarnitinemia. Here we describe a liquid chromatography/tandem mass-spectrometric method for simultaneously quantifying carnitine, 15 acylcarnitines, and cefditoren (the pivoxilated product of an antibiotic prodrug) in human urine. The matrix effect is corrected to within 87.8-103% using deuterium-labeled internal standards (²H₉-carnitine, ²H₃-hexanoylcarnitine, and ²H₃-stearoylcarnitine), and the error of the surrogate matrix method used for calibration was acceptably small. Dynamic ranges were 0.1-100 μmol/l for acylcarnitines and 0.3-300 μg/ml for cefditoren, and both accuracy and precision met the acceptance criteria. Using this method, urine samples from eight healthy volunteers (five adults and three children) were analyzed, and individual differences were clearly observed. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Integer multiplication with overflow detection or saturation

    Energy Technology Data Exchange (ETDEWEB)

    Schulte, M.J.; Balzola, P.I.; Akkas, A.; Brocato, R.W.

    2000-01-11

    High-speed multiplication is frequently used in general-purpose and application-specific computer systems. These systems often support integer multiplication, where two n-bit integers are multiplied to produce a 2n-bit product. To prevent growth in word length, processors typically return the n least significant bits of the product and a flag that indicates whether or not overflow has occurred. Alternatively, some processors saturate results that overflow to the most positive or most negative representable number. This paper presents efficient methods for performing unsigned or two's complement integer multiplication with overflow detection or saturation. These methods have significantly less area and delay than conventional methods for integer multiplication with overflow detection and saturation.
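
    A minimal sketch of two's-complement n-bit multiplication with overflow detection and optional saturation, in the spirit of the record; Python stands in for the hardware datapath, and the function name is mine.

        def mul_tc(a, b, n=32, saturate=False):
            """Two's-complement n-bit multiply returning (result, overflow_flag).

            Without saturation, the n least significant bits of the 2n-bit
            product are returned together with an overflow flag; with
            saturation, out-of-range results clamp to the representable extremes.
            """
            lo, hi = -(1 << (n - 1)), (1 << (n - 1)) - 1
            assert lo <= a <= hi and lo <= b <= hi
            full = a * b                        # the exact 2n-bit product
            overflow = not (lo <= full <= hi)
            if saturate:
                return max(lo, min(hi, full)), overflow
            wrapped = full & ((1 << n) - 1)     # keep n LSBs ...
            if wrapped > hi:
                wrapped -= 1 << n               # ... reinterpreted as signed
            return wrapped, overflow

        print(mul_tc(70_000, 70_000, n=32))                 # overflow, wrapped
        print(mul_tc(70_000, 70_000, n=32, saturate=True))  # clamps to 2**31 - 1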

  10. The effect of neuromuscular electrical stimulation on congenital talipes equinovarus following correction with the Ponseti method: a pilot study.

    Science.gov (United States)

    Gelfer, Yael; Durham, Sally; Daly, Karen; Shitrit, Reuven; Smorgick, Yossi; Ewins, David

    2010-09-01

    The Ponseti method for clubfoot treatment offers satisfactory initial correction, but success correlates with abduction brace compliance, which is variable. Electrical stimulation as a dynamic intervention to prevent relapses was investigated. Data were compared to a control group. There was a significant improvement in ankle range of motion only in the study group after short-term intervention, and a trend toward greater increase in calf circumference in this group. Parental perception was positive with no compliance issues. This study suggests stimulation is feasible with potential to increase ankle range of motion and facilitate muscle activity. It could be an important adjunct in preventing relapses, however, further studies with larger groups and longer intervention and follow-up duration are necessary.

  11. Determination of several trace elements in silicate rocks by an XRF method with background and matrix corrections

    Energy Technology Data Exchange (ETDEWEB)

    Pascual, J.

    1987-12-01

    An X-ray fluorescence method for determining trace elements in silicate rock samples was studied. The procedure focused on the application of the pertinent matrix corrections. Either the Compton peak or the reciprocal of the mass absorption coefficient of the sample was used as internal standard for this purpose. X-ray tubes with W or Cr anodes were employed, and the W Lβ and Cr Kα Compton intensities scattered by the sample were measured. The mass absorption coefficients at both sides of the absorption edge for Fe (1.658 and 1.936 Å) were calculated. The elements Zr, Y, Rb, Zn, Ni, Cr and V were determined in 15 international reference rocks covering wide ranges of concentration. Relative mean errors were in many cases less than 10%.

  12. Impact of CT attenuation correction method on quantitative respiratory-correlated (4D) PET/CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nyflot, Matthew J., E-mail: nyflot@uw.edu [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195-6043 (United States); Lee, Tzu-Cheng [Department of Bioengineering, University of Washington, Seattle, Washington 98195-6043 (United States); Alessio, Adam M.; Kinahan, Paul E. [Department of Radiology, University of Washington, Seattle, Washington 98195-6043 (United States); Wollenweber, Scott D.; Stearns, Charles W. [GE Healthcare, Waukesha, Wisconsin 53188 (United States); Bowen, Stephen R. [Department of Radiation Oncology, University of Washington, Seattle, Washington 98195-6043 and Department of Radiology, University of Washington, Seattle, Washington 98195-6043 (United States)

    2015-01-15

    Purpose: Respiratory-correlated (4D) PET/CT is used to mitigate errors from respiratory motion; however, the optimal CT attenuation correction (CTAC) method for 4D PET/CT is unknown. The authors performed a phantom study to evaluate the quantitative performance of CTAC methods for 4D PET/CT in the ground truth setting. Methods: A programmable respiratory motion phantom with a custom movable insert designed to emulate a lung lesion and lung tissue was used for this study. The insert was driven by one of five waveforms: two sinusoidal waveforms or three patient-specific respiratory waveforms. 3D PET and 4D PET images of the phantom under motion were acquired and reconstructed with six CTAC methods: helical breath-hold (3DHEL), helical free-breathing (3DMOT), 4D phase-averaged (4DAVG), 4D maximum intensity projection (4DMIP), 4D phase-matched (4DMATCH), and 4D end-exhale (4DEXH) CTAC. Recovery of SUV{sub max}, SUV{sub mean}, SUV{sub peak}, and segmented tumor volume was evaluated as RC{sub max}, RC{sub mean}, RC{sub peak}, and RC{sub vol}, representing percent difference relative to the static ground truth case. Paired Wilcoxon tests and Kruskal–Wallis ANOVA were used to test for significant differences. Results: For 4D PET imaging, the maximum intensity projection CTAC produced significantly more accurate recovery coefficients than all other CTAC methods (p < 0.0001 over all metrics). Over all motion waveforms, ratios of 4DMIP CTAC recovery were 0.2 ± 5.4, −1.8 ± 6.5, −3.2 ± 5.0, and 3.0 ± 5.9 for RC{sub max}, RC{sub peak}, RC{sub mean}, and RC{sub vol}. In comparison, recovery coefficients for phase-matched CTAC were −8.4 ± 5.3, −10.5 ± 6.2, −7.6 ± 5.0, and −13.0 ± 7.7 for RC{sub max}, RC{sub peak}, RC{sub mean}, and RC{sub vol}. When testing differences between phases over all CTAC methods and waveforms, end-exhale phases were significantly more accurate (p = 0.005). However, these differences were driven by

  13. Z-correction, a method for achieving ultraprecise self-calibration on large area coordinate measurement machines for photomasks

    Science.gov (United States)

    Ekberg, Peter; Stiblert, Lars; Mattsson, Lars

    2014-05-01

    High-quality photomasks are a prerequisite for the production of flat panel TVs, tablets and other kinds of high-resolution displays. In recent years, the demand for resolution has accelerated: today the high-definition standard (HD, 1920 × 1080 pixels²) is well established, and the next-generation so-called ultra-high-definition (UHD, or 4K) display is already entering the market. Highly advanced mask writers are used to produce the photomasks needed for the production of such displays. The dimensional tolerance in X and Y on absolute pattern placement on these photomasks, with sizes on the order of square meters, has been in the range of 200-300 nm (3σ), but is now on the way to below 150 nm (3σ). To verify these photomasks, 2D ultra-precision coordinate measurement machines are used, with even tighter tolerance requirements. The metrology tool MMS15000 is today the world-standard tool used for the verification of large area photomasks. This paper presents a method called Z-correction that has been developed for the purpose of improving the absolute X, Y placement accuracy of features on the photomask in the writing process. However, Z-correction is also a prerequisite for achieving X and Y uncertainty levels <90 nm (3σ) in the self-calibration process of the MMS15000 stage area of 1.4 × 1.5 m². For uncertainty specifications below 200 nm (3σ) over such a large area, the calibration object used, here an 8-16 mm thick quartz plate of approximately one square meter, cannot be treated as a rigid body. The reason for this is that the absolute shape of the plate will be affected by gravity and will therefore not be the same at different places on the measurement machine stage when it is used in the self-calibration process. This mechanical deformation will stretch or compress the top surface (i.e. the image side) of the plate where the pattern resides, and therefore spatially deform the mask pattern in the X- and Y-directions. Errors due

  14. Intensity correction method customized for multi-animal abdominal MR imaging with 3T clinical scanner and multi-array coil.

    Science.gov (United States)

    Mitsuda, Minoru; Yamaguchi, Masayuki; Nakagami, Ryutaro; Furuta, Toshihiro; Sekine, Norio; Niitsu, Mamoru; Moriyama, Noriyuki; Fujii, Hirofumi

    2013-01-01

    Simultaneous magnetic resonance (MR) imaging of multiple small animals in a single session increases throughput of preclinical imaging experiments. Such imaging using a 3-tesla clinical scanner with multi-array coil requires correction of intensity variation caused by the inhomogeneous sensitivity profile of the coil. We explored a method for correcting intensity that we customized for multi-animal MR imaging, especially abdominal imaging. Our institutional committee for animal experimentation approved the protocol. We acquired high resolution T₁-, T₂-, and T₂*-weighted images and low resolution proton density-weighted images (PDWIs) of 4 rat abdomens simultaneously using a 3T clinical scanner and custom-made multi-array coil. For comparison, we also acquired T₁-, T₂-, and T₂*-weighted volume coil images in the same rats in 4 separate sessions. We used software created in-house to correct intensity variation. We applied thresholding to the PDWIs to produce binary images that displayed only a signal-producing area, calculated multi-array coil sensitivity maps by dividing low-pass filtered PDWIs by low-pass filtered binary images pixel by pixel, and divided uncorrected T₁-, T₂-, or T₂*-weighted images by those maps to obtain intensity-corrected images. We compared tissue contrast among the liver, spinal canal, and muscle between intensity-corrected multi-array coil images and volume coil images. Our intensity correction method performed well for all pulse sequences studied and corrected variation in original multi-array coil images without deteriorating the throughput of animal experiments. Tissue contrasts were comparable between intensity-corrected multi-array coil images and volume coil images. Our intensity correction method customized for multi-animal abdominal MR imaging using a 3T clinical scanner and dedicated multi-array coil could facilitate image interpretation.
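
    A compact sketch of the correction pipeline as described: threshold the PDWI to a signal mask, low-pass filter both, take their ratio as the sensitivity map, and divide the weighted image by it. The filter width, threshold and names are illustrative, not the authors' in-house software.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def correct_intensity(weighted, pdw, sigma=8.0, thresh=0.1):
            """Divide a weighted image by a coil sensitivity map derived from a PDWI."""
            binary = (pdw > thresh * pdw.max()).astype(float)   # signal-producing area
            sens = gaussian_filter(pdw, sigma) / np.maximum(
                gaussian_filter(binary, sigma), 1e-6)           # low-pass ratio map
            return np.where(binary > 0, weighted / np.maximum(sens, 1e-6), 0.0)

        # Toy example: a synthetic object shaded by a smooth coil profile.
        rng = np.random.default_rng(2)
        coil = np.linspace(1.0, 0.3, 128)[None, :] * np.ones((128, 128))
        t1w = rng.uniform(0.5, 1.0, (128, 128)) * coil   # "T1-weighted" image
        pdw = 0.8 * coil                                 # proton-density image
        flat = correct_intensity(t1w, pdw)               # shading largely removed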

  15. Saturated Zone Colloid Transport

    Energy Technology Data Exchange (ETDEWEB)

    H. S. Viswanathan

    2004-10-07

    This scientific analysis provides retardation factors for colloids transporting in the saturated zone (SZ) and the unsaturated zone (UZ). These retardation factors represent the reversible chemical and physical filtration of colloids in the SZ. The value of the colloid retardation factor, R{sub col} is dependent on several factors, such as colloid size, colloid type, and geochemical conditions (e.g., pH, Eh, and ionic strength). These factors are folded into the distributions of R{sub col} that have been developed from field and experimental data collected under varying geochemical conditions with different colloid types and sizes. Attachment rate constants, k{sub att}, and detachment rate constants, k{sub det}, of colloids to the fracture surface have been measured for the fractured volcanics, and separate R{sub col} uncertainty distributions have been developed for attachment and detachment to clastic material and mineral grains in the alluvium. Radionuclides such as plutonium and americium sorb mostly (90 to 99 percent) irreversibly to colloids (BSC 2004 [DIRS 170025], Section 6.3.3.2). The colloid retardation factors developed in this analysis are needed to simulate the transport of radionuclides that are irreversibly sorbed onto colloids; this transport is discussed in the model report ''Site-Scale Saturated Zone Transport'' (BSC 2004 [DIRS 170036]). Although it is not exclusive to any particular radionuclide release scenario, this scientific analysis especially addresses those scenarios pertaining to evidence from waste-degradation experiments, which indicate that plutonium and americium may be irreversibly attached to colloids for the time scales of interest. A section of this report will also discuss the validity of using microspheres as analogs to colloids in some of the lab and field experiments used to obtain the colloid retardation factors. In addition, a small fraction of colloids travels with the groundwater without any significant

  16. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    Science.gov (United States)

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensus guideline for accounting for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" dataset derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. Nevertheless, systematic sampling of records seems to be the most efficient method of correcting sampling bias and should be advised in most cases.
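
    A minimal sketch of the best-performing correction named above, systematic sampling of occurrence records, implemented here as keeping one record per cell of a regular grid. The cell size and names are illustrative, and this is not tied to MAXENT's own API.

        import numpy as np

        def systematic_sample(lon, lat, cell=0.5, seed=None):
            """Keep at most one occurrence record per `cell`-degree grid cell."""
            rng = np.random.default_rng(seed)
            cells = {}
            for i in rng.permutation(len(lon)):      # avoid ordering artefacts
                key = (int(lon[i] // cell), int(lat[i] // cell))
                cells.setdefault(key, i)             # first (random) record wins
            return np.array(sorted(cells.values())) # indices of retained records

        rng = np.random.default_rng(3)
        lon = np.concatenate([rng.uniform(0, 1, 500),    # heavily surveyed corner
                              rng.uniform(0, 10, 50)])   # sparse elsewhere
        lat = np.concatenate([rng.uniform(0, 1, 500), rng.uniform(0, 10, 50)])
        idx = systematic_sample(lon, lat)
        print(len(idx), "records retained from", len(lon))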

  17. Direct Analysis of Low-Volatile Molecular Marker Extract from Airborne Particulate Matter Using Sensitivity Correction Method

    Directory of Open Access Journals (Sweden)

    Satoshi Irei

    2016-01-01

    Full Text Available Molecular marker analysis of environmental samples often requires time-consuming preseparation steps. Here, analysis of low-volatile nonpolar molecular markers (5-6 ring polycyclic aromatic hydrocarbons or PAHs, hopanoids, and n-alkanes) without the preseparation procedure is presented. Analysis of artificial sample extracts was conducted directly by gas chromatography-mass spectrometry (GC-MS). After every sample injection, a standard mixture was also analyzed to correct for the variation in instrumental sensitivity caused by the unfavorable matrix contained in the extract. The method was further validated for the PAHs using the NIST standard reference materials (SRMs) and then applied to airborne particulate matter samples. Tests with the SRMs showed that overall our methodology was valid to within an uncertainty of ~30%. The measurement results for airborne particulate matter (PM) filter samples showed a strong correlation between the PAHs, implying contributions from the same emission source. Analysis of size-segregated PM filter samples showed that these markers were concentrated in the PM smaller than 0.4 μm aerodynamic diameter. The observations were consistent with our expectation of their possible sources. Thus, the method was found to be useful for molecular marker studies.
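
    The bracketing-standard correction reduces to a one-line rescaling: the sample response is divided by the drift of the standard's response relative to its nominal value. The sketch below is illustrative; the names and numbers are invented.

        def sensitivity_corrected(sample_area, std_area, std_area_nominal):
            """Rescale a sample peak area by the instrument drift observed on the
            standard mixture injected right after the sample."""
            drift = std_area / std_area_nominal   # < 1 means sensitivity dropped
            return sample_area / drift

        # A matrix-suppressed run: the standard reads 20% low, so the sample
        # area is scaled back up by the same factor before quantification.
        print(sensitivity_corrected(sample_area=8.0e4, std_area=0.8e5 * 0.8,
                                    std_area_nominal=0.8e5))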

  18. Galaxy Zoo: comparing the demographics of spiral arm number and a new method for correcting redshift bias

    Science.gov (United States)

    Hart, Ross E.; Bamford, Steven P.; Willett, Kyle W.; Masters, Karen L.; Cardamone, Carolin; Lintott, Chris J.; Mackay, Robert J.; Nichol, Robert C.; Rosslowe, Christopher K.; Simmons, Brooke D.; Smethurst, Rebecca J.

    2016-10-01

    The majority of galaxies in the local Universe exhibit spiral structure with a variety of forms. Many galaxies possess two prominent spiral arms, some have more, while others display a many-armed flocculent appearance. Spiral arms are associated with enhanced gas content and star formation in the discs of low-redshift galaxies, so are important in the understanding of star formation in the local universe. As both the visual appearance of spiral structure, and the mechanisms responsible for it vary from galaxy to galaxy, a reliable method for defining spiral samples with different visual morphologies is required. In this paper, we develop a new debiasing method to reliably correct for redshift-dependent bias in Galaxy Zoo 2, and release the new set of debiased classifications. Using these, a luminosity-limited sample of ˜18 000 Sloan Digital Sky Survey spiral galaxies is defined, which are then further sub-categorized by spiral arm number. In order to explore how different spiral galaxies form, the demographics of spiral galaxies with different spiral arm numbers are compared. It is found that whilst all spiral galaxies occupy similar ranges of stellar mass and environment, many-armed galaxies display much bluer colours than their two-armed counterparts. We conclude that two-armed structure is ubiquitous in star-forming discs, whereas many-armed spiral structure appears to be a short-lived phase, associated with more recent, stochastic star-formation activity.

  19. Saturated linkage map construction in Rubus idaeus using genotyping by sequencing and genome-independent imputation

    Directory of Open Access Journals (Sweden)

    Ward Judson A

    2013-01-01

    Full Text Available Abstract Background Rapid development of highly saturated genetic maps aids molecular breeding, which can accelerate gain per breeding cycle in woody perennial plants such as Rubus idaeus (red raspberry). Recently, robust genotyping methods based on high-throughput sequencing were developed, which provide high marker density, but result in some genotype errors and a large number of missing genotype values. Imputation can reduce the number of missing values and can correct genotyping errors, but current methods of imputation require a reference genome and thus are not an option for most species. Results Genotyping by Sequencing (GBS) was used to produce highly saturated maps for a R. idaeus pseudo-testcross progeny. While low coverage and high variance in sequencing resulted in a large number of missing values for some individuals, a novel method of imputation based on maximum likelihood marker ordering from initial marker segregation overcame the challenge of missing values, and made map construction computationally tractable. The two resulting parental maps contained 4521 and 2391 molecular markers spanning 462.7 and 376.6 cM respectively over seven linkage groups. Detection of precise genomic regions with segregation distortion was possible because of map saturation. Microsatellites (SSRs) linked these results to published maps for cross-validation and map comparison. Conclusions GBS together with genome-independent imputation provides a rapid method for genetic map construction in any pseudo-testcross progeny. Our method of imputation estimates the correct genotype call of missing values and corrects genotyping errors that lead to inflated map size and reduced precision in marker placement. Comparison of SSRs to published R. idaeus maps showed that the linkage maps constructed with GBS and our method of imputation were robust, and marker positioning reliable. The high marker density allowed identification of genomic regions with segregation

  20. Evaluation of iterative reconstruction method and attenuation correction on brain dopamine transporter SPECT using anthropomorphic striatal phantom

    Directory of Open Access Journals (Sweden)

    Akira Maebatake

    2016-07-01

    Full Text Available Objective(s): The aim of this study was to determine the optimal reconstruction parameters for iterative reconstruction in different devices and collimators for dopamine transporter (DaT) single-photon emission computed tomography (SPECT). The results were compared between filtered back projection (FBP) and different attenuation correction (AC) methods. Methods: An anthropomorphic striatal phantom was filled with ¹²³I solutions at different striatum-to-background radioactivity ratios. Data were acquired using two SPECT/CT devices, equipped with a low-to-medium-energy general-purpose collimator (cameras A-1 and B-1) and a low-energy high-resolution (LEHR) collimator (cameras A-2 and B-2). The SPECT images were reconstructed both by FBP using Chang's AC and by ordered subset expectation maximization (OSEM) using both CTAC and Chang's AC; scatter correction was also performed. OSEM on cameras A-1 and A-2 included resolution recovery (RR). The images were analyzed using the specific binding ratio (SBR). Regions of interest for the background were placed on both the frontal and occipital regions. Results: The optimal number of iterations and subsets was 10i10s on camera A-1, 10i5s on camera A-2, and 7i6s on cameras B-1 and B-2. The optimal full width at half maximum of the Gaussian filter was 2.5 times the pixel size. In the comparison between FBP and OSEM, image quality was superior with OSEM reconstruction, although edge artifacts were observed on cameras A-1 and A-2. The SBR recovery of OSEM was higher than that of FBP on cameras A-1 and A-2, while no significant difference was detected on cameras B-1 and B-2. Good linearity of SBR was observed on all cameras. In the comparison between Chang's AC and CTAC, a significant correlation was observed on all cameras. The difference in the background region influenced SBR differently in Chang's AC and CTAC on cameras A-1 and B-1. Conclusion: Iterative reconstruction improved image quality on all cameras

  1. A method for simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xi [Brookhaven National Laboratory, Upton, Long Island, NY 11973 (United States); Huang, Xiaobiao, E-mail: xiahuang@slac.stanford.edu [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States)

    2016-08-21

    We propose a method to simultaneously correct linear optics errors and linear coupling for storage rings using turn-by-turn (TbT) beam position monitor (BPM) data. The independent component analysis (ICA) method is used to isolate the betatron normal modes from the measured TbT BPM data. The betatron amplitudes and phase advances of the projections of the normal modes on the horizontal and vertical planes are then extracted, which, combined with dispersion measurement, are used to fit the lattice model. The fitting results are used for lattice correction. The method has been successfully demonstrated on the NSLS-II storage ring.
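
    A schematic sketch of the mode-isolation step: FastICA applied to a mean-subtracted turn-by-turn BPM matrix, with amplitude and phase advance read off a cosine/sine-like pair of components. Pairing the two components that form one betatron mode (e.g., by their spectra) is omitted, none of this is the authors' code, and the whiten argument assumes scikit-learn >= 1.1.

        import numpy as np
        from sklearn.decomposition import FastICA

        def betatron_modes(bpm_data, n_modes=4):
            """bpm_data: (n_turns, n_bpms) TbT readings with the closed orbit removed.
            Returns temporal source signals and spatial mixing vectors."""
            ica = FastICA(n_components=n_modes, whiten="unit-variance", random_state=0)
            sources = ica.fit_transform(bpm_data)   # (n_turns, n_modes)
            return sources, ica.mixing_             # mixing_: (n_bpms, n_modes)

        def amp_phase(mixing, i, j):
            """Amplitude (~ sqrt(beta)) and phase advance at each BPM from the
            cosine/sine-like component pair (i, j) of one betatron mode."""
            amp = np.hypot(mixing[:, i], mixing[:, j])
            phase = np.unwrap(np.arctan2(mixing[:, j], mixing[:, i]))
            return amp, phase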

  4. Elevated transferrin saturation and risk of diabetes

    DEFF Research Database (Denmark)

    Ellervik, Christina; Mandrup-Poulsen, Thomas; Andersen, Henrik Ullits

    2011-01-01

    OBJECTIVE We tested the hypothesis that elevated transferrin saturation is associated with an increased risk of any form of diabetes, as well as type 1 or type 2 diabetes separately. RESEARCH DESIGN AND METHODS We used two general population studies, The Copenhagen City Heart Study (CCHS, N = 9...

  5. Two-beam interaction in saturable media

    DEFF Research Database (Denmark)

    Schjødt-Eriksen, Jens; Schmidt, Michel R.; Juul Rasmussen, Jens

    1998-01-01

    The dynamics of two coupled soliton solutions of the nonlinear Schrödinger equation with a saturable nonlinearity is investigated. It is shown by means of a variational method and by direct numerical calculations that two well-separated solitons can orbit around each other, if their initial velocity

  6. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    Energy Technology Data Exchange (ETDEWEB)

    Kamio, Y [CHUM - Notre Dame, Montreal, QC (Canada); Bouchard, H [National Physical Laboratory, Teddington, Middlesex (United Kingdom)

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by 1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or 2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple and fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots contains 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting the highest correction-factor values observed at a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors were shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.
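
    The workflow can be sketched as below. Note that the abstract does not reproduce the paper's definitions of UI and MF, so the index formulas and the fitted envelope here are placeholders that only illustrate the "index in, upper bound out" lookup.

        import numpy as np

        def flatness_indices(dose_roi):
            """Placeholder flatness indices over the detector-specific ROI of the
            symmetrised collapsed field (not the paper's exact definitions)."""
            ui = (dose_roi.max() - dose_roi.min()) / dose_roi.mean()
            mf = np.mean(np.abs(dose_roi - dose_roi.mean())) / dose_roi.mean()
            return ui, mf

        # Hypothetical fitted envelope: maximum correction magnitude vs. index,
        # as would be obtained by fitting the top of a 10 000-field scatter plot.
        ui_grid = np.linspace(0.0, 1.0, 11)
        envelope = 1.0 + 0.04 * ui_grid       # e.g. a <=2% correction at UI = 0.5

        dose_roi = np.random.default_rng(4).normal(1.0, 0.02, 200)  # toy samples
        ui, mf = flatness_indices(dose_roi)
        print("upper bound on correction factor:", np.interp(ui, ui_grid, envelope))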

  7. Soil Structure and Saturated Hydraulic Conductivity

    Science.gov (United States)

    Houskova, B.; Nagy, V.

    The role of soil structure in changes of saturated hydraulic conductivity is studied in the plough layers of texturally different soils. Three localities in the western part of Slovakia, in Zitny ostrov (Corn Island), were investigated: Kalinkovo with a light Calcaric Fluvisol (FAO 1970), Macov with a medium-heavy Calcari-mollic Fluvisol, and Jurova with a heavy Calcari-mollic Fluvisol. Soil structure was determined in both the dry and the wet state, at the scale of macro- and micro-aggregates. Saturated hydraulic conductivity was measured by the double-ring method. While the rings were being filled, the soil surface was protected against aggregate damage from falling water drops. The spatial and temporal variability of the studied parameters was evaluated. The cultivated crops were silage maize on the medium-heavy and heavy soils and colza on the light soil. The textural composition of the soil and the actual water content at the beginning of the measurement are among the major factors affecting aggregate stability and, consequently, saturated hydraulic conductivity.

  8. Systeme de fautes et correction phonetique par la methode verbo-tonale des francophones belges qui apprenent l'espagnol (Phonetic Correction and the Verbo-tonal Method for Teaching Spanish to French-speaking Belgians)

    Science.gov (United States)

    Sarmiento Padilla, Jose

    1974-01-01

    Describes experiments in the field of phonetic correction. Several techniques used at the University of Mons for teaching Spanish pronunciation to French-speaking Belgians are explained. (Text is in French.) (PMP)

  9. Quantitative 1D saturation profiles on chalk by NMR

    DEFF Research Database (Denmark)

    Olsen, Dan; Topp, Simon; Stensgaard, Anders

    1996-01-01

    Quantitative one-dimensional saturation profiles showing the distribution of water and oil in chalk core samples are calculated from NMR measurements utilizing a 1D CSI spectroscopy pulse sequence. Saturation profiles may be acquired under conditions of fluid flow through the sample. Results reveal that strong saturation gradients exist in chalk core samples after core floods, due to capillary effects. The method is useful in analysis of corefloods, e.g., for determination of capillary pressure functions.

  10. Misconceptions in Reporting Oxygen Saturation

    NARCIS (Netherlands)

    Toffaletti, John; Zijlstra, Willem G.

    2007-01-01

    BACKGROUND: We describe some misconceptions that have become common practice in reporting blood gas and CO-oximetry results. In 1980, oxygen saturation was incorrectly redefined in a report of a new instrument for analysis of hemoglobin (Hb) derivatives. Oxygen saturation (sO2) was redefined as the

  11. Hybrid Positron Emission Tomography/Magnetic Resonance Imaging: Challenges, Methods, and State of the Art of Hardware Component Attenuation Correction.

    Science.gov (United States)

    Paulus, Daniel H; Quick, Harald H

    2016-10-01

    Attenuation correction (AC) is an essential step in the positron emission tomography (PET) data reconstruction process to provide accurate and quantitative PET images. The introduction of PET/magnetic resonance (MR) hybrid systems has raised new challenges but also possibilities regarding PET AC. While in PET/computed tomography (CT) imaging, CT images can be converted to attenuation maps, MR images in PET/MR do not provide a direct relation to attenuation. For the AC of patient tissues, new methods have been suggested, for example, based on image segmentation, atlas registration, or ultrashort echo time MR sequences. Another challenge in PET/MR hybrid imaging is AC of hardware components that are placed in the PET/MR field of view, such as the patient table or various radiofrequency (RF) coils covering the body of the patient for MR signal detection. Hardware components can be categorized into 4 different groups: (1) patient table, (2) RF receiver coils, (3) radiation therapy equipment, and (4) PET and MR imaging phantoms. For rigid and stationary objects, such as the patient table and some RF coils like the head/neck coil, predefined CT-based attenuation maps stored on the system can be used for automatic AC. Flexible RF coils are not yet included in the AC process because they can vary in position as well as in shape and are not accurately detectable with the PET/MR system. This work summarizes challenges, established methods, new concepts, and the state of the art in hardware component AC in the context of PET/MR hybrid imaging. The work also gives an overview of PET/MR hardware devices, their attenuation properties, and their effect on PET quantification.

  12. Weighted Mean of Signal Intensity for Unbiased Fiber Tracking of Skeletal Muscles: Development of a New Method and Comparison With Other Correction Techniques.

    Science.gov (United States)

    Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Resinger, Christoph; Feiweier, Thorsten; Traxler, Hannes; Trattnig, Siegfried; Bogner, Wolfgang

    2017-08-01

    The aim of this study was to investigate the origin of random image artifacts in stimulated echo acquisition mode diffusion tensor imaging (STEAM-DTI), assess the role of averaging, develop an automated artifact postprocessing correction method using weighted mean of signal intensities (WMSIs), and compare it with other correction techniques. Institutional review board approval and written informed consent were obtained. The right calf and thigh of 10 volunteers were scanned on a 3 T magnetic resonance imaging scanner using a STEAM-DTI sequence. Artifacts (ie, signal loss) in STEAM-based DTI, presumably caused by involuntary muscle contractions, were investigated in volunteers and ex vivo (ie, in a human cadaver calf and a turkey leg, using the same DTI parameters as for the volunteers). An automated postprocessing artifact correction method based on the WMSI was developed and compared with previous approaches (ie, iteratively reweighted linear least squares and informed robust estimation of tensors by outlier rejection [iRESTORE]). Diffusion tensor imaging and fiber tracking metrics, using different averages and artifact corrections, were compared for region of interest- and mask-based analyses. One-way repeated measures analysis of variance with Greenhouse-Geisser correction and Bonferroni post hoc tests were used to evaluate differences among all tested conditions. Qualitative assessment (ie, image quality) of native and corrected images was performed using the paired t test. Randomly localized and shaped artifacts affected all volunteer data sets. Artifact burden during voluntary muscle contractions increased on average from 23.1% to 77.5%, but artifacts were absent ex vivo. Diffusion tensor imaging metrics (mean diffusivity, fractional anisotropy, radial diffusivity, and axial diffusivity) showed heterogeneous behavior, but stayed within the range reported in the literature. Fiber track metrics (number, length, and volume) significantly improved in both calves and thighs after artifact

  13. A simple method of correction for profile-length water-column height variations in high-resolution, shallow-water seismic data

    Science.gov (United States)

    Kim, Hyeonju; Lee, Gwang Hoon; Yi, Bo Yeon; Yoon, Youngho; Kim, Kyong-O.; Kim, Han-Joon; Lee, Sang Hoon

    2017-06-01

    In high-resolution, shallow-water seismic surveys, correction for water-column height variations caused by tides, weather, and currents is an important part of data processing. In this study, we present a very simple method of correction for profile-length (i.e., long-wavelength) water-column height variations for high-resolution seismic data using a reference bathymetric grid. First, the difference between the depth of the seafloor picked from seismic data and the bathymetry from the bathymetric grid is computed at the locations where the shot points of seismic profiles and the bathymetric grid points are collocated or closest. Then, the results are gridded and smoothed to obtain the profile-length water-column height variations for the survey area. Next, the water-column height variations for each seismic profile are extracted from the smoothed grid and converted to two-way traveltimes. The corrections for the remaining mis-ties at the intersections, computed within a circular region around each tie shot point, are added to the corrections for the water-column height variations. The final, mis-tie-corrected water-column height corrections are loaded into the SEGY trace headers of the seismic data as a total static. We applied this method to sparker data acquired from the shallow-water area off the western-central part of Korea, where the tidal range is over 7 m. The corrections for water-column height variations range from -10 to 4 m, with a median value of about -2 m. Large corrections occur locally between and near the islands, probably due to the amplification and shortening in tidal wavelength caused by rapid shoaling toward the islands.
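
    The core of the procedure (depth differencing, long-wavelength smoothing, conversion to a two-way-traveltime static) can be sketched as follows. This is a simplified single-profile illustration under our own assumptions: uniform shot spacing, a fixed water velocity of 1500 m/s, and an arbitrary smoothing window; the 2-D gridding and mis-tie adjustment steps are omitted, and all names are ours, not from the paper.

```python
# Sketch: per-shot water-column static from picked seafloor depths and a
# reference bathymetric grid sampled at (near-)collocated shot locations.
import numpy as np
from scipy.ndimage import uniform_filter1d

def water_column_static_s(seafloor_depth_m, reference_bathy_m,
                          smooth_shots=201, v_water=1500.0):
    """Two-way-traveltime static (seconds) per shot point."""
    # Depth difference at collocated or closest points.
    dz = np.asarray(seafloor_depth_m) - np.asarray(reference_bathy_m)
    # Keep only the long-wavelength (profile-length) component.
    dz_long = uniform_filter1d(dz, size=smooth_shots, mode='nearest')
    # Height variation converted to two-way traveltime through the water.
    return 2.0 * dz_long / v_water
```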

  14. A novel method for correction of temporally- and spatially-variant optical distortion in planar particle image velocimetry

    Science.gov (United States)

    Zha, Kan; Busch, Stephen; Park, Cheolwoong; Miles, Paul C.

    2016-08-01

    In-cylinder flow measurements are necessary to gain a fundamental understanding of swirl-supported, light-duty diesel engine processes for high thermal efficiency and low emissions. Planar particle image velocimetry (PIV) can be used for non-intrusive, in situ measurement of swirl-plane velocity fields through a transparent piston. In order to keep the flow unchanged from all-metal engine operation, the geometry of the transparent piston must reproduce the production-intent metal piston geometry. As a result, a temporally- and spatially-variant optical distortion is introduced to the particle images. To ensure reliable measurement of particle displacements, this work documents a systematic exploration of optical distortion quantification and a hybrid back-projection procedure that combines ray-tracing-based geometric and in situ manual back-projection approaches. The proposed hybrid back-projection method for the first time provides a time-efficient and robust way to process planar PIV measurements conducted in an optical research engine with temporally- and spatially-varying optical distortion. The method is based upon geometric ray tracing and serves as a universal tool for the correction of optical distortion with an arbitrary but axisymmetric piston crown window geometry. Analysis demonstrates that neglecting the change in optical distortion during the temporal interval between PIV laser pulses may induce a significant error in instantaneous velocity measurements. With the proposed digital dewarping method, this piston-motion-induced error can be eliminated. Uncertainty analysis with simulated particle images provides guidance on whether to back-project particle images or back-project velocity fields in order to minimize dewarping-induced uncertainties. The optimal implementation is piston-geometry-dependent. For regions with significant change in nominal magnification factor, it is recommended to apply the proposed back-projection approach to particle images prior to velocity evaluation.

  15. Statistical methods to correct for verification bias in diagnostic studies are inadequate when there are few false negatives: a simulation study

    Directory of Open Access Journals (Sweden)

    Vickers Andrew J

    2008-11-01

    Background A common feature of diagnostic research is that results for a diagnostic gold standard are available primarily for patients who are positive for the test under investigation. Data from such studies are subject to what has been termed "verification bias". We evaluated statistical methods for verification bias correction when there are few false negatives. Methods A simulation study was conducted of a screening study subject to verification bias. We compared estimates of the area under the curve (AUC) corrected for verification bias, varying both the rate and mechanism of verification. Results In a single simulated data set, varying false negatives from 0 to 4 led to verification-bias-corrected AUCs ranging from 0.550 to 0.852. Excess variation associated with low numbers of false negatives was confirmed in simulation studies and by analyses of published studies that incorporated verification bias correction. The 2.5th-97.5th centile range constituted as much as 60% of the possible range of AUCs for some simulations. Conclusion Screening programs are designed such that there are few false negatives. Standard statistical methods for verification bias correction are inadequate in this circumstance.

  16. WAter Saturation Shift Referencing (WASSR) for chemical exchange saturation transfer experiments

    Science.gov (United States)

    Kim, Mina; Gillen, Joseph; Landman, Bennett. A.; Zhou, Jinyuan; van Zijl, Peter C.M.

    2010-01-01

    Chemical exchange saturation transfer (CEST) is a contrast mechanism exploiting exchange-based magnetization transfer (MT) between solute and water protons. CEST effects compete with direct water saturation and conventional MT processes and generally can only be quantified through an asymmetry analysis of the water saturation spectrum (Z-spectrum) with respect to the water frequency, a process that is exquisitely sensitive to magnetic field inhomogeneities. Here, it is shown that direct water saturation imaging allows measurement of the absolute water frequency in each voxel, allowing proper centering of Z-spectra on a voxel-by-voxel basis independent of spatial B0 field variations. Optimal acquisition parameters for this “water saturation shift referencing” or “WASSR” approach were estimated using Monte Carlo simulations and later confirmed experimentally. The optimal ratio of the WASSR sweep width to the linewidth of the direct saturation curve was found to be 3.3–4.0, requiring a sampling of 16–32 points. The frequency error was smaller than 1 Hz at signal to noise ratios of 40 or higher. The WASSR method was applied to study glycogen, where the chemical shift difference between the hydroxyl (OH) protons and bulk water protons at 3T is so small (0.75–1.25 ppm) that the CEST spectrum is inconclusive without proper referencing. PMID:19358232
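
    As a rough illustration of the WASSR idea: the direct-saturation minimum of a densely sampled Z-spectrum gives the absolute water frequency in each voxel, which is then used to recenter the CEST Z-spectrum. The sketch below uses simple parabolic interpolation around the discrete minimum as a stand-in for the published maximum-symmetry analysis; it assumes a uniformly spaced, ascending offset grid, and all names are ours.

```python
# Sketch: per-voxel B0 shift from a WASSR sweep, then Z-spectrum recentering.
import numpy as np

def wassr_shift_hz(offsets_hz, wassr_signal):
    """Water frequency offset (Hz) from the direct-saturation minimum."""
    i = int(np.argmin(wassr_signal))
    i = min(max(i, 1), len(offsets_hz) - 2)      # keep both neighbors in range
    y0, y1, y2 = wassr_signal[i - 1], wassr_signal[i], wassr_signal[i + 1]
    denom = y0 - 2.0 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return offsets_hz[i] + frac * (offsets_hz[1] - offsets_hz[0])

def recenter_z(offsets_hz, z_spectrum, shift_hz):
    """Resample a Z-spectrum so that 0 Hz coincides with the water resonance."""
    return np.interp(offsets_hz, np.asarray(offsets_hz) - shift_hz, z_spectrum)
```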

  17. Water saturation shift referencing (WASSR) for chemical exchange saturation transfer (CEST) experiments.

    Science.gov (United States)

    Kim, Mina; Gillen, Joseph; Landman, Bennett A; Zhou, Jinyuan; van Zijl, Peter C M

    2009-06-01

    Chemical exchange saturation transfer (CEST) is a contrast mechanism that exploits exchange-based magnetization transfer (MT) between solute and water protons. CEST effects compete with direct water saturation and conventional MT processes, and generally can only be quantified through an asymmetry analysis of the water saturation spectrum (Z-spectrum) with respect to the water frequency, a process that is exquisitely sensitive to magnetic field inhomogeneities. Here it is shown that direct water saturation imaging allows measurement of the absolute water frequency in each voxel, allowing proper centering of Z-spectra on a voxel-by-voxel basis independently of spatial B0 field variations. Optimal acquisition parameters for this "water saturation shift referencing" (WASSR) approach were estimated using Monte Carlo simulations and later confirmed experimentally. The optimal ratio of the WASSR sweep width to the linewidth of the direct saturation curve was found to be 3.3-4.0, requiring a sampling of 16-32 points. The frequency error was smaller than 1 Hz at signal-to-noise ratios of 40 or higher. The WASSR method was applied to study glycogen, where the chemical shift difference between the hydroxyl (OH) protons and bulk water protons at 3T is so small (0.75-1.25 ppm) that the CEST spectrum is inconclusive without proper referencing.

  18. Comparison of pulseoximetry oxygen saturation and arterial oxygen saturation in open heart intensive care unit

    Directory of Open Access Journals (Sweden)

    Alireza Mahoori

    2013-08-01

    Background: Pulse oximetry is widely used in the critical care setting and is currently used to guide therapeutic interventions. Few studies have evaluated the accuracy of SPO2 (pulse-oximetry oxygen saturation) in the intensive care unit after cardiac surgery. Our objective was to compare pulse oximetry with arterial oxygen saturation (SaO2) during clinical routine in such patients, and to examine the effect of mild acidosis on this relationship. Methods: In an observational prospective study, 80 patients were evaluated in the intensive care unit after cardiac surgery. SPO2 was recorded and compared with SaO2 obtained by blood gas analysis. One or serial arterial blood gas analyses (ABGs) were performed via a radial artery line while a reliable pulse oximeter signal was present. One hundred thirty-seven samples were collected, and for each blood gas analysis, SaO2 and SPO2 were recorded. Results: O2 saturation as a marker of peripheral perfusion was measured by pulse oximetry (SPO2). The mean difference between arterial oxygen saturation and pulse-oximetry oxygen saturation was 0.12%±1.6%. A total of 137 paired readings demonstrated good correlation (r=0.754; P<0.0001) between changes in SPO2 and those in SaO2 in samples with normal hemoglobin. Also, in forty-seven samples with mild acidosis, paired readings demonstrated good correlation (r=0.799; P<0.0001), and the mean difference between SaO2 and SPO2 was 0.05%±1.5%. Conclusion: Data showed that in patients with stable hemodynamics and good signal quality, changes in pulse-oximetry oxygen saturation reliably predict equivalent changes in arterial oxygen saturation. Mild acidosis does not alter the relation between SPO2 and SaO2 to any clinically important extent. In conclusion, the pulse oximeter is useful to monitor oxygen saturation in patients with stable hemodynamics.

  19. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    Science.gov (United States)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the underlying theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high-intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with a precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include the heat load on the monochromator, periodic movement of the source beam, and errors in the stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed the porosity in a region within approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.

  20. Saturation current spikes eliminated in saturable core transformers

    Science.gov (United States)

    Schwarz, F. C.

    1971-01-01

    An unsaturating composite magnetic-core transformer, consisting of two separate parallel cores designed so that impending core saturation causes signal generation, terminates the high-current spike in the converter primary circuit. Simplified waveforms demonstrate the transformer's effectiveness in eliminating current spikes.

  1. Health State Monitoring of Bladed Machinery with Crack Growth Detection in BFG Power Plant Using an Active Frequency Shift Spectral Correction Method.

    Science.gov (United States)

    Sun, Weifang; Yao, Bin; He, Yuchao; Chen, Binqiang; Zeng, Nianyin; He, Wangpeng

    2017-08-09

    Power generation using waste gas is an effective and green way to reduce the emission of harmful blast furnace gas (BFG) in the pig-iron-producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information using neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method exhibits suppressed numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. With the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones, related to the fundamental working frequency of the centrifugal compressor, is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. This method proves to be a promising alternative for identifying blade cracks at early stages.
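
    For orientation, the sketch below shows the classical neighboring-bin FFT interpolation that such spectral-correction schemes build on: for a rectangularly windowed tone, the ratio of the two largest spectral bins fixes the fractional frequency offset and the scalloping loss of the amplitude. This is the textbook baseline, not the authors' active frequency shift method, and all names are ours.

```python
# Sketch: frequency/amplitude of the dominant tone via two-bin interpolation
# (rectangular window assumed; no noise handling).
import numpy as np

def dominant_harmonic(x, fs):
    n = len(x)
    spec = 2.0 * np.abs(np.fft.rfft(x)) / n      # one-sided amplitude spectrum
    k = int(np.argmax(spec[1:-1])) + 1           # skip DC and Nyquist bins
    # The larger neighbor tells on which side the true frequency lies.
    if spec[k + 1] >= spec[k - 1]:
        beta, sign = spec[k + 1] / spec[k], 1.0
    else:
        beta, sign = spec[k - 1] / spec[k], -1.0
    delta = sign * beta / (1.0 + beta)           # fractional bin offset
    # Scalloping-loss correction for the peak amplitude.
    amp = spec[k] * abs(np.pi * delta / np.sin(np.pi * delta)) if delta else spec[k]
    return (k + delta) * fs / n, amp
```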

  2. Health State Monitoring of Bladed Machinery with Crack Growth Detection in BFG Power Plant Using an Active Frequency Shift Spectral Correction Method

    Directory of Open Access Journals (Sweden)

    Weifang Sun

    2017-08-01

    Power generation using waste gas is an effective and green way to reduce the emission of harmful blast furnace gas (BFG) in the pig-iron-producing industry. Condition monitoring of mechanical structures in the BFG power plant is of vital importance to guarantee their safe and efficient operation. In this paper, we describe the detection of crack growth of bladed machinery in the BFG power plant via vibration measurement combined with an enhanced spectral correction technique. This technique enables high-precision identification of the amplitude, frequency, and phase information (the harmonic information) belonging to deterministic harmonic components within the vibration signals. Rather than deriving all harmonic information using neighboring spectral bins in the fast Fourier transform spectrum, the proposed active frequency shift spectral correction method makes use of interpolated Fourier spectral bins and has a better noise-resisting capacity. We demonstrate that the harmonic information identified via the proposed method exhibits suppressed numerical error when the same level of noise is present in the vibration signal, even in comparison with a Hanning-window-based correction method. With the proposed method, we investigated vibration signals collected from a centrifugal compressor. Spectral information of harmonic tones, related to the fundamental working frequency of the centrifugal compressor, is corrected. The extracted spectral information indicates the ongoing development of an impeller blade crack that occurred in the centrifugal compressor. This method proves to be a promising alternative for identifying blade cracks at early stages.

  3. Estimation of an image derived input function with MR-defined carotid arteries in FDG-PET human studies using a novel partial volume correction method

    DEFF Research Database (Denmark)

    Sari, Hasan; Erlandsson, Kjell; Law, Ian

    2017-01-01

    Kinetic analysis of (18)F-fluorodeoxyglucose positron emission tomography data requires accurate knowledge of the arterial input function. The gold standard method to measure the arterial input function requires collection of arterial blood samples and is an invasive method. Measuring an image derived input function is a non-invasive alternative, but is challenging due to partial volume effects caused by the limited spatial resolution of positron emission tomography scanners. In this work, a practical image derived input function extraction method is presented, which only requires segmentation of the carotid arteries from MR images. The simulation study results showed that at least 92% of the true intensity could be recovered after the partial volume correction. Results from 19 subjects showed that the mean cerebral metabolic rate of glucose calculated using arterial samples and the partial volume corrected image derived input function were 26.9 and 25.4 mg/min/100 g, respectively, for the grey matter and 7.2 and 6.7 mg/min/100 g for the white matter. No significant difference in the estimated cerebral metabolic rate of glucose values was observed between arterial samples and the corrected image derived input function.

  4. ETHNOPRED: a novel machine learning method for accurate continental and sub-continental ancestry identification and population stratification correction

    Science.gov (United States)

    2013-01-01

    Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case–control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual’s continental and sub-continental ancestry. To predict an individual’s continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control’s λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracy of

  5. ETHNOPRED: a novel machine learning method for accurate continental and sub-continental ancestry identification and population stratification correction.

    Science.gov (United States)

    Hajiloo, Mohsen; Sapkota, Yadav; Mackey, John R; Robson, Paula; Greiner, Russell; Damaraju, Sambasivarao

    2013-02-22

    Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case-control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual's continental and sub-continental ancestry. To predict an individual's continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control's λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracy of 86.5% ± 2.4%, 95.6% ± 3

  6. Comparison of empirical models and laboratory saturated hydraulic ...

    African Journals Online (AJOL)

    Numerous methods for estimating soil saturated hydraulic conductivity exist, which range from direct measurement in the laboratory to models that use only basic soil properties. A study was conducted to compare laboratory saturated hydraulic conductivity (Ksat) measurement and that estimated from empirical models.

  7. Stability and stabilization of linear systems with saturating actuators

    CERN Document Server

    Tarbouriech, Sophie; Gomes da Silva Jr, João Manoel; Queinnec, Isabelle

    2011-01-01

    Gives the reader an in-depth understanding of the phenomena caused by the more-or-less ubiquitous problem of actuator saturation. Proposes methods and algorithms designed to avoid, manage or overcome the effects of actuator saturation. Uses a state-space approach to ensure local and global stability of the systems considered. Compilation of fifteen years' worth of research results.

  8. Saturated hydraulic conductivity values of some forest soils of ...

    African Journals Online (AJOL)

    A simple falling-head method is presented for the laboratory determination of the saturated hydraulic conductivity of some forest soils of Ghana. Using the procedure, it was found that saturated hydraulic conductivity was positively and negatively correlated with sand content and clay content, respectively, both at the P = 0.05 level.

  9. Estimation of Saturation Flow Rates at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Chang-qiao Shao

    2012-01-01

    The saturation flow rate is a fundamental parameter used to measure intersection capacity and to time traffic signals. However, it has been shown that traditional methods, which are mainly developed using the average value of observed queue discharge headways to estimate the saturation headway, may underestimate the saturation flow rate. The goal of this paper is to study the stochastic nature of queue discharge headways and to develop a more accurate estimation method for the saturation headway and saturation flow rate. Based on the surveyed data, the characteristics of queue discharge headways and the estimation method for the saturation flow rate are studied. It is found that the average value of queue discharge headways is greater than the median value and that the skewness of the headways is positive. Normal distribution tests were conducted before and after a log transformation of the headways. The goodness-of-fit test showed that for some surveyed sites the queue discharge headways can be fitted by the normal distribution, and for other surveyed sites the headways can be fitted by the lognormal distribution. According to the queue discharge headway characteristics, the median value of queue discharge headways is suggested for estimating the saturation headway, and a new method of estimating saturation flow rates is developed.
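
    The headline recommendation (use the median rather than the mean of the queue discharge headways) reduces to a one-line estimate; a minimal sketch with our own variable names and the usual 3600 s/h conversion:

```python
# Sketch: saturation flow rate from field-measured queue-discharge headways.
import numpy as np

def saturation_flow_rate_vph(headways_s):
    """Vehicles per hour per lane from the median saturation headway."""
    return 3600.0 / np.median(headways_s)

# A positively skewed sample of the kind reported: the mean headway (2.25 s)
# would give 1600 veh/h, while the median (2.05 s) gives about 1756 veh/h.
print(saturation_flow_rate_vph([1.8, 1.9, 2.0, 2.1, 2.2, 3.5]))
```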

  10. The critical spot eraser - a method to interactively control the correction of local hot and cold spots in IMRT planning.

    Science.gov (United States)

    Süss, Philipp; Bortz, Michael; Küfer, Karl-Heinz; Thieke, Christian

    2013-03-21

    Common problems in inverse radiotherapy planning are localized dose insufficiencies like hot spots in organs at risk or cold spots inside targets. These are hard to correct since the optimization is based on global evaluations like maximum/minimum doses, equivalent uniform doses or dose-volume constraints for whole structures. In this work, we present a new approach to locally correct the dose of any given treatment plan. Once a treatment plan has been found that is acceptable in general but requires local corrections, these areas are marked by the planner. Then the system generates new plans that fulfil the local dose goals. Consequently, it is possible to interactively explore all plans between the locally corrected plans and the original treatment plan, allowing one to exactly adjust the degree of local correction and how the plan changes overall. Both the amount (in Gy) and the size of the local dose change can be navigated. The method is introduced formally as a new mathematical optimization setting, and is evaluated using a clinical example of a meningioma at the base of the skull. It was possible to eliminate a hot spot outside the target volume while controlling the dose changes to all other parts of the treatment plan. The proposed method has the potential to become the final standard step of inverse treatment planning.

  11. A Method of Time-Intensity Curve Calculation for Vascular Perfusion of Uterine Fibroids Based on Subtraction Imaging with Motion Correction

    Science.gov (United States)

    Zhu, Xinjian; Wu, Ruoyu; Li, Tao; Zhao, Dawei; Shan, Xin; Wang, Puling; Peng, Song; Li, Faqi; Wu, Baoming

    2016-12-01

    The time-intensity curve (TIC) from a contrast-enhanced ultrasound (CEUS) image sequence of uterine fibroids provides important parametric information for the qualitative and quantitative evaluation of the efficacy of treatments such as high-intensity focused ultrasound surgery. However, respiration and other physiological movements inevitably affect the process of CEUS imaging, and this reduces the accuracy of TIC calculation. In this study, a method of TIC calculation for vascular perfusion of uterine fibroids based on subtraction imaging with motion correction is proposed. First, the fibroid CEUS recording video was decoded into frame images based on the recording frame rate. Next, the Brox optical flow algorithm was used to estimate the displacement field and correct the motion between two frames based on a warping technique. Then, subtraction imaging was performed to extract the positional distribution of vascular perfusion (PDOVP). Finally, the average gray level of all pixels in the PDOVP of each image was determined, and this was taken as the TIC of the CEUS image sequence. Both the correlation coefficient and the mutual information of the results with the proposed method were larger than those determined using the original method. PDOVP extraction results were improved significantly after motion correction. The variance reduction rates were all positive, indicating that the fluctuations of the TIC had become less pronounced and that the calculation accuracy was improved after motion correction. The proposed method can effectively overcome the influence of motion, mainly caused by respiration, and allows precise calculation of the TIC.
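
    A rough sketch of that pipeline (motion estimation, backward warping, baseline subtraction, frame-wise averaging) is given below. We substitute OpenCV's Farneback optical flow for the Brox algorithm used in the paper, and all parameter values and names are illustrative, not the authors' settings.

```python
# Sketch: TIC from a CEUS frame sequence with motion-corrected subtraction.
import cv2
import numpy as np

def tic_from_frames(frames, baseline_idx=0):
    """frames: list of 8-bit grayscale images; returns one TIC sample per frame."""
    ref = frames[baseline_idx]
    h, w = ref.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    tic = []
    for frame in frames:
        # Dense displacement field from the reference to the current frame.
        flow = cv2.calcOpticalFlowFarneback(ref, frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Backward-warp the current frame onto the reference geometry.
        warped = cv2.remap(frame, gx + flow[..., 0], gy + flow[..., 1],
                           cv2.INTER_LINEAR)
        # Subtraction imaging: positive differences mark perfused regions.
        perfusion = cv2.subtract(warped, ref)
        tic.append(float(perfusion.mean()))
    return np.array(tic)
```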

  12. Comparing the Performance of Popular MEG/EEG Artifact Correction Methods in an Evoked-Response Study

    DEFF Research Database (Denmark)

    Haumann, Niels Trusbak; Parkkonen, Lauri; Kliuchko, Marina

    2016-01-01

    it reduces the artifacts interfering with the signal. However, ICA also adds noise, or correction errors, to the waveform when the signal-to-noise ratio (SNR) in the original data is relatively low—in particular to EEG and to MEG magnetometer data. In conclusion, ICA is recommended over SSP, but one should...

  13. The Application of the Model Correction Factor Method to a Reliability Analysis of a Composite Blade Structure

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimiroy; Friis-Hansen, Peter; Berggreen, Christian

    2009-01-01

    This paper presents a reliability analysis of a composite blade profile. The so-called Model Correction Factor technique is applied as an effective alternative to the response surface technique. The structural reliability is determined by use of a simplified idealised analytical model which…

  14. A virtual sinogram method to reduce dental metallic implant artefacts in computed tomography-based attenuation correction for PET

    NARCIS (Netherlands)

    Abdoli, Mehrsima; Ay, Mohammad Reza; Ahmadian, Alireza; Zaidi, Habib

    Objective Attenuation correction of PET data requires accurate determination of the attenuation map (mu map), which represents the spatial distribution of linear attenuation coefficients of different tissues at 511 keV. The presence of high-density metallic dental filling material in head and neck

  15. Analysing saturable antibody binding based on serum data and pharmacokinetic modelling

    Science.gov (United States)

    Kletting, Peter; Kiryakos, Hady; Reske, Sven N.; Glatting, Gerhard

    2011-01-01

    In radioimmunotherapy, organ dose calculations are frequently based on pretherapeutic biodistribution measurements, assuming equivalence between pretherapeutic and therapeutic biodistribution. However, when saturation of antibody binding sites is important, this assumption might not be justified. Residual antibody and different amounts of administered antibody may lead to a considerably altered therapeutic biodistribution. In this study we developed a method based on serum activity measurements to investigate this effect in radioimmunotherapy with 90Y-labelled anti-CD66 antibody. Pretherapeutic and therapeutic serum activity data of ten patients with acute leukaemia were fitted to a set of four parsimonious pharmacokinetic models. All models included the key mechanisms of antibody binding, immunoreactivity and degradation; however, they differed with respect to linear or nonlinear binding and global or individual fitting of the model parameters. The empirically most supported model was chosen according to the corrected Akaike information criterion. The nonlinear models were most supported by the data (sum of probabilities ≈100%). Using the presented method, we identified relevant saturable binding for radioimmunotherapy with 90Y-labelled anti-CD66 antibody solely based on serum data. This general method may also be applicable to investigate other systems where saturation of binding sites might be important.
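
    The model-selection step rests on the corrected Akaike information criterion (AICc) and Akaike weights (the "sum of probabilities" quoted above). A minimal sketch under a Gaussian least-squares assumption; the variable names are ours, not from the paper:

```python
# Sketch: AICc-based comparison of candidate pharmacokinetic models.
import numpy as np

def aicc(rss, n_points, n_params):
    """Corrected AIC from the residual sum of squares of a fit."""
    aic = n_points * np.log(rss / n_points) + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / (n_points - n_params - 1)

def akaike_weights(aicc_values):
    """Relative empirical support (probabilities) for each candidate model."""
    d = np.asarray(aicc_values) - np.min(aicc_values)
    w = np.exp(-0.5 * d)
    return w / w.sum()
```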

  16. Contributions of different bias-correction methods and reference meteorological forcing data sets to uncertainty in projected temperature and precipitation extremes

    Science.gov (United States)

    Iizumi, Toshichika; Takikawa, Hiroki; Hirabayashi, Yukiko; Hanasaki, Naota; Nishimori, Motoki

    2017-08-01

    The use of different bias-correction methods and global retrospective meteorological forcing data sets as the reference climatology in the bias correction of general circulation model (GCM) daily data is a known source of uncertainty in projected climate extremes and their impacts. Despite their importance, limited attention has been given to these uncertainty sources. We compare 27 projected temperature and precipitation indices over 22 regions of the world (including the global land area) in the near (2021-2060) and distant future (2061-2100), calculated using four Representative Concentration Pathways (RCPs), five GCMs, two bias-correction methods, and three reference forcing data sets. To widen the variety of forcing data sets, we developed a new forcing data set, S14FD, and incorporated it into this study. The results show that S14FD is more accurate than other forcing data sets in representing the observed temperature and precipitation extremes in recent decades (1961-2000 and 1979-2008). The use of different bias-correction methods and forcing data sets contributes more to the total uncertainty in the projected precipitation index values in both the near and distant future than the use of different GCMs and RCPs. However, GCM appears to be the most dominant uncertainty source for projected temperature index values in the near future, and RCP is the most dominant source in the distant future. Our findings encourage climate risk assessments, especially those related to precipitation extremes, to employ multiple bias-correction methods and forcing data sets in addition to using different GCMs and RCPs.

  17. Correction for Patient Sway in Radiographic Biplanar Imaging for Three-Dimensional Reconstruction of the Spine: In Vitro Study of a New Method

    Energy Technology Data Exchange (ETDEWEB)

    Legaye, J. (Dept. of Orthopedic Surgery, Univ. of Louvain - Mont-Godinne, Yvoir (Belgium)); Saunier, P.; Dumas, R. (Univ. of Lyon 1 - INRETS, Villeurbanne (France)); Vallee, C. (Radiology Dept., Hôpital Raymond Poincaré, Garches (France))

    2009-08-15

    Background: Three-dimensional (3D) reconstructions of the spine in the upright position are classically obtained using two-dimensional, non-simultaneous radiographic imaging. However, a subject's sway between exposures induces inaccuracy in the 3D reconstructions. Purpose: To evaluate the impact of patient sway between successive radiographic exposures, and to test whether 3D reconstruction accuracy can be improved by a corrective method with simultaneous Moiré and X-ray imaging. Material and Methods: Using a calibrated deformable phantom perceptible by both techniques (Moiré and X-ray), the 3D positional and rotational vertebral data from 3D reconstructions with and without the corrective procedure were compared to the corresponding data from computed tomography (CT) scans, considered as a reference. All were expressed in the global axis system, as defined by the Scoliosis Research Society. Results: When a sagittal sway of 10 deg occurred between successive biplanar X-rays, the accuracy of the 3D reconstruction without correction was 8.8 mm for the anteroposterior vertebral locations and 6.4 deg for the sagittal orientations. When the corrective method was applied, the accuracy improved to 1.3 mm and 1.5 deg, respectively. Conclusion: 3D accuracy improved significantly by using the corrective method, whatever the subject's sway. This technique is reliable for clinical appraisal of the spine if the subject's sway does not exceed 10 deg. For greater sway, the improvement persists, but a risk of reduced accuracy remains.

  18. Novel, cyclic heat dissipation method for the correction of natural temperature gradients in sap flow measurements. Part 1. Theory and application.

    Science.gov (United States)

    Lubczynski, Maciek W; Chavarro-Rincon, Diana; Roy, Jean

    2012-07-01

    A natural temperature gradient (NTG) can be a significant problem in thermal sap flow measurements, particularly in dry environments with sparse vegetation. To resolve this problem, we propose a novel correction method called cyclic heat dissipation (CHD) in its thermal dissipation probe (TDP) application. The CHD method is based on measurements with a cyclic, switching ON/OFF power scheme and a three-exponential model that extrapolates the measured signal to steady-state thermal equilibrium. The extrapolated OFF signal represents the NTG, whereas the extrapolated ON signal represents the standard TDP signal biased by the NTG. Therefore, subtracting the OFF signal from the ON signal yields the unbiased TDP signal, which is finally processed according to the standard Granier calibration. The in vivo Kalahari measurements were carried out in three steps on four different tree species, first as NTG, then as standard TDP and finally in CHD mode, each step for ∼1-2 days. Afterwards, each tree was separated from its stem following a modified Roberts (1977) procedure, and CHD verification was applied. The typical NTG, varying from ∼0.5 °C during night-time to -1 °C during day-time, after CHD correction resulted in a significant reduction of sap flux densities (Jp) as compared with the standard TDP, particularly distinct for low Jp. The verification of the CHD method indicated ∼20% agreement with the reference method, largely dependent on the sapwood area estimate. The proposed CHD method offers the following advantages: (i) in contrast to any other NTG correction method, it removes the NTG bias from the measured signal by using the in situ signal extrapolated to thermal equilibrium; (ii) it does not need any specific calibration, making use of the standard Granier calibration; (iii) it provides a physical background for the proposed NTG correction; (iv) it allows for power savings; (v) it is not tied to the TDP, and so can be adapted to other thermal methods. In its current state, the CHD data…
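
    The extrapolation at the heart of the CHD method can be sketched as a three-exponential fit whose asymptote is taken as the thermally equilibrated signal, once for the ON half-cycle and once for the OFF half-cycle. The initial guesses and names below are illustrative placeholders, not the calibrated values from the paper.

```python
# Sketch: extrapolate one ON or OFF half-cycle to steady-state equilibrium.
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, s_inf, a1, tau1, a2, tau2, a3, tau3):
    """Three-exponential relaxation toward the asymptote s_inf."""
    return (s_inf + a1 * np.exp(-t / tau1)
                  + a2 * np.exp(-t / tau2)
                  + a3 * np.exp(-t / tau3))

def steady_state(t_s, signal):
    p0 = [signal[-1], 0.1, 10.0, 0.1, 60.0, 0.1, 300.0]   # rough guesses
    popt, _ = curve_fit(tri_exp, t_s, signal, p0=p0, maxfev=20000)
    return popt[0]

# NTG-free signal for one cycle:
#   dT = steady_state(t_on, s_on) - steady_state(t_off, s_off)
```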

  19. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    Science.gov (United States)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulation results with respect to observations is discussed in detail. Non-linear statistical bias correction is a suitable bias correction method for climate change data because it is simple and does not add artificial uncertainty to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method; the two are comparable. The bias correction is applied to temperature and precipitation variables for present-climate and future projected data, to make use of it in a simple statistical model to understand future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for the Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project changes in agricultural yield over India under the RCP4.5 and RCP8.5 scenarios. The results revealed a better…
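
    A minimal sketch of the percentile-based quantile mapping described above: each model value is assigned its percentile under the model's baseline climatology and replaced by the observed value at the same percentile. The Weibull-CDF variant would substitute fitted Weibull distributions for the empirical quantiles; all names here are ours.

```python
# Sketch: empirical quantile-mapping bias correction of GCM output.
import numpy as np

def quantile_map(model_baseline, observations, model_values):
    """Bias-correct model_values against an observed reference climatology."""
    q = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_baseline, q)   # model percentile curve
    obs_q = np.quantile(observations, q)       # observed percentile curve
    # Percentile of each value under the model climatology ...
    p = np.interp(model_values, model_q, q)
    # ... mapped onto the observed distribution.
    return np.interp(p, q, obs_q)
```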

  20. Use Residual Correction Method and Monotone Iterative Technique to Calculate the Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Chi-Chang Wang

    2013-09-01

    This paper seeks to use the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem with inequality constraints; and finally, based on the residual correction concept, the complex constrained solution problem is transformed into a simpler problem of equational iteration. As verified by the four examples given in this paper, the proposed method can be used to quickly obtain the upper and lower solutions of problems of this kind and to easily identify the error range between the mean approximate solution and the exact solution.

  1. The accuracy of the Hewlett-Packard 47201A ear oximeter below 50% saturation.

    Science.gov (United States)

    Stradling, J R

    1982-01-01

    The Hewlett-Packard 47201A ear oximeter has been shown to measure arterial oxygen saturation (SaO2) above 50% SaO2 with an accuracy of ±5%. Below 50% SaO2, the oximeter underestimates arterial saturation in a predictable way, thus allowing a correction factor to be used: true SaO2 = (oximeter reading + 50)/2.
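
    The published correction is a one-liner; a worked instance (our wrapper, valid only below 50% SaO2):

```python
# Below 50% SaO2 this oximeter reads low in a predictable way.
def corrected_sao2(oximeter_reading_pct):
    return (oximeter_reading_pct + 50.0) / 2.0

print(corrected_sao2(40.0))  # a reading of 40% corresponds to 45% true SaO2
```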

  2. Theory of graphene saturable absorption

    Science.gov (United States)

    Marini, A.; Cox, J. D.; García de Abajo, F. J.

    2017-03-01

    Saturable absorption is a nonperturbative nonlinear optical phenomenon that plays a pivotal role in the generation of ultrafast light pulses. Here we show that this effect emerges in graphene at unprecedentedly low light intensities, thus opening avenues to new nonlinear physics and applications in optical technology. Specifically, we theoretically investigate saturable absorption in extended graphene by developing a semianalytical nonperturbative single-particle approach, describing electron dynamics in the atomically-thin material using the two-dimensional Dirac equation for massless Dirac fermions, which is recast in the form of generalized Bloch equations. By solving the electron dynamics nonperturbatively, we account for both interband and intraband contributions to the intensity-dependent saturated conductivity and conclude that the former dominates regardless of the intrinsic doping state of the material. We obtain results in qualitative agreement with atomistic quantum-mechanical simulations of graphene nanoribbons including electron-electron interactions, finite-size, and higher-band effects. Remarkably, such effects are found to affect mainly the linear absorption, while the predicted saturation intensities are in good quantitative agreement in the limit of extended graphene. Additionally, we find that the modulation depth of saturable absorption in graphene can be electrically manipulated through an externally applied gate voltage. Our results are relevant for the development of graphene-based optoelectronic devices, as well as for applications in mode-locking and random lasers.

  3. Method for Correction of Consequences of Radiation-Induced Heart Disease using Low-Intensity Electromagnetic Emission under Experimental Conditions.

    Science.gov (United States)

    Bavrina, A P; Monich, V A; Malinovskaya, S L; Yakovleva, E I; Bugrova, M L; Lazukin, V F

    2015-05-01

    The effects of successive exposure to ionizing irradiation and low-intensity broadband red light on the electrical activity of the heart and the myocardium microstructure were studied in rats. Low-intensity red light corrected some ECG parameters; in particular, it normalized the QT and QTc intervals and the voltage of the R and T waves. Changes in ECG parameters were followed by alterations in the microstructure of muscle filaments in the myocardium of treatment-group animals compared with the control group.

  4. Using Fuzzy Optimisation Method in Calculation of Charge Burden to Correct the Chemical Composition of Metal Melt

    Directory of Open Access Journals (Sweden)

    E. Ziółkowski

    2007-07-01

    The article describes a mathematical model of an algorithm used in the calculation of the charge burden to correct a misadjusted chemical composition of a metal melt. The model assumes that the charge materials covered by the calculations are characterised by a fuzzy (uncertain) chemical composition. The model also assumes different yields of chemical elements from the metal melt and the charge materials. The discussion is completed with an example of calculations illustrating the practical application of the algorithm.

  5. SU-C-201-06: Small Field Correction Factors for the MicroDiamond Detector in the Gamma Knife-Model C Derived Using Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, J C [Wayne State University, Detroit, MI (United States); Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI (United States); Knill, C [Wayne State University, Detroit, MI (United States); Beaumont Hospital, Canton, MI (United States)

    2016-06-15

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes.
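
    For reference, the k factor quoted above is defined in the Alfonso small-field formalism as the ratio of dose-to-water per detector reading in the clinical field (clin) to that in the machine-specific reference field (msr):

```latex
k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}
  \;=\;
  \frac{D_{w,Q_\mathrm{clin}}^{f_\mathrm{clin}} \big/ M_{Q_\mathrm{clin}}^{f_\mathrm{clin}}}
       {D_{w,Q_\mathrm{msr}}^{f_\mathrm{msr}} \big/ M_{Q_\mathrm{msr}}^{f_\mathrm{msr}}}
```

    On this definition an over-responding detector has k < 1, which is consistent with the reported 4 mm result: a 3.2% over-response gives k = 1/1.032 ≈ 0.969.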

  6. Comparison of Postoperative Height Changes of the Second Metatarsal Among 3 Osteotomy Methods for Hallux Valgus Deformity Correction.

    Science.gov (United States)

    Choi, Jun Young; Suh, Yu Min; Yeom, Ji Woong; Suh, Jin Soo

    2017-01-01

    We aimed to compare the postoperative height of the second metatarsal head relative to the first metatarsal head using axial radiographs among 3 different commonly used osteotomy techniques: proximal chevron metatarsal osteotomy (PCMO), scarf osteotomy, and distal chevron metatarsal osteotomy (DCMO). We retrospectively reviewed the radiographs and clinical findings of the patients with painful callosities under the second metatarsal head, complicated by hallux valgus, who underwent isolated PCMO, scarf osteotomy, or DCMO from February 2005 to January 2015. Each osteotomy was performed with 20 degrees of plantar ward obliquity. Along with lateral translation and rotation of the distal fragment to correct the deformity, lowering of the first metatarsal head was made by virtue of the oblique metatarsal osteotomy. Significant postoperative change in the second metatarsal height was observed on axial radiographs in all groups; this value was greatest in the PCMO group (vs scarf: P = .013; vs DCMO: P = .008) but did not significantly differ between the scarf and DCMO groups (P = .785). The power for second metatarsal height correction was significantly greater in the PCMO group (vs scarf: P = .0005; vs DCMO: P = .0005) but did not significantly differ between the scarf and DCMO groups (P = .832). Among the 3 osteotomy techniques commonly used to correct hallux valgus deformity, we observed that PCMO yielded the most effective height change of the second metatarsal head. Level III, retrospective comparative series.

  7. Photovoltaic yield: correction method for the mismatch between the solar spectrum and the reference ASTMG AM1.5G spectrum

    Directory of Open Access Journals (Sweden)

    Mambrini Thomas

    2015-01-01

    We propose a method for the spectral correction of the predicted PV yield, and we show the importance of the spectral mismatch on the solar cell. Indeed, current predictions of PV yield consider solar irradiation, ambient temperature, incidence angle, and only partially (or not at all) the solar spectrum. However, the solar spectrum is not always the same. It varies depending on the site location, atmospheric conditions, time of day... This may impact photovoltaic solar cells differently according to their technology (crystalline silicon, thin film, multi-junctions...). This paper presents a method for calculating the correction of the short-circuit current of a photovoltaic cell due to the mismatch of the solar spectrum with the reference ASTM AM1.5G spectrum, for a specific site, throughout the year, using monthly data from AERONET (AErosol RObotic NETwork, established by NASA and CNRS) and the model SMARTS (Simple Model for Atmospheric Transmission of Sunshine, developed by NREL). We applied this correction method to the site of Palaiseau (France, 48.7°N, 2.2°E, 156 m), close to our laboratory, just for comparison, and the example of Blida (Algeria, 36°N, 2°E, 230 m) is given for one year. This example illustrates the importance of this spectral correction for better estimating the photovoltaic yield. To be more precise, instead of modeling the solar spectral distribution, one can measure it with a spectro-radiometer and then derive the spectral mismatch correction. Some of our typical measurements are presented in this paper.
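
    One common way to quantify this mismatch is the spectral factor: the cell's short-circuit current per unit broadband irradiance under the actual spectrum, relative to the same quantity under the ASTM AM1.5G reference. The sketch below uses our own names and assumes a common wavelength grid for the two spectral irradiances and the cell's spectral response; it is an illustration of the general idea, not the paper's exact procedure.

```python
# Sketch: spectral correction factor for the short-circuit current.
import numpy as np

def spectral_factor(wl_nm, e_actual, e_ref, sr_cell):
    """> 1 means the actual spectrum favors this cell relative to AM1.5G."""
    isc_per_irr_actual = (np.trapz(e_actual * sr_cell, wl_nm)
                          / np.trapz(e_actual, wl_nm))
    isc_per_irr_ref = (np.trapz(e_ref * sr_cell, wl_nm)
                       / np.trapz(e_ref, wl_nm))
    return isc_per_irr_actual / isc_per_irr_ref
```

    The spectrally corrected current is then the reference-rated current scaled by this factor and by the measured broadband irradiance.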

  8. Comparison of local International Sensitivity Index calibration and 'Direct INR' methods in correction of locally reported International Normalized Ratios: an international study.

    Science.gov (United States)

    Poller, L; Keown, M; Ibrahim, S; van den Besselaar, A M H P; Roberts, C; Stevenson, K; Tripodi, A; Pattison, A; Jespersen, J

    2007-05-01

    It is no longer feasible to check local International Normalized Ratios (INR) by the World Health Organization International Sensitivity Index (ISI) calibrations because the manual prothrombin time technique required has generally been discarded. An international collaborative study at 77 centers has compared local INR correction using the two alternative methods recommended in the Scientific and Standardization Committee of the International Society on Thrombosis and Haemostasis guidelines: local ISI calibration and 'Direct INR'. The success of INR correction by local ISI calibration and by Direct INR was assessed with a set of 27 certified lyophilized plasmas (20 from patients on warfarin and seven from normal subjects). At 49 centers using human thromboplastins, the initial 3.0% average local INR deviation from the certified INR was reduced by local ISI calibration to 0.7%, and at 25 centers using rabbit reagents, from 15.9% to 7.5%. With a minority of commercial thromboplastins, mainly 'combined' rabbit reagents, INR correction was not achieved by local ISI calibration. However, when rabbit combined reagents were excluded, the overall mean INR deviation after correction was reduced further to 3.9%. In contrast, with Direct INR, the mean deviation using human thromboplastins increased from 3.0% to 6.6%, but there was some reduction with rabbit reagents, from 15.9% to 10% (12.3% with combined reagents excluded). Local ISI calibration gave INR correction for the majority of PT systems but failed for the small number using combined rabbit reagents, suggesting a need for a combined reference thromboplastin. Direct INR correction was disappointing but better than local ISI calibration with combined rabbit reagents. Interlaboratory variability was improved by both procedures with human reagents only.

  9. Estimation of an image derived input function with MR-defined carotid arteries in FDG-PET human studies using a novel partial volume correction method.

    Science.gov (United States)

    Sari, Hasan; Erlandsson, Kjell; Law, Ian; Larsson, Henrik Bw; Ourselin, Sebastien; Arridge, Simon; Atkinson, David; Hutton, Brian F

    2017-04-01

    Kinetic analysis of 18F-fluorodeoxyglucose positron emission tomography data requires accurate knowledge of the arterial input function. The gold standard method to measure the arterial input function requires collection of arterial blood samples and is an invasive method. Measuring an image derived input function is a non-invasive alternative but is challenging due to partial volume effects caused by the limited spatial resolution of positron emission tomography scanners. In this work, a practical image derived input function extraction method is presented, which only requires segmentation of the carotid arteries from MR images. The simulation study results showed that at least 92% of the true intensity could be recovered after the partial volume correction. Results from 19 subjects showed that the mean cerebral metabolic rate of glucose calculated using arterial samples and the partial volume corrected image derived input function were 26.9 and 25.4 mg/min/100 g, respectively, for the grey matter and 7.2 and 6.7 mg/min/100 g for the white matter. No significant difference in the estimated cerebral metabolic rate of glucose values was observed between arterial samples and the corrected image derived input function (p > 0.12 for grey matter and white matter). Hence, the presented image derived input function extraction method can be a practical alternative for noninvasively analyzing dynamic 18F-fluorodeoxyglucose data without the need for blood sampling.

  10. Determination of the activity of a molecular solute in saturated solution

    Energy Technology Data Exchange (ETDEWEB)

    Nordström, Fredrik L. [Department of Chemical Engineering and Technology, Royal Institute of Technology, 100 44 Stockholm (Sweden)]; Rasmuson, Åke C. [Department of Chemical Engineering and Technology, Royal Institute of Technology, 100 44 Stockholm (Sweden)], E-mail: rasmuson@ket.kth.se

    2008-12-15

    Prediction of the solubility of a solid molecular compound in a solvent, as well as estimation of the solution activity coefficient from experimental solubility data, both require estimation of the activity of the solute in the saturated solution. The activity of the solute in the saturated solution is often defined using the pure melt at the same temperature as the thermodynamic reference. In the chemical engineering literature, the activity of the solid is also usually defined on the same reference state. However, far below the melting temperature the properties of this reference state cannot be determined experimentally, and different simplifications and approximations are normally adopted. In the present work, a novel method is presented to determine the activity of the solute in the saturated solution (= ideal solubility) and the heat capacity difference between the pure supercooled melt and the solid. The approach is based on rigorous thermodynamics, using standard experimental thermodynamic data at the melting temperature of the pure compound and solubility measurements in different solvents at various temperatures. The method is illustrated using data for ortho-, meta-, and para-hydroxybenzoic acid, salicylamide and paracetamol. The results show that complete neglect of the heat capacity terms may lead to estimations of the activity that are incorrect by a factor of 12. Other commonly used simplifications may lead to estimations that are only one-third of the correct value.
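
    The reference-state bookkeeping behind this is compact. Referenced to the pure supercooled melt, the activity of the solute in the saturated solution follows from the fusion properties; with a temperature-independent ΔCp (our simplification for display, not the paper's full treatment) it reads:

```latex
\ln a(T) \;=\; -\,\frac{\Delta H_m}{R}\left(\frac{1}{T}-\frac{1}{T_m}\right)
           \;+\; \frac{\Delta C_p}{R}\left(\frac{T_m}{T}-1-\ln\frac{T_m}{T}\right)
```

    where ΔHm is the enthalpy of fusion at the melting temperature Tm and ΔCp is the heat capacity difference between the supercooled melt and the solid; "complete neglect of the heat capacity terms" amounts to dropping the second term.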

  11. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function.

    Science.gov (United States)

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-07

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of +/-30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated
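
    A minimal sketch of the EM (Richardson-Lucy) update with a correction-matrix-based stopping rule is given below. For brevity it assumes a spatially invariant Gaussian PSF, whereas the method above uses Gaussian widths that vary with radial position; the sigma and tolerance values are illustrative choices, not the paper's.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def em_pvc(image, sigma_vox=2.0, tol=0.35, max_iter=50):
            # Richardson-Lucy / EM deconvolution; the Gaussian PSF is symmetric,
            # so the blur operator serves as its own adjoint.
            blur = lambda x: gaussian_filter(x, sigma_vox)
            estimate = image.astype(float).copy()
            for _ in range(max_iter):
                blurred = blur(estimate) + 1e-12
                correction = blur(image / blurred)   # EM correction matrix
                estimate *= correction
                # Stop once the net change encoded in the correction matrix
                # falls below the tolerance (cf. the stopping criterion above).
                if np.max(np.abs(correction - 1.0)) < tol:
                    break
            return estimate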

  12. Inductance identification of an induction machine taking load-dependent saturation into account

    OpenAIRE

    Ranta, Mikaela; Hinkkanen, Marko; Luomi, Jorma

    2008-01-01

    The paper proposes an identification method for the inductances of induction machines based on signal injection. Due to magnetic saturation, a saturation-induced saliency appears in the induction motor, and the total leakage inductance estimate depends on the angle of the excitation signal. The proposed identification method is based on a small-signal model that includes the saturation-induced saliency. Because of the saturation, the load also affects the estimate, and measurements are needed...
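
    As an illustration of fitting a saturation-induced saliency (not the authors' estimator), the sketch below takes leakage inductance estimates measured at several excitation angles and fits the small-signal form L(theta) = L0 + L2*cos(2*(theta - phi)) by linear least squares; all numerical values are synthetic.

        import numpy as np

        def fit_saliency(theta, L):
            # Linear least squares in the basis {1, cos(2*theta), sin(2*theta)}.
            A = np.column_stack([np.ones_like(theta),
                                 np.cos(2.0 * theta),
                                 np.sin(2.0 * theta)])
            (L0, La, Lb), *_ = np.linalg.lstsq(A, L, rcond=None)
            return L0, np.hypot(La, Lb), 0.5 * np.arctan2(Lb, La)

        # Synthetic leakage inductance estimates, e.g. L ~ U / (omega * I) from
        # the injected voltage and current amplitudes at each excitation angle.
        theta = np.linspace(0.0, np.pi, 12, endpoint=False)
        L_meas = 2.3e-3 + 0.2e-3 * np.cos(2.0 * (theta - 0.4))
        print(fit_saliency(theta, L_meas))   # ~ (2.3e-3, 0.2e-3, 0.4)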

  13. [Monitoring of jugular venous oxygen saturation].

    Science.gov (United States)

    Nakamura, Shunsuke

    2011-04-01

    The continuous monitoring of jugular venous oxygen saturation (SjO2) has become a practical method for monitoring global cerebral oxygenation and metabolism. SjO2 reflects the balance between cerebral blood flow and the cerebral metabolic rate for oxygen (CMRO2), provided that arterial oxyhemoglobin saturation and hemoglobin concentration remain constant. Normal SjO2 values range between 55% and 75%. A low SjO2 indicates cerebral hypoperfusion or ischemia; conversely, an increased SjO2 indicates either cerebral hyperemia or a disorder that decreases CMRO2. SjO2 monitoring is thus considered an integral part of multimodality monitoring for minimizing secondary brain damage following resuscitation from cardiopulmonary arrest, and can provide important information for the management of patients in neurointensive care.
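
    The flow-metabolism balance behind SjO2 can be made concrete with the Fick principle, CMRO2 = CBF x (CaO2 - CjvO2): with a simplified oxygen content model (dissolved oxygen neglected), SjO2 is approximately SaO2 - CMRO2/(CBF x 1.34 x Hb). The Python sketch below evaluates this relation; the input values are illustrative, not clinical guidance.

        def sjo2(sao2, hb_g_dl, cbf_ml_100g_min, cmro2_ml_100g_min):
            # O2 carrying capacity, mL O2 per mL blood at full saturation
            # (1.34 mL O2 per g of haemoglobin; dissolved O2 neglected).
            o2_capacity = 1.34 * hb_g_dl / 100.0
            # Fick: CMRO2 = CBF * (CaO2 - CjvO2)
            #   =>  SjO2 = SaO2 - CMRO2 / (CBF * capacity)
            return sao2 - cmro2_ml_100g_min / (cbf_ml_100g_min * o2_capacity)

        # Normal-range example: SaO2 0.98, Hb 14 g/dL, CBF 50 mL/100 g/min,
        # CMRO2 3.3 mL/100 g/min -> SjO2 of roughly 0.63, inside 55-75%.
        print(sjo2(0.98, 14.0, 50.0, 3.3))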

  14. [The improved design of encoding mask and the correcting method for recovered spectral images in Hadamard transform spectral imager based on DMD].

    Science.gov (United States)

    Xu, Jun; Xie, Cheng-Wang; Liu, Hai-Wen; Liu, Qiang; Li, Bin-Cheng

    2013-05-01

    A novel type of DMD-based Hadamard transform spectral imager is introduced. Taking the 7-order S-matrix as an example, the present paper develops an improved design of the Hadamard encoding mask which ensures that the dispersed spectrum of every pixel is encoded by exactly seven elements. A correcting method for the recovered spectral images is proposed, and six high-quality spectral images are ultimately obtained when the Hadamard transform spectral imager operates with the 7-order S-matrix. Similarly, if the spectral imager operates with an n-order S-matrix, n-1 spectral images can be obtained. The experimental results show that the improved design and the correction method are feasible and effective.
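
    The S-matrix encoding and recovery underlying such an imager can be sketched compactly: an order-n S-matrix built from cyclic shifts of an m-sequence has the closed-form inverse S^-1 = 2/(n+1) * (2*S^T - J), where J is the all-ones matrix. The Python sketch below demonstrates this for n = 7 with illustrative spectral intensities; it is not the instrument's processing code.

        import numpy as np

        # Order-7 S-matrix from cyclic shifts of a length-7 m-sequence.
        seq = np.array([1, 1, 1, 0, 1, 0, 0])
        n = len(seq)
        S = np.array([np.roll(seq, k) for k in range(n)])

        # Closed-form inverse of an S-matrix: S^-1 = 2/(n+1) * (2*S^T - J).
        S_inv = 2.0 / (n + 1) * (2 * S.T - np.ones((n, n)))
        assert np.allclose(S_inv @ S, np.eye(n))

        spectrum = np.array([3.0, 5.0, 2.0, 7.0, 1.0, 4.0, 6.0])  # illustrative
        measurements = S @ spectrum        # one multiplexed reading per mask pattern
        recovered = S_inv @ measurements   # Hadamard-transform recovery
        print(np.allclose(recovered, spectrum))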

  15. A comparative study of the vibrational corrections for the dynamic electric properties of the LiF molecule using numerical and perturbation methods

    Science.gov (United States)

    Pessoa, Renato; Castro, Marcos A.; Amaral, Orlando A. V.; Fonseca, Tertius L.

    2