WorldWideScience

Sample records for reliable peak signal-to-noise

  1. Free Energy Adjusted Peak Signal to Noise Ratio (FEA-PSNR) for Image Quality Assessment

    Science.gov (United States)

    Liu, Ning; Zhai, Guangtao

    2017-12-01

    Peak signal to noise ratio (PSNR), the de facto universal image quality metric, has been widely criticized as having poor correlation with human subjective quality ratings. In this paper, it is illustrated that the low performance of PSNR as an image quality metric is partially due to its inability to differentiate image contents. It is also revealed that the deviation between the subjective score and PSNR for each type of distortion can be systematically captured by the perceptual complexity of the target image. The free energy modelling technique is then introduced to simulate the human cognitive process and measure the perceptual complexity of an image. It is shown that the performance of PSNR can be effectively improved using a linear score mapping process that accounts for image free energy and distortion type. The proposed free energy adjusted peak signal to noise ratio (FEA-PSNR) does not change the computational steps of ordinary PSNR, and it therefore inherits the merits of being simple, differentiable and physically meaningful. FEA-PSNR can thus be easily integrated into existing PSNR-based image processing systems to achieve more visually plausible results, and the proposed analysis approach can be extended to other types of image quality metrics for enhanced performance.
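
    For reference, PSNR itself is fully determined by the mean squared error between a reference and a distorted image. The sketch below computes standard PSNR and then applies a linear score mapping of the kind the paper describes; the function fea_psnr, its coefficients a and b, and the free-energy estimate F are hypothetical placeholders, since the paper's actual fitting procedure is not reproduced here.

        import numpy as np

        def psnr(reference, distorted, max_val=255.0):
            """Standard peak signal-to-noise ratio in dB."""
            ref = reference.astype(np.float64)
            dis = distorted.astype(np.float64)
            mse = np.mean((ref - dis) ** 2)
            return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

        def fea_psnr(reference, distorted, F, a, b):
            """Hypothetical linear score mapping: the coefficients a, b would be
            fitted per distortion type against the free-energy estimate F."""
            return a * psnr(reference, distorted) + b * F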

  2. The influence of the maximal value and peak enhancement value of arterial and venous enhancement curve on CT perfusion parameters and signal-to-noise ratio

    International Nuclear Information System (INIS)

    Ju Haiyue; Gao Sijia; Xu Ke; Wang Qiang

    2007-01-01

    Objective: To explore the influence of the maximal value and peak enhancement value of the arterial and venous enhancement curves on CT perfusion parameters and signal-to-noise ratio (SNR). Methods: Seventeen patients underwent brain CT perfusion scanning. All raw data were analyzed with perfusion software six times, yielding different arterial and venous enhancement curves for each patient. The maximal values and peak enhancement values of each arterial and venous enhancement curve, as well as the mean perfusion parameters including cerebral blood flow (CBF), cerebral blood volume (CBV), mean transit time (MTT) and permeability surface area product (PS), and their standard deviations (SD) in homolateral white and gray matter, were measured and recorded. SNR was calculated by dividing the mean perfusion parameter value by its SD. Pearson correlation analysis and two-tailed paired Student t tests were used for statistics. Results: (1) The maximal values and peak enhancement values of the arterial and venous curves were correlated with mean SNR_CBF, SNR_CBV and SNR_MTT in both white and gray matter (r value range: 0.332-0.922, P < 0.05), and with SNR_PS in white matter (r = 0.256, P < 0.05). There was no significant correlation between SNR_PS (in both white and gray matter) and the arterial peak enhancement values, the maximal values and venous peak enhancement values, or between SNR_PS (in gray matter) and the maximal values of the venous curve (r value range: -0.058-0.210, P > 0.05). (2) Mean CBF, CBV and PS values in the group with low venous peak enhancement values were significantly different from those in the group with high venous peak enhancement values in both white and gray matter (t value range: 3.830-5.337, P < 0.05). Conclusions: The mean perfusion parameters and SNR are influenced by the maximal values and peak enhancement values of the arterial and venous curves. The peak enhancement of the arterial and venous curves should be adjusted to a higher level to make the parameter values more reliable and to increase the SNR. (authors)
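
    The SNR definition used in this study is simple enough to state directly: the mean of a perfusion parameter in a region of interest divided by its standard deviation. A minimal sketch, assuming the parameter map and region mask are already available as arrays:

        import numpy as np

        def perfusion_snr(param_map, roi_mask):
            """SNR of a perfusion parameter (CBF, CBV, MTT or PS): ROI mean / ROI SD."""
            values = param_map[roi_mask]
            return values.mean() / values.std(ddof=1)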

  3. KiDS-450: cosmological constraints from weak lensing peak statistics - I. Inference from analytical prediction of high signal-to-noise ratio convergence peaks

    Science.gov (United States)

    Shan, HuanYuan; Liu, Xiangkun; Hildebrandt, Hendrik; Pan, Chuzhong; Martinet, Nicolas; Fan, Zuhui; Schneider, Peter; Asgari, Marika; Harnois-Déraps, Joachim; Hoekstra, Henk; Wright, Angus; Dietrich, Jörg P.; Erben, Thomas; Getman, Fedor; Grado, Aniello; Heymans, Catherine; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Puddu, Emanuella; Radovich, Mario; Wang, Qiao

    2018-02-01

    This paper is the first of a series of papers constraining cosmological parameters with weak lensing peak statistics using ~450 deg² of imaging data from the Kilo Degree Survey (KiDS-450). We measure high signal-to-noise ratio (SNR: ν) weak lensing convergence peaks in the range 3 < ν < 5, and employ theoretical models to derive expected values. These models are validated using a suite of simulations. We take into account two major systematic effects: the boost factor and the effect of baryons on the mass-concentration relation of dark matter haloes. In addition, we investigate the impacts of other potential astrophysical systematics, including the projection effects of large-scale structures, intrinsic galaxy alignments, and residual measurement uncertainties in the shear and redshift calibration. Assuming a flat Λ cold dark matter model, we find constraints of S_8 = σ_8(Ω_m/0.3)^{0.5} = 0.746^{+0.046}_{-0.107} along the degeneracy direction of the cosmic shear analysis, and Σ_8 = σ_8(Ω_m/0.3)^{0.38} = 0.696^{+0.048}_{-0.050} along the derived degeneracy direction of our high-SNR peak statistics. The difference between the power indices of S_8 and Σ_8 indicates that combining cosmic shear with peak statistics has the potential to break the degeneracy between σ_8 and Ω_m. Our results are consistent with the cosmic shear tomographic correlation analysis of the same data set and ~2σ lower than the Planck 2016 results.
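
    The two reported constraints differ only in the exponent applied to Ω_m, which is what makes their combination useful. Written out (values taken from the abstract):

        S_8 \equiv \sigma_8 \left( \Omega_m / 0.3 \right)^{0.50} = 0.746^{+0.046}_{-0.107}
        \Sigma_8 \equiv \sigma_8 \left( \Omega_m / 0.3 \right)^{0.38} = 0.696^{+0.048}_{-0.050}

    Each measurement constrains a band of the form σ_8 ∝ Ω_m^(-α) in the (Ω_m, σ_8) plane; because the slopes α = 0.50 and α = 0.38 differ, the two bands intersect rather than coincide, so combining them localizes both parameters instead of only their degenerate product.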

  4. Power peaking nuclear reliability factors

    International Nuclear Information System (INIS)

    Hassan, H.A.; Pegram, J.W.; Mays, C.W.; Romano, J.J.; Woods, J.J.; Warren, H.D.

    1977-11-01

    The Calculational Nuclear Reliability Factor (CNRF) assigned to the limiting power density calculated in reactor design has been determined. The CNRF is presented as a function of the relative power density of the fuel assembly and its radial location. In addition, the Measurement Nuclear Reliability Factor (MNRF) for the measured peak hot pellet power in the core has been evaluated. This MNRF is also presented as a function of the relative power density and radial location within the fuel assembly.

  5. Increasing the Signal to Noise Ratio in a Chemistry Laboratory ...

    African Journals Online (AJOL)

    Increasing the Signal to Noise Ratio in a Chemistry Laboratory - Improving a Practical for Academic Development Students. ... Analysis of data collected in 2001 shows that the changes made a significant impact on the effectiveness of the laboratory session. South African Journal of Chemistry Vol.56 2003: 47-53 ...

  6. Structural Parameters of Star Clusters: Signal to Noise Effects

    Directory of Open Access Journals (Sweden)

    Narbutis D.

    2015-09-01

    We study the impact of photometric signal-to-noise ratio on the accuracy of structural parameters derived for unresolved star clusters using MCMC model-fitting techniques. Star cluster images were simulated as a smooth surface brightness distribution following a King profile convolved with a point spread function. The simulation grid was constructed by varying the level of sky background and adjusting the cluster's flux to a specified signal-to-noise ratio. Poisson noise was introduced to a set of cluster images with the same input parameters at each node of the grid. Model fitting was performed using the "emcee" algorithm. The presented posterior distributions of the parameters illustrate their uncertainty and degeneracies as a function of signal-to-noise ratio. By defining the photometric aperture containing 80% of the cluster's flux, we find that under all realistic sky background levels a signal-to-noise ratio of ~50 is necessary to constrain the cluster's half-light radius to an accuracy better than ~20%. The presented technique can be applied to synthetic images simulating various observations of extragalactic star clusters.
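
    The described setup maps naturally onto the emcee ensemble sampler. Below is a minimal sketch, deliberately simplified relative to the paper: a one-dimensional radial King (1962) profile with Gaussian noise stands in for the full PSF-convolved, Poisson-noise image grid, and all numbers are illustrative.

        import numpy as np
        import emcee  # the affine-invariant ensemble sampler used in the paper

        def king_profile(r, flux, r_c, r_t):
            """King (1962) surface-density profile, truncated at the tidal radius r_t."""
            inner = 1.0 / np.sqrt(1.0 + (r / r_c) ** 2)
            outer = 1.0 / np.sqrt(1.0 + (r_t / r_c) ** 2)
            return flux * np.where(r < r_t, (inner - outer) ** 2, 0.0)

        def log_prob(theta, r, data, sigma):
            """Gaussian log-likelihood with flat prior support."""
            flux, r_c, r_t = theta
            if flux <= 0 or r_c <= 0 or r_t <= r_c:
                return -np.inf
            return -0.5 * np.sum(((data - king_profile(r, flux, r_c, r_t)) / sigma) ** 2)

        rng = np.random.default_rng(1)
        r = np.linspace(0.1, 20.0, 100)          # radial bins (arbitrary units)
        truth, sigma = (100.0, 2.0, 12.0), 5.0   # flux, core radius, tidal radius
        data = king_profile(r, *truth) + rng.normal(0.0, sigma, r.size)

        nwalkers, ndim = 32, 3
        p0 = np.array(truth) * (1.0 + 0.01 * rng.standard_normal((nwalkers, ndim)))
        sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(r, data, sigma))
        sampler.run_mcmc(p0, 2000)
        posterior = sampler.get_chain(discard=500, flat=True)  # posterior draws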

  7. Improving the signal-to-noise ratio in mass and ion kinetic energy spectrometers

    International Nuclear Information System (INIS)

    Brenton, A.G.; Beynon, J.H.; Morgan, R.P.

    1979-01-01

    The signal-to-noise ratio in mass and ion kinetic energy spectrometers is limited by noise generated from the presence of scattered ions and neutrals. Methods of eliminating this are illustrated with reference to the ZAB-2F instrument manufactured by VG-Micromass Ltd. It is estimated that after the modifications the instrument is capable, on a routine basis, of measuring peaks corresponding to the arrival of ions at a rate of the order of 1 ion s⁻¹. (Auth.)

  8. A high signal-to-noise ratio composite quasar spectrum

    International Nuclear Information System (INIS)

    Francis, P.J.; Hewett, P.C.; Foltz, C.B.; Chaffee, F.H.; Weymann, R.J.

    1991-01-01

    A very high signal-to-noise ratio (S/N of about 400) composite spectrum of the rest-frame ultraviolet and optical region of high luminosity quasars is presented. The spectrum is derived from 718 individual spectra obtained as part of the Large Bright Quasar Survey. The moderate resolution, 4 Å or less, and high signal-to-noise ratio allow numerous weak emission features to be identified. Of particular note is the large equivalent width of the Fe II emission in the rest-frame ultraviolet and the blue continuum slope of the composite. The primary aim of this paper is to provide a reference spectrum for use in line identifications, and a series of large-scale representations of the composite spectrum are shown. A measure of the standard deviation of the individual quasar spectra from the composite spectrum is also presented. 12 refs

  9. Signal-to-noise limitations in white light holography.

    Science.gov (United States)

    Ribak, E; Roddier, C; Roddier, F; Breckinridge, J B

    1988-03-15

    A simple derivation is given for the signal-to-noise ratio (SNR) in images reconstructed from incoherent holograms. Dependence is shown to be on the hologram SNR, object complexity, and the number of pixels in the detector. Reconstruction of involved objects becomes possible with high dynamic range detectors such as charge coupled devices. We have produced such white light holograms by means of a rotational shear interferometer combined with a chromatic corrector. A digital inverse transform recreated the object.

  10. Debuncher Momentum Cooling Systems Signal to Noise Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Pasquinelli, Ralph J.; /Fermilab

    2001-12-18

    The Debuncher momentum cooling systems were carefully measured for signal to noise. It was observed that cooling performance was not optimum. Closer inspection showed that the installed front-end bandpass filters are wider than the pickup response. (The original filters were specified to be wider so that none of the available bandwidth would be clipped.) The end result is that excess noise is amplified and passed on to the kickers unimpeded, hence reducing the achievable system gain. From these data, new filters should be designed to improve performance. New system bandwidths are specified on the data figures. Also included are the transfer function measurements, which clearly show adjacent-band response. In band 4 upper, the adjacent lobes are strong and out of phase; this also degrades system performance. The correlation between spectrum analyzer signal-to-noise measurements and network analyzer system transfer functions is very strong. The table below contains a calculation of the expected improvement in front-end noise reduction from building new front-end bandpass filters. The calculation is based on a flat input noise spectrum and is a linear estimate of improvement. The listed 3 dB bandwidths of the original filters are from measured data. The expected bandwidth is taken from the linear spectrum analyzer plots and is closer to a 10 dB bandwidth, making the percentage improvement conservative. The signal-to-noise measurements were taken with circulating pbars in the Debuncher. One cooling system was measured at a time with all others off. Beam currents were below ten microamperes.

  11. Debuncher Momentum Cooling Systems Signal to Noise Measurements

    International Nuclear Information System (INIS)

    Pasquinelli, Ralph J.

    2001-01-01

    The Debuncher momentum cooling systems were carefully measured for signal to noise. It was observed that cooling performance was not optimum. Closer inspection showed that the installed front-end bandpass filters are wider than the pickup response. (The original filters were specified to be wider so that none of the available bandwidth would be clipped.) The end result is that excess noise is amplified and passed on to the kickers unimpeded, hence reducing the achievable system gain. From these data, new filters should be designed to improve performance. New system bandwidths are specified on the data figures. Also included are the transfer function measurements, which clearly show adjacent-band response. In band 4 upper, the adjacent lobes are strong and out of phase; this also degrades system performance. The correlation between spectrum analyzer signal-to-noise measurements and network analyzer system transfer functions is very strong. The table below contains a calculation of the expected improvement in front-end noise reduction from building new front-end bandpass filters. The calculation is based on a flat input noise spectrum and is a linear estimate of improvement. The listed 3 dB bandwidths of the original filters are from measured data. The expected bandwidth is taken from the linear spectrum analyzer plots and is closer to a 10 dB bandwidth, making the percentage improvement conservative. The signal-to-noise measurements were taken with circulating pbars in the Debuncher. One cooling system was measured at a time with all others off. Beam currents were below ten microamperes.

  12. Increasing signal-to-noise ratio of swept-source optical coherence tomography by oversampling in k-space

    Science.gov (United States)

    Nagib, Karim; Mezgebo, Biniyam; Thakur, Rahul; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-03-01

    Optical coherence tomography systems suffer from noise that can reduce the ability to interpret reconstructed images correctly. We describe a method to increase the signal-to-noise ratio of swept-source optical coherence tomography (SS-OCT) using oversampling in k-space. Due to this oversampling, information redundancy is introduced in the measured interferogram that can be used to reduce white noise in the reconstructed A-scan. We applied our novel scaled nonuniform discrete Fourier transform to oversampled SS-OCT interferograms to reconstruct images of a salamander egg. The peak signal-to-noise ratio (PSNR) between the images reconstructed from interferograms sampled at 250 MS/s and 50 MS/s demonstrates that this oversampling increased the signal-to-noise ratio by 25.22 dB.

  13. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    International Nuclear Information System (INIS)

    Xue, Zhenyu; Charonko, John J; Vlachos, Pavlos P

    2014-01-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a ‘valid’ measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an ‘outlier’ measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U_68.5 uncertainties are estimated at the 68.5% confidence level while U_95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements. (paper)
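
    One widely used correlation-plane SNR metric from the Charonko and Vlachos line of work is the primary peak ratio (PPR): the height of the tallest correlation peak over that of the second tallest. A minimal sketch, assuming square interrogation windows and using the minimum subtraction mentioned in the abstract; the fixed 3x3 exclusion region around the primary peak is a simplification.

        import numpy as np
        from scipy.signal import fftconvolve

        def correlation_plane(win_a, win_b):
            """FFT-based cross-correlation of two zero-mean interrogation windows."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            return fftconvolve(a, b[::-1, ::-1], mode='full')

        def primary_peak_ratio(corr):
            """PPR after subtracting the plane minimum (background-noise correction)."""
            c = corr - corr.min()
            y1, x1 = np.unravel_index(c.argmax(), c.shape)
            p1 = c[y1, x1]
            masked = c.copy()
            masked[max(0, y1 - 1):y1 + 2, max(0, x1 - 1):x1 + 2] = 0.0
            return p1 / masked.max()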

  14. Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification

    Science.gov (United States)

    Xue, Zhenyu; Charonko, John J.; Vlachos, Pavlos P.

    2014-11-01

    In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a ‘valid’ measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an ‘outlier’ measurement. Finally the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data as well as experimental measurements. In this work, U_68.5 uncertainties are estimated at the 68.5% confidence level while U_95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.

  15. Enhancement of the Signal-to-Noise Ratio in Sonic Logging Waveforms by Seismic Interferometry

    KAUST Repository

    Aldawood, Ali

    2012-04-01

    Sonic logs are essential tools for reliably identifying interval velocities which, in turn, are used in many seismic processes. One problem that arises while logging is irregularities due to washout zones along the borehole surface that scatter the transmitted energy and hence weaken the signal recorded at the receivers. To alleviate this problem, I have extended the theory of super-virtual refraction interferometry to enhance the signal-to-noise ratio (SNR) of sonic waveforms. Tests on synthetic and real data show noticeable SNR enhancements of refracted P-wave arrivals in the sonic waveforms. The theory of super-virtual interferometric stacking is composed of two redatuming steps followed by a stacking procedure. The first redatuming step is of correlation type, where traces are correlated together to obtain virtual traces with the sources datumed to the refractor. The second redatuming step is of convolution type, where traces are convolved together to datum the sources back to their original positions. The stacking procedure following each step enhances the SNR of the refracted P-wave first arrivals. Datuming with correlation and convolution of traces introduces severe artifacts, denoted correlation artifacts, in the super-virtual data. To overcome this problem, I replace the datuming-with-correlation step by datuming with deconvolution. Although the former datuming method is more robust, the latter reduces the artifacts significantly. Moreover, deconvolution can be a noise amplifier, which is why a regularization term is utilized, rendering the datuming with deconvolution more stable. Tests of datuming with deconvolution instead of correlation on synthetic and real data examples show significant reduction of these artifacts, especially when compared with the conventional way of applying the super-virtual refraction interferometry method.
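
    In the simplest case the two redatuming steps reduce to correlation and convolution of traces with stacking over the redundant dimension. A schematic NumPy sketch under strong assumptions (a common receiver spread for all sources, refracted arrivals already windowed; the regularized deconvolution variant is omitted):

        import numpy as np
        from scipy.signal import correlate, convolve

        # traces[s, r, :] = windowed refraction record for source s at receiver r.
        def virtual_trace(traces, r_a, r_b):
            """Step 1: correlate receiver pair (r_a, r_b) and stack over sources,
            datuming the source down to the refractor."""
            n_src = traces.shape[0]
            acc = sum(correlate(traces[s, r_b], traces[s, r_a], mode='full')
                      for s in range(n_src))
            return acc / n_src

        def super_virtual_trace(traces, s, r_b):
            """Step 2: convolve recorded traces with virtual traces and stack over
            intermediate receivers, datuming the source back to its position."""
            n_rec = traces.shape[1]
            acc = sum(convolve(traces[s, r_a], virtual_trace(traces, r_a, r_b), mode='full')
                      for r_a in range(n_rec) if r_a != r_b)
            return acc / (n_rec - 1)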

  16. Study of signal-to-noise ratio in digital mammography

    Science.gov (United States)

    Kato, Yuri; Fujita, Naotoshi; Kodera, Yoshie

    2009-02-01

    Mammography techniques have recently advanced from analog systems (the screen-film system) to digital systems; for example, computed radiography (CR) and flat-panel detectors (FPDs) are nowadays used in mammography. Further, phase contrast mammography (PCM), a digital technique by which images with a magnification of 1.75× can be obtained, is now available on the market. We studied the effect of the air gap in PCM and evaluated the effectiveness of an antiscatter x-ray grid in conventional mammography (CM) by measuring the scatter fraction ratio (SFR) and relative signal-to-noise ratio (rSNR) and comparing them between PCM and digital CM. The results indicated that the SFRs for the CM images obtained with a grid were the lowest, and that these ratios were almost the same as those for the PCM images. In contrast, the rSNRs for the PCM images were the highest, which means that the scattering of x-rays was sufficiently reduced by the air gap without the loss of primary x-rays.

  17. Interferometric Imaging of Geostationary Satellites: Signal-to-Noise Considerations

    Science.gov (United States)

    Jorgensen, A.; Schmitt, H.; Mozurkewich, D.; Armstrong, J.; Restaino, S.; Hindsley, R.

    2011-09-01

    Geostationary satellites are generally too small to image at high resolution with conventional single-dish telescopes. Obtaining many resolution elements across a typical geostationary satellite body requires a single-dish telescope with a diameter of tens of meters or more, with a good adaptive optics system. An alternative is to use an optical/infrared interferometer consisting of multiple smaller telescopes in an array configuration. In this paper and its companion papers [1, 2] we discuss the performance of a common-mount 30-element interferometer. The instrument design is presented by Mozurkewich et al. [1], and imaging performance is presented by Schmitt et al. [2]. In this paper we discuss the signal-to-noise ratio for both fringe tracking and imaging. We conclude that the common-mount interferometer is sufficiently sensitive to track fringes on the majority of geostationary satellites. We also find that high-fidelity images can be obtained after a short integration time of a few minutes to a few tens of minutes.

  18. Signal-to-Noise Ratio (SNR) Scalability in Video Coding with Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Agus Purwadi

    2015-04-01

    In video transmission, there is a possibility of packet loss and large load variation on the bandwidth. These are sources of network congestion, which can interfere with the communication data rate. This study discusses a system to overcome congestion with a signal-to-noise ratio (SNR) scalability-based approach, in which the video sequence is encoded into two layers; this provides a basis for selecting the encoding mode for each packet and the channel coding rate. The goal is to minimize distortion from the source to the destination. The coding system used is a standard video codec, MPEG-2 or H.263, with SNR scalability. The algorithms used for motion compensation and for removing temporal and spatial redundancy are the Discrete Cosine Transform (DCT) and quantization. Transmission error is simulated by adding Gaussian noise (error) to the motion vectors. From the simulation results, the SNR and Peak Signal-to-Noise Ratio (PSNR) of the noisy video frames decline by averages of 3 dB and 4 dB, respectively.

  19. Shuttle bit rate synchronizer. [signal to noise ratios and error analysis

    Science.gov (United States)

    Huey, D. C.; Fultz, G. L.

    1974-01-01

    A shuttle bit rate synchronizer brassboard unit was designed, fabricated, and tested, which meets or exceeds the contractual specifications. The bit rate synchronizer operates at signal-to-noise ratios (in a bit rate bandwidth) down to -5 dB while exhibiting less than 0.6 dB bit error rate degradation. The mean acquisition time was measured to be less than 2 seconds. The synchronizer is designed around a digital data transition tracking loop whose phase and data detectors are integrate-and-dump filters matched to the Manchester encoded bits specified. It meets the reliability (no adjustments or tweaking) and versatility (multiple bit rates) requirements of the shuttle S-band communication system through an implementation which is all digital after the initial stage of analog AGC and A/D conversion.

  20. Lidar signal-to-noise ratio improvements: Considerations and techniques

    Science.gov (United States)

    Hassebo, Yasser Y.

    The primary objective of this study is to improve lidar signal-to-noise ratio (SNR), and hence extend attainable lidar ranges, through reduction of the sky background noise (BGP), which dominates other sources of noise in daytime operations. This is particularly important for Raman lidar techniques, where the Raman backscattered signal of interest is relatively weak compared with that of elastic backscatter lidars. Two approaches for reduction of sky background noise are considered. (1) Improvements in lidar SNR by optimization of the design of the lidar receiver were examined through a series of simulations. This part of the research concentrated on biaxial lidar systems, where overlap between the laser beam and receiver field of view (FOV) is an important aspect of noise considerations. The first optimized design evolved was a wedge-shaped aperture. While this design has the virtue of greatly reducing background light, it is difficult to implement practically, requiring changes in both area and position with lidar range. A second, more practical approach, which preserves some of the advantages of the wedge design, was also evolved. This uses a smaller-area circular aperture optimally located in the image plane for the desired ranges. Simulated numerical results for a biaxial lidar have shown that the best receiver parameter selection is a small circular aperture (field stop) with a small telescope focal length f, to ensure the minimum FOV that accepts all return signals over the entire lidar range while at the same time minimizing detected BGP, and hence maximizing lidar SNR and attainable lidar ranges. The improvement in lidar SNR was up to 18%. (2) A polarization selection technique was implemented to reduce the sky background signal for linearly polarized monostatic elastic backscatter lidar measurements. The technique takes advantage of naturally occurring polarization properties in scattered sky light, and then ensures that both the lidar transmitter and receiver track and

  1. Imaging resolution signal-to-noise ratio in transverse phase amplification from classical information theory

    International Nuclear Information System (INIS)

    French, Doug; Huang Zun; Pao, H.-Y.; Jovanovic, Igor

    2009-01-01

    A quantum phase amplifier operated in the spatial domain can improve the signal-to-noise ratio in imaging beyond the classical limit. The scaling of the signal-to-noise ratio with the gain of the quantum phase amplifier is derived from classical information theory.

  2. Improved stochastic resonance algorithm for enhancement of signal-to-noise ratio of high-performance liquid chromatographic signal

    International Nuclear Information System (INIS)

    Xie Shaofei; Xiang Bingren; Deng Haishan; Xiang Suyun; Lu Jun

    2007-01-01

    Based on the theory of stochastic resonance, an improved stochastic resonance algorithm with a new criterion for optimizing system parameters is presented in this study to enhance the signal-to-noise ratio (SNR) of HPLC/UV chromatographic signals for trace analysis. Compared with the conventional criterion in stochastic resonance, the proposed one ensures satisfactory SNR as well as good shape of the chromatographic peak in the output signal. Application of the criterion to experimental weak HPLC/UV signals was investigated, and the results showed an excellent quantitative relationship between different concentrations and responses.
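
    The classic stochastic-resonance filter behind such algorithms passes the noisy chromatogram through an overdamped bistable system, dx/dt = a·x - b·x³ + input(t), and tunes (a, b) to maximize an output criterion. A minimal sketch of that core; the paper's improved criterion, which additionally guards chromatographic peak shape, is not reproduced here and is left as a user-supplied scoring function.

        import numpy as np

        def sr_filter(signal_in, a, b, dt=0.01):
            """Euler integration of the bistable system driven by the noisy signal."""
            x = np.zeros_like(signal_in, dtype=float)
            for k in range(1, x.size):
                x[k] = x[k-1] + dt * (a * x[k-1] - b * x[k-1] ** 3 + signal_in[k-1])
            return x

        def tune(signal_in, score, a_grid, b_grid):
            """Grid-search (a, b) maximizing a caller-supplied criterion 'score'."""
            return max(((a, b) for a in a_grid for b in b_grid),
                       key=lambda ab: score(sr_filter(signal_in, *ab)))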

  3. Study on the ratio of signal to noise for single photon resolution time spectrometer

    International Nuclear Information System (INIS)

    Wang Zhaomin; Huang Shengli; Xu Zizong; Wu Chong

    2001-01-01

    The signal-to-noise ratio of a single-photon-resolution time spectrometer and its influencing factors were studied. A method to suppress the background, shorten the measurement time and increase the signal-to-noise ratio is discussed. Results show that the signal-to-noise ratio is proportional to the solid angle subtended by the detector at the source and to the detection efficiency, and inversely proportional to the electronics noise. Choosing the activity of the source appropriately is important for decreasing the random coincidence counting. Using a coincidence gate and a single-photon discriminator is an effective way of increasing measurement accuracy and detection efficiency.

  4. The deterioration of signal to noise ratio due to baseline restoration

    International Nuclear Information System (INIS)

    Henein, K.L.

    1976-02-01

    The deterioration of the signal-to-noise ratio due to baseline restoration is studied theoretically. This study leads to the conclusion that a restorer has negligible influence on the signal-to-noise ratio when its time constant is ten times greater than that of the main amplifier filter, and that rapid restorers prevail over slow ones when the time constant of the filter is increased by at least 50% of its optimal value. [fr]

  5. The dependence of signal-to-noise ratio on number of scans in covariance spectroscopy.

    Science.gov (United States)

    Qian, Yi; Shen, Ming; Amoureux, Jean-Paul; Noda, Isao; Hu, Bingwen

    2014-01-01

    The dependence of the signal-to-noise ratio on the number of scans in covariance spectroscopy has been systematically analyzed for the first time, with the intriguing relationship SNR_cov ∝ n/2, which is different from that of the 2D FT spectrum, SNR_FT ∝ n. This relationship guarantees the signal-to-noise ratio when increasing the number of scans.

  6. Signal-to-noise based local decorrelation compensation for speckle interferometry applications

    International Nuclear Information System (INIS)

    Molimard, Jerome; Cordero, Raul; Vautrin, Alain

    2008-01-01

    Speckle-based interferometric techniques allow assessing the whole-field deformation induced in a specimen by the application of load. These high-sensitivity optical techniques yield fringe images generated by subtracting speckle patterns captured while the specimen undergoes deformation. The quality of the fringes, and in turn the accuracy of the deformation measurements, strongly depends on the speckle correlation. Specimen rigid-body motion leads to speckle decorrelation that, in general, cannot be effectively counteracted by applying a global translation to the involved speckle patterns. In this paper, we propose a recorrelation procedure based on the application of locally evaluated translations. The proposed procedure involves dividing the field into several regions, applying a local translation, and calculating, in every region, the signal-to-noise ratio (SNR). Since the latter is a correlation indicator (the noise increases with the decorrelation), we argue that the proper translation is the one that maximizes the locally evaluated SNR. The search for the proper local translations is, of course, an iterative process that can be facilitated by using an SNR optimization algorithm. The performance of the proposed recorrelation procedure was tested on two examples. First, the SNR optimization algorithm was applied to fringe images obtained by subtracting simulated speckle patterns. Next, it was applied to fringe images obtained using a shearography optical setup from a specimen subjected to mechanical deformation. Our results show that the proposed SNR optimization method can significantly improve the reliability of measurements performed using speckle-based techniques.
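
    A minimal sketch of the local search, assuming two registered speckle patterns and a crude SNR proxy (smoothed fringe component over high-frequency residual); the actual SNR estimator and optimization algorithm used in the paper are not specified here.

        import numpy as np
        from scipy.ndimage import uniform_filter, shift

        def snr_proxy(fringes, size=9):
            """Assumed proxy: smoothed fringe component vs. residual noise."""
            smooth = uniform_filter(fringes, size)
            return smooth.std() / (fringes - smooth).std()

        def best_local_shift(ref, deformed, region, max_shift=3):
            """Try integer translations of one pattern inside a region; keep the
            shift whose subtraction fringes maximize the SNR proxy."""
            best, best_snr = (0, 0), -np.inf
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    moved = shift(deformed[region], (dy, dx), order=1, mode='nearest')
                    s = snr_proxy(ref[region] - moved)
                    if s > best_snr:
                        best_snr, best = s, (dy, dx)
            return best, best_snr

    Here region would be a pair of slices selecting one of the subdivided field regions, e.g. region = (slice(0, 64), slice(0, 64)).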

  7. Speech perception at positive signal-to-noise ratios using adaptive adjustment of time compression.

    Science.gov (United States)

    Schlueter, Anne; Brand, Thomas; Lemke, Ulrike; Nitzschner, Stefan; Kollmeier, Birger; Holube, Inga

    2015-11-01

    Positive signal-to-noise ratios (SNRs) characterize the listening situations most relevant for hearing-impaired listeners in daily life and should therefore be considered when evaluating hearing aid algorithms. For this purpose, a speech-in-noise test was developed and evaluated, in which the background noise is presented at fixed positive SNRs and the speech rate (i.e., the time compression of the speech material) is adaptively adjusted. In total, 29 younger and 12 older normal-hearing listeners, as well as 24 older hearing-impaired listeners, took part in repeated measurements. Younger normal-hearing and older hearing-impaired listeners completed one of two adaptive methods, which differed in adaptive procedure and step size. Analysis of the measurements with regard to list length and threshold estimation strategy resulted in a practical method for measuring the time compression at 50% recognition. This method uses time-compression adjustment and step sizes according to Versfeld and Dreschler [(2002). J. Acoust. Soc. Am. 111, 401-408], with sentence scoring, lists of 30 sentences, and a maximum likelihood method for threshold estimation. Evaluation of the procedure showed that older participants obtained higher test-retest reliability compared to younger participants. Depending on the group of listeners, one or two lists are required for training prior to data collection.

  8. Signal-to-noise ratios of multiplexing spectrometers in high backgrounds

    Science.gov (United States)

    Knacke, R. F.

    1978-01-01

    Signal-to-noise ratios and the amount of multiplexing gain achieved with a Michelson spectrometer under detector and background noise are studied. Noise caused by the warm background is encountered in the 10- and 20-micron atmospheric windows in high-resolution Fourier spectroscopy. An equation is derived for the signal-to-noise ratio based on the number of channels, the total time to obtain the complete spectrum, the signal power in one spectral element, and the detector noise equivalent power in the presence of negligible background. Similar expressions are derived for backgrounds yielding a noise equivalent power per spectral element, and for backgrounds having flat spectra in the frequency range under investigation.

  9. Low concentration of a Gd-chelate increases the signal-to-noise ratio in fast pulsing BEST experiments

    Science.gov (United States)

    Sibille, Nathalie; Bellot, Gaëtan; Wang, Jing; Déméné, Hélène

    2012-11-01

    Despite numerous developments in the past few years aiming to increase the sensitivity of multidimensional NMR experiments, NMR spectroscopy still suffers from intrinsically low sensitivity. In this report, we show that the combination of two developments in the field, the Band-selective Excitation Short-Transient (BEST) experiment [Schanda et al., J. Am. Chem. Soc., 128 (2006) 9042] and the addition of the nonionic paramagnetic gadolinium chelate gadodiamide to NMR samples, enhances the signal-to-noise ratio. This effect is shown here for four different proteins, three globular and one unfolded, with molecular weights ranging from 6.5 kDa to 40 kDa, using 2D BEST HSQC and 3D BEST triple-resonance sequences. Moreover, we show that the increase in signal-to-noise ratio provided by gadodiamide is higher for peak resonances with lower-than-average intensity in BEST experiments; it is interesting to note that these residues are, on average, the weakest ones in such experiments. In this case, the gadodiamide-mediated increase can reach 60% for low and 30% for high molecular weight proteins, respectively. An investigation into the origin of this “paramagnetic gain” in BEST experiments is presented.

  10. RELIABILITY OF THE DETECTION OF THE BARYON ACOUSTIC PEAK

    International Nuclear Information System (INIS)

    Martínez, Vicent J.; Arnalte-Mur, Pablo; De la Cruz, Pablo; Saar, Enn; Tempel, Elmo; Pons-Bordería, María Jesús; Paredes, Silvestre; Fernández-Soto, Alberto

    2009-01-01

    The correlation function of the distribution of matter in the universe shows, at large scales, baryon acoustic oscillations, which were imprinted prior to recombination. This feature was first detected in the correlation function of the luminous red galaxies of the Sloan Digital Sky Survey (SDSS). Recently, the final release (DR7) of the SDSS has been made available, and the useful volume is about twice as large as that of the old sample. We present here, for the first time, the redshift-space correlation function of this sample at large scales, together with that of a shallower but denser volume-limited subsample drawn from the Two-degree Field Galaxy Redshift Survey. We test the reliability of the detection of the acoustic peak at about 100 h⁻¹ Mpc and the behavior of the correlation function at larger scales by means of careful estimation of errors. We confirm the presence of the peak in the latest data, although broader than in previous detections.

  11. Signal-to-noise contribution of principal component loads in reconstructed near-infrared Raman tissue spectra.

    Science.gov (United States)

    Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R

    2010-01-01

    The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining adequate spectral quality for analysis is still challenging due to device requirements and the short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at the point before the PCs begin to introduce equivalent signal and noise and, hence, add no further value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR_msr) is determined in the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selecting PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white-light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contributions of the principal component loads. The difference in mean SNR over the spectral range can also be appreciated, since fewer principal components can
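
    The reconstruction step itself is standard linear algebra: project onto the first k principal components and map back. A minimal sketch via SVD; the mean-SNR estimator below (reconstruction level over residual spread) is an assumed stand-in for the paper's SNR_msr definition, which the abstract does not fully specify.

        import numpy as np

        def reconstruct_from_pcs(spectra, k):
            """Rank-k reconstruction of (n_spectra, n_channels) data via SVD."""
            mean = spectra.mean(axis=0)
            u, s, vt = np.linalg.svd(spectra - mean, full_matrices=False)
            return (u[:, :k] * s[:k]) @ vt[:k] + mean

        def mean_snr(original, reconstructed):
            """Assumed proxy: mean reconstructed level over residual noise spread."""
            return np.abs(reconstructed).mean() / (original - reconstructed).std()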

  12. Measurement of signal-to-noise ratio performance of TV fluoroscopy systems

    International Nuclear Information System (INIS)

    Geluk, R.J.

    1985-01-01

    A method has been developed for direct measurement of signal-to-noise ratio performance on X-ray TV systems. To this end, the TV signal resulting from a calibrated test object is compared with the noise level in the image. The method is objective and produces instantaneous readout, which makes it very suitable for system evaluation under dynamic conditions. (author)

  13. Assessment of uniformity and signal-to-noise ratio in radiological image intensifier TV systems

    International Nuclear Information System (INIS)

    Malone, J.F.; O'Connor, M.K.; Maher, K.P.

    1985-01-01

    A method of measuring the uniformity of radiological Image Intensifier-TV systems is described. Large non-uniformities were observed in the systems tested. A method of estimating the Signal-to-Noise Ratio in such systems is also presented and applied to characterise the effectiveness of the noise reduction techniques used in digital fluoroscopy. (author)

  14. Signal-to-noise ratio analysis and evaluation of the Hadamard imaging technique

    Science.gov (United States)

    Jobson, D. J.; Katzberg, S. J.; Spiers, R. B., Jr.

    1977-01-01

    The signal-to-noise ratio performance of the Hadamard imaging technique is analyzed, and an experimental evaluation of a laboratory Hadamard imager is presented. A comparison between the performances of the Hadamard and conventional imaging techniques shows that the Hadamard technique is superior only when the imaging objective lens is required to have an effective F-number of about 2 or slower.

  15. Signal-to-Noise ratio and design complexity based on Unified Loss ...

    African Journals Online (AJOL)

    Taguchi's quality loss function for larger-the-better performance characteristics uses a reciprocal transformation to compute quality loss. This paper suggests that reciprocal transformation unnecessarily complicates and may distort results. Examples of this distortion include the signal-to-noise ratio based on mean squared ...

  16. Gas inventory charges and peak-load reliability

    International Nuclear Information System (INIS)

    Lyon, T.P.; Hackett, S.C.

    1990-01-01

    The natural gas industry has historically been organized through a vertical sequence of long-term contracts, the first between wellhead producer and pipeline, and the second between pipeline and local distribution company (LDC). These long-term contracts contained provisions, variously called take-or-pay (TOP) clauses or minimum bills, that required buyers to pay for a minimum level of supply in all later time periods, regardless of the buyers' actual demand requirements. As a result, the pipeline's purchase obligation was typically offset by the distributor's purchase obligation, so that the pipeline essentially passed the minimum purchase requirement directly from producer to distributor. The authors focus on the role gas inventory charges (GICs) can play in the provision of peak-load reliability, and on the effects of GICs and their treatment by regulators on pipeline system design. In particular, they compare the various options available to local distribution companies (LDCs) for providing peak-load reliability, emphasizing the alternative of downstream storage. They find that the ratemaking decisions of state regulators may distort LDC choices among different gas supply options, inducing what may be an inefficient demand for new storage facilities. GICs, when competitively priced, offer state regulators a means of circumventing these distortions.

  17. Robust Frame Synchronization for Low Signal-to-Noise Ratio Channels Using Energy-Corrected Differential Correlation

    Directory of Open Access Journals (Sweden)

    Kim Pansoo

    2009-01-01

    Recent standards for wireless transmission require reliable synchronization for channels with a low signal-to-noise ratio (SNR) as well as a large amount of frequency offset, which necessitates a robust correlator structure for the initial frame synchronization process. In this paper, a new correlation strategy especially targeted at low-SNR regions is proposed and its performance is analyzed. By utilizing a modified energy-correction term, the proposed method effectively reduces the variance of the decision variable to enhance detection performance. Most importantly, the method is demonstrated to outperform all previously reported schemes by a significant margin for SNRs below 5 dB, regardless of the existence of frequency offsets. A variation of the proposed method is also presented for further enhancement over channels with small frequency errors. The particular application considered for performance verification is the second-generation digital video broadcasting system for satellites (DVB-S2).
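
    The core idea of differential correlation is to correlate lag-1 sample products rather than the samples themselves, so that a constant frequency offset contributes only a common phase that drops out of the magnitude. A minimal sketch, assuming complex baseband samples and a known sync word; the energy normalization here is a generic stand-in, since the paper's exact corrected energy term is not reproduced.

        import numpy as np

        def diff_corr_metric(r, sync, d):
            """Energy-normalized differential correlation at trial offset d."""
            L = len(sync)
            seg = r[d:d + L]
            z = seg[1:] * np.conj(seg[:-1])     # lag-1 products of received samples
            c = sync[1:] * np.conj(sync[:-1])   # lag-1 products of the sync word
            return np.abs(np.sum(z * np.conj(c))) / np.sum(np.abs(seg) ** 2)

        def detect_frame_start(r, sync):
            """Pick the offset maximizing the metric over the search window."""
            return max(range(len(r) - len(sync) + 1),
                       key=lambda d: diff_corr_metric(r, sync, d))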

  18. Techniques and software tools for estimating ultrasonic signal-to-noise ratios

    Science.gov (United States)

    Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.

    2016-02-01

    At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end, a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding a graphical user interface (GUI), turning it into a user-friendly tool for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models, including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface, and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple. A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average

  19. The signal-to-noise analysis of the Little-Hopfield model revisited

    International Nuclear Information System (INIS)

    Bolle, D; Blanco, J Busquets; Verbeiren, T

    2004-01-01

    Using the generating functional analysis, an exact recursion relation is derived for the time evolution of the effective local field of the fully connected Little-Hopfield model. It is shown that, by leaving out the feedback correlations arising from earlier times in this effective dynamics, one precisely recovers the recursion relations usually employed in the signal-to-noise approach. The consequences of this approximation, as well as the physics behind it, are discussed. In particular, it is pointed out why the effects are hard to notice, especially for model parameters corresponding to retrieval. Numerical simulations confirm these findings. The signal-to-noise analysis is then extended to include all correlations, making it a full theory for dynamics at the level of the generating functional analysis. The results are applied to the frequently employed extremely diluted (a)symmetric architectures and to sequence-processing networks.

  20. Comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging

    International Nuclear Information System (INIS)

    O'Sullivan, Malcolm N.; Chan, Kam Wai Clifford; Boyd, Robert W.

    2010-01-01

    We present a theoretical comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging. We first calculate the signal-to-noise ratio of each process in terms of its controllable experimental conditions. We show that a key distinction is that a thermal ghost image always resides on top of a large background; the fluctuations in this background constitute an intrinsic noise source for thermal ghost imaging. In contrast, there is a negligible intrinsic background to a quantum ghost image. However, for practical reasons involving achievable illumination levels, acquisition times for thermal ghost images are often much shorter than those for quantum ghost images. We provide quantitative predictions for the conditions under which each process provides superior performance. Our conclusion is that each process can provide useful functionality, although under complementary conditions.

  1. Radiometric and signal-to-noise ratio properties of multiplex dispersive spectrometry

    International Nuclear Information System (INIS)

    Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan

    2010-01-01

    Recent theoretical investigations have shown important radiometric disadvantages of interferential multiplexing in Fourier transform spectrometry, which apparently apply even to coded-aperture spectrometers. We have reexamined the methods of noninterferential multiplexing in order to assess their signal-to-noise ratio (SNR) performance, relying on theoretical modeling of the multiplexed signals. We show that quite similar SNR and radiometric disadvantages affect multiplex dispersive spectrometry. The effect of noise on spectral estimations is discussed.

  2. Enhancing scatterometry CD signal-to-noise ratio for 1x logic and memory challenges

    Science.gov (United States)

    Shaughnessy, Derrick; Krishnan, Shankar; Wei, Lanhua; Shchegrov, Andrei V.

    2013-04-01

    The ongoing transition from 2D to 3D structures in logic and memory has led to an increased adoption of scatterometry CD (SCD) for inline metrology. However, shrinking device dimensions in logic and high aspect ratios in memory represent primary challenges for SCD and require a significant breakthrough in signal-to-noise performance. We present a report on the new generation of SCD technology, enabled by a new laser-driven plasma source. The developed light source provides several key advantages over the conventional arc lamps typically used in SCD applications. The plasma color temperature of the laser-driven source is considerably higher than that available with arc lamps, resulting in a >5X increase in radiance in the visible and a >10X increase in radiance in the DUV compared to sources on previous-generation SCD tools, while maintaining or improving source intensity noise. This high radiance across such a broad spectrum allows the use of a single light source from 190-1700 nm. When combined with other optical design changes, the higher source radiance enables reduction of the measurement box size of our spectroscopic ellipsometer from a 45×45 µm box to a 25×25 µm box without compromising the signal-to-noise ratio. The benefits for 1x nm SCD metrology of the additional photons across the DUV-to-IR spectrum have been found to be greater than the increase in source signal-to-noise ratio alone would suggest. Better light penetration into Si and poly-Si has resulted in improved sensitivity and correlation breaking for critical parameters in 1x nm FinFET and HAR flash memory structures.

  3. Muon Signals at a Low Signal-to-Noise Ratio Environment

    CERN Document Server

    Zakareishvili, Tamar; The ATLAS collaboration

    2017-01-01

    Calorimeters provide high-resolution energy measurements for particle detection. Muon signals are important for evaluating electronics performance, since they produce a signal close to electronic noise values. This work provides a noise RMS analysis for the Demonstrator drawer from the 2016 Tile Calorimeter (TileCal) test beam in order to help reconstruct events in a low signal-to-noise environment. Muon signals were then found for a beam penetrating all three layers of the drawer. The Demonstrator drawer is a candidate electronics upgrade for TileCal, part of the ATLAS experiment at the Large Hadron Collider, which operates at the European Organization for Nuclear Research (CERN).

  4. Statistical Angles on the Lattice QCD Signal-to-Noise Problem

    Science.gov (United States)

    Wagman, Michael L.

    The theory of quantum chromodynamics (QCD) encodes the strong interactions that bind quarks and gluons into nucleons and that bind nucleons into nuclei. Predictive control of QCD would allow nuclear structure and reactions, as well as properties of supernovae and neutron stars, to be studied theoretically from first principles. Lattice QCD (LQCD) can represent generic QCD predictions in terms of well-defined path integrals, but the sign and signal-to-noise problems have obstructed LQCD calculations of large nuclei and nuclear matter in practice. This thesis presents a statistical study of LQCD correlation functions, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in baryon correlation functions is demonstrated to arise from a sign problem associated with Monte Carlo sampling of complex correlation functions. Properties of circular statistics are used to understand the emergence of a large-time noise region where standard energy measurements are unreliable. Power-law tails associated with stable distributions and Lévy flights are found to play a central role in the time evolution of baryon correlation functions. Building on these observations, a new statistical analysis technique called phase reweighting is introduced that allows energy levels to be extracted from large-time correlation functions with time-independent signal-to-noise ratios. Phase reweighting effectively includes dynamical refinement of source magnitudes but introduces a bias associated with the phase. This bias can be removed by performing an extrapolation, but at the expense of re-introducing a signal-to-noise problem. Lattice QCD calculations of the ρ+ and nucleon masses and of the ΞΞ(1S0) binding energy show consistency between standard results obtained using smaller-time correlation functions and phase-reweighted results using large-time correlation functions inaccessible to standard statistical analysis.

  5. MEMS microphone innovations towards high signal to noise ratios (Conference Presentation) (Plenary Presentation)

    Science.gov (United States)

    Dehé, Alfons

    2017-06-01

    After decades of research and more than ten years of successful production in very high volumes, silicon MEMS microphones are mature and unbeatable in form factor and robustness. Audio applications such as video, noise cancellation, and speech recognition are key differentiators in smartphones. Microphones with low self-noise enable those functions. Backplate-free microphones reach signal-to-noise ratios above 70 dB(A). This talk will describe the state-of-the-art MEMS technology of Infineon Technologies. An outlook on future technologies, such as the comb-sensor microphone, will be given.

  6. Balanced detection for self-mixing interferometry to improve signal-to-noise ratio

    Science.gov (United States)

    Zhao, Changming; Norgia, Michele; Li, Kun

    2018-01-01

    We apply balanced detection to self-mixing interferometry for displacement and vibration measurement, using two photodiodes to implement a differential acquisition. The method is based on the phase opposition of the self-mixing signal measured between the two laser diode facet outputs. The balanced signal enlarges the self-mixing signal while canceling the common-mode noise, mainly due to disturbances on the laser supply and the transimpedance amplifier. Experimental results demonstrate that the signal-to-noise ratio improves significantly, with nearly a doubling of the signal and more than a halving of the noise. This method allows for more robust, longer-distance measurement systems, especially those using fringe counting.
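    As a minimal illustration of why the differential acquisition helps, the following Python sketch builds two idealized photodiode signals carrying the self-mixing fringe in phase opposition plus a shared common-mode disturbance; subtracting them roughly doubles the fringe and cancels the common mode. The signal shapes and amplitudes are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 10_000)

fringe = np.sin(2 * np.pi * 40 * t)          # self-mixing interference signal
common = 0.5 * np.sin(2 * np.pi * 50 * t)    # supply/amplifier disturbance
n1 = 0.05 * rng.standard_normal(t.size)      # independent photodiode noise
n2 = 0.05 * rng.standard_normal(t.size)

pd_front = +fringe + common + n1             # front-facet photodiode
pd_rear = -fringe + common + n2              # rear facet: fringe in opposition

balanced = pd_front - pd_rear                # ~2x fringe, common mode cancelled
```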

  7. Modeling speech intelligibility based on the signal-to-noise envelope power ratio

    DEFF Research Database (Denmark)

    Jørgensen, Søren

    of modulation frequency selectivity in the auditory processing of sound with a decision metric for intelligibility that is based on the signal-to-noise envelope power ratio (SNRenv). The proposed speech-based envelope power spectrum model (sEPSM) is demonstrated to account for the effects of stationary...... through three commercially available mobile phones. The model successfully accounts for the performance across the phones in conditions with a stationary speech-shaped background noise, whereas deviations were observed in conditions with “Traffic” and “Pub” noise. Overall, the results of this thesis...

  8. Evaluating signal-to-noise ratios, loudness, and related measures as indicators of airborne sound insulation.

    Science.gov (United States)

    Park, H K; Bradley, J S

    2009-09-01

    Subjective ratings of the audibility, annoyance, and loudness of music and speech sounds transmitted through 20 different simulated walls were used to identify better single number ratings of airborne sound insulation. The first part of this research considered standard measures such as the sound transmission class and the weighted sound reduction index (R(w)), along with variations of these measures [H. K. Park and J. S. Bradley, J. Acoust. Soc. Am. 126, 208-219 (2009)]. This paper considers a number of other measures, including signal-to-noise ratios related to the intelligibility of speech and measures related to the loudness of sounds. An exploration of the importance of the included frequencies showed that the optimum ranges of included frequencies were different for speech and music sounds. Measures related to speech intelligibility were useful indicators of responses to speech sounds but were not as successful for music sounds. A-weighted level differences, signal-to-noise ratios and an A-weighted sound transmission loss measure were good predictors of responses when the included frequencies were optimized for each type of sound. The addition of new spectrum adaptation terms to R(w) values was found to be the most practical approach for achieving more accurate predictions of subjective ratings of transmitted speech and music sounds.

  9. Using hyperentanglement to enhance resolution, signal-to-noise ratio, and measurement time

    Science.gov (United States)

    Smith, James F.

    2017-03-01

    A hyperentanglement-based atmospheric imaging/detection system involving only a signal and an ancilla photon will be considered for optical and infrared frequencies. Only the signal photon will propagate in the atmosphere and its loss will be classical. The ancilla photon will remain within the sensor experiencing low loss. Closed form expressions for the wave function, normalization, density operator, reduced density operator, symmetrized logarithmic derivative, quantum Fisher information, quantum Cramer-Rao lower bound, coincidence probabilities, probability of detection, probability of false alarm, probability of error after M measurements, signal-to-noise ratio, quantum Chernoff bound, time-on-target expressions related to probability of error, and resolution will be provided. The effect of noise in every mode will be included as well as loss. The system will provide the basic design for an imaging/detection system functioning at optical or infrared frequencies that offers better than classical angular and range resolution. Optimization for enhanced resolution will be included. The signal-to-noise ratio will be increased by a factor equal to the number of modes employed during the hyperentanglement process. Likewise, the measurement time can be reduced by the same factor. The hyperentanglement generator will typically make use of entanglement in polarization, energy-time, orbital angular momentum and so on. Mathematical results will be provided describing the system's performance as a function of loss mechanisms and noise.

  10. Brain-computer interfaces increase whole-brain signal to noise.

    Science.gov (United States)

    Papageorgiou, T Dorina; Lisinski, Jonathan M; McHenry, Monica A; White, Jason P; LaConte, Stephen M

    2013-08-13

    Brain-computer interfaces (BCIs) can convert mental states into signals to drive real-world devices, but it is not known if a given covert task is the same when performed with and without BCI-based control. Using a BCI likely involves additional cognitive processes, such as multitasking, attention, and conflict monitoring. In addition, it is challenging to measure the quality of covert task performance. We used whole-brain classifier-based real-time functional MRI to address these issues, because the method provides both classifier-based maps to examine the neural requirements of BCI and classification accuracy to quantify the quality of task performance. Subjects performed a covert counting task at fast and slow rates to control a visual interface. Compared with the same task when viewing but not controlling the interface, we observed that being in control of a BCI improved task classification of fast and slow counting states. BCI control also increased subjects' whole-brain signal-to-noise ratio compared with the absence of control. The neural pattern for control consisted of a positive network comprising dorsal parietal and frontal regions and the anterior insula of the right hemisphere, as well as an expansive negative network of regions. These findings suggest that real-time functional MRI can serve as a platform for exploring information processing and frontoparietal and insula network-based regulation of whole-brain task signal-to-noise ratio.

  11. Multiplane wave imaging increases signal-to-noise ratio in ultrafast ultrasound imaging

    International Nuclear Information System (INIS)

    Tiran, Elodie; Deffieux, Thomas; Correia, Mafalda; Maresca, David; Osmanski, Bruno-Felix; Pernot, Mathieu; Tanter, Mickael; Sieu, Lim-Anna; Bergel, Antoine; Cohen, Ivan

    2015-01-01

    Ultrafast imaging using plane or diverging waves has recently enabled new ultrasound imaging modes with improved sensitivity and very high frame rates. Some of these new imaging modalities include shear wave elastography, ultrafast Doppler, ultrafast contrast-enhanced imaging and functional ultrasound imaging. Even though ultrafast imaging already enjoys clinical success, further increasing its penetration depth and signal-to-noise ratio for dedicated applications would be valuable. Ultrafast imaging relies on the coherent compounding of backscattered echoes resulting from successive tilted plane wave emissions; this produces high-resolution ultrasound images with a trade-off between final frame rate, contrast and resolution. In this work, we introduce multiplane wave imaging, a new method that strongly improves the signal-to-noise ratio of ultrafast images by virtually increasing the emission signal amplitude without compromising the frame rate. This method relies on the successive transmission of multiple plane waves with differently coded amplitudes and emission angles in a single transmit event. Data equivalent to each single plane wave of increased amplitude can then be obtained by recombining the received data of successive events with the proper coefficients. The benefits of multiplane wave imaging for B-mode, shear wave elastography and ultrafast Doppler imaging are demonstrated experimentally. Multiplane wave imaging with 4 plane wave emissions yields a 5.8 ± 0.5 dB increase in signal-to-noise ratio and approximately 10 mm of additional penetration in a calibrated ultrasound phantom (0.7 dB MHz⁻¹ cm⁻¹). In shear wave elastography, the same multiplane wave configuration yields a 2.07 ± 0.05-fold reduction of the particle velocity standard deviation and a two-fold reduction of the standard deviation of the shear wave velocity maps. In functional ultrasound imaging, the mapping of cerebral blood volume results in a 3 to 6 dB increase of the contrast-to-noise ratio in
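    A stripped-down version of the amplitude-coding idea can be written in a few lines. The Python sketch below (synthetic echoes, 2 plane waves, ±1 Hadamard-style codes; all names are ours) shows how recombining the receive events with the inverse code recovers each plane wave's data with coherently summed signal while the uncorrelated electronic noise only grows as the square root of the number of emissions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1],
              [1, -1]])                      # +/-1 amplitude codes, 2 events

# Echoes that single plane waves 1 and 2 would each produce on their own.
e1 = np.sin(np.linspace(0, 20, 500))
e2 = np.cos(np.linspace(0, 20, 500))

# Each transmit event records a coded sum plus fresh electronic noise.
sigma = 0.1
r1 = H[0, 0] * e1 + H[0, 1] * e2 + sigma * rng.standard_normal(e1.size)
r2 = H[1, 0] * e1 + H[1, 1] * e2 + sigma * rng.standard_normal(e1.size)

# Recombining with the inverse code isolates each plane wave's echo: the
# signal sums coherently over the 2 events while the noise adds
# incoherently, so SNR grows as sqrt(2) here, and as sqrt(4) = 2 (~6 dB)
# with 4 coded emissions -- close to the 5.8 dB reported above.
e1_hat = r1 + r2      # = 2*e1 + noise of std sigma*sqrt(2)
e2_hat = r1 - r2      # = 2*e2 + noise of std sigma*sqrt(2)
```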

  12. Relevancies of multiple-interaction events and signal-to-noise ratio for Anger-logic based PET detector designs

    Science.gov (United States)

    Peng, Hao

    2015-10-01

    A fundamental challenge for PET block detector designs is to deploy finer crystal elements while limiting the number of readout channels. The standard Anger-logic scheme including light sharing (an 8 by 8 crystal array coupled to a 2×2 photodetector array with an optical diffuser, multiplexing ratio: 16:1) has been widely used to address such a challenge. Our work proposes a generalized model to study the impacts of two critical parameters on spatial resolution performance of a PET block detector: multiple interaction events and signal-to-noise ratio (SNR). The study consists of the following three parts: (1) studying light output profile and multiple interactions of 511 keV photons within crystal arrays of different crystal widths (from 4 mm down to 1 mm, constant height: 20 mm); (2) applying the Anger-logic positioning algorithm to investigate positioning/decoding uncertainties (i.e., "block effect") in terms of peak-to-valley ratio (PVR), with light sharing, multiple interactions and photodetector SNR taken into account; and (3) studying the dependency of spatial resolution on SNR in the context of modulation transfer function (MTF). The proposed model can be used to guide the development and evaluation of a standard Anger-logic based PET block detector including: (1) selecting/optimizing the configuration of crystal elements for a given photodetector SNR; and (2) predicting to what extent additional electronic multiplexing may be implemented to further reduce the number of readout channels.
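    For readers unfamiliar with Anger-logic positioning, the following toy Python snippet shows the energy-weighted centroid computation on a hypothetical 2 × 2 photodetector readout; the detector coordinates and signal values are invented, and the paper's full model (light sharing, multiple interactions, SNR) is not reproduced.

```python
import numpy as np

# Hypothetical light-sharing readout of one event on a 2x2 photodetector
# array: S[i, j] is the signal on the detector in row i, column j.
S = np.array([[120.0, 310.0],
              [ 90.0, 240.0]])
x = np.array([-1.0, 1.0])   # detector column positions (arbitrary units)
y = np.array([-1.0, 1.0])   # detector row positions

total = S.sum()
x_hat = (S.sum(axis=0) * x).sum() / total   # energy-weighted centroid in x
y_hat = (S.sum(axis=1) * y).sum() / total   # energy-weighted centroid in y
```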

  13. Macromolecular 3D SEM reconstruction strategies: Signal to noise ratio and resolution

    International Nuclear Information System (INIS)

    Woodward, J.D.; Wepf, R.A.

    2014-01-01

    Three-dimensional scanning electron microscopy generates quantitative volumetric structural data from SEM images of macromolecules. This technique provides a quick and easy way to define the quaternary structure and handedness of protein complexes. Here, we apply a variety of preparation and imaging methods to filamentous actin in order to explore the relationship between resolution, signal-to-noise ratio, structural preservation and dataset size. This information can be used to define successful imaging strategies for different applications. - Highlights: • F-actin SEM datasets were collected using 8 different preparation/imaging techniques. • Datasets were reconstructed by back projection and compared/analyzed. • 3DSEM actin reconstructions can be produced with <100 views of the asymmetric unit. • Negatively stained macromolecules can be reconstructed by 3DSEM to ∼3 nm resolution.

  14. Signal-to-noise ratio of FT-IR CO gas spectra

    DEFF Research Database (Denmark)

    Bak, J.; Clausen, Sønnik

    1999-01-01

    The minimum amount of a gaseous compound which can be detected and quantified with Fourier transform infrared (FT-IR) spectrometers depends on the signal-to-noise ratio (SNR) of the measured gas spectra. In order to use low-resolution FT-IR spectrometers to measure combustion gases like CO and CO2 in emission and transmission spectrometry, an investigation of the SNR in CO gas spectra as a function of spectral resolution has been carried out. We present a method to (1) determine experimentally the SNR at constant throughput, (2) determine the SNR on the basis of measured noise levels and Hitran-simulated signals, and (3) determine the SNR of CO from high to low spectral resolutions related to the molecular linewidth and the vibrational-rotational line spacing. In addition, SNR values representing different spectral resolutions but scaled to equal measurement times were compared. It was found…

  15. Variability of signal-to-noise ratio and the network analysis of gravitational wave burst signals

    International Nuclear Information System (INIS)

    Mohanty, S D; Rakhmanov, M; Klimenko, S; Mitselmakher, G

    2006-01-01

    The detection and estimation of gravitational wave burst signals, with a priori unknown polarization waveforms, requires the use of data from a network of detectors. Maximizing the network likelihood functional over all waveforms and sky positions yields point estimates for them as well as a detection statistic. However, the transformation from the data to estimates can become ill-conditioned over parts of the sky, resulting in significant errors in estimation. We modify the likelihood procedure by introducing a penalty functional which suppresses candidate solutions that display large signal-to-noise ratio (SNR) variability as the source is displaced on the sky. Simulations show that the resulting network analysis method performs significantly better in estimating the sky position of a source. Further, this method can be applied to any network, irrespective of the number or mutual alignment of detectors

  16. Measuring multielectron beam imaging fidelity with a signal-to-noise ratio analysis

    Science.gov (United States)

    Mukhtar, Maseeh; Bunday, Benjamin D.; Quoi, Kathy; Malloy, Matt; Thiel, Brad

    2016-07-01

    Java Monte Carlo Simulator for Secondary Electrons (JMONSEL) simulations are used to generate expected imaging responses of chosen test cases of patterns and defects, with the ability to vary parameters for beam energy, spot size, pixel size, and/or defect material and form factor. The patterns are representative of the design rules for an aggressively scaled FinFET-type design. With these simulated images and the resulting shot noise, a signal-to-noise framework is developed, which relates to defect detection probabilities. Additionally, with this infrastructure, the effects of detection-chain noise and frequency-dependent system response can be assessed, allowing for targeting of the best recipe parameters for multielectron beam inspection validation experiments. Ultimately, these results should lead to insights into how such parameters will impact tool design, including necessary doses for defect detection and estimates of scanning speeds for achieving high throughput in high-volume manufacturing.

  17. Combining of Direct Search and Signal-to-Noise Ratio for economic dispatch optimization

    International Nuclear Information System (INIS)

    Lin, Whei-Min; Gow, Hong-Jey; Tsai, Ming-Tang

    2011-01-01

    This paper integrates the ideas of Direct Search and Signal-to-Noise Ratio (SNR) to develop a Novel Direct Search (NDS) method for solving non-convex economic dispatch problems. NDS consists of three stages: Direct Search (DS), Global SNR (GSNR) and Marginal Compensation (MC). DS provides a basic solution, GSNR searches for the optimal point with an optimization strategy, and MC fulfills the power-balance requirement. With NDS, the infinite solution space becomes finite, and the same optimum solution can be reached repeatedly. The effectiveness of NDS is demonstrated on three examples, and the solutions are compared with previously published results. Test results show that the proposed method is simple, robust, and more effective than many other previously developed algorithms.

  18. Attitude determination for small satellites using GPS signal-to-noise ratio

    Science.gov (United States)

    Peters, Daniel

    An embedded system for GPS-based attitude determination (AD) using signal-to-noise ratio (SNR) measurements was developed for CubeSat applications. The design serves as an evaluation testbed for conducting ground-based experiments using various computational methods and antenna types to determine the optimum AD accuracy. Raw GPS data is also stored to non-volatile memory for downloading and post analysis. Two low-power microcontrollers are used for processing and to display information on a graphic screen for real-time performance evaluations. A new parallel inter-processor communication protocol was developed that is faster and uses less power than existing standard protocols. A shorted annular patch (SAP) antenna was fabricated for the initial ground-based AD experiments with the testbed. Static AD estimations with RMS errors in the range of 2.5° to 4.8° were achieved over a range of off-zenith attitudes.

  19. Correlation techniques for the improvement of signal-to-noise ratio in measurements with stochastic processes

    CERN Document Server

    Reddy, V R; Reddy, T G; Reddy, P Y; Reddy, K R

    2003-01-01

    An AC modulation technique is described to convert stochastic signal variations into an amplitude variation and to retrieve them through Fourier analysis. It is shown that this AC detection of signals of stochastic processes, when processed through auto- and cross-correlation techniques, improves the signal-to-noise ratio; the correlation techniques serve a purpose of frequency and phase filtering similar to that of phase-sensitive detection. A few model calculations applied to nuclear spectroscopy measurements such as Angular Correlations, Mössbauer spectroscopy and Pulse Height Analysis reveal considerable improvement in the sensitivity of signal detection. Experimental implementation of the technique is presented in terms of amplitude variations of harmonics representing the derivatives of normal spectra. Improved detection sensitivity to spectral variations is shown to be significant. These correlation techniques are general and can be made applicable to all the fields of particle counting where measurements ar...

  20. A complex symbol signal-to-noise ratio estimator and its performance

    Science.gov (United States)

    Feria, Y.

    1994-01-01

    This article presents an algorithm for estimating the signal-to-noise ratio (SNR) of signals that contain data on a downconverted suppressed carrier or the first harmonic of a square-wave subcarrier. This algorithm can be used to determine the performance of the full-spectrum combiner for the Galileo S-band (2.2- to 2.3-GHz) mission by measuring the input and output symbol SNR. A performance analysis of the algorithm shows that the estimator can estimate the complex symbol SNR using 10,000 symbols at a true symbol SNR of -5 dB with a mean of -4.9985 dB and a standard deviation of 0.2454 dB, and these analytical results are checked by simulations of 100 runs with a mean of -5.06 dB and a standard deviation of 0.2506 dB.
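    The article's exact estimator is not reproduced here, but a standard blind moment-based (M2/M4) symbol-SNR estimator gives the flavor of how SNR can be estimated directly from received symbols. The Python sketch below checks it at the same operating point quoted above (-5 dB true SNR, 10,000 symbols); the BPSK signal model and all names are our assumptions.

```python
import numpy as np

def m2m4_snr(x):
    """Blind M2/M4 moment estimator for a constant-modulus signal in
    complex additive white Gaussian noise."""
    m2 = np.mean(np.abs(x) ** 2)
    m4 = np.mean(np.abs(x) ** 4)
    s = np.sqrt(max(2 * m2 ** 2 - m4, 0.0))   # signal-power estimate
    return s / (m2 - s)                       # noise power is m2 - s

# Check at the article's operating point: 10,000 symbols, -5 dB true SNR.
rng = np.random.default_rng(0)
n_sym = 10_000
snr_true = 10 ** (-5 / 10)
symbols = rng.choice([1.0, -1.0], n_sym)      # BPSK-like symbol stream
noise = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2)
x = np.sqrt(snr_true) * symbols + noise       # unit-power noise
print(10 * np.log10(m2m4_snr(x)))             # close to -5 dB
```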

  1. Symbol signal-to-noise ratio loss in square-wave subcarrier downconversion

    Science.gov (United States)

    Feria, Y.; Statman, J.

    1993-01-01

    This article presents the simulated results of the signal-to-noise ratio (SNR) loss in the process of square-wave subcarrier downconversion. In a previous article, the SNR degradation was evaluated at the output of the downconverter based on the change in signal and noise power. Unlike in the previous article, the SNR loss is defined here as the difference between the actual and theoretical symbol SNRs for the same symbol-error rate at the output of the symbol matched filter. The results show that an average SNR loss of 0.3 dB can be achieved with tenth-order infinite impulse response (IIR) filters. This loss is a 0.2-dB increase over the SNR degradation in the previous analysis, where neither the signal distortion nor the symbol detector was considered.

  2. Noise in Neural Networks: Thresholds, Hysteresis, and Neuromodulation of Signal-To-Noise

    Science.gov (United States)

    Keeler, James D.; Pichler, Elgar E.; Ross, John

    1989-03-01

    We study a neural-network model including Gaussian noise, higher-order neuronal interactions, and neuromodulation. For a first-order network, there is a threshold in the noise level (phase transition) above which the network displays only disorganized behavior and critical slowing down near the noise threshold. The network can tolerate more noise if it has higher-order feedback interactions, which also lead to hysteresis and multistability in the network dynamics. The signal-to-noise ratio can be adjusted in a biological neural network by neuromodulators such as norepinephrine. Comparisons are made to experimental results and further investigations are suggested to test the effects of hysteresis and neuromodulation in pattern recognition and learning. We propose that norepinephrine may "quench" the neural patterns of activity to enhance the ability to learn details.

  3. Signal-to-noise analysis of a birefringent spectral zooming imaging spectrometer

    Science.gov (United States)

    Li, Jie; Zhang, Xiaotong; Wu, Haiying; Qi, Chun

    2018-05-01

    A study of the signal-to-noise ratio (SNR) of a novel spectral zooming imaging spectrometer (SZIS) based on two identical Wollaston prisms is conducted. According to the theory of radiometry and Fourier transform spectroscopy, we derive theoretical equations for the SNR of SZIS in the spectral domain, taking into account the incident wavelength and the adjustable spectral resolution. An example calculation of the SNR of SZIS is performed over 400-1000 nm. The results indicate that the SNR at different spectral resolutions of SZIS can be freely selected by changing the spacing between the two identical Wollaston prisms. This provides a theoretical basis for the design, development and engineering of the imaging spectrometer to meet broad-spectrum and SNR requirements.

  4. Downhole microseismic signal-to-noise ratio enhancement via strip matching shearlet transform

    Science.gov (United States)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-04-01

    The shearlet transform has proven effective in noise attenuation. However, because of the low magnitude and high frequency of downhole microseismic signals, the coefficient values of valid signals and noise are similar in the shearlet domain, making the noise hard to suppress. In this paper, we present a novel signal-to-noise ratio enhancement scheme called the strip matching shearlet transform. The method takes into account the directivity of both microseismic events and shearlets. Through strip matching, the directional match between them is improved, so the coefficient values of valid signals become much larger than those of the noise, and the two can be separated well by thresholding. Experimental results on both synthetic records and field data illustrate that the proposed method preserves the useful components and attenuates the noise well.

  5. Signal-to-noise ratio application to seismic marker analysis and fracture detection

    Science.gov (United States)

    Xu, Hui-Qun; Gui, Zhi-Xian

    2014-03-01

    Seismic data with high signal-to-noise ratios (SNRs) are useful in reservoir exploration. To obtain high-SNR seismic data, significant effort is required for noise attenuation in seismic data processing, which is costly in material, human, and financial resources. We introduce a method for improving the SNR of seismic data, in which the SNR is calculated using a frequency-domain method. Furthermore, we optimize and discuss the critical parameters and the calculation procedure. We applied the proposed method to real data and found that the SNR is high at the seismic marker and low in the fracture zone. Consequently, it can be used to extract detailed information about fracture zones that are inferred by structural analysis but not observed in conventional seismic data.

  6. Stimulation of the Locus Ceruleus Modulates Signal-to-Noise Ratio in the Olfactory Bulb.

    Science.gov (United States)

    Manella, Laura C; Petersen, Nicholas; Linster, Christiane

    2017-11-29

    Norepinephrine (NE) has been shown to influence sensory, and specifically olfactory, processing at the behavioral and physiological levels, potentially by regulating the signal-to-noise ratio (S/N). The present study is the first to look at NE modulation of the olfactory bulb (OB) with regard to S/N in vivo. We show, in male rats, that locus ceruleus stimulation and pharmacological infusions of NE into the OB modulate both spontaneous and odor-evoked neural responses. NE in the OB generated a non-monotonic dose-response relationship, suppressing mitral cell activity at high and low, but not intermediate, NE levels. We propose that NE enhances odor responses not through direct potentiation of the afferent signal per se, but rather by reducing the intrinsic noise of the system. This has important implications for the ways in which an animal interacts with its olfactory environment, particularly as the animal shifts from a relaxed to an alert behavioral state. SIGNIFICANCE STATEMENT Sensory perception can be modulated by behavioral states such as hunger, fear, stress, or a change in environmental context. Behavioral state often affects neural processing via the release of circulating neurochemicals such as hormones or neuromodulators. We here show that the neuromodulator norepinephrine modulates olfactory bulb spontaneous activity and odor responses so as to generate an increased signal-to-noise ratio at the output of the olfactory bulb. Our results help interpret and improve existing ideas for neural network mechanisms underlying behaviorally observed improvements in near-threshold odor detection and discrimination.

  7. MEASUREMENT OF LOW SIGNAL-TO-NOISE RATIO SOLAR p-MODES IN SPATIALLY RESOLVED HELIOSEISMIC DATA

    International Nuclear Information System (INIS)

    Salabert, D.; Leibacher, J.; Hill, F.; Appourchaux, T.

    2009-01-01

    We present an adaptation of the rotation-corrected, m-averaged spectrum technique designed to observe low signal-to-noise ratio (S/N), low-frequency solar p-modes. The frequency shift of each of the 2l + 1 m-spectra of a given (n, l) multiplet is chosen to maximize the likelihood of the m-averaged spectrum. A high S/N can result from combining many low-S/N individual-m spectra, none of which alone would yield a strong enough peak to measure. We apply the technique to Global Oscillation Network Group and Michelson Doppler Imager data and show that it allows us to measure modes with lower frequencies than those obtained with classic peak-fitting analysis of the individual-m spectra. We measure their central frequencies, splittings, asymmetries, lifetimes, and amplitudes. The low-frequency, low and intermediate angular degrees rendered accessible by this new method correspond to modes that are sensitive to the deep solar interior down to the core (l ≤ 3) and to the radiative interior (4 ≤ l ≤ 35). Moreover, the low-frequency modes have deeper upper turning points, and are thus less sensitive to the turbulence and magnetic fields of the outer layers, as well as to uncertainties in the nature of the external boundary condition. As a result of their longer lifetimes (narrower linewidths) at the same S/N, the determination of the frequencies of lower-frequency modes is more accurate, and the resulting inversions should be more precise.
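    The shift-and-average idea can be illustrated with a toy Python example: each synthetic m-spectrum contains a weak Lorentzian peak displaced by m times a splitting, and scanning candidate splittings for the one that maximizes the averaged peak mimics, in a crude integer-bin way, the likelihood maximization described above. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, l, true_split = 512, 2, 3     # true splitting: 3 bins per unit m

def m_spectrum(m):
    """Low-S/N power spectrum for one m: a weak Lorentzian displaced by
    m * true_split bins, on top of a chi^2-like noise floor."""
    f = np.arange(n_bins)
    centre = n_bins // 2 + m * true_split
    peak = 1.0 / (1.0 + ((f - centre) / 2.0) ** 2)
    return peak + rng.exponential(1.0, n_bins)

spectra = {m: m_spectrum(m) for m in range(-l, l + 1)}

def m_average(split):
    """Shift each m-spectrum back by m * split bins, then average."""
    return np.mean([np.roll(s, -m * split) for m, s in spectra.items()], axis=0)

# Scan candidate splittings: the right one aligns the 2l + 1 weak peaks,
# so the averaged spectrum shows the strongest single peak.
best_split = max(range(8), key=lambda s: m_average(s).max())
```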

  8. Theoretical and experimental signal-to-noise ratio assessment in new direction sensing continuous-wave Doppler lidar

    DEFF Research Database (Denmark)

    Pedersen, Anders Tegtmeier; Foroughi Abari, Farzad; Mann, Jakob

    2014-01-01

    A new direction sensing continuous-wave Doppler lidar based on an image-reject homodyne receiver has recently been demonstrated at DTU Wind Energy, Technical University of Denmark. In this contribution we analyse the signal-to-noise ratio resulting from two different data processing methods, both leading to the direction sensing capability. It is found that using the auto spectrum of the complex signal to determine the wind speed leads to a signal-to-noise ratio equivalent to that of a standard self-heterodyne receiver. Using the imaginary part of the cross spectrum to estimate the Doppler shift has the benefit of a zero-mean background spectrum, but comes at the expense of a decrease in the signal-to-noise ratio by a factor of √2.
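    The two processing routes can be sketched directly from synthetic I/Q data. In the Python fragment below (invented sampling rate and Doppler frequency), the auto spectrum of the complex signal places the peak at a signed frequency, while the imaginary part of the I-Q cross spectrum provides a zero-mean background, mirroring the trade-off described above.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, f0 = 100e6, 4096, 12e6                 # sampling rate, length, Doppler
t = np.arange(n) / fs
i_ch = np.cos(2 * np.pi * f0 * t) + rng.standard_normal(n)
q_ch = np.sin(2 * np.pi * f0 * t) + rng.standard_normal(n)

# Route 1: auto spectrum of the complex signal z = I + jQ. The peak sits
# at +f0 or -f0, so the sign of the line-of-sight velocity is resolved.
auto = np.abs(np.fft.fft(i_ch + 1j * q_ch)) ** 2

# Route 2: imaginary part of the I-Q cross spectrum. Its background noise
# has zero mean, at the cost of the sqrt(2) SNR penalty noted above.
cross_im = np.imag(np.fft.fft(i_ch) * np.conj(np.fft.fft(q_ch)))
```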

  9. The differential Howland current source with high signal to noise ratio for bioimpedance measurement system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jinzhen; Li, Gang; Lin, Ling, E-mail: linling@tju.edu.cn [State Key Laboratory of Precision Measurement Technology and Instruments, Tianjin University, Tianjin, People's Republic of China, and Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin (China); Qiao, Xiaoyan [College of Physics and Electronic Engineering, Shanxi University, Shanxi (China); Wang, Mengjun [School of Information Engineering, Hebei University of Technology, Tianjin (China); Zhang, Weibo [Institute of Acupuncture and Moxibustion China Academy of Chinese Medical Sciences, Beijing (China)]

    2014-05-15

    The stability and signal-to-noise ratio (SNR) of the current source circuit are important factors contributing to the accuracy and sensitivity of a bioimpedance measurement system. In this paper we propose a new differential Howland topology current source and evaluate its output characteristics by simulation and actual measurement. The results show that (1) the output current and impedance at high frequencies are stabilized after compensation, and the stability of the output current in the differential current source circuit (DCSC) is 0.2%; (2) the output impedance of both current circuits is above 1 MΩ below 200 kHz and reaches 200 kΩ below 1 MHz, so the output impedance of the DCSC is overall higher than that of the Howland current source circuit (HCSC); (3) the SNRs of the DCSC are 85.64 dB and 65 dB in simulation and actual measurement at 10 kHz, which illustrates that the DCSC effectively eliminates common-mode interference; and (4) the maximum load of the DCSC is twice that of the HCSC. Lastly, a two-dimensional phantom electrical impedance tomography image is well reconstructed with the proposed circuit. The measured performance shows that the DCSC can significantly improve the output impedance, stability, maximum load, and SNR of the measurement system.

  10. Accuracy of signal-to-noise ratio measurement method for magnetic resonance images

    International Nuclear Information System (INIS)

    Ogura, Akio; Miyai, Akira; Maeda, Fumie; Fukutake, Hiroyuki; Kikumoto, Rikiya

    2003-01-01

    The signal-to-noise ratio (SNR) of a magnetic resonance image is a common measure of imager performance; however, the SNR is evaluated using a variety of methods. A problem with measuring SNR is caused by the distortion of noise statistics in commonly used magnitude images. In this study, measurement accuracy was compared among four methods of evaluating SNR according to the size and position of the regions of interest (ROIs). The results indicated that the method using the difference between two images showed the best agreement with the theoretical value. In the method using a single image, the SNR calculated with a small ROI showed better agreement with the theoretical value because of noise bias and image artifacts. However, in the method using the difference between two images, a large ROI was better for reducing statistical errors. In the same way, the methods using air noise and air signal were better when applied to a large ROI. In addition, the image subtraction process used to calculate pixel-by-pixel differences may clip negative pixel values to zero when using an image processor with the MRI system and its associated apparatus; a revised equation is presented for this case. It is important to understand the characteristics of each method and to choose a suitable method carefully according to the purpose of the study. (author)

  11. A Dynamical System Exhibits High Signal-to-noise Ratio Gain by Stochastic Resonance

    Science.gov (United States)

    Makra, Peter; Gingl, Zoltan

    2003-05-01

    On the basis of mixed-signal simulations, we demonstrate that signal-to-noise ratio (SNR) gains much greater than unity can be obtained in the double-well potential through stochastic resonance (SR) with a symmetric periodic pulse train as deterministic and Gaussian white noise as random excitation. We also show that significant SNR improvement is possible in this system even for a sub-threshold sinusoid input if, instead of the commonly used narrow-band SNR, we apply an equally simple but much more realistic wide-band SNR definition. Using the latter result as an argument, we draw attention to the fact that the choice of the measure to reflect signal quality is critical with regard to the extent of signal improvement observed, and urge reconsideration of the practice prevalent in SR studies that most often the narrow-band SNR is used to characterise SR. Finally, we pose some questions concerning the possibilities of applying SNR improvement in practical set-ups.

  12. Ultrasonic correlator versus signal averager as a signal to noise enhancement instrument

    Science.gov (United States)

    Kishoni, Doron; Pietsch, Benjamin E.

    1989-01-01

    Ultrasonic inspection of thick and attenuating materials is hampered by the reduced amplitudes of the propagated waves to a degree that the noise is too high to enable meaningful interpretation of the data. In order to overcome the low Signal to Noise (S/N) ratio, a correlation technique has been developed. In this method, a continuous pseudo-random pattern generated digitally is transmitted and detected by piezoelectric transducers. A correlation is performed in the instrument between the received signal and a variable delayed image of the transmitted one. The result is shown to be proportional to the impulse response of the investigated material, analogous to a signal received from a pulsed system, with an improved S/N ratio. The degree of S/N enhancement depends on the sweep rate. This paper describes the correlator, and compares it to the method of enhancing S/N ratio by averaging the signals. The similarities and differences between the two are highlighted and the potential advantage of the correlator system is explained.
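    An idealized numerical version of the correlator is shown below, assuming a digitally generated ±1 pseudo-random excitation and a two-echo impulse response (both invented): correlating the noisy received record against delayed copies of the transmitted pattern recovers an estimate proportional to the impulse response, with noise suppressed by the long integration.

```python
import numpy as np

rng = np.random.default_rng(0)
prbs = rng.choice([-1.0, 1.0], 8192)          # digital pseudo-random pattern

h = np.zeros(64)
h[20], h[45] = 1.0, 0.4                       # two-echo impulse response

received = np.convolve(prbs, h)[: prbs.size] + 2.0 * rng.standard_normal(prbs.size)

# Correlate the noisy record against delayed copies of the transmitted
# pattern; the PRBS autocorrelation is nearly a delta function, so the
# result is proportional to h, with the noise averaged down over the
# whole record length (analogous to signal averaging in a pulsed system).
est = np.array([np.dot(received, np.roll(prbs, d)) for d in range(h.size)])
est /= prbs.size
```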

  13. Intrinsic low pass filtering improves signal-to-noise ratio in critical-point flexure biosensors

    International Nuclear Information System (INIS)

    Jain, Ankit; Alam, Muhammad Ashraful

    2014-01-01

    A flexure biosensor consists of a suspended beam and a fixed bottom electrode. The adsorption of target biomolecules on the beam changes its stiffness and results in a change of the beam's deflection. It is now well established that the sensitivity of the sensor is maximized close to the pull-in instability point, where the effective stiffness of the beam vanishes. The question of whether the signal-to-noise ratio (SNR) and the limit of detection (LOD) also improve close to the instability point, however, remains unanswered. In this article, we systematically analyze the noise response to evaluate the SNR and establish the LOD of critical-point flexure sensors. We find that a flexure sensor acts like an effective low-pass filter close to the instability point due to its relatively small resonance frequency, and rejects high-frequency noise, leading to improved SNR and LOD. We believe that our conclusions establish the uniqueness and the technological relevance of critical-point biosensors.

  14. Relationship of signal-to-noise ratio with acquisition parameters in MRI for a given contrast

    International Nuclear Information System (INIS)

    Bittoun, J.; Leroy-Willig, A.; Idy, I.; Halimi, P.; Syrota, A.; Desgrez, A.; Saint-Jalmes, H.

    1987-01-01

    The signal-to-noise ratio (SNR) is certainly the most important characteristic of medical images, since the spatial resolution and the visualization of contrast depend on its value. On the other hand, modifying an acquisition variable in magnetic resonance imaging, for example to improve spatial resolution, may induce an SNR loss and ultimately degrade image quality. We have studied a theoretical relation between the SNR and the acquisition variables of the 2DFT method, with the exception of parameters such as TR, TE and TI, which are determined by the contrast needed to confirm a diagnosis. According to this relation, the SNR is proportional to each dimension of the slice and to the square root of the number of averaged signals; it is inversely proportional to the number of frequency points and to the square root of the number of phase points. This relation was verified experimentally with phantoms on an MR system at 1.5 T. It was then plotted as a multiple-entry graph from which operators at the console can read the number of averaged signals necessary to compensate for the SNR loss induced by a modification of the other parameters.
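    Written out as a formula, the proportionality stated in the abstract reads as below; the symbols (voxel dimensions Δx, Δy, Δz, number of averaged signals N_acq, number of frequency points N_f, number of phase points N_p) are our notation, not the paper's.

```latex
\mathrm{SNR} \;\propto\; \frac{\Delta x \,\Delta y \,\Delta z \,\sqrt{N_{\mathrm{acq}}}}{N_{f}\,\sqrt{N_{p}}}
```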

  15. The impact of signal-to-noise ratio on contextual cueing in children and adults.

    Science.gov (United States)

    Yang, Yingying; Merrill, Edward C

    2015-04-01

    Contextual cueing refers to a form of implicit spatial learning where participants incidentally learn to associate a target location with its repeated spatial context. Successful contextual learning produces an efficient visual search through familiar environments. Despite the fact that children exhibit the basic ability of implicit spatial learning, their general effectiveness in this form of learning can be compromised by other development-dependent factors. Learning to extract useful information (signal) in the presence of various amounts of irrelevant or distracting information (noise) characterizes one of the most important changes that occur with cognitive development. This research investigated whether signal-to-noise ratio (S/N) affects contextual cueing differently in children and adults. S/N was operationally defined as the ratio of repeated versus new displays encountered over time. Three ratio conditions were created: high (100%), medium (67%), and low (33%) conditions. Results suggested no difference in the acquisition of contextual learning effects in the high and medium conditions across three age groups (6- to 8-year-olds, 10- to 12-year-olds, and young adults). However, a significant developmental difference emerged in the low S/N condition. As predicted, adults exhibited significant contextual cueing effects, whereas older children showed marginally significant contextual cueing and younger children did not show cueing effects. Group differences in the ability to exhibit implicit contextual learning under low S/N conditions and the implications of this difference are discussed.

  16. Signal-to-noise ratio measurement in parallel MRI with subtraction mapping and consecutive methods

    International Nuclear Information System (INIS)

    Imai, Hiroshi; Miyati, Tosiaki; Ogura, Akio; Doi, Tsukasa; Tsuchihashi, Toshio; Machida, Yoshio; Kobayashi, Masato; Shimizu, Kouzou; Kitou, Yoshihiro

    2008-01-01

    When measuring the signal-to-noise ratio (SNR) of images acquired with parallel magnetic resonance imaging, it was confirmed that there is a problem with applying conventional SNR measurements. With the method of measuring the noise from the background signal, the SNR with parallel imaging was higher than that without parallel imaging. In the subtraction method (NEMA standard), which uses a wide region of interest, the white noise was not evaluated correctly, although the SNR was close to the theoretical value. We proposed two techniques, because the SNR in parallel imaging is not uniform owing to the inhomogeneity of the coil sensitivity distribution and the geometry factor. Using the first method (subtraction mapping), two images were scanned with identical parameters, and the SNR in each pixel was obtained by dividing the running mean (7 by 7 pixels in the neighborhood) by the standard deviation/√2 in the same region of interest. Using the second (consecutive) method, more than fifty consecutive scans of a uniform phantom were obtained with identical scan parameters, and the SNR was calculated from the ratio of the mean signal intensity to the standard deviation in each pixel across the series of images. Moreover, geometry factors were calculated from the SNRs with and without parallel imaging. The SNR and geometry factor obtained with the subtraction mapping method agreed with those of the consecutive method. Both methods make it possible to obtain a more detailed determination of the SNR in parallel imaging and to calculate the geometry factor. (author)
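    The subtraction-mapping computation lends itself to a compact sketch. The Python fragment below assumes two repeated magnitude images acquired with identical parameters and uses scipy's uniform_filter for the 7 × 7 running statistics; the function name and guard constant are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def snr_map(img1, img2, size=7):
    """Pixelwise SNR from two scans acquired with identical parameters:
    running mean of the average image divided by the local noise, where
    the noise is the running std of the difference image over sqrt(2)."""
    mean_map = uniform_filter((img1 + img2) / 2.0, size)
    diff = img1 - img2
    var = uniform_filter(diff ** 2, size) - uniform_filter(diff, size) ** 2
    noise = np.sqrt(np.maximum(var, 1e-12)) / np.sqrt(2.0)
    return mean_map / noise
```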

  17. Large signal-to-noise ratio quantification in MLE for ARARMAX models

    Science.gov (United States)

    Zou, Yiqun; Tang, Xiafei

    2014-06-01

    It has been shown that closed-loop linear system identification by the indirect method can generally be transferred to open-loop ARARMAX (AutoRegressive AutoRegressive Moving Average with eXogenous input) estimation. For such models, gradient-related optimisation with a large enough signal-to-noise ratio (SNR) can avoid potential convergence to local optima in maximum likelihood estimation. To ease the application of this condition, the threshold SNR needs to be quantified. In this paper, we build the amplitude coefficient, which is an equivalent of the SNR, and prove the finiteness of the threshold amplitude coefficient within the stability region. The quantification of the threshold is achieved by minimising an elaborately designed multi-variable cost function which unifies all the restrictions on the amplitude coefficient. The corresponding algorithm, based on two sets of physically realisable system input-output data, details the minimisation and also points out how to use the gradient-related method to estimate ARARMAX parameters when a local minimum is present because the SNR is small. The algorithm is then tested on a theoretical AutoRegressive Moving Average with eXogenous input model for the derivation of the threshold, and on a real gas turbine engine system for model identification. Finally, the graphical validation of the threshold on a two-dimensional plot is discussed.

  18. Gating in time domain as a tool for improving the signal-to-noise ratio of beam transfer function measurements

    CERN Document Server

    Oeftiger, U; Caspers, Fritz

    1992-01-01

    For the measurement of Beam Transfer Functions, the signal-to-noise ratio is of great importance. In order to get a reasonable quality of the measured data, one may apply averaging and smoothing. In the following, another technique to improve the quality of the measurement, called time gating, is described. With this technique, the measurement data are Fourier transformed and then modified in the time domain. Time gating suppresses signal contributions that are correlated with a time interval in which no interesting information is expected. Afterwards, an inverse Fourier transform leads to data in the frequency domain with an improved signal-to-noise ratio.
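    A minimal sketch of time gating, assuming the measured transfer function H(f) is available on a uniform frequency grid and the wanted response is known to lie in a given time window (the sample indices are hypothetical):

```python
import numpy as np

def time_gate(H, gate_start, gate_stop):
    """Fourier transform the measured transfer function H(f) to the time
    domain, zero everything outside [gate_start, gate_stop) where the
    wanted response lives, and transform back."""
    h = np.fft.ifft(H)                    # (complex) impulse response
    gate = np.zeros(h.size)
    gate[gate_start:gate_stop] = 1.0      # rectangular gate; could be tapered
    return np.fft.fft(h * gate)           # gated transfer function
```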

  19. [The radial velocity measurement accuracy of different spectral type low resolution stellar spectra at different signal-to-noise ratio].

    Science.gov (United States)

    Wang, Feng-Fei; Luo, A-Li; Zhao, Yong-Heng

    2014-02-01

    The radial velocity of a star is very important for the study of the dynamical structure and chemical evolution of the Milky Way, and is also a useful tool for finding variable or peculiar objects. In the present work, we focus on calculating the radial velocity of different spectral types of low-resolution stellar spectra by adopting a template matching method, so as to provide an effective and reliable reference for different aspects of scientific research. We choose high signal-to-noise ratio (SNR) spectra of stars of different spectral types from the Sloan Digital Sky Survey (SDSS), and add different amounts of noise to simulate stellar spectra with different SNRs. We then obtain the radial velocity measurement accuracy for stellar spectra of different spectral types at different SNRs by employing the template matching method. Meanwhile, the radial velocity measurement accuracy of white dwarf stars is analyzed as well. We conclude that the accuracy of radial velocity measurements of late-type stars is much higher than that of early-type ones. For example, the 1-sigma standard error of radial velocity measurements of A-type stars is 5-8 times as large as that of K-type and M-type stars. We discuss the reason and suggest that the very narrow lines of late-type stars ensure the accuracy of the measured radial velocities, while early-type stars with very wide Balmer lines, such as A-type stars, are sensitive to noise and yield low radial velocity accuracy. For the spectra of white dwarf stars, the standard error of the radial velocity measurement can exceed 50 km s⁻¹ because of their extremely wide Balmer lines. The above conclusions provide a good reference for stellar studies.
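    A toy version of template matching for radial velocities is sketched below in Python, assuming both spectra have been resampled onto a common log-wavelength grid so that a Doppler shift becomes a uniform translation; the function and variable names are ours, and real pipelines refine the correlation peak to sub-pixel precision.

```python
import numpy as np

C_KMS = 299_792.458                        # speed of light in km/s

def radial_velocity(spec, template, dlnlam):
    """Estimate RV by cross-correlation on a log-wavelength grid with
    step dlnlam per pixel; the lag of the correlation peak gives
    v = c * lag * dlnlam (integer-pixel precision only)."""
    s = spec - spec.mean()
    t = template - template.mean()
    corr = np.correlate(s, t, mode="full")
    lag = int(corr.argmax()) - (t.size - 1)
    return C_KMS * lag * dlnlam
```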

  20. Investigations on the relationship between power spectrum and signal-to-noise ratio of frequency-swept pulses

    International Nuclear Information System (INIS)

    Zhang Zhuhong; Fan Diayuan

    1993-01-01

    The criterion for obtaining compressed chirp pulses with a high signal-to-noise ratio is the shape of the power spectrum: a chirp pulse with a Gaussian-shaped power spectrum, free of modulation, is needed in a CPA system to obtain clean compressed pulses. 4 refs., 2 figs.

  1. Influence of Signal-to-Noise Ratio and Point Spread Function on Limits of Super-Resolution

    NARCIS (Netherlands)

    Pham, T.Q.; Vliet, L.J. van; Schutte, K.

    2005-01-01

    This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low resolution images. Three important parameters influence the outcome of this limit: the total Point Spread Function (PSF), the Signal-to-Noise Ratio (SNR) and the number of input images.

  3. Parallel Array Bistable Stochastic Resonance System with Independent Input and Its Signal-to-Noise Ratio Improvement

    Directory of Open Access Journals (Sweden)

    Wei Li

    2014-01-01

    with independent components and averaged output; second, we give a deduction of the output signal-to-noise ratio (SNR) for this system to show the performance. Our examples show the enhancement of the system and how different parameters influence the performance of the proposed parallel array.

  4. Effects of the physiological parameters on the signal-to-noise ratio of single myoelectric channel

    Directory of Open Access Journals (Sweden)

    Zhang YT

    2007-08-01

    Background: An important measure of the performance of a myoelectric (ME) control system for powered artificial limbs is the signal-to-noise ratio (SNR) at the output of the ME channel. However, few studies have illustrated the neuromuscular interactive effects on the SNR at the ME control channel output. In order to obtain a comprehensive understanding of the relationship between the physiology of individual motor units and ME control performance, this study investigates the effects of physiological factors on the SNR of a single ME channel by an analytical and simulation approach, where the SNR is defined as the ratio of the mean-squared-value estimate at the channel output to the variance of the estimate. Methods: Mathematical models are formulated based on three fundamental elements: a motoneuron firing mechanism, a motor unit action potential (MUAP) module, and a signal processor. Myoelectric signals of a motor unit are synthesized with different physiological parameters, and the corresponding SNR of a single ME channel is numerically calculated. The effects of multiple physiological factors on the SNR are investigated, including properties of the motoneuron, MUAP waveform, recruitment order, and firing pattern. Results: The results of the mathematical model, supported by simulation, indicate that the SNR of a single ME channel is associated with the voluntary contraction level. We showed that a model-based approach can provide insight into the key factors and bioprocesses in ME control. The results of this modelling work can potentially be used to improve ME control performance and to train amputees with powered prostheses. Conclusion: The SNR of a single ME channel is a force-, neuronal- and muscular-property-dependent parameter. The theoretical model provides possible guidance for enhancing the SNR of the ME channel by controlling physiological variables or the conscious contraction level.

  5. Theory and Measurement of Signal-to-Noise Ratio in Continuous-Wave Noise Radar.

    Science.gov (United States)

    Stec, Bronisław; Susek, Waldemar

    2018-05-06

    Determination of the signal power-to-noise power ratio at the input and output of reception systems is essential to the estimation of their quality and signal reception capability. This issue is especially important when both the signal and the noise have the same characteristic as Gaussian white noise. This article considers how the signal-to-noise ratio changes as a result of signal processing in the correlation receiver of a noise radar, in order to determine the ability to detect weak features in the presence of strong clutter-type interference. These studies concern both theoretical analysis and practical measurements of a noise radar with a digital correlation receiver for the 9.2 GHz band. Firstly, the signals participating individually in the correlation process are defined and the terms signal and interference are ascribed to them. Further studies show that it is possible to distinguish a signal and a noise at the input and output of a correlation receiver, respectively, when all the considered noises are in the form of white noise. Considering the above, a measurement system is designed in which it is possible to represent the actual conditions of noise radar operation and to measure the power of a useful noise signal and of interfering noise signals, in particular the power of the internal leakage signal between the transmitter and the receiver of the noise radar. The proposed measurement stands and the obtained results show that optimization is possible by means of the equipment rather than by complex processing of the noise signal. The radar parameters depend on its prospective application, such as short- and medium-range radar, ground-penetrating radar, and through-the-wall detection radar.

  6. A consistency evaluation of signal-to-noise ratio in the quality assessment of human brain magnetic resonance images.

    Science.gov (United States)

    Yu, Shaode; Dai, Guangzhe; Wang, Zhaoyang; Li, Leida; Wei, Xinhua; Xie, Yaoqin

    2018-05-16

    Quality assessment of medical images is highly related to quality assurance, image interpretation and decision making. As to magnetic resonance (MR) images, the signal-to-noise ratio (SNR) is routinely used as a quality indicator, while little is known about its consistency across different observers. In total, 192, 88, 76 and 55 brain images are acquired using T2*, T1, T2 and contrast-enhanced T1 (T1C) weighted MR imaging sequences, respectively. For each imaging protocol, the consistency of the SNR measurement is verified between and within two observers, and white matter (WM) and cerebral spinal fluid (CSF) are alternately used as the tissue region of interest (TOI) for SNR measurement. The procedure is repeated on another day within 30 days. At first, overlapped voxels in TOIs are quantified with the Dice index. Then, test-retest reliability is assessed in terms of the intra-class correlation coefficient (ICC). After that, four models (BIQI, BLIINDS-II, BRISQUE and NIQE), primarily used for the quality assessment of natural images, are borrowed to predict the quality of the MR images. In the end, the correlation between SNR values and predicted results is analyzed. For the same TOI in each MR imaging sequence, less than 6% of voxels overlap between manual delineations. In the quality estimation of MR images, statistical analysis indicates no significant difference between observers (Wilcoxon rank sum test, p ≥ 0.11; paired-sample t test, p ≥ 0.26), and good to very good intra- and inter-observer reliability is found (ICC ≥ 0.74). Furthermore, the Pearson correlation coefficient (r_p) suggests that SNR_wm correlates strongly with BIQI, BLIINDS-II and BRISQUE in T2* (r_p ≥ 0.78), BRISQUE and NIQE in T1 (r_p ≥ 0.77), BLIINDS-II in T2 (r_p ≥ 0.68) and BRISQUE and NIQE in T1C (r_p ≥ 0.62) weighted MR images, while SNR_csf correlates strongly with BLIINDS-II in T2* (r_p ≥ 0.63) and in T

  7. Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI

    Science.gov (United States)

    Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.

    2015-09-01

    We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).
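    The LOCI-style least-squares step at the heart of such algorithms fits a compact sketch. The Python fragment below subtracts an optimal linear combination of reference images from a science image; MLOCI's distinctive step of tuning the coefficients to maximize the S/N of inserted synthetic sources is not reproduced here, and the function name is ours.

```python
import numpy as np

def loci_subtract(science, references):
    """science: flattened image, shape (npix,);
    references: stack of reference images, shape (nref, npix).
    Subtract the least-squares combination of references (the stellar
    PSF model); companions and noise remain in the residual."""
    coeffs, *_ = np.linalg.lstsq(references.T, science, rcond=None)
    return science - references.T @ coeffs
```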

  8. Low signal-to-noise FDEM in-phase data: Practical potential for magnetic susceptibility modelling

    Science.gov (United States)

    Delefortrie, Samuël; Hanssens, Daan; De Smedt, Philippe

    2018-05-01

    In this paper, we consider the use of land-based frequency-domain electromagnetics (FDEM) for magnetic susceptibility modelling. FDEM data comprise both out-of-phase and in-phase components, which can be related to the electrical conductivity and magnetic susceptibility of the subsurface. Though applying the FDEM method to obtain information on the subsurface conductivity is well established in various domains (e.g. through the low induction number approximation of subsurface apparent conductivity), the potential for susceptibility mapping is often overlooked. Especially given a subsurface with a low magnetite and maghemite content (e.g. most sedimentary environments), it is generally assumed that susceptibility is negligible. Nonetheless, the heterogeneity of the near surface and the impact of anthropogenic disturbances on the soil can cause sufficient variation in susceptibility for it to be detectable in a repeatable way. Unfortunately, it can be challenging to study the potential for susceptibility mapping due to systematic errors, an often low signal-to-noise ratio, and the intricacy of correlating in-phase responses with subsurface susceptibility and conductivity. Alongside use of an accurate forward model - accounting for out-of-phase/in-phase coupling - any attempt at relating the in-phase response to subsurface susceptibility requires overcoming instrument-specific limitations that burden the real-world application of FDEM susceptibility mapping. Firstly, the often erratic and drift-sensitive nature of in-phase responses calls for relative data levelling. In addition, a correction for absolute levelling offsets may be equally necessary: ancillary (subsurface) susceptibility data can be used to assess the importance of absolute in-phase calibration, though this requires accurate in-situ data. To allow assessing the (importance of) in-phase calibration alongside the potential of FDEM data for susceptibility modelling, we consider an experimental

  9. Electron dose dependence of signal-to-noise ratio, atom contrast and resolution in transmission electron microscope images

    International Nuclear Information System (INIS)

    Lee, Z.; Rose, H.; Lehtinen, O.; Biskupek, J.; Kaiser, U.

    2014-01-01

    In order to achieve the highest resolution in aberration-corrected (AC) high-resolution transmission electron microscopy (HRTEM) images, high electron doses are required which only a few samples can withstand. In this paper we perform dose-dependent AC-HRTEM image calculations, and study the dependence of the signal-to-noise ratio, atom contrast and resolution on electron dose and sampling. We introduce dose-dependent contrast, which can be used to evaluate the visibility of objects under different dose conditions. Based on our calculations, we determine optimum samplings for high and low electron dose imaging conditions. - Highlights: • The definition of dose-dependent atom contrast is introduced. • The dependence of the signal-to-noise ratio, atom contrast and specimen resolution on electron dose and sampling is explored. • The optimum sampling can be determined according to different dose conditions
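
    For shot-noise-limited recording, the dependence studied here follows Poisson statistics: an atom of fractional contrast C imaged with dose D (electrons per Å²) at sampling s (Å per pixel) collects D·s² electrons per pixel, so its single-pixel SNR scales roughly as C·√(D·s²). A back-of-the-envelope sketch with purely illustrative numbers, not the paper's image calculations:

```python
import numpy as np

def atom_snr(contrast, dose, sampling):
    """Shot-noise-limited single-pixel SNR of an atom.
    contrast : fractional image contrast of the atom (dimensionless)
    dose     : electron dose in e-/A^2
    sampling : pixel size in A/pixel (pixel area = sampling**2)
    """
    electrons_per_pixel = dose * sampling ** 2
    return contrast * np.sqrt(electrons_per_pixel)

# Coarser sampling collects more electrons per pixel (higher SNR) at the
# price of resolution: the trade-off behind dose-dependent optimum sampling.
for sampling in (0.05, 0.1, 0.2):   # A/pixel, illustrative
    for dose in (1e2, 1e4):         # e-/A^2: low- vs high-dose imaging
        print(f"sampling={sampling:4.2f} A/px  dose={dose:8.0f} e-/A^2  "
              f"SNR={atom_snr(0.1, dose, sampling):6.2f}")
```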

  10. Signal-to-Noise Enhancement of a Nanospring Redox-Based Sensor by Lock-in Amplification

    Directory of Open Access Journals (Sweden)

    Pavel V. Bakharev

    2015-06-01

    Full Text Available A significant improvement of the response characteristics of a redox chemical gas sensor (chemiresistor) constructed with a single ZnO coated silica nanospring has been achieved with the technique of lock-in signal amplification. The comparison of DC and analog lock-in amplifier (LIA) AC measurements of the electrical sensor response to toluene vapor, at the ppm level, has been conducted. When operated in the DC detection mode, the sensor exhibits a relatively high sensitivity to the analyte vapor, as well as a low detection limit at the 10 ppm level. However, at 10 ppm the signal-to-noise ratio is 5 dB, which is less than desirable. When operated in the analog LIA mode, the signal-to-noise ratio at 10 ppm increases by 30 dB and extends the detection limit to the ppb range.
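
    The 30 dB improvement reported here is what narrow-band lock-in demodulation buys: the sensor signal is mixed with a reference at the modulation frequency and low-pass filtered, so only noise inside the narrow filter band survives. A minimal digital lock-in sketch with illustrative frequencies and noise levels, not the authors' analog LIA:

```python
import numpy as np

fs = 10_000.0                      # sample rate, Hz
f_ref = 137.0                      # modulation/reference frequency, Hz
t = np.arange(0, 50.0, 1.0 / fs)

rng = np.random.default_rng(1)
amplitude = 0.01                   # weak sensor response to the analyte
signal = amplitude * np.sin(2 * np.pi * f_ref * t)
noisy = signal + rng.normal(0.0, 1.0, t.size)   # buried deep under noise

# Dual-phase demodulation: mix with quadrature references, then average.
# Averaging over the full record acts as a very narrow low-pass filter.
i = np.mean(noisy * np.sin(2 * np.pi * f_ref * t))
q = np.mean(noisy * np.cos(2 * np.pi * f_ref * t))
recovered = 2.0 * np.hypot(i, q)

print(f"true amplitude {amplitude:.4f}, lock-in estimate {recovered:.4f}")
```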

  11. The primordial deuterium abundance at zabs = 2.504 from a high signal-to-noise spectrum of Q1009+2956

    Science.gov (United States)

    Zavarygin, E. O.; Webb, J. K.; Dumont, V.; Riemer-Sørensen, S.

    2018-04-01

    The spectrum of the zem = 2.63 quasar Q1009+2956 has been observed extensively on the Keck telescope. The Lyman limit absorption system at zabs = 2.504 was previously used to measure D/H by Burles & Tytler, using a spectrum with a signal to noise of approximately 60 per pixel in the continuum near Lyα at zabs = 2.504. The larger dataset now available combines to form an exceptionally high signal to noise spectrum, around 147 per pixel. Several heavy element absorption lines are detected in this LLS, providing strong constraints on the kinematic structure. We explore a suite of absorption system models and find that the deuterium feature is likely to be contaminated by weak interloping Lyα absorption from a low column density H I cloud, reducing the expected D/H precision. We find D/H = 2.48 (+0.41/-0.35) × 10⁻⁵ for this system. Combining this new measurement with others from the literature and applying the method of Least Trimmed Squares to a statistical sample of 15 D/H measurements results in a "reliable" sample of 13 values. This sample yields a primordial deuterium abundance of (D/H)p = (2.545 ± 0.025) × 10⁻⁵. The corresponding mean baryonic density of the Universe is Ωb h² = 0.02174 ± 0.00025. The quasar absorption data are of the same precision as, and marginally inconsistent with, the 2015 CMB Planck (TT+lowP+lensing) measurement, Ωb h² = 0.02226 ± 0.00023. Further quasar and more precise nuclear data are required to establish whether this is a random fluctuation.
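
    The Least Trimmed Squares step used to trim the sample from 15 to 13 values can be sketched directly: for a location estimate, LTS keeps the h-point subset whose squared residuals about its own mean are smallest. A brute-force version on mock numbers (illustrative values only, not the paper's data):

```python
import numpy as np
from itertools import combinations

def lts_mean(x, h):
    """Least Trimmed Squares location estimate: the mean of the
    h-subset with the smallest sum of squared residuals."""
    best_subset, best_ss = None, np.inf
    for idx in combinations(range(len(x)), h):
        sub = x[list(idx)]
        ss = np.sum((sub - sub.mean()) ** 2)
        if ss < best_ss:
            best_ss, best_subset = ss, sub
    return best_subset.mean(), best_subset

# 15 mock D/H measurements (units of 1e-5); two mild outliers included
x = np.array([2.53, 2.55, 2.51, 2.58, 2.54, 2.52, 2.56, 2.55,
              2.50, 2.57, 2.53, 2.54, 2.56, 2.90, 2.10])
mean, kept = lts_mean(x, h=13)
print(f"LTS mean over 13 of 15 points: {mean:.3f}e-5")
```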

  12. Receiver Signal to Noise Ratios for IPDA Lidars Using Sine-wave and Pulsed Laser Modulation and Direct Detections

    Science.gov (United States)

    Sun, Xiaoli; Abshire, James B.

    2011-01-01

    Integrated path differential absorption (IPDA) lidar can be used to remotely measure the column density of gases in the path to a scattering target [1]. The total column gas molecular density can be derived from the ratio of the laser echo signal power with the laser wavelength on the gas absorption line (on-line) to that off the line (off-line). Both coherent detection and direct detection IPDA lidar have been used successfully in the past in horizontal path and airborne remote sensing measurements. However, for space-based measurements, the signal propagation losses are often orders of magnitude higher, and it is important to use the most efficient laser modulation and detection technique to minimize the average laser power and the electrical power from the spacecraft. This paper gives an analysis of the receiver signal to noise ratio (SNR) of several laser modulation and detection techniques versus the average received laser power under similar operating environments. Coherent detection [2] can give the best receiver performance when the local oscillator laser is relatively strong and the heterodyne mixing losses are negligible. Coherent detection has a high signal gain and a very narrow bandwidth for the background light and detector dark noise. However, coherent detection must maintain a high degree of coherence between the local oscillator laser and the received signal in both temporal and spatial modes. This often results in high system complexity and low overall measurement efficiency. For measurements through the atmosphere, the coherence diameter of the received signal also limits the useful size of the receiver telescope. Direct detection IPDA lidars are simpler to build and have fewer constraints on the transmitter and receiver components. They can use much larger 'photon-bucket' type telescopes to reduce the demands on the laser transmitter. Here we consider the two most widely used direct detection IPDA lidar techniques. The first technique uses two CW

  13. Direct Signal-to-Noise Quality Comparison between an Electronic and Conventional Stethoscope aboard the International Space Station

    Science.gov (United States)

    Marshburn, Thomas; Cole, Richard; Ebert, Doug; Bauer, Pete

    2014-01-01

    Introduction: Evaluation of heart, lung, and bowel sounds is routinely performed with the use of a stethoscope to help detect a broad range of medical conditions. Stethoscope-acquired information is even more valuable in a resource-limited environment such as the International Space Station (ISS), where additional testing is not available. The high ambient noise level aboard the ISS poses a specific challenge to auscultation by stethoscope. An electronic stethoscope's ambient noise reduction, greater sound amplification, recording capabilities, and sound visualization software may offer an advantage over a conventional stethoscope in this environment. Methods: A single operator rated signal-to-noise quality from a conventional stethoscope (Littmann 2218BE) and an electronic stethoscope (Littmann 3200). Borborygmi, pulmonic, and cardiac sound quality was ranked with both stethoscopes. Signal-to-noise rankings were performed on a 1 to 10 subjective scale, with 1 being inaudible, 6 the expected quality in an emergency department, 8 the expected quality in a clinic, and 10 the clearest possible quality. Testing took place in the Japanese Pressurized Module (JPM), Unity (Node 2), Destiny (US Lab), Tranquility (Node 3), and the Cupola of the International Space Station. All examinations were conducted at a single point in time. Results: The electronic stethoscope's performance ranked higher than the conventional stethoscope's for each body sound in all modules tested. The electronic stethoscope's sound quality was rated between 7 and 10 in all modules tested. In comparison, the conventional stethoscope's sound quality was rated between 4 and 7. The signal-to-noise ratio of borborygmi showed the biggest difference between stethoscopes. In the modules tested, the auscultation of borborygmi was rated between 5 and 7 with the conventional stethoscope and consistently 10 with the electronic stethoscope. Discussion: This stethoscope comparison was limited to a single operator. However, we

  14. Signal-to-noise ratio and detective quantum efficiency determination by an alternative use of photographic detectors

    International Nuclear Information System (INIS)

    Burgudzhiev, Z.; Koleva, D.

    1986-01-01

    A known theoretical model of an alternative use of silver-halide photographic emulsions, in which the number of granules forming the photographic image is used as the detector output instead of the microdensitometric blackening density, is applied to some real photographic emulsions. It is found that with this use the signal-to-noise ratio of the photographic detector can be increased about 5-fold, while its detective quantum efficiency can reach about 20%, close to that of some photomultipliers

  15. Signal-to-noise characterization of time-gated intensifiers used for wide-field time-domain FLIM

    Energy Technology Data Exchange (ETDEWEB)

    McGinty, J; Requejo-Isidro, J; Munro, I; Talbot, C B; Dunsby, C; Neil, M A A; French, P M W [Photonics Group, Blackett Laboratory, Imperial College London, Prince Consort Road, London, SW7 2BW (United Kingdom); Kellett, P A; Hares, J D, E-mail: james.mcginty@imperial.ac.u [Kentech Instruments Ltd, Isis Building, Howbery Park, Wallingford, OX10 8BA (United Kingdom)

    2009-07-07

    Time-gated imaging using gated optical intensifiers provides a means to realize high speed fluorescence lifetime imaging (FLIM) for the study of fast events and for high throughput imaging. We present a signal-to-noise characterization of CCD-coupled micro-channel plate gated intensifiers used with this technique and determine the optimal acquisition parameters (intensifier gain voltage, CCD integration time and frame averaging) for measuring mono-exponential fluorescence lifetimes in the shortest image acquisition time for a given signal flux. We explore the use of unequal CCD integration times for different gate delays and show that this can improve the lifetime accuracy for a given total acquisition time.

  16. Cervical vertebral maturation method and mandibular growth peak: a longitudinal study of diagnostic reliability.

    Science.gov (United States)

    Perinetti, Giuseppe; Primozic, Jasmina; Sharma, Bhavna; Cioffi, Iacopo; Contardo, Luca

    2018-03-28

    The capability of the cervical vertebral maturation (CVM) method to identify the mandibular growth peak on an individual basis remains undetermined. The diagnostic reliability of the six-stage CVM method in the identification of the mandibular growth peak was thus investigated. From the files of the Oregon and Burlington Growth Studies (data obtained between the early 1950s and mid-1970s), 50 subjects (26 females, 24 males) with at least seven annual lateral cephalograms taken from 9 to 16 years of age were identified. Cervical vertebral maturation was assessed according to the CVM code staging system, and mandibular growth was defined as annual increments in the Co-Gn distance. A diagnostic reliability analysis was carried out to establish the capability of the circumpubertal CVM stages 2, 3, and 4 to identify the imminent mandibular growth peak. Variable durations of each of the CVM stages 2, 3, and 4 were seen. The overall diagnostic accuracy values for the CVM stages 2, 3, and 4 were 0.70, 0.76, and 0.77, respectively. These low values appeared to be due to false-positive cases, secular trends, and the use of a discrete staging system. In most of the Burlington Growth Study sample, the lateral head film at age 15 was missing. None of the CVM stages 2, 3, and 4 reached satisfactory diagnostic reliability in the identification of the imminent mandibular growth peak.

  17. Signal-to-noise ratio estimation in digital computer simulation of lowpass and bandpass systems with applications to analog and digital communications, volume 3

    Science.gov (United States)

    Tranter, W. H.; Turner, M. D.

    1977-01-01

    Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
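
    In additive white Gaussian noise with a known reference waveform, the maximum-likelihood flavour of SNR estimation reduces to projecting the received record onto the reference and attributing the residual power to noise. A hedged sketch of that estimator (not the authors' exact formulation, which also estimates gain and delay):

```python
import numpy as np

def estimate_snr_db(received, reference):
    """Estimate SNR by least-squares projection onto a known reference:
    gain-hat = <r, s> / <s, s>; noise power from the residual."""
    gain = np.dot(received, reference) / np.dot(reference, reference)
    signal_power = gain ** 2 * np.mean(reference ** 2)
    noise_power = np.mean((received - gain * reference) ** 2)
    return 10 * np.log10(signal_power / noise_power)

rng = np.random.default_rng(2)
n = 100_000
s = np.sin(2 * np.pi * 0.01 * np.arange(n))   # reference waveform
r = 0.8 * s + rng.normal(0, 0.4, n)           # channel gain 0.8, AWGN

true_snr = 10 * np.log10((0.8 ** 2 * 0.5) / 0.4 ** 2)
print(f"true {true_snr:.2f} dB, estimated {estimate_snr_db(r, s):.2f} dB")
```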

  18. Optimum Boundaries of Signal-to-Noise Ratio for Adaptive Code Modulations

    Science.gov (United States)

    2017-11-14

    possible ACM modes. This will decrease the searching time by half when compared to a mode search using a linear (sequential) searching method. [Figure captions: PSD when the signal is transmitted from the vector network analyzer at a given transmit power gain; the peak height of the summed signal PSD increases when the second USRP 2932 is simultaneously on with the same 10 dB transmit power gain and parameters (Figs. B-11, B-12).]

  19. Improving the signal-to-noise ratio in ultrasound-modulated optical tomography by a lock-in amplifier

    Science.gov (United States)

    Zhu, Lili; Wu, Jingping; Lin, Guimin; Hu, Liangjun; Li, Hui

    2016-10-01

    With the high spatial resolution of ultrasonic localization and the high sensitivity of optical detection, ultrasound-modulated optical tomography (UOT) is a promising noninvasive biological tissue imaging technology. In biological tissue, the ultrasound-modulated light signals are very weak and are overwhelmed by the strong unmodulated light signals. Efficiently extracting the weak modulated light from the strong unmodulated light is a key difficulty in UOT. Under the effect of an ultrasonic field, the scattered light intensity varies periodically with the ultrasonic frequency. The modulated light signals can therefore be separated from the strong unmodulated light signals when the modulated light signals and the ultrasonic signal are cross-correlated by a lock-in amplifier, without a chopper. Experimental results indicated that the signal-to-noise ratio of UOT is significantly improved by a lock-in amplifier, and that the higher the repetition frequency of the pulsed ultrasonic wave, the better the signal-to-noise ratio of UOT.

  20. Acoustics of fish shelters: background noise and signal-to-noise ratio.

    Science.gov (United States)

    Lugli, Marco

    2014-12-01

    Fish shelters (flat stones, shells, artificial covers, etc., with a hollow beneath) increase the sound pressure levels of low-frequency sounds and affect the signal-to-noise ratio (SNR) in the nest. Background noise amplification by the shelter was examined under both laboratory (stones and shells) and field (stones) conditions, and the SNR of tones inside the nest cavity was measured by performing acoustic tests on stones in the stream. Stone and shell shelters amplify the background noise pressure levels inside the cavity with comparable gains and at frequencies similar to those of an active sound source. Inside the cavity of stream stones, the mean SNR of tones increased significantly below 125 Hz and peaked at 65 Hz (+10 dB). Implications for fish acoustic communication inside nest enclosures are discussed.

  1. Gamma processes and peaks-over-threshold distributions for time-dependent reliability

    International Nuclear Information System (INIS)

    Noortwijk, J.M. van; Weide, J.A.M. van der; Kallen, M.J.; Pandey, M.D.

    2007-01-01

    In the evaluation of structural reliability, a failure is defined as the event in which stress exceeds a resistance that is liable to deterioration. This paper presents a method to combine the two stochastic processes of deteriorating resistance and fluctuating load for computing the time-dependent reliability of a structural component. The deterioration process is modelled as a gamma process, which is a stochastic process with independent non-negative increments having a gamma distribution with identical scale parameter. The stochastic process of loads is generated by a Poisson process. The variability of the random loads is modelled by a peaks-over-threshold distribution (such as the generalised Pareto distribution). These stochastic processes of deterioration and load are combined to evaluate the time-dependent reliability
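
    The combined model invites Monte Carlo evaluation: simulate a gamma-process deterioration path, superpose Poisson-arriving loads whose peaks over a threshold follow a generalised Pareto distribution, and count the paths on which a load exceeds the degraded resistance. A minimal sketch under purely illustrative parameters (and assuming a GPD shape parameter xi > 0):

```python
import numpy as np

rng = np.random.default_rng(3)

def survives(horizon, r0, a, b, lam, u, xi, sigma):
    """One Monte Carlo path: gamma-process deterioration vs. Poisson
    loads with generalised-Pareto peaks over the threshold u."""
    t, deterioration = 0.0, 0.0
    while True:
        dt = rng.exponential(1.0 / lam)        # time to the next load
        if t + dt > horizon:
            return True
        t += dt
        deterioration += rng.gamma(a * dt, b)  # independent gamma increment
        load = u + sigma / xi * (rng.uniform() ** (-xi) - 1.0)  # GPD draw
        if load > r0 - deterioration:
            return False

params = dict(r0=100.0, a=0.5, b=1.0, lam=2.0, u=50.0, xi=0.2, sigma=8.0)
n = 5_000
failures = sum(not survives(horizon=50.0, **params) for _ in range(n))
print(f"estimated failure probability over the horizon: {failures / n:.3f}")
```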

  2. Influence of spectral resolution, spectral range and signal-to-noise ratio of Fourier transform infra-red spectra on identification of high explosive substances

    Science.gov (United States)

    Banas, Krzysztof; Banas, Agnieszka M.; Heussler, Sascha P.; Breese, Mark B. H.

    2018-01-01

    In contemporary spectroscopy there is a trend to record spectra with the highest possible spectral resolution. This is clearly justified if the spectral features in the spectrum are very narrow (for example, infra-red spectra of gas samples). However, there is a plethora of samples (in liquid and especially in solid form) where there is natural spectral peak broadening, predominantly due to collisions and proximity. Additionally, there are a number of portable devices (spectrometers) with inherently restricted spectral resolution, spectral range, or both, which are extremely useful in some field applications (archaeology, agriculture, the food industry, cultural heritage, forensic science). In this paper, the influence of spectral resolution, spectral range and signal-to-noise ratio on the identification of high explosive substances is investigated by applying multivariate statistical methods to Fourier transform infra-red spectral data sets. All mathematical procedures on spectral data for dimension reduction, clustering and validation were implemented within the R open source environment.

  3. Real-time dynamic range and signal to noise enhancement in beam-scanning microscopy by integration of sensor characteristics, data acquisition hardware, and statistical methods

    Science.gov (United States)

    Kissick, David J.; Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.; Simpson, Garth J.

    2013-02-01

    Despite the ubiquitous use of multi-photon and confocal microscopy measurements in biology, the core techniques typically suffer from fundamental compromises between signal to noise (S/N) and linear dynamic range (LDR). In this study, direct synchronous digitization of voltage transients coupled with statistical analysis is shown to allow S/N approaching the theoretical maximum throughout an LDR spanning more than 8 decades, limited only by the dark counts of the detector on the low end and by the intrinsic nonlinearities of the photomultiplier tube (PMT) detector on the high end. Synchronous digitization of each voltage transient represents a fundamental departure from established methods in confocal/multi-photon imaging, which are currently based on either photon counting or signal averaging. High information-density data acquisition (up to 3.2 GB/s of raw data) enables the smooth transition between the two modalities on a pixel-by-pixel basis and the ultimate writing of much smaller files (few kB/s). Modeling of the PMT response allows extraction of key sensor parameters from the histogram of voltage peak-heights. Applications in second harmonic generation (SHG) microscopy are described demonstrating S/N approaching the shot-noise limit of the detector over large dynamic ranges.
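
    The pixel-by-pixel transition described here can be sketched as a simple decision rule: when the fraction of transients above the single-photon threshold is low, count photons (with a Poisson pile-up correction); when it is high, fall back to averaging the peak heights. Everything below is an illustrative toy model, not the authors' instrument or their PMT response model:

```python
import numpy as np

rng = np.random.default_rng(4)

def pmt_peak_heights(mean_photons, n_pulses, gain=1.0, gain_sigma=0.3,
                     dark_sigma=0.05):
    """Toy PMT model: Poisson photon number per laser pulse, Gaussian
    gain spread that grows with photon number, additive electronic noise."""
    n = rng.poisson(mean_photons, n_pulses)
    heights = n * gain + np.sqrt(n) * gain_sigma * rng.normal(size=n_pulses)
    return heights + rng.normal(0.0, dark_sigma, n_pulses)

def pixel_value(heights, threshold=0.5, occupancy_limit=0.2):
    """Blend modalities per pixel: threshold counting at low flux
    (with Poisson pile-up correction), signal averaging at high flux."""
    occ = np.mean(heights > threshold)
    if occ < occupancy_limit:
        return -np.log(1.0 - occ)      # photons per pulse, counting mode
    return np.mean(heights)            # mean anode response, averaging mode

for flux in (0.05, 0.5, 5.0):          # mean photons per pulse
    h = pmt_peak_heights(flux, 50_000)
    print(f"flux={flux:4.2f}  pixel value={pixel_value(h):7.3f}")
```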

  4. A nontoxic, photostable and high signal-to-noise ratio mitochondrial probe with mitochondrial membrane potential and viscosity detectivity

    Science.gov (United States)

    Chen, Yanan; Qi, Jianguo; Huang, Jing; Zhou, Xiaomin; Niu, Linqiang; Yan, Zhijie; Wang, Jianhong

    2018-01-01

    Herein, we reported a yellow-emission probe, 1-methyl-4-(6-morpholino-1,3-dioxo-1H-benzo[de]isoquinolin-2(3H)-yl)pyridin-1-ium iodide, which could specifically stain mitochondria in living immortalized and normal cells. In comparison to the common mitochondria tracker (MitoTracker Deep Red, MTDR), this probe was nontoxic and photostable, with an ultrahigh signal-to-noise ratio, enabling real-time monitoring of mitochondria over a long period. Moreover, this probe also showed high sensitivity towards mitochondrial membrane potential and intramitochondrial viscosity changes. Consequently, this probe was used for imaging mitochondria and detecting changes in mitochondrial membrane potential and intramitochondrial viscosity in physiological and pathological processes.

  5. Signal-to-noise optimization and evaluation of a home-made visible diode-array spectrophotometer

    Science.gov (United States)

    Raimundo, Jr., Ivo M.; Pasquini, Celio

    1993-01-01

    This paper describes a simple low-cost multichannel visible spectrophotometer built with an EG&G Reticon RL512G photodiode array. A symmetric Czerny-Turner optical design was employed; instrument control was via a single-board microcomputer based on the Intel 8085 microprocessor. Spectral intensity data are stored in the single-board RAM and then transferred to an IBM-AT 386SX-compatible microcomputer through an RS-232C interface. This external microcomputer processes the data to recover the transmittance, absorbance or relative intensity of the spectra. The signal-to-noise ratio and dynamic range were improved by using variable integration times, which increase during the same scan, and by the use of either weighted or unweighted sliding averages of consecutive diodes. The instrument is suitable for automatic methods requiring quasi-simultaneous multiwavelength detection, such as multivariate calibration and flow-injection gradient scan techniques. PMID:18924979

  6. Real-time photonic sampling with improved signal-to-noise and distortion ratio using polarization-dependent modulators

    Science.gov (United States)

    Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui

    2018-04-01

    A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key points of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristics of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs helps to achieve preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of valid nonlinearity suppression and provides better SNR performance even over a large frequency range. The proposed scheme is proved to be effective and easily implemented for real-time photonic applications.

  7. The position dependent influence that sensitivity correction processing gives the signal-to-noise ratio measurement in parallel imaging

    International Nuclear Information System (INIS)

    Murakami, Koichi; Yoshida, Koji; Yanagimoto, Shinichi

    2012-01-01

    We studied the position-dependent influence that sensitivity correction processing has on signal-to-noise ratio (SNR) measurement in parallel imaging (PI). Sensitivity correction processing that refers to the sensitivity distribution of the body coil improved regional uniformity more than a sensitivity uniformity correction filter with a fixed correction factor. In addition, the position-dependent influence on SNR measurement in PI differed between the sensitivity correction processing methods. Therefore, if we divide the SNR of the sensitivity-corrected image by the SNR of the original image in each pixel and calculate the SNR ratio, we can show the position-dependent influence that sensitivity correction processing has on SNR measurement in PI. This ratio serves as an index of the precision of the sensitivity correction processing. (author)
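
    The index proposed here, dividing the SNR of the sensitivity-corrected image by the SNR of the original image in each pixel, is easy to form once pixel-wise SNR maps are available, for example from repeated acquisitions. A hedged sketch; the toy model below uses a purely multiplicative coil profile, so the ideal correction leaves the ratio near 1:

```python
import numpy as np

def snr_map(series):
    """Pixel-wise SNR from repeated acquisitions: temporal mean over
    temporal standard deviation (series shape: n_repeats x ny x nx)."""
    return series.mean(axis=0) / np.maximum(series.std(axis=0, ddof=1), 1e-12)

def snr_ratio_map(corrected_series, original_series):
    """Position-dependent influence of sensitivity correction:
    SNR(corrected image) / SNR(original image) in each pixel."""
    return snr_map(corrected_series) / np.maximum(snr_map(original_series),
                                                  1e-12)

# Toy example: 20 repeats of a 64 x 64 image with a smooth coil profile.
rng = np.random.default_rng(5)
coil = 1.0 + 0.5 * np.linspace(0.0, 1.0, 64)[None, :]   # sensitivity roll-off
orig = coil * (100.0 + rng.normal(0.0, 5.0, (20, 64, 64)))
corr = orig / coil                 # ideal correction of the coil profile
print(f"mean SNR ratio: {snr_ratio_map(corr, orig).mean():.3f}")  # ~1.0
```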

  8. Bootstrap Signal-to-Noise Confidence Intervals: An Objective Method for Subject Exclusion and Quality Control in ERP Studies

    Science.gov (United States)

    Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.

    2016-01-01

    Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
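
    The SNR-CI recipe translates almost line-for-line into code: resample trials with replacement, average each resample into a bootstrap ERP, compute an SNR per resample, and take the lower percentile as SNR_LB. The SNR definition below (RMS of a post-stimulus window over RMS of the pre-stimulus baseline of the averaged waveform) is one reasonable choice, not necessarily the authors' exact metric:

```python
import numpy as np

def bootstrap_snr_lb(trials, baseline, window, n_boot=2000, alpha=0.05,
                     rng=None):
    """Lower bound of the bootstrap SNR confidence interval for an ERP
    built from `trials` (n_trials x n_samples)."""
    rng = rng or np.random.default_rng()
    n = trials.shape[0]
    snrs = np.empty(n_boot)
    for b in range(n_boot):
        erp = trials[rng.integers(0, n, n)].mean(axis=0)  # resampled average
        sig = np.sqrt(np.mean(erp[window] ** 2))          # signal-window RMS
        noi = np.sqrt(np.mean(erp[baseline] ** 2))        # baseline RMS
        snrs[b] = sig / noi
    return np.quantile(snrs, alpha / 2)                   # SNR_LB

# toy data: 60 trials, 100 baseline samples + 200 post-stimulus samples
rng = np.random.default_rng(6)
t = np.arange(300)
evoked = np.where(t >= 100, np.sin(2 * np.pi * (t - 100) / 200), 0.0)
trials = evoked + rng.normal(0, 2.0, (60, 300))

lb = bootstrap_snr_lb(trials, baseline=slice(0, 100),
                      window=slice(100, 300), rng=rng)
print(f"SNR lower bound: {lb:.2f}  (exclude subject if below criterion)")
```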

  9. Optimization of number and signal to noise ratio radiographs for defects 3D reconstruction in industrial control

    International Nuclear Information System (INIS)

    Bruandet, J.-P.

    2001-01-01

    Among the numerous techniques for non-destructive evaluation (NDE), X-ray systems are well suited to inspecting the interior of objects. Acquiring several radiographs of the inspected objects under different points of view enables three-dimensional structural information to be recovered. In this NDE application, tomographic testing is considered. This work deals with two optimizations of tomographic testing in order to improve the characterization of defects that may occur in metallic welds. The first is the optimization of the acquisition strategy. Because tomographic testing is performed on-line, the total duration of image acquisition is fixed, limiting the number of available views. Hence, for a given acquisition duration, it is possible either to acquire a very limited number of radiographs with a good signal-to-noise ratio in each single acquisition, or a larger number of radiographs with a limited signal-to-noise ratio. The second is the optimization of 3D reconstruction algorithms from a limited number of cone-beam projections. To manage the lack of data, we first used algebraic reconstruction algorithms such as ART or regularized ICM. In terms of acquisition strategy optimization, an increase in the number of projections proved to be valuable. Taking into account specific prior knowledge, such as a support constraint or a physical noise model in the attenuation images, also improved reconstruction quality. Then, a new regularized region-based reconstruction approach was developed. The defects to reconstruct are binary (lack of material in a homogeneous object); as a consequence, they are entirely described by their shapes. Because the number of defects to recover is unknown and totally arbitrary, a level-set formulation allowing topological changes to be handled was used. Results obtained with a regularized level-set reconstruction algorithm are promising in the proposed context. (author) [fr]
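
    ART, one of the algebraic methods used here, is a Kaczmarz sweep: each measured ray projects the current image estimate onto the hyperplane consistent with that single measurement, which is why it tolerates very few views. A dense toy sketch (real cone-beam systems are sparse and vastly larger; the positivity clip stands in for the support constraint mentioned above):

```python
import numpy as np

def art(A, b, n_iters=50, relax=0.5, x0=None):
    """Algebraic Reconstruction Technique (Kaczmarz): for each ray i,
    project the current image onto the hyperplane <a_i, x> = b_i."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = np.clip(x, 0.0, None)   # positivity / support-style constraint
    return x

rng = np.random.default_rng(7)
x_true = rng.uniform(0, 1, 64)      # toy "defect" image, flattened
A = rng.uniform(0, 1, (40, 64))     # underdetermined: 40 rays, 64 pixels
b = A @ x_true
x_rec = art(A, b)
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```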

  10. Fast Metabolite Identification in Nuclear Magnetic Resonance Metabolomic Studies: Statistical Peak Sorting and Peak Overlap Detection for More Reliable Database Queries.

    Science.gov (United States)

    Hoijemberg, Pablo A; Pelczer, István

    2018-01-05

    A lot of time is spent by researchers in the identification of metabolites in NMR-based metabolomic studies. The usual metabolite identification starts employing public or commercial databases to match chemical shifts thought to belong to a given compound. Statistical total correlation spectroscopy (STOCSY), in use for more than a decade, speeds the process by finding statistical correlations among peaks, being able to create a better peak list as input for the database query. However, the (normally not automated) analysis becomes challenging due to the intrinsic issue of peak overlap, where correlations of more than one compound appear in the STOCSY trace. Here we present a fully automated methodology that analyzes all STOCSY traces at once (every peak is chosen as driver peak) and overcomes the peak overlap obstacle. Peak overlap detection by clustering analysis and sorting of traces (POD-CAST) first creates an overlap matrix from the STOCSY traces, then clusters the overlap traces based on their similarity and finally calculates a cumulative overlap index (COI) to account for both strong and intermediate correlations. This information is gathered in one plot to help the user identify the groups of peaks that would belong to a single molecule and perform a more reliable database query. The simultaneous examination of all traces reduces the time of analysis, compared to viewing STOCSY traces by pairs or small groups, and condenses the redundant information in the 2D STOCSY matrix into bands containing similar traces. The COI helps in the detection of overlapping peaks, which can be added to the peak list from another cross-correlated band. POD-CAST overcomes the generally overlooked and underestimated presence of overlapping peaks and it detects them to include them in the search of all compounds contributing to the peak overlap, enabling the user to accelerate the metabolite identification process with more successful database queries and searching all tentative
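
    The first two POD-CAST ingredients, computing all STOCSY traces at once and clustering them by similarity, map onto a few lines of numpy/scipy. This is a schematic of those steps only; the published method goes on to compute the cumulative overlap index from the clustered traces:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def stocsy_traces(spectra):
    """All STOCSY traces at once: correlation of every peak (column)
    with every other peak across samples (rows)."""
    return np.corrcoef(spectra.T)          # n_peaks x n_peaks

def cluster_traces(traces, threshold=0.3):
    """Group similar traces by hierarchical clustering on 1 - |corr|
    between the trace vectors (each row of the trace matrix)."""
    sim = np.corrcoef(traces)
    dist = squareform(1.0 - np.abs(sim), checks=False)
    return fcluster(linkage(dist, method='average'), t=threshold,
                    criterion='distance')

# toy data: 50 samples, 12 peaks from two compounds varying independently
rng = np.random.default_rng(8)
c1, c2 = rng.uniform(0, 1, (2, 50))
spectra = np.column_stack([np.outer(c1, rng.uniform(0.5, 1.5, 6)),
                           np.outer(c2, rng.uniform(0.5, 1.5, 6))])
spectra += rng.normal(0, 0.02, spectra.shape)
labels = cluster_traces(stocsy_traces(spectra))
print(labels)   # peaks 1-6 and 7-12 fall into two separate groups
```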

  11. Modeling signal-to-noise ratio of otoacoustic emissions in workers exposed to different industrial noise levels

    Directory of Open Access Journals (Sweden)

    Parvin Nassiri

    2016-01-01

    Full Text Available Introduction: Noise is considered the most common cause of harmful physical effects in the workplace. A sound that is generated from within the inner ear is known as an otoacoustic emission (OAE). Distortion-product otoacoustic emissions (DPOAEs) assess evoked emission and hearing capacity. The aim of this study was to assess the signal-to-noise ratio at different frequencies and at different times of the work shift in workers exposed to various levels of noise. It also aimed to provide a statistical model for the signal-to-noise ratio (SNR) of OAEs at different frequencies based on the two variables of sound pressure level (SPL) and exposure time. Materials and Methods: This case-control study was conducted on 45 workers during autumn 2014. The workers were divided into three groups based on the level of noise exposure. The SNR was measured at frequencies of 1000, 2000, 3000, 4000, and 6000 Hz in both ears, and in three different time intervals during the work shift. According to the inclusion criterion, an SNR of 6 dB or greater was included in the study. The analysis was performed using repeated-measures analysis of variance, Spearman correlation coefficients, and paired-samples t-tests. Results: The results showed that there was no statistically significant difference between the three exposed groups in terms of the mean values of SNR (P > 0.05). Only at a sound pressure level of 88 dBA in the 10:30-11:00 AM interval was there a statistically significant difference between the right and left ears in the mean SNR values at 3000 Hz (P = 0.038). The SPL had a significant effect on the SNR in both the right and left ears (P = 0.023, P = 0.041). The effect of the duration of measurement on the SNR was statistically significant in both the right and left ears (P = 0.027, P < 0.001). Conclusion: The findings of this study demonstrated that after noise exposure during the shift, the SNR of OAEs reduced from the

  12. Evaluation and comparison of signal to noise ratio according to histogram equalization of heart shadow on chest image

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Won [Dept. of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Lee, Eul Kyu [Inje Paik University Hospital at Jeo-dong, Seoul (Korea, Republic of); Jeong, Hoi Woun [The Baekseok Culture University, Cheonan (Korea, Republic of); Kang, Byung Sam; Kim, Hyun Soo; Min, Jung Whan; Son, Jin Hyun [The Shingu University, Seongnam (Korea, Republic of)

    2017-06-15

    The purpose of this study was to measure the signal-to-noise ratio (SNR) according to changes of equalization in a region of interest (ROI) over the heart shadow in chest images. We examined chest images of 87 patients in a university-affiliated hospital in Seoul, Korea. Chest images of each patient were analyzed using Image software. We analyzed socio-demographic variables, the SNR of each image type, and 95% confidence intervals for differences in mean SNR. Differences in SNR across changes of equalization were tested with an ANOVA test in SPSS Statistics 21, with statistical significance set at 95% (p < 0.05). The SNR results showed quality of distributions in the order of original chest image, original chest image heart shadow, equalization chest image, and equalization chest image heart shadow (p < 0.001). In conclusion, quantitative evaluation of the heart shadow on chest images can be used as an adjunct to the histogram-equalized chest image.

  13. Limits of visual communication: the effect of signal-to-noise ratio on the intelligibility of American Sign Language.

    Science.gov (United States)

    Pavel, M; Sperling, G; Riedl, T; Vanderbeek, A

    1987-12-01

    To determine the limits of human observers' ability to identify visually presented American Sign Language (ASL), the contrast s and the amount of additive noise n in dynamic ASL images were varied independently. Contrast was tested over a 4:1 range; the rms signal-to-noise ratios (s/n) investigated were s/n = 1/4, 1/2, 1, and infinity (which is used to designate the original, uncontaminated images). Fourteen deaf subjects were tested with an intelligibility test composed of 85 isolated ASL signs, each 2-3 sec in length. For these ASL signs (64 x 96 pixels, 30 frames/sec), subjects' performance asymptotes between s/n = 0.5 and 1.0; further increases in s/n do not improve intelligibility. Intelligibility was found to depend only on s/n and not on contrast. A formulation in terms of logistic functions was proposed to derive intelligibility of ASL signs from s/n, sign familiarity, and sign difficulty. Familiarity (ignorance) is represented by additive signal-correlated noise; it represents the likelihood of a subject's knowing a particular ASL sign, and it adds to s/n. Difficulty is represented by a multiplicative difficulty coefficient; it represents the perceptual vulnerability of an ASL sign to noise and it adds to log(s/n).
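
    The proposed formulation can be written out explicitly: difficulty multiplies s/n, familiarity behaves as additive signal-correlated noise, and intelligibility is a logistic function of the resulting effective log-s/n. The functional form below follows that description; all parameter names and values are illustrative assumptions, not the paper's fitted coefficients:

```python
import numpy as np

def intelligibility(snr, difficulty=1.0, familiarity=0.0,
                    slope=4.0, log_snr_mid=np.log(0.25)):
    """Logistic intelligibility model in the spirit of the paper:
    difficulty scales s/n multiplicatively; familiarity acts as
    additive signal-correlated noise and adds to s/n."""
    effective = difficulty * snr + familiarity
    z = slope * (np.log(effective) - log_snr_mid)
    return 1.0 / (1.0 + np.exp(-z))

for snr in (0.25, 0.5, 1.0, np.inf):   # the s/n conditions of the study
    p = intelligibility(snr, difficulty=0.8, familiarity=0.1)
    print(f"s/n = {snr}: predicted intelligibility {p:.2f}")
```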

  14. Mechanism for optimization of signal-to-noise ratio of dopamine release based on short-term bidirectional plasticity.

    Science.gov (United States)

    Da Cunha, Claudio; McKimm, Eric; Da Cunha, Rafael M; Boschen, Suelen L; Redgrave, Peter; Blaha, Charles D

    2017-07-15

    Repeated electrical stimulation of dopamine fibers can cause variable effects on further dopamine release; sometimes there are short-term decreases, while in other cases short-term increases have been reported. Previous studies have failed to discover what factors determine in which way dopamine neurons will respond to repeated stimulation. The aim of the present study was therefore to investigate what determines the direction and magnitude of this particular form of short-term plasticity. Fixed-potential amperometry was used to measure dopamine release in the nucleus accumbens in response to two trains of electrical pulses administered to the ventral tegmental area of anesthetized mice. When the pulse trains were of equal magnitude, we found that low-magnitude stimulation was associated with short-term suppression and high-magnitude stimulation with short-term facilitation of dopamine release. Secondly, we found that the magnitude of the second pulse train was critical for determining the sign of the plasticity (suppression or facilitation), while the magnitude of the first pulse train determined the extent to which the response to the second train was suppressed or facilitated. This form of bidirectional plasticity might provide a mechanism to enhance the signal-to-noise ratio of dopamine neurotransmission. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Assessing denoising strategies to increase signal to noise ratio in spinal cord and in brain cortical and subcortical regions

    Science.gov (United States)

    Maugeri, L.; Moraschi, M.; Summers, P.; Favilla, S.; Mascali, D.; Cedola, A.; Porro, C. A.; Giove, F.; Fratini, M.

    2018-02-01

    Functional Magnetic Resonance Imaging (fMRI) based on Blood Oxygenation Level Dependent (BOLD) contrast has become one of the most powerful tools in neuroscience research. On the other hand, fMRI approaches have seen limited use in the study of the spinal cord and subcortical brain regions (such as the brainstem and portions of the diencephalon). Indeed, obtaining good BOLD signal in these areas still represents a technical and scientific challenge, due to poor control of physiological noise and to the limited overall quality of the functional series. A solution can be found in the combination of optimized experimental procedures at the acquisition stage and well-adapted artifact mitigation procedures in the data processing. In this framework, we studied two different data processing strategies to reduce physiological noise in cortical and subcortical brain regions and in the spinal cord, based on the aCompCor and RETROICOR denoising tools, respectively. The study was carried out in healthy subjects using an ad hoc isometric motor task. We observed an increased signal-to-noise ratio in the denoised functional time series in the spinal cord and in the subcortical brain regions.

  16. Evaluation and comparison of signal to noise ratio according to histogram equalization of heart shadow on chest image

    International Nuclear Information System (INIS)

    Kim, Ki Won; Lee, Eul Kyu; Jeong, Hoi Woun; Kang, Byung Sam; Kim, Hyun Soo; Min, Jung Whan; Son, Jin Hyun

    2017-01-01

    The purpose of this study was to measure the signal-to-noise ratio (SNR) according to changes of equalization in a region of interest (ROI) over the heart shadow in chest images. We examined chest images of 87 patients in a university-affiliated hospital in Seoul, Korea. Chest images of each patient were analyzed using Image software. We analyzed socio-demographic variables, the SNR of each image type, and 95% confidence intervals for differences in mean SNR. Differences in SNR across changes of equalization were tested with an ANOVA test in SPSS Statistics 21, with statistical significance set at 95% (p < 0.05). The SNR results showed quality of distributions in the order of original chest image, original chest image heart shadow, equalization chest image, and equalization chest image heart shadow (p < 0.001). In conclusion, quantitative evaluation of the heart shadow on chest images can be used as an adjunct to the histogram-equalized chest image

  17. Improving Signal-to-Noise Ratio in Susceptibility Weighted Imaging: A Novel Multicomponent Non-Local Approach.

    Directory of Open Access Journals (Sweden)

    Pasquale Borrelli

    Full Text Available In susceptibility-weighted imaging (SWI), the high resolution required to obtain a proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.

  18. Optimization of signal-to-noise ratio for wireless light-emitting diode communication in modern lighting layouts

    Science.gov (United States)

    Azizan, Luqman A.; Ab-Rahman, Mohammad S.; Hassan, Mazen R.; Bakar, A. Ashrif A.; Nordin, Rosdiadee

    2014-04-01

    White light-emitting diodes (LEDs) are predicted to be widely used in domestic applications in the future, because they are already becoming widespread in commercial lighting applications. The ability of LEDs to be modulated at high speeds offers the possibility of using them as sources for communication as well as illumination. The growing interest in using these devices for both illumination and communication requires attention to combining this technology with modern lighting layouts. A dual-function system is applied to three models of modern lighting layouts: the hybrid corner lighting layout (HCLL), the hybrid wall lighting layout (HWLL), and the hybrid edge lighting layout (HELL). Based on the analysis, the relationship between spatial diversity and the signal-to-noise ratio (SNR) performance is demonstrated for each model. The key factor affecting the SNR performance of visible light communication is the design parameter related to the number and position of the LED lights. The HWLL model is chosen as the best layout, since 61% of the office area is classified as an excellent communication area and the difference between the area classifications, Δp, is 22%. Thus, this system is applicable to modern lighting layouts.

  19. The effect of signal to noise ratio on accuracy of temperature measurements for Brillouin lidar in water

    Science.gov (United States)

    Liang, Kun; Niu, Qunjie; Wu, Xiangkui; Xu, Jiaqi; Peng, Li; Zhou, Bo

    2017-09-01

    A lidar system with a Fabry-Pérot etalon and an intensified charge-coupled device can be used to obtain the scattering spectrum of the ocean and retrieve oceanic temperature profiles. However, the spectrum can be polluted by noise, resulting in measurement error. To analyze the effect of signal-to-noise ratio (SNR) on the accuracy of measurements for Brillouin lidar in water, the theoretical model and characteristics of the SNR are investigated. Noise spectra with different SNR are measured repeatedly in both simulation and experiment. The results show that accuracy is related to SNR and, considering the balance of time consumption and quality, the average of five measurements is adopted for real remote sensing under pulsed laser conditions of wavelength 532 nm, pulse energy 650 mJ, repetition rate 10 Hz, pulse width 8 ns and linewidth 0.003 cm-1 (90 MHz). Measuring with the Brillouin linewidth gives better accuracy at a lower temperature (15 °C), based on the classical retrieval model we adopt. The experimental results show that the temperature error is 0.71 °C and 0.06 °C based on shift and linewidth, respectively, when the image SNR is in the range of 3.2 dB-3.9 dB.

  20. Low noise signal-to-noise ratio enhancing readout circuit for current-mediated active pixel sensors

    International Nuclear Information System (INIS)

    Ottaviani, Tony; Karim, Karim S.; Nathan, Arokia; Rowlands, John A.

    2006-01-01

    Diagnostic digital fluoroscopic applications continuously expose patients to low doses of x-ray radiation, posing a challenge to both the digital imaging pixel and the readout electronics when amplifying small-signal x-ray inputs. Traditional switch-based amorphous silicon imaging solutions, for instance, have produced poor signal-to-noise ratios (SNRs) at low exposure levels owing to noise sources in the pixel readout circuitry. Current-mediated amorphous silicon pixels improve on conventional pixel amplifiers with an enhanced SNR across the same low-exposure range, although their output becomes nonlinear with increasing dosage. A low-noise, SNR-enhancing readout circuit has been developed that enhances the charge gain of the current-mediated active pixel sensor (C-APS). The solution takes advantage of the current-mediated approach, primarily integrating the signal input at the desired frequency necessary for large-area imaging, while adding minimal noise to the signal readout. Experimental data indicate that the readout circuit can detect pixel outputs over a large bandwidth suitable for real-time digital diagnostic x-ray fluoroscopy. Results from hardware testing indicate that the minimum achievable C-APS output current that can be discerned at the digital fluoroscopic output from the enhanced-SNR readout circuit is 0.341 nA. The results serve to highlight the applicability of amorphous silicon current-mediated pixel amplifiers for large-area flat-panel x-ray imagers

  1. Changes in signal-to-noise ratios and contrast-to-noise ratios of hypervascular hepatocellular carcinomas on ferucarbotran-enhanced dynamic MR imaging

    International Nuclear Information System (INIS)

    Park, Yulri; Choi, Dongil; Kim, Seong Hyun; Kim, Seung Hoon; Kim, Min Ju; Lee, Jongmee; Lim, Jae Hoon; Lee, Won Jae; Lim, Hyo K.

    2006-01-01

    Purpose: To verify changes in the signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) of hypervascular hepatocellular carcinomas (HCCs) on ferucarbotran-enhanced dynamic T1-weighted MR imaging. Materials and methods: Fifty-two patients with 61 hypervascular HCCs underwent ferucarbotran-enhanced dynamic MR imaging, and then hepatic resection. Hypervascular HCCs were identified when definite enhancement was noted during the arterial dominant phase of three-phase MDCT. Dynamic MR images with a T1-weighted fast multiplanar spoiled gradient-recalled echo sequence (TR200/TE4.2) were obtained before and 20 s, and 1, 3, 5, and 10 min, after bolus injection of ferucarbotran. We estimated the signal intensities of tumors and livers, and calculated the SNRs and CNRs of the tumors. Results: On ferucarbotran-enhanced dynamic MR imaging, SNR measurements showed a fluctuating pattern, namely, an increase in SNR followed by a decrease and a subsequent increase (or a decrease in SNR followed by an increase and a subsequent decrease) in 50 (82.0%) of 61 tumors, a single-peak SNR pattern (highest SNR on 20 s, 1, 3, or 5 min delayed images followed by a decrease) in seven (11.5%), and a decrease in SNR followed by an increase in four (6.6%). Maximum absolute CNRs with positive value were noted on 10 min delayed images in 41 (67.2%) tumors, and maximum absolute CNRs with negative value were observed on 20 s delayed images in 12 (19.7%) and on 1 min delayed images in eight (13.1%). Conclusion: Despite showing various SNR and CNR changes, the majority of hypervascular HCCs demonstrated a fluctuating SNR pattern on ferucarbotran-enhanced dynamic MR imaging and a highest CNR on the 10 min delayed image, which differed from the classic enhancement pattern on multiphasic CT

  2. Some infant ventilators do not limit peak inspiratory pressure reliably during active expiration.

    Science.gov (United States)

    Kirpalani, H; Santos-Lyn, R; Roberts, R

    1988-09-01

    In order to minimize barotrauma in newborn infants with respiratory failure, peak inspiratory pressures should not exceed those required for adequate gas exchange. We examined whether four commonly used pressure-limited, constant flow ventilators limit pressure reliably during simulated active expiration against the inspiratory stroke of the ventilator. Three machines of each type were tested at 13 different expiratory flow rates (2 to 14 L/min). Flow-dependent pressure overshoot above a dialed pressure limit of 20 cm H2O was observed in all machines. However, the magnitude differed significantly between ventilators from different manufacturers (p = .0009). Pressure overshoot above 20 cm H2O was consistently lowest in the Healthdyne (0.8 cm H2O at 2 L/min, 3.6 cm H2O at 14 L/min) and highest in the Bourns BP200 (3.0 cm H2O at 2 L/min, 15.4 cm H2O at 14 L/min). We conclude that peak inspiratory pressure overshoots on pressure-limited ventilators occur during asynchronous expiration. This shortcoming may contribute to barotrauma in newborn infants who "fight" positive-pressure ventilation.

  3. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-05-02

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSAs) for interferometry is well known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength (DW) PSAs is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis presented here may be used for interferometric contouring of discontinuous industrial objects. DW-PSAs may also be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their signal-to-noise ratio, or their detuning and harmonic robustness have been given. Here, for the first time, a fully general procedure for designing DW-PSAs (or triple-wavelength PSAs) with a desired spectrum, signal-to-noise ratio and detuning robustness is given. We finally generalize the DW-PSA approach to temporal PSAs with a higher number of wavelengths.
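
    In the FTF paradigm a temporal PSA is a quadrature linear filter whose transfer function must vanish at ω = 0 (background rejection) and at -ω₀ (conjugate-signal rejection) while staying non-zero at the tuning frequency +ω₀; detuning robustness is read off from how flat |H| is around +ω₀. A single-wavelength sketch for the classic four-step PSA (the paper's dual-wavelength synthesis adds further zeros for the second color):

```python
import numpy as np

# Complex coefficients of the four-step PSA tuned to omega0 = pi/2:
# c_n = exp(+i * omega0 * n) places zeros of the FTF at 0 and -omega0.
omega0 = np.pi / 2
n = np.arange(4)
c = np.exp(1j * omega0 * n)

def ftf(omega):
    """Frequency transfer function H(omega) = sum_n c_n e^{-i omega n}."""
    return np.sum(c[None, :] * np.exp(-1j * np.outer(omega, n)), axis=1)

test = np.array([0.0, -omega0, omega0])
for w, h in zip(test, ftf(test)):
    print(f"|H({w:+.3f})| = {abs(h):.3f}")
# |H(0)| = 0 (DC rejected), |H(-omega0)| = 0 (conjugate rejected),
# |H(+omega0)| = 4 (signal passed).
```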

  4. Quality assurance in MRI breast screening: comparing signal-to-noise ratio in dynamic contrast-enhanced imaging protocols

    Science.gov (United States)

    Kousi, Evanthia; Borri, Marco; Dean, Jamie; Panek, Rafal; Scurr, Erica; Leach, Martin O.; Schmidt, Maria A.

    2016-01-01

    MRI has been extensively used in breast cancer staging, management and high risk screening. Detection sensitivity is paramount in breast screening, but variations of signal-to-noise ratio (SNR) as a function of position are often overlooked. We propose and demonstrate practical methods to assess spatial SNR variations in dynamic contrast-enhanced (DCE) breast examinations and apply those methods to different protocols and systems. Four different protocols in three different MRI systems (1.5 and 3.0 T) with receiver coils of different design were employed on oil-filled test objects with and without uniformity filters. Twenty 3D datasets were acquired with each protocol; each dataset was acquired in under 60 s, thus complying with current breast DCE guidelines. In addition to the standard SNR calculated on a pixel-by-pixel basis, we propose other regional indices considering the mean and standard deviation of the signal over a small sub-region centred on each pixel. These regional indices include effects of the spatial variation of coil sensitivity and other structured artefacts. The proposed regional SNR indices demonstrate spatial variations in SNR as well as the presence of artefacts and sensitivity variations, which are otherwise difficult to quantify and might be overlooked in a clinical setting. Spatial variations in SNR depend on protocol choice and hardware characteristics. The use of uniformity filters was shown to lead to a rise of SNR values, altering the noise distribution. Correlation between noise in adjacent pixels was associated with data truncation along the phase encoding direction. Methods to characterise spatial SNR variations using regional information were demonstrated, with implications for quality assurance in breast screening and multi-centre trials.
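
    The regional indices proposed here swap the global noise estimate for the mean and standard deviation over a small window centred on each pixel, so coil-sensitivity roll-off and structured artefacts appear directly in the map. A minimal sketch, assuming a simple square window; sizes and the toy test object are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_snr(image, size=9):
    """Regional SNR index: local mean over local standard deviation
    in a size x size window centred on each pixel."""
    mean = uniform_filter(image, size)
    sq_mean = uniform_filter(image * image, size)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return np.where(std > 0, mean / std, 0.0)

# toy test object: uniform signal with a coil-like sensitivity roll-off
rng = np.random.default_rng(9)
profile = np.linspace(1.0, 0.4, 128)[:, None]    # sensitivity drop
image = profile * 200.0 + rng.normal(0, 5.0, (128, 128))
snr_map = regional_snr(image)
# the regional index falls where the coil gain (and its gradient) changes
print(f"top rows: {snr_map[10, :].mean():.1f}, "
      f"bottom rows: {snr_map[-10, :].mean():.1f}")
```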

  5. Design of an adaptive CubeSat transmitter for achieving optimum signal-to-noise ratio (SNR)

    Science.gov (United States)

    Jaswar, F. D.; Rahman, T. A.; Hindia, M. N.; Ahmad, Y. A.

    2017-12-01

    CubeSat technology has opened the opportunity to conduct space-related research at relatively low cost. A typical approach to maintaining an affordable CubeSat mission is to use a simple communication system based on a UHF link with fixed transmit power and data rate. However, a CubeSat in low Earth orbit (LEO) does not move in step with the Earth's rotation, resulting in a variable propagation path length that affects the transmitted signal. A transmitter with the adaptive capability to select multiple sets of data rate and radio frequency (RF) transmit power is proposed to improve and optimise the link. This paper presents the adaptive UHF transmitter design as a solution to overcome the variability of the propagation path. The transmitter output power is adjustable from 0.5 W to 2 W according to the mode of operation and the satellite power limitations. The transmitter is designed to have four selectable modes to achieve the optimum signal-to-noise ratio (SNR) and efficient power consumption based on link budget analysis and satellite requirements. Three prototypes were developed and tested under space-environment conditions, including a radiation test. Total Ionizing Dose measurements were conducted in the radiation test at the Malaysia Nuclear Agency laboratory. The results from this test have proven that the adaptive transmitter can perform its operation for an estimated lifetime of more than seven months in orbit. This radiation test, using a gamma source with 1.5 krad exposure, is the first conducted for a satellite program in Malaysia.
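
    Mode selection of this kind rests on a conventional link budget: received energy-per-bit SNR falls with slant range (free-space path loss) and rises as the data rate drops, so the transmitter can trade rate against power to stay above threshold. A hedged sketch with illustrative UHF numbers, not the paper's actual budget:

```python
import math

def link_snr_db(p_tx_w, g_tx_db, g_rx_db, range_km, freq_mhz,
                data_rate_bps, sys_noise_temp_k=500.0, losses_db=3.0):
    """Received Eb/N0-style SNR for a simple LEO UHF downlink."""
    k_db = -228.6                                  # Boltzmann, dBW/K/Hz
    p_tx_db = 10 * math.log10(p_tx_w)
    # free-space path loss with d in km and f in MHz
    fspl_db = 20 * math.log10(range_km) + 20 * math.log10(freq_mhz) + 32.45
    noise_db = (k_db + 10 * math.log10(sys_noise_temp_k)
                     + 10 * math.log10(data_rate_bps))
    return (p_tx_db + g_tx_db + g_rx_db - fspl_db - losses_db) - noise_db

# Adaptive modes: as the slant range grows near the horizon, drop the
# data rate and/or raise transmit power to hold the SNR above threshold.
for rng_km, p_w, rate in [(600, 0.5, 9600), (2000, 0.5, 9600),
                          (2000, 2.0, 1200)]:
    snr = link_snr_db(p_w, 0.0, 12.0, rng_km, 437.0, rate)
    print(f"range={rng_km:5d} km  P={p_w:3.1f} W  rate={rate:5d} bps  "
          f"SNR={snr:5.1f} dB")
```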

  6. The Effect of Signal-to-Noise Ratio on Linguistic Processing in a Semantic Judgment Task: An Aging Study.

    Science.gov (United States)

    Stanley, Nicholas; Davis, Tara; Estis, Julie

    2017-03-01

    Aging effects on speech understanding in noise have primarily been assessed through speech recognition tasks. Recognition tasks, which focus on bottom-up, perceptual aspects of speech understanding, intentionally limit linguistic and cognitive factors by asking participants only to repeat what they have heard. On the other hand, linguistic processing tasks require both bottom-up and top-down (linguistic, cognitive) processing skills and are, therefore, more reflective of the speech understanding abilities used in everyday communication. The effect of signal-to-noise ratio (SNR) on linguistic processing ability is relatively unknown for either young (YAs) or older adults (OAs). To determine if reduced SNRs would be more deleterious to the linguistic processing of OAs than of YAs, as measured by accuracy and reaction time in a semantic judgment task in competing speech. In the semantic judgment task, participants indicated via button press whether word pairs were a semantic Match or No Match. This task was performed in quiet, as well as at +3, 0, -3, and -6 dB SNR with two-talker speech competition. Seventeen YAs (20-30 yr) with normal hearing sensitivity and 17 OAs (60-68 yr) with normal hearing sensitivity or mild-to-moderate sensorineural hearing loss within age-appropriate norms. Accuracy, reaction time, and false alarm rate were measured and analyzed using a mixed-design analysis of variance. A decrease in SNR level significantly reduced accuracy and increased reaction time in both YAs and OAs. However, poor SNRs affected the accuracy and reaction time of Match and No Match word pairs differently. Accuracy for Match pairs declined at a steeper rate than for No Match pairs in both groups as SNR decreased. In addition, reaction time for No Match pairs increased at a greater rate than for Match pairs in more difficult SNRs, particularly at -3 and -6 dB SNR. False-alarm rates indicated that participants had a response bias toward No Match pairs as the SNR decreased. Age-related differences were

  7. Increasing the number and signal-to-noise ratio of OBS traces with supervirtual refraction interferometry and free-surface multiples

    KAUST Repository

    Bharadwaj, P.; Wang, X.; Schuster, Gerard T.; McIntosh, K.

    2013-01-01

    The theory of supervirtual interferometry is modified so that free-surface related multiple refractions can be used to enhance the signal-to-noise ratio (SNR) of primary refraction events by a factor proportional to √Ns, where Ns is the number of post-critical sources for a specified refraction multiple. We also show that refraction multiples can be transformed into primary refraction events recorded at virtual hydrophones located between the actual hydrophones. Thus, data recorded by a coarse sampling of ocean bottom seismic (OBS) stations can be transformed, in principle, into a virtual survey with P times more OBS stations, where P is the order of the visible free-surface related multiple refractions. The key assumption is that the refraction arrivals are those of head waves, not pure diving waves. The effectiveness of this method is validated with both synthetic OBS data and an OBS data set recorded offshore from Taiwan. Results show the successful reconstruction of far-offset traces out to a source-receiver offset of 120 km. The primary supervirtual traces increase the number of pickable first arrivals from approximately 1600 to more than 3100 for a subset of the OBS data set where the source is only on one side of the recording stations. In addition, the head waves associated with the first-order free-surface refraction multiples allow for the creation of six new common receiver gathers recorded at virtual OBS stations located about halfway between the actual OBS stations. This doubles the number of OBS stations compared to the original survey and increases the total number of pickable traces from approximately 1600 to more than 6200. In summary, our results with the OBS data demonstrate that refraction interferometry can sometimes more than quadruple the number of usable traces, increase the source-receiver offsets, fill in the receiver line with a denser distribution of OBS stations, and provide more reliable picking of first arrivals. A potential liability
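
    The √Ns stacking gain at the heart of this result is easy to demonstrate: averaging Ns noisy copies of the same arrival leaves the signal unchanged while the incoherent noise drops by √Ns. The sketch below shows only this stacking step, not the correlation/convolution machinery of supervirtual interferometry; the wavelet, noise level, and Ns = 64 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(trace, signal):
    """Amplitude SNR: peak of the clean signal over RMS of the residual noise."""
    noise = trace - signal
    return np.abs(signal).max() / noise.std()

# Synthetic primary refraction arrival: a Ricker-like wavelet in noise.
t = np.linspace(-1, 1, 1001)
wavelet = (1 - 2 * (np.pi * 5 * t) ** 2) * np.exp(-(np.pi * 5 * t) ** 2)

n_sources = 64  # Ns post-critical sources contributing to one receiver pair
traces = wavelet + rng.normal(0, 0.5, (n_sources, t.size))

single = snr(traces[0], wavelet)
stacked = snr(traces.mean(axis=0), wavelet)
print(f"single-trace SNR {single:.1f}, stacked SNR {stacked:.1f}, "
      f"gain {stacked / single:.1f} (sqrt(Ns) = {np.sqrt(n_sources):.1f})")
```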

  8. Increasing the number and signal-to-noise ratio of OBS traces with supervirtual refraction interferometry and free-surface multiples

    KAUST Repository

    Bharadwaj, P.

    2013-01-10

    The theory of supervirtual interferometry is modified so that free-surface related multiple refractions can be used to enhance the signal-to-noise ratio (SNR) of primary refraction events by a factor proportional to √Ns, where Ns is the number of post-critical sources for a specified refraction multiple. We also show that refraction multiples can be transformed into primary refraction events recorded at virtual hydrophones located between the actual hydrophones. Thus, data recorded by a coarse sampling of ocean bottom seismic (OBS) stations can be transformed, in principle, into a virtual survey with P times more OBS stations, where P is the order of the visible free-surface related multiple refractions. The key assumption is that the refraction arrivals are those of head waves, not pure diving waves. The effectiveness of this method is validated with both synthetic OBS data and an OBS data set recorded offshore from Taiwan. Results show the successful reconstruction of far-offset traces out to a source-receiver offset of 120 km. The primary supervirtual traces increase the number of pickable first arrivals from approximately 1600 to more than 3100 for a subset of the OBS data set where the source is only on one side of the recording stations. In addition, the head waves associated with the first-order free-surface refraction multiples allow for the creation of six new common receiver gathers recorded at virtual OBS stations located about halfway between the actual OBS stations. This doubles the number of OBS stations compared to the original survey and increases the total number of pickable traces from approximately 1600 to more than 6200. In summary, our results with the OBS data demonstrate that refraction interferometry can sometimes more than quadruple the number of usable traces, increase the source-receiver offsets, fill in the receiver line with a denser distribution of OBS stations, and provide more reliable picking of first arrivals. A potential liability

  9. Using optical fibers with different modes to improve the signal-to-noise ratio of diffuse correlation spectroscopy flow-oximeter measurements

    OpenAIRE

    He, Lian; Lin, Yu; Shang, Yu; Shelton, Brent J.; Yu, Guoqiang

    2013-01-01

    The dual-wavelength diffuse correlation spectroscopy (DCS) flow-oximeter is an emerging technique enabling simultaneous measurements of blood flow and blood oxygenation changes in deep tissues. High signal-to-noise ratio (SNR) is crucial when applying DCS technologies in the study of human tissues where the detected signals are usually very weak. In this study, single-mode, few-mode, and multimode fibers are compared to explore the possibility of improving the SNR of DCS flow-oximeter measure...

  10. Data reduction, radial velocities and stellar parameters from spectra in the very low signal-to-noise domain

    Science.gov (United States)

    Malavolta, Luca

    2013-10-01

    Large astronomical facilities usually provide data reduction pipelines designed to deliver ready-to-use scientific data, and too often astronomers rely on these to avoid the most difficult part of an astronomer's job. Standard data reduction pipelines, however, are usually designed and tested to perform well on data with average Signal to Noise Ratio (SNR), and the issues related to the reduction of data in the very low SNR domain are not properly taken into account. As a result, the information in low-SNR data is not optimally exploited. During the last decade our group has collected thousands of spectra using the GIRAFFE spectrograph at the Very Large Telescope (Chile) of the European Southern Observatory (ESO) to determine the geometrical distance and dynamical state of several Galactic Globular Clusters, but ultimately the analysis has been hampered by systematics in data reduction, calibration, and radial velocity measurements. Moreover, these data have never been exploited to obtain other information, such as the temperature and metallicity of stars, because they were considered too noisy for such analyses. In this thesis we focus our attention on the reduction and analysis of spectra with very low SNR. The dataset we analyze comprises 7250 spectra for 2771 stars of the Globular Cluster M 4 (NGC 6121) in the wavelength region 5145-5360 Å obtained with GIRAFFE. Stars from the upper Red Giant Branch down to the Main Sequence have been observed under very different conditions, including nights close to full moon, reaching SNR ~ 10 for many spectra in the dataset. We first review the basic steps of data reduction and spectral extraction, adapting techniques well tested in other fields (like photometry) but still under-developed in spectroscopy. We improve the wavelength dispersion solution and the correction of radial velocity shift between day-time calibrations and science observations by following a completely

  11. High Electricity Demand in the Northeast U.S.: PJM Reliability Network and Peaking Unit Impacts on Air Quality.

    Science.gov (United States)

    Farkas, Caroline M; Moeller, Michael D; Felder, Frank A; Henderson, Barron H; Carlton, Annmarie G

    2016-08-02

    On high electricity demand days, when air quality is often poor, regional transmission organizations (RTOs), such as PJM Interconnection, ensure reliability of the grid by employing peak-use electric generating units (EGUs). These "peaking units" are exempt from some federal and state air quality rules. We identify RTO assignment and peaking unit classification for EGUs in the Eastern U.S. and estimate air quality for four emission scenarios with the Community Multiscale Air Quality (CMAQ) model during the July 2006 heat wave. Further, we population-weight ambient values as a surrogate for potential population exposure. Emissions from electricity reliability networks negatively impact air quality in their own region and in neighboring geographic areas. Monitored and controlled PJM peaking units are generally located in economically depressed areas and can contribute up to 87% of hourly maximum PM2.5 mass locally. Potential population exposure to peaking unit PM2.5 mass is highest in the model domain's most populated cities. Average daily temperature and national gross domestic product steer peaking unit heat input. Air quality planning that capitalizes on a priori knowledge of local electricity demand and economics may provide a more holistic approach to protect human health within the context of growing energy needs in a changing world.

  12. Autopiquer - a Robust and Reliable Peak Detection Algorithm for Mass Spectrometry.

    Science.gov (United States)

    Kilgour, David P A; Hughes, Sam; Kilgour, Samantha L; Mackay, C Logan; Palmblad, Magnus; Tran, Bao Quoc; Goo, Young Ah; Ernst, Robert K; Clarke, David J; Goodlett, David R

    2017-02-01

    We present a simple algorithm for robust and unsupervised peak detection by determining a noise threshold in isotopically resolved mass spectrometry data. Solving this problem will greatly reduce the subjective and time-consuming manual picking of mass spectral peaks and so will prove beneficial in many research applications. The Autopiquer approach uses autocorrelation to test for the presence of (isotopic) structure in overlapping windows across the spectrum. Within each window, a noise threshold is optimized to remove the most unstructured data, whilst keeping as much of the (isotopic) structure as possible. This algorithm has been successfully demonstrated for both peak detection and spectral compression on data from many different classes of mass spectrometer and for different sample types, and this approach should also be extendible to other types of data that contain regularly spaced discrete peaks.
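
    A much-simplified rendering of the core idea (not the published Autopiquer implementation): score a window by the strength of its off-zero autocorrelation peak, then raise the noise threshold as far as possible while most of that structure survives. The window quantiles and the 90% retention criterion are illustrative assumptions.

```python
import numpy as np

def autocorr_score(intensities):
    """Strength of periodic (isotopic) structure: peak of the normalized
    autocorrelation away from zero lag."""
    x = intensities - intensities.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    if ac[0] <= 0:
        return 0.0
    ac /= ac[0]
    return ac[1:].max()

def pick_threshold(window, quantiles=np.linspace(0.50, 0.99, 50)):
    """Choose the highest noise threshold that keeps most of the structured
    (autocorrelated) signal in this window -- a simplified Autopiquer idea."""
    base = autocorr_score(window)
    best = np.quantile(window, quantiles[0])
    for q in quantiles:
        thr = np.quantile(window, q)
        kept = np.where(window >= thr, window, 0.0)
        if autocorr_score(kept) >= 0.9 * base:  # structure mostly preserved
            best = thr                          # accept the higher threshold
    return best
```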

  13. High signal to noise ratio THz spectroscopy with ASOPS and signal processing schemes for mapping and controlling molecular and bulk relaxation processes

    International Nuclear Information System (INIS)

    Hadjiloucas, S; Walker, G C; Bowen, J W; Becerra, V M; Zafiropoulos, A; Galvao, R K H

    2009-01-01

    Asynchronous Optical Sampling has the potential to improve the signal-to-noise ratio in THz transient spectrometry. The design of an inexpensive control scheme for synchronising two femtosecond pulse frequency comb generators at an offset frequency of 20 kHz is discussed. The suitability of a range of signal processing schemes, adopted from the Systems Identification and Control Theory community, for further processing of recorded THz transients in the time and frequency domains is outlined. Finally, possibilities for femtosecond pulse shaping using genetic algorithms are mentioned.

  14. Synthesis of multi-wavelength temporal phase-shifting algorithms optimized for high signal-to-noise ratio and high detuning robustness using the frequency transfer function

    OpenAIRE

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-01

    Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well-known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF-theory to dual and multi-wavelength PSA-synthesis when several simultaneous laser-colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and minimum number of temporal phase-shifted interferograms. The DW-PSA synthesi...

  15. Measurements of noise immission from wind turbines at receptor locations: Use of a vertical microphone board to improve the signal-to-noise ratio

    International Nuclear Information System (INIS)

    Fegeant, Olivier

    1999-01-01

    The growing interest in wind energy has increased the need for accuracy in wind turbine noise immission measurements and, thus, the need for new measurement techniques. This paper shows that mounting the microphone on a vertical board improves the signal-to-noise ratio over the whole frequency range compared to the free-microphone technique. Indeed, the wind turbine signal received by the microphone is effectively doubled by reflection from the board while, in addition, the wind noise is reduced. Furthermore, the board's shielding effect allows measurements to be carried out in the presence of reflecting surfaces such as building facades

  16. High signal to noise ratio THz spectroscopy with ASOPS and signal processing schemes for mapping and controlling molecular and bulk relaxation processes

    Energy Technology Data Exchange (ETDEWEB)

    Hadjiloucas, S; Walker, G C; Bowen, J W; Becerra, V M [Cybernetics, School of Systems Engineering, University of Reading, RG6 6AY (United Kingdom); Zafiropoulos, A [Biosystems Engineering Department, School of Agricultural Technology, Technological Educational Institute of Larissa, 411 10, Larissa (Greece); Galvao, R K H, E-mail: s.hadjiloucas@reading.ac.u [Divisao de Engenharia Eletronica, Instituto Tecnologico de Aeronautica, Sao Jose dos Campos, SP, 12228-900 (Brazil)

    2009-08-01

    Asynchronous Optical Sampling has the potential to improve the signal-to-noise ratio in THz transient spectrometry. The design of an inexpensive control scheme for synchronising two femtosecond pulse frequency comb generators at an offset frequency of 20 kHz is discussed. The suitability of a range of signal processing schemes, adopted from the Systems Identification and Control Theory community, for further processing of recorded THz transients in the time and frequency domains is outlined. Finally, possibilities for femtosecond pulse shaping using genetic algorithms are mentioned.

  17. The effect of the signal-to-noise ratio and window width on image information in intravenous DSA of various vascular regions

    International Nuclear Information System (INIS)

    Arlart, I.P.; Ertel, R.; Siemens A.G., Erlangen

    1986-01-01

    The diagnostic quality of DSA images depends on numerous factors related to the apparatus and the technique of examination. An improvement in image quality can be brought about by correct choice of the mask and injected frames, by subsequent correct manipulation of the images, and by the choice of the signal-to-noise ratio and window width. In the present study, the effect of these factors on image quality was demonstrated for intravenous DSA studies in various vascular regions. Practical advice is given for the examination of particular regions and for various diagnostic problems. (orig.)

  18. Online Reliable Peak Charge/Discharge Power Estimation of Series-Connected Lithium-Ion Battery Packs

    Directory of Open Access Journals (Sweden)

    Bo Jiang

    2017-03-01

    The accurate peak power estimation of a battery pack is essential to the power-train control of electric vehicles (EVs). It helps to evaluate the maximum charge and discharge capability of the battery system, and thus to optimally control the power-train system to meet the requirements of acceleration, gradient climbing, and regenerative braking while achieving high energy efficiency. A novel online peak power estimation method for series-connected lithium-ion battery packs is proposed, which considers the influence of cell difference on the peak power of the battery packs. A new parameter identification algorithm based on adaptive ratio vectors is designed to identify online the parameters of each individual cell in a series-connected battery pack. The ratio vectors reflecting cell difference are deduced strictly based on the analysis of battery characteristics. Based on the online parameter identification, a peak power estimation considering cell difference is further developed. Validation experiments in different battery aging conditions and with different current profiles have been implemented to verify the proposed method. The results indicate that the ratio-vector-based identification algorithm can achieve the same accuracy as repetitive RLS (recursive least squares) identification while evidently reducing the computation cost, and that the proposed peak power estimation method is more effective and reliable for series-connected battery packs due to the consideration of cell difference.
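
    For context, the sketch below shows plain recursive least squares (RLS) with forgetting, the per-cell identification baseline that the paper's ratio-vector method is designed to avoid repeating for every cell. The two-parameter cell model V = OCV - I*R and all constants are deliberate simplifications for illustration.

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting, for online estimation of
    theta in y = phi . theta (e.g., a simple equivalent-circuit cell model)."""
    def __init__(self, n_params, lam=0.99, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * p0
        self.lam = lam

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # gain vector
        self.theta += k * (y - phi @ self.theta)            # innovation update
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Example: estimate open-circuit voltage (b) and internal resistance (R)
# from terminal samples V = b - R*I, a deliberately simplified cell model.
rls = RLS(2)
for I, V in [(1.0, 3.58), (2.0, 3.56), (0.5, 3.59), (1.5, 3.57)]:
    b, R = rls.update([1.0, -I], V)
print(f"OCV ~ {b:.3f} V, R ~ {R:.3f} ohm")
```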

  19. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions.

    Science.gov (United States)

    Robson, Philip M; Grant, Aaron K; Madhuranthakam, Ananth J; Lattanzi, Riccardo; Sodickson, Daniel K; McKenzie, Charles A

    2008-10-01

    Parallel imaging reconstructions result in spatially varying noise amplification characterized by the g-factor, precluding conventional measurement of noise from the final image. A simple Monte Carlo based method is proposed for all linear image reconstruction algorithms, which allows measurement of signal-to-noise ratio and g-factor, and is demonstrated for SENSE and GRAPPA reconstructions of accelerated acquisitions that have not previously been amenable to such assessment. Only a simple "prescan" measurement of noise amplitude and correlation in the phased-array receiver and a single accelerated image acquisition are required, allowing robust assessment of signal-to-noise ratio and g-factor. The "pseudo multiple replica" method has been rigorously validated in phantoms and in vivo, showing excellent agreement with true multiple replica and analytical methods. This method is universally applicable to the parallel imaging reconstruction techniques used in clinical applications and will allow pixel-by-pixel image noise measurements for all parallel imaging strategies, enabling quantitative comparison between arbitrary k-space trajectories, image reconstructions, or noise conditioning techniques. (c) 2008 Wiley-Liss, Inc.
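
    A minimal sketch of the pseudo multiple replica idea for any linear reconstruction: repeatedly add synthetic noise, drawn with the coil covariance measured in the prescan, to the acquired k-space, reconstruct, and read the pixelwise standard deviation as the noise map. The function signature and replica count are assumptions; a g-factor map would follow by running this for the accelerated and unaccelerated cases and forming g = (SNR_full/SNR_acc)/sqrt(R).

```python
import numpy as np

def pseudo_multiple_replica(recon, kspace, noise_cov, n_replicas=100, seed=0):
    """Monte Carlo SNR mapping for any linear reconstruction `recon`
    (a function mapping multi-coil k-space -> image): add synthetic
    correlated noise to the measured k-space many times and measure the
    pixelwise output noise."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(noise_cov)  # coil noise correlation from a prescan
    ncoils = kspace.shape[0]
    stack = []
    for _ in range(n_replicas):
        white = (rng.standard_normal((ncoils,) + kspace.shape[1:]) +
                 1j * rng.standard_normal((ncoils,) + kspace.shape[1:])) / np.sqrt(2)
        corr = np.tensordot(L, white, axes=(1, 0))  # correlate across coils
        stack.append(recon(kspace + corr))
    stack = np.abs(np.array(stack))
    noise_map = stack.std(axis=0)  # pixelwise noise after reconstruction
    snr_map = np.abs(recon(kspace)) / np.maximum(noise_map, 1e-12)
    return snr_map, noise_map
```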

  20. Estimating achievable signal-to-noise ratios of MRI transmit-receive coils from radiofrequency power measurements: applications in quality control

    International Nuclear Information System (INIS)

    Redpath, T.W.

    2000-01-01

    The inverse relationship between the radiofrequency (RF) power needed to transmit a 90 deg. RF pulse and the signal-to-noise ratio (SNR) available from a transmit-receive RF coil is well known. The theory is restated and a formula given for the signal-to-noise ratio from water, achievable from a single-shot MRI experiment, in terms of the net forward RF power needed for a rectangular 90 deg. RF pulse of known shape and duration. The result is normalized to a signal bandwidth of 1 Hz and a sample mass of 1 g. The RF power information needed is available on most commercial scanners, as it is used to calculate specific absorption rates for RF tissue heating. The achievable SNR figure will normally be larger than that actually observed, mainly because of receiver noise, but also because of inaccuracies in setting RF pulse angles and relaxation effects. Phantom experiments were performed on the transmit-receive RF head coil of a commercial MRI system at 0.95 T using a projection method. The measured SNR agreed with that expected from the formula for achievable SNR once a correction was made for the noise figure of the receiving chain. Comparisons of measured SNR figures with those calculated from RF power measurements are expected to be of value in acceptance testing and quality control. (author)

  1. Improving signal to noise in labeled biological specimens using energy-filtered TEM of sections with a drift correction strategy and a direct detection device.

    Science.gov (United States)

    Ramachandra, Ranjan; Bouwer, James C; Mackey, Mason R; Bushong, Eric; Peltier, Steven T; Xuong, Nguyen-Huu; Ellisman, Mark H

    2014-06-01

    Energy-filtered transmission electron microscopy techniques are regularly used to build elemental maps of spatially distributed nanoparticles in materials and biological specimens. When working with thick biological sections, electron energy loss spectroscopy techniques involving core-loss electrons often require exposures exceeding several minutes to provide sufficient signal to noise. Image quality with these long exposures is often compromised by specimen drift, which results in blurring and reduced resolution. To mitigate drift artifacts, a series of short-exposure images can be acquired, aligned, and merged to form a single image. For samples where the target elements have extremely low signal yields, the use of charge-coupled device (CCD)-based detectors for this purpose can be problematic: at short acquisition times, the images produced by CCDs can be noisy and may contain fixed-pattern artifacts that impact subsequent correlative alignment. Here we report on the use of direct electron detection devices (DDDs) to increase the signal to noise compared with CCDs. A 3× improvement in signal is reported with a DDD versus a comparably formatted CCD, with equivalent dose on each detector. With the fast rolling-readout design of the DDD, the duty cycle provides a major benefit, as there is no dead time between successive frames.
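
    The acquire-align-merge strategy described above can be sketched with FFT cross-correlation for integer-pixel drift estimation; subpixel registration and the detector specifics are beyond this illustration.

```python
import numpy as np

def align_and_sum(frames):
    """Drift correction for a series of short-exposure frames: estimate each
    frame's drift against the first frame by FFT cross-correlation (integer
    pixels), undo it, and sum. A minimal sketch of acquire-align-merge."""
    ny, nx = frames[0].shape
    ref = np.fft.fft2(frames[0])
    total = frames[0].astype(float).copy()
    for frame in frames[1:]:
        xcorr = np.abs(np.fft.ifft2(ref * np.conj(np.fft.fft2(frame))))
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        # The peak position is the cyclic shift that re-registers the frame;
        # wrap it into the +/- half-size range before applying it.
        dy = dy - ny if dy > ny // 2 else dy
        dx = dx - nx if dx > nx // 2 else dx
        total += np.roll(frame, (dy, dx), axis=(0, 1)).astype(float)
    return total
```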

  2. Failure to pop out: Feature singletons do not capture attention under low signal-to-noise ratio conditions.

    Science.gov (United States)

    Rangelov, Dragan; Müller, Hermann J; Zehetleitner, Michael

    2017-05-01

    Pop-out search implies that the target is always the first item selected, no matter how many distractors are presented. However, increasing evidence indicates that search is not entirely independent of display density even for pop-out targets: search is slower with sparse (few distractors) than with dense displays (many distractors). Despite its significance, the cause of this anomaly remains unclear. We investigated several mechanisms that could slow down search for pop-out targets. Consistent with the assumption that pop-out targets frequently fail to pop out in sparse displays, we observed greater variability of search duration for sparse displays relative to dense. Computational modeling of the response time distributions also supported the view that pop-out targets fail to pop out in sparse displays. Our findings strongly question the classical assumption that early processing of pop-out targets is independent of the distractors. Rather, the density of distractors critically influences whether or not a stimulus pops out. These results call for new, more reliable measures of pop-out search and potentially a reinterpretation of studies that used relatively sparse displays. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Predicting the effect of spectral subtraction on the speech recognition threshold based on the signal-to-noise ratio in the envelope domain

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2011-01-01

    rarely been evaluated perceptually in terms of speech intelligibility. This study analyzed the effects of the spectral subtraction strategy proposed by Berouti et al. [ICASSP 4 (1979), 208-211] on the speech recognition threshold (SRT) obtained with sentences presented in stationary speech-shaped noise. The SRT was measured in five normal-hearing listeners in six conditions of spectral subtraction. The results showed an increase of the SRT after processing, i.e., a decreased speech intelligibility, in contrast to what is predicted by the Speech Transmission Index (STI). Here, another approach is proposed, denoted the speech-based envelope power spectrum model (sEPSM), which predicts the intelligibility based on the signal-to-noise ratio in the envelope domain. In contrast to the STI, the sEPSM is sensitive to the increased amount of noise envelope power as a consequence of the spectral subtraction
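
    For reference, a minimal single-channel sketch of Berouti-style spectral subtraction as evaluated in the study: over-subtract the estimated noise power spectrum (factor alpha) and clamp to a spectral floor (factor beta). Parameter values, frame sizes, and the overlap-add scaling are illustrative assumptions.

```python
import numpy as np

def spectral_subtraction(x, noise_psd, n_fft=512, hop=256, alpha=4.0, beta=0.01):
    """Power spectral subtraction in the style of Berouti et al. (1979).
    `noise_psd` is the noise power spectrum (length n_fft//2 + 1) estimated
    from a speech-free segment; output is correct up to a constant
    overlap-add scaling factor."""
    win = np.hanning(n_fft)
    out = np.zeros(len(x))
    for start in range(0, len(x) - n_fft, hop):
        frame = x[start:start + n_fft] * win
        spec = np.fft.rfft(frame)
        power = np.abs(spec) ** 2
        clean = power - alpha * noise_psd              # over-subtraction
        clean = np.maximum(clean, beta * power)        # spectral floor
        spec = np.sqrt(clean) * np.exp(1j * np.angle(spec))  # keep noisy phase
        out[start:start + n_fft] += np.fft.irfft(spec) * win
    return out
```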

  4. Available number of multiplexed holograms based on signal-to-noise ratio analysis in reflection-type holographic memory using three-dimensional speckle-shift multiplexing.

    Science.gov (United States)

    Nishizaki, Tatsuya; Matoba, Osamu; Nitta, Kouichi

    2014-09-01

    The recording properties of three-dimensional speckle-shift multiplexing in reflection-type holographic memory are analyzed numerically. Three-dimensional recording can increase the number of multiplexed holograms by suppressing the cross-talk noise from adjacent holograms by using depth-direction multiplexing rather than in-plane multiplexing. Numerical results indicate that the number of multiplexed holograms in three-layer recording can be increased to 1.44 times that of single-layer recording when the acceptable signal-to-noise ratio is set to 2, with NA=0.43 and a recording-medium thickness of 0.5 mm.

  5. Tests of variable-band multilayers designed for investigating optimal signal-to-noise vs artifact signal ratios in Dual-Energy Digital Subtraction Angiography (DDSA) imaging systems

    International Nuclear Information System (INIS)

    Boyers, D.; Ho, A.; Li, Q.; Piestrup, M.; Rice, M.; Tatchyn, R.

    1993-08-01

    In recent work, various design techniques were applied to investigate the feasibility of controlling the bandwidth and bandshape profiles of tungsten/boron-carbide (W/B4C) and tungsten/silicon (W/Si) multilayers for optimizing their performance in synchrotron radiation based angiographical imaging systems at 33 keV. Varied parameters included alternative spacing geometries, material thickness ratios, and numbers of layer pairs. Planar optics with nominal design reflectivities of 30%-94% and bandwidths ranging from 0.6%-10% were designed at the Stanford Radiation Laboratory, fabricated by the Ovonic Synthetic Materials Company, and characterized on Beam Line 4-3 at the Stanford Synchrotron Radiation Laboratory. In this paper we report selected results of these tests and review the possible use of the multilayers for determining optimal signal to noise vs. artifact signal ratios in practical Dual-Energy Digital Subtraction Angiography systems

  6. Statistical mechanics of stochastic neural networks: Relationship between the self-consistent signal-to-noise analysis, Thouless-Anderson-Palmer equation, and replica symmetric calculation approaches

    International Nuclear Information System (INIS)

    Shiino, Masatoshi; Yamana, Michiko

    2004-01-01

    We study the statistical mechanical aspects of stochastic analog neural network models for associative memory with correlation type learning. We take three approaches to derive the set of the order parameter equations for investigating statistical properties of retrieval states: the self-consistent signal-to-noise analysis (SCSNA), the Thouless-Anderson-Palmer (TAP) equation, and the replica symmetric calculation. On the basis of the cavity method the SCSNA can be generalized to deal with stochastic networks. We establish the close connection between the TAP equation and the SCSNA to elucidate the relationship between the Onsager reaction term of the TAP equation and the output proportional term of the SCSNA that appear in the expressions for the local fields

  7. Toward quantitative fast diffusion kurtosis imaging with b-values chosen in consideration of signal-to-noise ratio and model fidelity.

    Science.gov (United States)

    Kuo, Yen-Shu; Yang, Shun-Chung; Chung, Hsiao-Wen; Wu, Wen-Chau

    2018-02-01

    Diffusion kurtosis (DK) imaging is a variant of conventional diffusion magnetic resonance (MR) imaging that allows assessment of non-Gaussian diffusion. Fast DK imaging expedites the procedure by decreasing both scan time (acquiring the minimally required number of b-values) and computation time (obviating least-squares curve fitting). This study aimed to investigate the applicability of fast DK imaging to both cerebral gray matter and white matter as a quantitative method. Seventeen healthy volunteers were recruited, and each provided written informed consent before participation. On a 3-Tesla clinical MR system, diffusion imaging was performed with 12 b-values ranging from 0 to 4000 s/mm². Diffusion encoding was along three orthogonal directions (slice selection, phase encoding, and frequency encoding) in separate series. Candidate b-values were chosen by first determining the maximum b-value (b_max) in the context of signal-to-noise ratio and then assessing the model fidelity for all b-value combinations within b_max. The diffusion coefficient (D) and diffusion kurtosis coefficient (K) were derived from these candidates and assessed for their dependence on b-value combination. Our data suggested a b_max of 2200 s/mm² as a trade-off between the percentage (~80%) of voxels statistically detectable against background and the sensitivity to non-Gaussian diffusion in both gray matter and white matter. The measurement dependence on b-value was observed predominantly in areas with a considerable amount of cerebrospinal fluid. In most gray matter and white matter, b-value combinations do not cause statistical differences in the calculated D and K. For fast DK imaging to be quantitatively applicable in both gray matter and white matter, b_max should be chosen to ensure adequate signal-to-noise ratio in the majority of gray/white matter, and the two nonzero b-values should be chosen in consideration of model fidelity to mitigate the dependence of derived indices on b
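
    Assuming the usual DK signal model S(b) = S0*exp(-b*D + b^2*D^2*K/6), two nonzero b-values yield D and K in closed form, which is what lets fast DK imaging skip least-squares fitting. A sketch of that inversion follows; the paper's exact formulation may differ.

```python
import numpy as np

def fast_dk(s0, s1, s2, b1, b2):
    """Closed-form diffusion (D) and kurtosis (K) estimates from signals at
    b = 0, b1, b2, assuming S(b) = S0*exp(-b*D + b^2*D^2*K/6). Solving the
    linear system in (D, D^2*K/6) avoids iterative fitting."""
    y1, y2 = np.log(s1 / s0), np.log(s2 / s0)
    det = b1 * b2 * (b1 - b2)
    D = (y1 * b2**2 - y2 * b1**2) / det   # Cramer's rule for the linear system
    v = (b2 * y1 - b1 * y2) / det         # v = D^2 * K / 6
    K = 6.0 * v / D**2
    return D, K

# Round-trip check with D = 1e-3 mm^2/s, K = 1.0, b in s/mm^2:
b1, b2, D, K = 1000.0, 2200.0, 1e-3, 1.0
s = lambda b: np.exp(-b * D + (b * D) ** 2 * K / 6)
print(fast_dk(1.0, s(b1), s(b2), b1, b2))  # ~ (1e-3, 1.0)
```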

  8. Statistical approach of measurement of signal to noise ratio in according to change pulse sequence on brain MRI meningioma and cyst images

    International Nuclear Information System (INIS)

    Lee, Eul Kyu; Choi, Kwan Woo; Jeong, Hoi Woun; Jang, Seo Goo; Kim, Ki Won; Son, Soon Yong; Min, Jung Whan; Son, Jin Hyun

    2016-01-01

    The purpose of this study was to provide a basis for MRI computer-aided diagnosis (CAD) development by measuring the signal-to-noise ratio (SNR) for each pulse sequence from regions of interest (ROIs) in contrast-enhanced brain magnetic resonance imaging (MRI). We examined contrast-enhanced brain MRI images of 117 patients, acquired from January 2005 to December 2015 in a University-affiliated hospital, Seoul, Korea, each diagnosed with one of two brain diseases, meningioma or cyst. The SNR for each patient's brain MRI images was calculated using Image J. Differences in SNR between the two brain diseases were tested with the SPSS Statistics 21 ANOVA test, with statistical significance set at p < 0.05. We analyzed socio-demographic variables, SNR by sequence and disease, 95% confidence intervals for the SNR of each sequence, and differences in mean SNR. For meningioma, SNR quality was distributed in the order T1CE, T2 and T1, FLAIR. For cysts, the order was T2 and T1, T1CE and FLAIR. The SNR of brain MRI sequences would thus be useful for classifying disease. Therefore, this study will contribute to the evaluation of brain diseases and be a foundation for enhancing the accuracy of CAD development
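
    A minimal sketch of the kind of ROI-based SNR measurement performed here with Image J: mean of a tissue ROI over the standard deviation of a background ROI. The ROI placement and the exact SNR definition are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def roi_snr(image, signal_roi, background_roi):
    """SNR from two rectangular ROIs: mean of the tissue ROI over the
    standard deviation of an artifact-free background ROI. ROIs are
    (row_slice, col_slice) pairs; this definition is an assumption."""
    signal = image[signal_roi].mean()
    noise = image[background_roi].std(ddof=1)
    return signal / noise

# Example with a synthetic image: bright lesion on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(20, 5, (256, 256))
img[100:140, 100:140] += 200  # simulated enhancing lesion
snr = roi_snr(img, (np.s_[100:140], np.s_[100:140]), (np.s_[10:50], np.s_[10:50]))
print(f"SNR ~ {snr:.1f}")
```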

  9. Turning Fiction Into Non-fiction for Signal-to-Noise Ratio Estimation -- The Time-Multiplexed and Adaptive Split-Symbol Moments Estimator

    Science.gov (United States)

    Simon, M.; Dolinar, S.

    2005-08-01

    A means is proposed for realizing the generalized split-symbol moments estimator (SSME) of signal-to-noise ratio (SNR), i.e., one whose implementation on the average allows for a number of subdivisions (observables), 2L, per symbol beyond the conventional value of two, with other than an integer value of L. In theory, the generalized SSME was previously shown to yield optimum performance for a given true SNR, R, when L=R/sqrt(2) and thus, in general, the resulting estimator was referred to as the fictitious SSME. Here we present a time-multiplexed version of the SSME that allows it to achieve its optimum value of L as above (to the extent that it can be computed as the average of a sum of integers) at each value of SNR and as such turns fiction into non-fiction. Also proposed is an adaptive algorithm that allows the SSME to rapidly converge to its optimum value of L when in fact one has no a priori information about the true value of SNR.
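
    For orientation, the classic L = 1 split-symbol moments estimator underlying the generalized scheme can be sketched in a few lines. The SNR convention below (signal power over total noise power, so that SNR_hat = mean(sum^2)/mean(diff^2) - 1) is an assumption for illustration, and the time-multiplexing of L is not shown.

```python
import numpy as np

def ssme_snr(first_half, second_half):
    """Classic split-symbol moments estimator (L = 1): the two half-symbol
    integrals add coherently in their sum and cancel in their difference,
    so SNR_hat = mean(sum^2)/mean(diff^2) - 1 under the assumed convention
    SNR = signal power / total noise power."""
    u = first_half + second_half
    v = first_half - second_half
    return np.mean(u ** 2) / np.mean(v ** 2) - 1.0

# Check on BPSK half-symbols at true SNR R = 4 (about 6 dB).
rng = np.random.default_rng(2)
n, R = 100_000, 4.0
m = rng.choice([-1.0, 1.0], n)        # per-half signal amplitude
sigma = np.sqrt(2.0 / R)              # chosen so that 2*m^2/sigma^2 = R
est = ssme_snr(m + rng.normal(0, sigma, n), m + rng.normal(0, sigma, n))
print(f"estimated SNR {est:.2f} (true {R})")
```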

  10. Influence of skew rays on the sensitivity and signal-to-noise ratio of a fiber-optic surface-plasmon-resonance sensor: a theoretical study

    International Nuclear Information System (INIS)

    Dwivedi, Yogendra S.; Sharma, Anuj K.; Gupta, Banshi D.

    2007-01-01

    We have theoretically analyzed the influence of skew rays on the performance of a fiber-optic sensor based on surface plasmon resonance. The performance of the sensor has been evaluated in terms of its sensitivity and signal-to-noise ratio (SNR). The theoretical model for skewness dependence includes the material dispersion in the fiber core and metal layers and the simultaneous excitation of skew and meridional rays in the fiber core, with all guided rays launched from a collimated light source. The effect of skew rays on the SNR and the sensitivity of the sensor is compared for two different metals. The same comparison is carried out for different values of design parameters such as numerical aperture, fiber core diameter, and the length of the surface-plasmon-resonance (SPR) active sensing region. This detailed analysis of the effect of skewness on the SNR and the sensitivity of the sensor allows the best possible performance to be achieved from a fiber-optic SPR sensor despite skewness in the optical fiber

  11. High-Resolution Ultrasound-Switchable Fluorescence Imaging in Centimeter-Deep Tissue Phantoms with High Signal-To-Noise Ratio and High Sensitivity via Novel Contrast Agents.

    Science.gov (United States)

    Cheng, Bingbing; Bandi, Venugopal; Wei, Ming-Yuan; Pei, Yanbo; D'Souza, Francis; Nguyen, Kytai T; Hong, Yi; Yuan, Baohong

    2016-01-01

    For many years, investigators have sought high-resolution fluorescence imaging in centimeter-deep tissue, because many interesting in vivo phenomena (such as the presence of immune system cells, tumor angiogenesis, and metastasis) may be located deep in tissue. Previously, we developed a new imaging technique, named continuous-wave ultrasound-switchable fluorescence (CW-USF), to achieve high spatial resolution in sub-centimeter-deep tissue phantoms. The principle is to use a focused ultrasound wave to externally and locally switch on and off the fluorophore emission from a small volume (close to the ultrasound focal volume). By making improvements in three aspects of this technique (excellent near-infrared USF contrast agents, a sensitive frequency-domain USF imaging system, and an effective signal processing algorithm), this study has for the first time achieved high spatial resolution (~900 μm) in 3-centimeter-deep tissue phantoms, with high signal-to-noise ratio (SNR) and high sensitivity (3.4 picomoles of fluorophore in a volume of 68 nanoliters can be detected). We have achieved these results in both tissue-mimicking phantoms and porcine muscle tissues. We have also demonstrated multi-color USF to image and distinguish two fluorophores with different wavelengths, which might be very useful for simultaneously imaging multiple targets and observing their interactions in the future. This work has opened the door for future studies of high-resolution centimeter-deep tissue fluorescence imaging.

  12. Combination of fat saturation and variable bandwidth imaging to increase signal-to-noise ratio and decrease motion artifacts for body MR imaging at high field

    International Nuclear Information System (INIS)

    Chew, W.M.

    1989-01-01

    The signal-to-noise ratio (SNR) of the MR imaging examination is a critical component of image quality. Standard methods to increase SNR include signal averaging with multiple excitations, at the expense of imaging time (which for T2-weighted images can be quite significant), or increasing pixel volume by manipulation of field of view, matrix size, and/or section thickness, all at the expense of resolution. Another available method to increase SNR is to reduce the bandwidth of the receiver, which increases SNR by the square root of the reduction factor; for example, halving the receiver bandwidth increases SNR by a factor of √2 (about 41%). The penalty imposed on high-field-strength MR examinations of the body is an unacceptable increase in chemical shift artifact. However, presaturating the fat resonance eliminates the chemical shift artifact. Thus, a combination of imaging techniques, fat suppression and decreased-bandwidth imaging, can produce images free of chemical shift artifact with increased SNR and no penalty in resolution or imaging time. Early studies also show a reduction in motion artifact when fat saturation is used. This paper reports MR imaging performed with a 1.5-T Signa imager. With this technique, T2-weighted images (2,500/20,80 [repetition time msec/echo times msec]) illustrating the increase in SNR and T1-weighted images (600/20) demonstrating a decrease in motion artifact are shown

  13. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    Science.gov (United States)

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least-squares error filter is developed for scanning electron microscope (SEM) images. A diversity of sample images was captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all the data required for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the test cases involving different images, the efficiency of the developed noise reduction filter proved to be significantly better than that of the other methods. Noise can be reduced efficiently, with an appropriate choice of scan rate, from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
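
    A row-wise sketch of the autocorrelation-based SNR estimate described above, with a plain Savitzky-Golay filter (assuming SciPy is available) standing in for the full spline/SG/weighted-least-squares combination; the window length, polynomial order, and linear extrapolation to zero offset are illustrative choices.

```python
import numpy as np
from scipy.signal import savgol_filter

def autocorr_snr(image):
    """Single-image SNR estimate from the autocorrelation: image detail is
    correlated over a few pixels while noise is pixel-uncorrelated, so the
    noise power is the gap between ACF(0) and ACF(0) extrapolated from
    neighboring offsets. A 1-D row-wise sketch of the idea."""
    rows = image - image.mean(axis=1, keepdims=True)
    acf0 = np.mean(rows * rows)                  # ACF at zero offset
    acf1 = np.mean(rows[:, :-1] * rows[:, 1:])   # ACF at offset 1
    acf2 = np.mean(rows[:, :-2] * rows[:, 2:])   # ACF at offset 2
    acf0_hat = 2 * acf1 - acf2                   # linear extrapolation to 0
    noise_power = max(acf0 - acf0_hat, 1e-12)
    return acf0_hat / noise_power

def denoise(image, window=9, poly=3):
    """Row-wise Savitzky-Golay smoothing as a simple stand-in for the
    combined filter described in the abstract."""
    return savgol_filter(image.astype(float), window, poly, axis=1)
```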

  14. Signal-to-noise ratio and MR tissue parameters in human brain imaging at 3, 7, and 9.4 tesla using current receive coil arrays.

    Science.gov (United States)

    Pohmann, Rolf; Speck, Oliver; Scheffler, Klaus

    2016-02-01

    Relaxation times, transmit homogeneity, signal-to-noise ratio (SNR), and parallel imaging g-factor were determined in the human brain at 3T, 7T, and 9.4T, using standard, tight-fitting coil arrays. The same human subjects were scanned at all three field strengths, using identical sequence parameters and similar 31- or 32-channel receive coil arrays. The SNR of three-dimensional (3D) gradient echo images was determined using a multiple replica approach and corrected with measured flip angle and T2* distributions and the T1 of white matter to obtain the intrinsic SNR. The g-factor maps were derived from 3D gradient echo images with several GRAPPA accelerations. As expected, T1 values increased, T2* decreased, and the B1 homogeneity deteriorated with increasing field. The SNR showed a distinctly supralinear increase with field strength, by a factor of 3.10 ± 0.20 from 3T to 7T and 1.76 ± 0.13 from 7T to 9.4T over the entire cerebrum. The g-factors did not show the expected decrease, indicating a dominating role of coil design. In standard experimental conditions, SNR increased supralinearly with field strength (SNR ~ B0^1.65). To take full advantage of this gain, the deteriorating B1 homogeneity and the decreasing T2* have to be overcome. © 2015 Wiley Periodicals, Inc.

  15. Evaluation and comparison of contrast to noise ratio and signal to noise ratio according to change of reconstruction on breast PET/CT

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jae [Dept. of Nuclear Medicine, Seoul National University Hospital, Seoul (Korea, Republic of); Lee, Eul Kyu [Dept. of Radiology, Inje Paik University Hospital Jeo-dong, Seoul (Korea, Republic of); Kim, Ki Won [Dept. of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Jeong, Hoi Woun [Dept. of Radiological Technology, The Baekseok Culture University, Cheonan (Korea, Republic of); Lyu, Kwang Yeul; Park, Hoon Hee; Son, Jin Hyun; Min, Jung Whan [Dept. of Radiological Technology, The Shingu University, Sungnam (Korea, Republic of)

    2017-03-15

    The purpose of this study was to measure the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) for different reconstruction algorithms from regions of interest (ROIs) in breast positron emission tomography-computed tomography (PET-CT), and to analyze the CNR and SNR statistically. We examined breast PET-CT images of 100 patients in a University-affiliated hospital, Seoul, Korea. Each patient's breast PET-CT images were analyzed using Image J. Differences in CNR and SNR among the four reconstruction algorithms were tested with the SPSS Statistics 21 ANOVA test, with statistical significance set at p < 0.05. We analyzed socio-demographic variables, CNR and SNR by reconstruction algorithm, 95% confidence intervals for the CNR and SNR of each reconstruction, and differences in mean CNR and SNR. For SNR, quality was distributed in the order PSF-TOF, Iterative and Iterative-TOF, FBP-TOF; for CNR, the order was the same. The CNR and SNR of PET-CT reconstruction methods would be useful for evaluating breast diseases.

  16. Statistical approach of measurement of signal to noise ratio in according to change pulse sequence on brain MRI meningioma and cyst images

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eul Kyu [Inje Paik University Hospital Jeo-dong, Seoul (Korea, Republic of); Choi, Kwan Woo [Asan Medical Center, Seoul (Korea, Republic of); Jeong, Hoi Woun [The Baekseok Culture University, Cheonan (Korea, Republic of); Jang, Seo Goo [The Soonchunhyang University, Asan (Korea, Republic of); Kim, Ki Won [Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Son, Soon Yong [The Wonkwang Health Science University, Iksan (Korea, Republic of); Min, Jung Whan; Son, Jin Hyun [The Shingu University, Sungnam (Korea, Republic of)

    2016-09-15

    The purpose of this study was to provide a basis for MRI computer-aided diagnosis (CAD) development by measuring the signal-to-noise ratio (SNR) for each pulse sequence from regions of interest (ROIs) in contrast-enhanced brain magnetic resonance imaging (MRI). We examined contrast-enhanced brain MRI images of 117 patients, acquired from January 2005 to December 2015 in a University-affiliated hospital, Seoul, Korea, each diagnosed with one of two brain diseases, meningioma or cyst. The SNR for each patient's brain MRI images was calculated using Image J. Differences in SNR between the two brain diseases were tested with the SPSS Statistics 21 ANOVA test, with statistical significance set at p < 0.05. We analyzed socio-demographic variables, SNR by sequence and disease, 95% confidence intervals for the SNR of each sequence, and differences in mean SNR. For meningioma, SNR quality was distributed in the order T1CE, T2 and T1, FLAIR. For cysts, the order was T2 and T1, T1CE and FLAIR. The SNR of brain MRI sequences would thus be useful for classifying disease. Therefore, this study will contribute to the evaluation of brain diseases and be a foundation for enhancing the accuracy of CAD development.

  17. Using optical fibers with different modes to improve the signal-to-noise ratio of diffuse correlation spectroscopy flow-oximeter measurements

    Science.gov (United States)

    He, Lian; Lin, Yu; Shang, Yu; Shelton, Brent J.; Yu, Guoqiang

    2013-03-01

    The dual-wavelength diffuse correlation spectroscopy (DCS) flow-oximeter is an emerging technique enabling simultaneous measurement of blood flow and blood oxygenation changes in deep tissues. A high signal-to-noise ratio (SNR) is crucial when applying DCS technologies in the study of human tissues, where the detected signals are usually very weak. In this study, single-mode, few-mode, and multimode fibers are compared to explore the possibility of improving the SNR of DCS flow-oximeter measurements. Experiments on liquid phantom solutions and in vivo muscle tissues show only slight improvements in flow measurements when using the few-mode fiber compared with the single-mode fiber. However, the light intensities detected by the few-mode and multimode fibers are increased, leading to significant SNR improvements in the detection of phantom optical properties and tissue blood oxygenation. The outcomes from this study provide useful guidance for the selection of optical fibers to improve DCS flow-oximeter measurements.

  18. High signal-to-noise ratio sensing with Shack–Hartmann wavefront sensor based on auto gain control of electron multiplying CCD

    International Nuclear Information System (INIS)

    Zhu Zhao-Yi; Li Da-Yu; Hu Li-Fa; Mu Quan-Quan; Yang Cheng-Liang; Cao Zhao-Liang; Xuan Li

    2016-01-01

    A high signal-to-noise ratio can be achieved with the electron multiplying charge-coupled device (EMCCD) applied in the Shack–Hartmann wavefront sensor (S–H WFS) in adaptive optics (AO). However, when the brightness of the target varies over a large range, a fixed electron multiplying (EM) gain will not suit the full sensing range. Therefore an auto-gain-control method based on the brightness of the light-spot array in the S–H WFS is proposed in this paper. The control value is the average of the maximum signals of every light spot in the array, which has been demonstrated to remain stable even under the influence of noise and turbulence, while staying sensitive to changes in target brightness. A goal value is needed in the control process and is predetermined based on the characteristics of the EMCCD. Simulations and experiments have demonstrated that this auto-gain-control method is valid and robust: the sensing SNR reaches the maximum for the corresponding signal level, and is greatly improved for dim targets from magnitude 6 to 4 in the visual band.
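
    A minimal sketch of such an auto-gain-control step: drive the average of the per-spot maxima toward the predetermined goal value with a bounded multiplicative correction. The gain limits, update rate, and goal are illustrative assumptions, not the paper's calibration.

```python
def auto_em_gain(spot_maxima, gain, goal, gain_min=1.0, gain_max=1000.0, rate=0.5):
    """One step of an auto-gain-control loop for the EMCCD in a Shack-Hartmann
    sensor: the control value is the average of the per-spot maximum signals,
    driven toward a predetermined goal with a bounded multiplicative update.
    All constants here are illustrative assumptions."""
    control = sum(spot_maxima) / len(spot_maxima)
    if control <= 0:
        return gain_max                      # no detectable light: max gain
    new_gain = gain * (goal / control) ** rate
    return min(max(new_gain, gain_min), gain_max)

# Example: dim spots (average maximum ~107 counts) well below a goal of
# 1000 counts, so the EM gain is raised from 50 to about 153.
gain = auto_em_gain([120.0, 90.0, 110.0], gain=50.0, goal=1000.0)
```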

  19. Effect of Simultaneous Bilingualism on Speech Intelligibility across Different Masker Types, Modalities, and Signal-to-Noise Ratios in School-Age Children.

    Science.gov (United States)

    Reetzke, Rachel; Lam, Boji Pak-Wing; Xie, Zilong; Sheng, Li; Chandrasekaran, Bharath

    2016-01-01

    Recognizing speech in adverse listening conditions is a significant cognitive, perceptual, and linguistic challenge, especially for children. Prior studies have yielded mixed results on the impact of bilingualism on speech perception in noise. Methodological variations across studies make it difficult to converge on a conclusion regarding the effect of bilingualism on speech-in-noise performance. Moreover, there is a dearth of speech-in-noise evidence for bilingual children who learn two languages simultaneously. The aim of the present study was to examine the extent to which various adverse listening conditions modulate differences in speech-in-noise performance between monolingual and simultaneous bilingual children. To that end, sentence recognition was assessed in twenty-four school-aged children (12 monolinguals; 12 simultaneous bilinguals, age of English acquisition ≤ 3 yrs.). We implemented a comprehensive speech-in-noise battery to examine recognition of English sentences across different modalities (audio-only, audiovisual), masker types (steady-state pink noise, two-talker babble), and a range of signal-to-noise ratios (SNRs; 0 to -16 dB). Results revealed no difference in performance between monolingual and simultaneous bilingual children across each combination of modality, masker, and SNR. Our findings suggest that when English age of acquisition and socioeconomic status is similar between groups, monolingual and bilingual children exhibit comparable speech-in-noise performance across a range of conditions analogous to everyday listening environments.

  20. Mean Velocity vs. Mean Propulsive Velocity vs. Peak Velocity: Which Variable Determines Bench Press Relative Load With Higher Reliability?

    Science.gov (United States)

    García-Ramos, Amador; Pestaña-Melero, Francisco L; Pérez-Castilla, Alejandro; Rojas, Francisco J; Gregory Haff, G

    2018-05-01

    García-Ramos, A, Pestaña-Melero, FL, Pérez-Castilla, A, Rojas, FJ, and Haff, GG. Mean velocity vs. mean propulsive velocity vs. peak velocity: which variable determines bench press relative load with higher reliability? J Strength Cond Res 32(5): 1273-1279, 2018. This study aimed to compare, among 3 velocity variables (mean velocity [MV], mean propulsive velocity [MPV], and peak velocity [PV]): (a) the linearity of the load-velocity relationship, (b) the accuracy of general regression equations to predict relative load (%1RM), and (c) the between-session reliability of the velocity attained at each percentage of the 1-repetition maximum (%1RM). The full load-velocity relationship of 30 men was evaluated by means of linear regression models in the concentric-only and eccentric-concentric bench press throw (BPT) variants performed with a Smith machine. The 2 sessions of each BPT variant were performed within the same week, separated by 48-72 hours. The main findings were as follows: (a) MV showed the strongest linearity of the load-velocity relationship (median r = 0.989 for concentric-only BPT and 0.993 for eccentric-concentric BPT), followed by MPV (median r = 0.983 for concentric-only BPT and 0.980 for eccentric-concentric BPT), and finally PV (median r = 0.974 for concentric-only BPT and 0.969 for eccentric-concentric BPT); (b) the accuracy of the general regression equations to predict relative load (%1RM) from movement velocity was higher for MV (SEE = 3.80-4.76%1RM) than for MPV (SEE = 4.91-5.56%1RM) and PV (SEE = 5.36-5.77%1RM); and (c) PV showed the lowest within-subjects coefficient of variation (3.50%-3.87%), followed by MV (4.05%-4.93%), and finally MPV (5.11%-6.03%). Taken together, these results suggest that MV could be the most appropriate variable for monitoring the relative load (%1RM) in the BPT exercise performed in a Smith machine.

  1. Signal-to-noise ratio, T2 , and T2* for hyperpolarized helium-3 MRI of the human lung at three magnetic field strengths.

    Science.gov (United States)

    Komlosi, Peter; Altes, Talissa A; Qing, Kun; Mooney, Karen E; Miller, G Wilson; Mata, Jaime F; de Lange, Eduard E; Tobias, William A; Cates, Gordon D; Mugler, John P

    2017-10-01

    To evaluate T2, T2*, and signal-to-noise ratio (SNR) for hyperpolarized helium-3 (3He) MRI of the human lung at three magnetic field strengths ranging from 0.43T to 1.5T. Sixteen healthy volunteers were imaged using a commercial whole-body scanner at 0.43T, 0.79T, and 1.5T. Whole-lung T2 values were calculated from a Carr-Purcell-Meiboom-Gill spin-echo-train acquisition. T2* maps and SNR were determined from dual-echo and single-echo gradient-echo images, respectively. Mean whole-lung SNR values were normalized by ventilated lung volume and administered 3He dose. As expected, T2 and T2* values demonstrated a significant inverse relationship with field strength. Hyperpolarized 3He images acquired at all three field strengths had comparable SNR values and thus appeared visually very similar. Nonetheless, the relatively small SNR differences among field strengths were statistically significant. Hyperpolarized 3He images of the human lung with similar image quality were obtained at three field strengths ranging from 0.43T to 1.5T. The decrease in susceptibility effects at lower fields, reflected in longer T2 and T2* values, may be advantageous for optimizing pulse sequences inherently sensitive to such effects. The three-fold increase in T2* at lower field strength would allow lower receiver bandwidths, providing a concomitant decrease in noise and relative increase in SNR. Magn Reson Med 78:1458-1463, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  2. 1H-MRS evaluation of breast lesions by using total choline signal-to-noise ratio as an indicator of malignancy: a meta-analysis.

    Science.gov (United States)

    Wang, Xin; Wang, Xiang Jiang; Song, Hui Sheng; Chen, Long Hua

    2015-05-01

    The aim of this study was to evaluate the diagnostic performance of total choline signal-to-noise ratio (tCho SNR) criteria in MRS studies for benign/malignant discrimination of focal breast lesions. We conducted (1) a meta-analysis based on 10 studies including 480 malignant and 312 benign breast lesions and (2) a subgroup meta-analysis of tCho SNR ≥ 2 as the cutoff for malignancy, based on 7 studies including 371 malignant and 239 benign breast lesions. (1) The pooled sensitivity and specificity of proton MRS with tCho SNR were 0.74 (95% CI 0.69-0.77) and 0.76 (95% CI 0.71-0.81), respectively. The PLR and NLR were 3.67 (95% CI 2.30-5.83) and 0.25 (95% CI 0.14-0.42), respectively. From the fitted SROC, the AUC and Q* index were 0.89 and 0.82. Publication bias was present (t = 2.46, P = 0.039). (2) Meta-regression analysis suggested that neither a threshold effect nor the evaluated covariates, including strength of field, pulse sequence, TR, and TE, were sources of heterogeneity (all P values >0.05). (3) Subgroup meta-analysis: the pooled sensitivity and specificity were 0.79 and 0.72, respectively. The PLR and NLR were 3.49 and 0.20, respectively. The AUC and Q* index were 0.92 and 0.85. The use of tCho SNR criteria in MRS studies was helpful for differentiation between malignant and benign breast lesions. However, pooled diagnostic measures might be overestimated due to publication bias. A tCho SNR ≥ 2 as the cutoff for malignancy resulted in higher diagnostic accuracy.

  3. MEASUREMENT OF THE RADIUS OF NEUTRON STARS WITH HIGH SIGNAL-TO-NOISE QUIESCENT LOW-MASS X-RAY BINARIES IN GLOBULAR CLUSTERS

    International Nuclear Information System (INIS)

    Guillot, Sebastien; Rutledge, Robert E.; Servillat, Mathieu; Webb, Natalie A.

    2013-01-01

    This paper presents the measurement of the neutron star (NS) radius using the thermal spectra from quiescent low-mass X-ray binaries (qLMXBs) inside globular clusters (GCs). Recent observations of NSs have presented evidence that cold ultra-dense matter, present in the core of NSs, is best described by "normal matter" equations of state (EoSs). Such EoSs predict that the radii of NSs, R_NS, are quasi-constant (within measurement errors, of ~10%) for astrophysically relevant masses (M_NS > 0.5 M_☉). The present work adopts this theoretical prediction as an assumption and uses it to constrain a single R_NS value from five qLMXB targets with available high signal-to-noise X-ray spectroscopic data. Employing a Markov chain Monte Carlo approach, we produce the marginalized posterior distribution for R_NS, constrained to be the same value for all five NSs in the sample. An effort was made to include all quantifiable sources of uncertainty in the quoted radius measurement. These include the uncertainties in the distances to the GCs, the uncertainties due to the Galactic absorption in the direction of the GCs, and the possibility of a hard power-law spectral component for count excesses at high photon energy, which are observed in some qLMXBs in the Galactic plane. Using conservative assumptions, we found that the radius, common to the five qLMXBs and constant for a wide range of masses, lies in the low range of possible NS radii: R_NS = 9.1 (+1.3/-1.5) km (90% confidence). Such a value is consistent with low-R_NS equations of state. We compare this result with previous radius measurements of NSs from various analyses of different types of systems. In addition, we compare the spectral analyses of individual qLMXBs with previous works.

  4. Maximizing signal-to-noise ratio (SNR) in 3-D large bandgap semiconductor pixelated detectors in optimal and non-optimal filtering conditions

    International Nuclear Information System (INIS)

    Rodrigues, Miesher L.; Serra, Andre da S.; He, Zhong; Zhu, Yuefeng

    2009-01-01

    3-D pixelated semiconductor detectors are used in radiation detection applications requiring spectroscopic and imaging information from radiation sources. Reconstruction algorithms used to determine the direction and energy of incoming gamma rays can be improved by reducing electronic noise and using optimum filtering techniques. Position information can be improved by achieving sub-pixel resolution; electronic noise is the limiting factor. Achieving sub-pixel resolution - position of the interaction better than one pixel pitch - in 3-D pixelated semiconductor detectors is a challenging task due to the fast transient characteristics of these signals. This work addresses two fundamental questions: the first is to determine the optimum filter, while the second is to estimate the achievable sub-pixel resolution using this filter. It is shown that the matched filter is the optimum filter under the signal-to-noise ratio criterion. Non-optimum filters are also studied. The framework of 3-D waveform simulation using the Shockley-Ramo theorem and the Hecht equation for electron and hole trapping is presented in this work. This waveform simulator can be used to analyze current detectors as well as to explore new ideas and concepts in future work. Numerical simulations show that, assuming an electronic noise of 3.3 keV, it is possible to subdivide the pixel region into 5x5 sub-pixels. After analyzing these results, it is suggested that sub-pixel information can also improve energy resolution. Current noise levels are the major obstacle both to achieving sub-pixel resolution and to improving energy resolution below the current limits. (author)
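
    To make the matched-filter statement concrete, the sketch below applies a matched filter to a noisy synthetic transient; the pulse shape and noise level are invented for illustration and are not the simulated Shockley-Ramo waveforms of the paper. For white noise, correlating with the unit-energy template maximizes the output signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fast transient (a narrow Gaussian pulse), normalised to unit energy.
t = np.linspace(0.0, 1.0, 500)
template = np.exp(-((t - 0.5) / 0.02) ** 2)
template /= np.linalg.norm(template)

sigma = 0.5                                          # white-noise RMS
signal = 3.0 * template + rng.normal(0.0, sigma, t.size)

# Matched filtering = correlating the noisy trace with the template.
matched_out = np.correlate(signal, template, mode="same")

# With a unit-norm template, the output noise RMS stays sigma, so the
# peak-to-noise ratio approaches amplitude/sigma = 6 in expectation.
snr = matched_out.max() / sigma
print(f"output SNR ~ {snr:.1f}")
```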

  5. Signal-to-noise ratio of bilateral nonimaging transcranial Doppler recordings of the middle cerebral artery is not affected by age and sex.

    Science.gov (United States)

    Katsogridakis, Emmanuel; Dineen, Nicky E; Brodie, Fiona G; Robinson, Thompson G; Panerai, Ronney B

    2011-04-01

    Differences between transcranial Doppler ultrasonography (TCD) recordings of symmetrical vessels can show true physiologic differences, but can also be caused by measurement error and other sources of noise. The aim of this project was to assess the influence of noise on estimates of dynamic cerebral autoregulation (dCA), and the influence of age, sex and breathing manoeuvres on the signal-to-noise ratio (SNR). Cerebral blood flow (CBF) was monitored in 30 young subjects and in an older (>60 years) group during baseline conditions, breath-holding and hyperventilation. Noise was defined as the difference between beat-to-beat values of the two mean CBF velocity (CBFV) signals. Magnitude squared coherence estimates of noise vs. arterial blood pressure (ABP) and of ABP vs. CBFV were obtained and averaged. A similar approach was adopted for the CBFV step response. The effect of age and breathing manoeuvre on the SNR was assessed using a two-way analysis of variance (ANOVA), whilst the effect of sex was investigated using a Student's t test. No significant differences were observed in SNR estimates between young and old groups, respectively (baseline: 6.07 ± 3.07 dB and 7.33 ± 3.84 dB, breath-hold: 13.53 ± 3.93 dB and 14.64 ± 4.52 dB, and hyperventilation: 14.69 ± 4.04 dB and 14.84 ± 4.05 dB). The use of breathing manoeuvres significantly improved the SNR (p < 10⁻⁴) without a significant difference between manoeuvres. Sex does not appear to have an effect on SNR (p = 0.365). Coherence estimates were not influenced by the SNR, but significant differences were found in the amplitude of the CBFV step response. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
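
    A minimal sketch of the SNR definition used above (noise taken as the left-right difference of the beat-to-beat CBFV signals), assuming the signal is estimated from the channel average; the study's exact normalisation is not reproduced.

```python
import numpy as np

def bilateral_snr_db(cbfv_left: np.ndarray, cbfv_right: np.ndarray) -> float:
    """SNR in dB where 'noise' is the left-right difference of beat-to-beat
    mean CBFV and 'signal' is the channel average (an illustrative convention)."""
    noise = cbfv_left - cbfv_right
    signal = 0.5 * (cbfv_left + cbfv_right)
    # Use fluctuations about the mean so a DC offset does not inflate the SNR.
    p_signal = np.var(signal)
    p_noise = np.var(noise)
    return 10.0 * np.log10(p_signal / p_noise)
```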

  6. Improved signal to noise ratio and sensitivity of an infrared imaging video bolometer on large helical device by using an infrared periscope

    International Nuclear Information System (INIS)

    Pandya, Shwetang N.; Sano, Ryuichi; Peterson, Byron J.; Mukai, Kiyofumi; Enokuchi, Akito; Takeyama, Norihide

    2014-01-01

    An Infrared imaging Video Bolometer (IRVB) diagnostic is currently being used in the Large Helical Device (LHD) for studying the localization of radiation structures near the magnetic island and helical divertor X-points during plasma detachment and for 3D tomography. This research demands a high signal to noise ratio (SNR) and sensitivity to improve the temporal resolution for studying the evolution of radiation structures during plasma detachment, and a wide IRVB field of view (FoV) for tomography. Introduction of an infrared periscope allows achievement of a higher SNR and higher sensitivity, which, in turn, permits a twofold improvement in the temporal resolution of the diagnostic. Higher SNR along with a wide FoV is achieved simultaneously by reducing the separation of the IRVB detector (metal foil) from the bolometer's aperture and the LHD plasma. Altering the distances to meet the aforesaid requirements results in an increased separation between the foil and the IR camera. This leads to a degradation of the diagnostic performance in terms of its sensitivity by 1.5-fold. Using an infrared periscope to image the IRVB foil results in a 7.5-fold increase in the number of IR camera pixels imaging the foil. This improves the IRVB sensitivity, which depends on the square root of the number of IR camera pixels being averaged per bolometer channel. Despite the slower f-number (f/# = 1.35) and reduced transmission (τ0 = 89%, due to an increased number of lens elements) for the periscope, the diagnostic with an infrared periscope operational on LHD has improved in terms of sensitivity and SNR by factors of 1.4 and 4.5, respectively, as compared to the original diagnostic without a periscope (i.e., the IRVB foil being directly imaged by the IR camera through conventional optics). The bolometer's field of view has also doubled. The paper discusses these improvements in detail.

  7. Signal-to-noise ratio, contrast-to-noise ratio and their trade-offs with resolution in axial-shear strain elastography

    International Nuclear Information System (INIS)

    Thitaikumar, Arun; Krouskop, Thomas A; Ophir, Jonathan

    2007-01-01

    In axial-shear strain elastography, the local axial-shear strain resulting from the application of quasi-static axial compression to an inhomogeneous material is imaged. In this paper, we investigated the image quality of the axial-shear strain estimates in terms of the signal-to-noise ratio (SNR_asse) and contrast-to-noise ratio (CNR_asse) using simulations and experiments. Specifically, we investigated the influence of the system parameters (beamwidth, transducer element pitch and bandwidth), signal processing parameters (correlation window length and axial window shift) and mechanical parameters (Young's modulus contrast, applied axial strain) on the SNR_asse and CNR_asse. The results of the study show that the CNR_asse (SNR_asse) is maximum for axial-shear strain values in the range of 0.005-0.03. For the inclusion/background modulus contrast range considered in this study, the CNR_asse (SNR_asse) is maximum for applied axial compressive strain values in the range of 0.005%-0.03%. This suggests that the RF data acquired during axial elastography can be used to obtain axial-shear strain elastograms, since this range is typically used in axial elastography as well. The CNR_asse (SNR_asse) remains almost constant with an increase in the beamwidth, while it increases as the pitch increases. As expected, the axial shift had only a weak influence on the CNR_asse (SNR_asse) of the axial-shear strain estimates. We observed that the differential estimates of the axial-shear strain involve a trade-off between the CNR_asse (SNR_asse) and the spatial resolution only with respect to pitch and not with respect to the signal processing parameters. Simulation studies were performed to confirm this observation. The results demonstrate a trade-off between CNR_asse and the resolution with respect to pitch.

  8. Investigation of the signal-to-noise ratio on a state-of-the-art PET system: measurements with the EEC whole-body phantom

    International Nuclear Information System (INIS)

    Jaegel, M.; Adam, L.E.; Bellemann, M.E.; Zaers, J.; Trojan, H.; Brix, G.; Rauschnabel, K.

    1998-01-01

    Aim: The spatial resolution of PET scanners can be improved by using smaller detector elements. This approach, however, results in poorer counting statistics of the reconstructed images. Therefore, the aim of this study was to investigate the influence of different acquisition parameters on the signal-to-noise ratio (SNR) and thus to optimize PET image quality. Methods: The experiments were performed with the latest-generation whole-body PET system (ECAT Exact HR+, Siemens/CTI) using the standard 2D and 3D data acquisition parameters recommended by the manufacturer. The EEC whole-body phantom with different inserts was used to simulate patient examinations of the thorax. Emission and transmission scans were acquired with varying numbers of events and at different settings of the lower level energy discriminator. The influence of the number of counts on the SNR was parameterized using a simple model function. Results: For count rates frequently encountered in clinical PET studies, the emission scan has a stronger influence on the SNR in the reconstructed image than the transmission scan. The SNR can be improved by using a higher setting of the lower energy level provided that the total number of counts is kept constant. Based on the established model function, the relative duration of the emission scan with respect to the total acquisition time was optimized, yielding a value of about 75% for both the 2D and 3D mode. Conclusion: The presented phenomenological approach can readily be employed to optimize the SNR and thus the quality of PET images acquired at different scanners or with different examination protocols. (orig.)

  9. Measurement of the Radius of Neutron Stars with High Signal-to-noise Quiescent Low-mass X-Ray Binaries in Globular Clusters

    Science.gov (United States)

    Guillot, Sebastien; Servillat, Mathieu; Webb, Natalie A.; Rutledge, Robert E.

    2013-07-01

    This paper presents the measurement of the neutron star (NS) radius using the thermal spectra from quiescent low-mass X-ray binaries (qLMXBs) inside globular clusters (GCs). Recent observations of NSs have presented evidence that cold ultra dense matter—present in the core of NSs—is best described by "normal matter" equations of state (EoSs). Such EoSs predict that the radii of NSs, R_NS, are quasi-constant (within measurement errors of ~10%) for astrophysically relevant masses (M_NS > 0.5 M_⊙). The present work adopts this theoretical prediction as an assumption, and uses it to constrain a single R_NS value from five qLMXB targets with available high signal-to-noise X-ray spectroscopic data. Employing a Markov chain Monte-Carlo approach, we produce the marginalized posterior distribution for R_NS, constrained to be the same value for all five NSs in the sample. An effort was made to include all quantifiable sources of uncertainty into the uncertainty of the quoted radius measurement. These include the uncertainties in the distances to the GCs, the uncertainties due to the Galactic absorption in the direction of the GCs, and the possibility of a hard power-law spectral component for count excesses at high photon energy, which are observed in some qLMXBs in the Galactic plane. Using conservative assumptions, we found that the radius, common to the five qLMXBs and constant for a wide range of masses, lies in the low range of possible NS radii, R_NS = 9.1^{+1.3}_{−1.5} km (90% confidence). Such a value is consistent with low-R_NS equations of state. We compare this result with previous radius measurements of NSs from various analyses of different types of systems. In addition, we compare the spectral analyses of individual qLMXBs to previous works.

  10. Regional improvement of signal-to-noise and contrast-to-noise ratios in dual-screen CR chest imaging - a phantom study

    International Nuclear Information System (INIS)

    Liu Xinming; Shaw, Chris C.

    2001-01-01

    The improvement of signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in dual-screen computed radiography (CR) has been investigated for various regions in images of an anthropomorphic chest phantom. With the dual-screen CR technique, two image plates are placed in a cassette and exposed together during imaging. The exposed plates are separately scanned to form a front image and a back image, which are then registered and superimposed to form a composite image with improved SNRs and CNRs. The improvement can be optimized by applying specifically selected weighting factors during superimposition. In this study, dual-screen CR images of an anthropomorphic chest phantom were acquired and formed with four different combinations of standard resolution (ST) and high-resolution (HR) screens: ST-ST, ST-HR, HR-ST, and HR-HR. SNRs and their improvements were measured and compared over twelve representative regions-of-interest (ROIs) in these images. A 19.1%-45.7% increase in the SNR was observed, depending on the ROI and screen combination used. The optimal weighting factors were found to vary by only 4.5%-12.4%. The largest improvement was found in the lung field for all screen combinations. Improvement of CNRs was investigated over two ROIs in the lung field using the rib bones as the contrast objects, and a 29.2%-43.9% improvement of the CNR was observed. Among the four screen combinations, ST-ST yielded the largest SNR and CNR improvements, followed in order by HR-ST, HR-HR, and ST-HR. The HR-ST combination yielded the lowest spatial variation of the optimal weighting factors, with improved SNRs and CNRs close to those of the ST-ST combination.
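
    The optimal weighting step can be illustrated with the standard inverse-variance result for combining two registered images of the same scene with independent noise; the numbers below are invented, and the paper's region-dependent weighting scheme is not reproduced.

```python
import numpy as np

def optimal_weight(snr_front: float, snr_back: float) -> float:
    """Weight w for composite = w*front + (1-w)*back that maximises composite
    SNR, assuming both plates record the same signal with independent noise.
    For equal signals this reduces to inverse-variance weighting."""
    var_f, var_b = 1.0 / snr_front**2, 1.0 / snr_back**2
    return (1.0 / var_f) / (1.0 / var_f + 1.0 / var_b)

w = optimal_weight(snr_front=40.0, snr_back=25.0)
combined_snr = np.hypot(40.0, 25.0)   # sqrt(SNR_f^2 + SNR_b^2) at the optimum
print(f"w_front = {w:.2f}, composite SNR ~ {combined_snr:.1f}")
```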

  11. Signal-to-Noise Ratio in PVT Performance as a Cognitive Measure of the Effect of Sleep Deprivation on the Fidelity of Information Processing.

    Science.gov (United States)

    Chavali, Venkata P; Riedy, Samantha M; Van Dongen, Hans P A

    2017-03-01

    There is a long-standing debate about the best way to characterize performance deficits on the psychomotor vigilance test (PVT), a widely used assay of cognitive impairment in human sleep deprivation studies. Here, we address this issue through the theoretical framework of the diffusion model and propose to express PVT performance in terms of signal-to-noise ratio (SNR). From the equations of the diffusion model for one-choice reaction-time tasks, we derived an expression for a novel SNR metric for PVT performance. We also showed that LSNR, a commonly used log transformation of SNR, can be reasonably well approximated by a linear function of the mean response speed, LSNRapx. We computed SNR, LSNR, LSNRapx, and the number of lapses for 1284 PVT sessions collected from 99 healthy young adults who participated in laboratory studies with 38 hr of total sleep deprivation. All four PVT metrics captured the effects of time awake and time of day on cognitive performance during sleep deprivation. The LSNR had the best psychometric properties, including high sensitivity, high stability, high degree of normality, absence of floor and ceiling effects, and no bias in the meaning of change scores related to absolute baseline performance. The theoretical motivation of SNR and LSNR permits quantitative interpretation of PVT performance as an assay of the fidelity of information processing in cognition. Furthermore, with a conceptual and statistical meaning grounded in information theory and generalizable across scientific fields, LSNR in particular is a useful tool for systems-integrated fatigue risk management. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
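
    A sketch of the LSNRapx idea, under the stated result that LSNR is approximately an affine function of mean response speed; the calibration coefficients a and b come from the paper's diffusion-model derivation and are left as placeholders here.

```python
import numpy as np

def lsnr_apx(rt_ms: np.ndarray, a: float = 0.0, b: float = 1.0) -> float:
    """Linear approximation of LSNR from mean response speed (1/RT).
    a and b are placeholder calibration coefficients, not the paper's values."""
    mean_speed = np.mean(1000.0 / rt_ms)   # response speed in 1/s
    return a + b * mean_speed

# Example: PVT reaction times (ms) from one hypothetical session.
rts = np.array([250, 310, 275, 500, 290, 260, 610, 300], dtype=float)
print(f"LSNRapx = {lsnr_apx(rts):.2f} (up to the affine calibration a, b)")
```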

  12. The ultimate intrinsic signal-to-noise ratio of loop- and dipole-like current patterns in a realistic human head model.

    Science.gov (United States)

    Pfrommer, Andreas; Henning, Anke

    2018-03-13

    The ultimate intrinsic signal-to-noise ratio (UISNR) represents an upper bound for the achievable SNR of any receive coil. To reach this threshold a complete basis set of equivalent surface currents is required. This study systematically investigated to what extent either loop- or dipole-like current patterns are able to reach the UISNR threshold in a realistic human head model between 1.5 T and 11.7 T. Based on this analysis, we derived guidelines for coil designers to choose the best array element at a given field strength. Moreover, we present ideal current patterns yielding the UISNR in a realistic body model. We distributed generic current patterns on a cylindrical and helmet-shaped surface around a realistic human head model. We excited electromagnetic fields in the human head by using eigenfunctions of the spherical and cylindrical Helmholtz operator. The electromagnetic field problem was solved by a fast volume integral equation solver. At 7 T and above, adding curl-free current patterns to divergence-free current patterns substantially increased the SNR in the human head (locally >20%). This was true for the helmet-shaped and the cylindrical surface. On the cylindrical surface, dipole-like current patterns had high SNR performance in central regions at ultra-high field strength. The UISNR increased superlinearly with B0 in most parts of the cerebrum but only sublinearly in the periphery of the human head. The combination of loop and dipole elements could enhance the SNR performance in the human head at ultra-high field strength. © 2018 International Society for Magnetic Resonance in Medicine.

  13. Design Optimization and Fabrication of High-Sensitivity SOI Pressure Sensors with High Signal-to-Noise Ratios Based on Silicon Nanowire Piezoresistors

    Directory of Open Access Journals (Sweden)

    Jiahong Zhang

    2016-10-01

    Full Text Available In order to meet the requirements of high sensitivity and signal-to-noise ratio (SNR), this study develops and optimizes a piezoresistive pressure sensor using double silicon nanowires (SiNWs) as the piezoresistive sensing elements. First, the ANSYS finite element method and voltage noise models are adopted to optimize the sensor size and the sensor output (sensitivity, voltage noise and SNR). As a result, the released double-SiNW sensor has 1.2 times the sensitivity of the single-SiNW sensor, which is consistent with the experimental result. Our results also show that both the sensitivity and the SNR are closely related to the geometry of the SiNW and its doping concentration. To achieve high performance, a p-type implantation of 5 × 10^18 cm^−3, a 10 µm long SiNW piezoresistor with a 1400 nm × 100 nm cross-section, and a 6 µm thick diaphragm of 200 µm × 200 µm are required. The proposed SiNW pressure sensor is then fabricated using the standard complementary metal-oxide-semiconductor (CMOS) lithography process as well as a wet-etch release process. This SiNW pressure sensor produces a change in the voltage output when external pressure is applied. The experimental results show that the pressure sensor has a high sensitivity of 495 mV/V·MPa in the range of 0-100 kPa. Nevertheless, the performance of the pressure sensor is influenced by temperature drift. Finally, for the sake of obtaining accurate and complete information over wide temperature and pressure ranges, a data fusion technique is proposed based on the back-propagation (BP) neural network, which is improved by the particle swarm optimization (PSO) algorithm. The particle swarm optimization–back-propagation (PSO–BP) model is implemented in hardware using a 32-bit STMicroelectronics (STM32) microcontroller. The results of calibration and test experiments clearly prove that the PSO–BP neural network can be effectively applied

  14. Assessment of the Speech Intelligibility Performance of Post Lingual Cochlear Implant Users at Different Signal-to-Noise Ratios Using the Turkish Matrix Test

    Directory of Open Access Journals (Sweden)

    Zahra Polat

    2016-10-01

    Full Text Available Background: Spoken word recognition and speech perception tests in quiet are routinely used to assess the benefit that child and adult cochlear implant users receive from their devices. Cochlear implant users generally demonstrate high-level performance on these test materials, as they are able to achieve high speech perception ability in quiet situations. Although these test materials provide valuable information regarding Cochlear Implant (CI) users' performance in optimal listening conditions, they do not give realistic information regarding performance in adverse listening conditions, which are the norm in everyday environments. Aims: The aim of this study was to assess the speech intelligibility performance of postlingual CI users in the presence of noise at different signal-to-noise ratios with the Matrix Test developed for the Turkish language. Study Design: Cross-sectional study. Methods: Thirty adult postlingual implant users, who had been using their implants for a minimum of one year, were evaluated with the Turkish Matrix Test. Subjects' speech intelligibility was measured using the adaptive and non-adaptive Matrix Test in quiet and noisy environments. Results: The results of the study show a correlation between Pure Tone Average (PTA) values of the subjects and Matrix Test Speech Reception Threshold (SRT) values in quiet. Hence, it is possible to assess the PTA values of CI users using the Matrix Test as well. However, no correlations were found between Matrix SRT values in quiet and Matrix SRT values in noise. Similarly, the correlation between PTA values and intelligibility scores in noise was also not significant. Therefore, it may not be possible to assess the intelligibility performance of CI users using test batteries performed in quiet conditions. Conclusion: The Matrix Test can be used to assess the benefit CI users gain from their systems in everyday life, since it is possible to perform

  15. Self-consistent signal-to-noise analysis of the statistical behavior of analog neural networks and enhancement of the storage capacity

    Science.gov (United States)

    Shiino, Masatoshi; Fukai, Tomoki

    1993-08-01

    Based on the self-consistent signal-to-noise analysis (SCSNA), which is capable of dealing with analog neural networks with a wide class of transfer functions, enhancement of the storage capacity of associative memory and the related statistical properties of neural networks are studied for random memory patterns. Two types of transfer functions with a threshold parameter θ are considered, which are derived from the sigmoidal one to represent the output of three-state neurons. Neural networks having a monotonically increasing transfer function F_M, with F_M(u) = sgn u for |u| > θ and F_M(u) = 0 for |u| ≤ θ, are analyzed as a function of the loading rate α (α = p/N, p: number of memory patterns); the results imply a reduction of the number of spurious states. The behavior of the storage capacity with changing θ is qualitatively the same as that of the Ising spin neural networks with varying temperature. On the other hand, the nonmonotonic transfer function F_NM, with F_NM(u) = sgn u for |u| ≤ θ and F_NM(u) = 0 for |u| > θ, gives rise to remarkable features in several respects. First, it yields a large enhancement of the storage capacity compared with the Amit-Gutfreund-Sompolinsky (AGS) value: with decreasing θ from θ = ∞, the storage capacity α_c of such a network increases from the AGS value (≈0.14) to attain its maximum value of ≈0.42 at θ ≈ 0.7, and afterwards decreases to vanish at θ = 0. Whereas for θ ≳ 1 the storage capacity α_c coincides with the value determined by the SCSNA as the upper bound of α ensuring the existence of retrieval solutions, for smaller θ the solution with r ≠ 0 (i.e., finite width of the local field distribution), which is implied by the order-parameter equations of the SCSNA, disappears at a certain critical loading rate α_0, and for α < α_0 only solutions with r = 0+ remain. As a consequence, memory retrieval without errors becomes possible even in the saturation limit α ≠ 0. Results of the computer simulations on the statistical properties of the novel phase with α < α_0 are also presented. The effect of self-couplings on the storage capacity is also analyzed for the two types of networks. It is conspicuous for the networks with F_NM, where the self-couplings increase the stability of

  16. A novel technique for determination of two dimensional signal-to-noise ratio improvement factor of an antiscatter grid in digital radiography

    Science.gov (United States)

    Nøtthellen, Jacob; Konst, Bente; Abildgaard, Andreas

    2014-08-01

    Purpose: to present a new and simplified method for pixel-wise determination of the signal-to-noise ratio improvement factor K_SNR of an antiscatter grid, when used with a digital imaging system. The method was based on approximations of published formulas. The simplified estimate of K²_SNR may be used as a decision tool for whether or not to use an antiscatter grid. Methods: the primary transmission of the grid T_p was determined with and without a phantom present using a pattern of beam stops. The Bucky factor B was measured with and without a phantom present. Hence K²_SNR maps were created based on T_p and B. A formula was developed to calculate K²_SNR from the measured B without using the measured T_p. The formula was applied to two exposures of anthropomorphic phantoms, adult legs and baby chest, and to two homogeneous poly[methyl methacrylate] (PMMA) phantoms, 5 cm and 10 cm thick. The results from the anthropomorphic phantoms were compared to those based on the beam stop method. The results for the PMMA phantoms were compared to a study that used a contrast-detail phantom. Results: 2D maps of K²_SNR over the entire adult legs and baby chest phantoms were created. The maps indicate that it is advantageous to use the antiscatter grid for imaging of the adult legs. For baby chest imaging the antiscatter grid is not recommended if only the lung regions are of interest. The K²_SNR maps based on the new method correspond to those from the beam stop method, and the K²_SNR values from the homogeneous phantoms arising from the two different approaches also agreed well with each other. Conclusion: a method to measure 2D K²_SNR associated with grid use in a digital radiography system was developed and validated. The proposed method requires four exposures and the use of a simple formula. It is fast and provides adequate estimates of K²_SNR.
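
    For readers who want to reproduce the mapping step, the commonly quoted grid relation K_SNR = T_p·√B gives pixel-wise K²_SNR maps directly from the two measured quantities; this is the textbook relation, not the paper's simplified B-only formula.

```python
import numpy as np

def k2_snr(tp: np.ndarray, bucky: np.ndarray) -> np.ndarray:
    """Pixel-wise squared SNR improvement factor of an anti-scatter grid,
    using K_SNR = T_p * sqrt(B), i.e. K^2 = T_p^2 * B (T_p: primary
    transmission, B: Bucky factor)."""
    return tp**2 * bucky

# Hypothetical 2x2 maps: the grid helps (K^2 > 1) where scatter dominates.
tp = np.array([[0.70, 0.72], [0.68, 0.71]])
bucky = np.array([[3.5, 2.0], [4.0, 1.5]])
print(k2_snr(tp, bucky))   # values > 1 favour using the grid
```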

  17. MEASUREMENT OF THE RADIUS OF NEUTRON STARS WITH HIGH SIGNAL-TO-NOISE QUIESCENT LOW-MASS X-RAY BINARIES IN GLOBULAR CLUSTERS

    Energy Technology Data Exchange (ETDEWEB)

    Guillot, Sebastien; Rutledge, Robert E. [Department of Physics, McGill University, 3600 rue University, Montreal, QC, H2X-3R4 (Canada); Servillat, Mathieu [Laboratoire AIM (CEA/DSM/IRFU/SAp, CNRS, Universite Paris Diderot), CEA Saclay, Bat. 709, F-91191 Gif-sur-Yvette (France); Webb, Natalie A., E-mail: guillots@physics.mcgill.ca, E-mail: rutledge@physics.mcgill.ca [Universite de Toulouse, UPS-OMP, IRAP, Toulouse (France)

    2013-07-20

    This paper presents the measurement of the neutron star (NS) radius using the thermal spectra from quiescent low-mass X-ray binaries (qLMXBs) inside globular clusters (GCs). Recent observations of NSs have presented evidence that cold ultra dense matter, present in the core of NSs, is best described by "normal matter" equations of state (EoSs). Such EoSs predict that the radii of NSs, R_NS, are quasi-constant (within measurement errors of ≈10%) for astrophysically relevant masses (M_NS > 0.5 M_⊙). The present work adopts this theoretical prediction as an assumption, and uses it to constrain a single R_NS value from five qLMXB targets with available high signal-to-noise X-ray spectroscopic data. Employing a Markov chain Monte-Carlo approach, we produce the marginalized posterior distribution for R_NS, constrained to be the same value for all five NSs in the sample. An effort was made to include all quantifiable sources of uncertainty into the uncertainty of the quoted radius measurement. These include the uncertainties in the distances to the GCs, the uncertainties due to the Galactic absorption in the direction of the GCs, and the possibility of a hard power-law spectral component for count excesses at high photon energy, which are observed in some qLMXBs in the Galactic plane. Using conservative assumptions, we found that the radius, common to the five qLMXBs and constant for a wide range of masses, lies in the low range of possible NS radii, R_NS = 9.1^{+1.3}_{−1.5} km (90% confidence). Such a value is consistent with low-R_NS equations of state. We compare this result with previous radius measurements of NSs from various analyses of different types of systems. In addition, we compare the spectral analyses of individual qLMXBs to previous works.

  18. Correlation between the signal-to-noise ratio improvement factor (KSNR) and clinical image quality for chest imaging with a computed radiography system

    International Nuclear Information System (INIS)

    Moore, C S; Wood, T J; Saunderson, J R; Beavis, A W

    2015-01-01

    This work assessed the appropriateness of the signal-to-noise ratio improvement factor (K_SNR) as a metric for the optimisation of computed radiography (CR) of the chest. The results of a previous study, in which four experienced image evaluators graded computer-simulated chest images using a visual grading analysis scoring (VGAS) scheme to quantify the benefit of using an anti-scatter grid, were used for the clinical image quality measurement (number of simulated patients = 80). The K_SNR was used to calculate the improvement in physical image quality measured in a physical chest phantom. K_SNR correlation with VGAS was assessed as a function of chest region (lung, spine and diaphragm/retrodiaphragm), and as a function of x-ray tube voltage in a given chest region. The latter correlation was determined by the Pearson correlation coefficient. VGAS and K_SNR image quality metrics demonstrated no correlation in the lung region but did show correlation in the spine and diaphragm/retrodiaphragmatic regions. However, the two metrics did not agree as a function of tube voltage in any region: a Pearson correlation coefficient (R) of −0.93 (p = 0.015) was found for lung, a coefficient (R) of −0.95 (p = 0.46) for spine, and a coefficient (R) of −0.85 (p = 0.015) for diaphragm. All demonstrate strong negative correlations, indicating conflicting results, i.e. K_SNR increases with tube voltage but VGAS decreases. Medical physicists should use the K_SNR metric with caution when assessing any potential improvement in clinical chest image quality when introducing an anti-scatter grid for CR imaging, especially in the lung region. This metric may also be a limited descriptor of clinical chest image quality as a function of tube voltage when a grid is used routinely. (paper)

  19. Comparison of entrance exposure and signal-to-noise ratio between an SBDX prototype and a wide-beam cardiac angiographic system

    International Nuclear Information System (INIS)

    Speidel, Michael A.; Wilfley, Brian P.; Star-Lack, Josh M.; Heanue, Joseph A.; Betts, Timothy D.; Van Lysel, Michael S.

    2006-01-01

    The scanning-beam digital x-ray (SBDX) system uses an inverse geometry, narrow x-ray beam, and a 2-mm thick CdTe detector to improve the dose efficiency of the coronary angiographic procedure. Entrance exposure and large-area iodine signal-to-noise ratio (SNR) were measured with the SBDX prototype and compared to that of a clinical cardiac interventional system with image intensifier (II) and charge coupled device (CCD) camera (Philips H5000, MRC-200 x-ray tube, 72 kWp max). Phantoms were 18.6-35.0 cm acrylic with an iohexol-equivalent disk placed at midthickness (35 mg/cm² iodine radiographic density). Imaging was performed at 15 frame/s, with the disk at mechanical isocenter and an 11-cm object-plane field width. The II/CCD system was operated in cine mode with automatic exposure control. With the SBDX prototype at maximum x-ray output (120 kVp, 24.3 kWp), the SBDX SNR was 107%-69% of the II/CCD SNR, depending on phantom thickness, and the SBDX entrance exposure rate was 10.7-9.3 R/min (9.4-8.2 cGy/min air kerma). For phantoms where an equal-kVp imaging comparison was possible (≥23.3 cm), the SBDX SNR ranged from 47% to 69% of the II/CCD SNR while delivering 6% to 9% of the II/CCD entrance exposure rate. From these measurements it was determined that the relative SBDX entrance exposure at equal SNR would be 31%-16%. Results were consistent with a model for relative entrance exposure at equal SNR, which predicted a 3-7 times reduction in entrance exposure due to SBDX's comparatively low scatter fraction (5.5%-8.1% measured, including off-focus radiation), high detector detective quantum efficiency (66%-73%, measured from 70 to 120 kVp), and large entrance field area (1.7x-2.3x, for the same object-plane field width). With improvements to the system geometry, detector, and x-ray source, SBDX technology is projected to achieve conventional cine-quality SNR over a full range of patient thicknesses, with 5-10 times lower skin dose.

  20. Wavelet transform for the evaluation of peak intensities in flow-injection analysis

    NARCIS (Netherlands)

    Bos, M.; Hoogendam, E.

    1992-01-01

    The application of the wavelet transform in the determination of peak intensities in flow-injection analysis was studied with regard to its properties of minimizing the effects of noise and baseline drift. The results indicate that for white noise and a favourable peak shape a signal-to-noise ratio

  1. Wavelet transform for the evaluation of peak intensities in flow-injection analysis

    NARCIS (Netherlands)

    Bos, M.; Hoogendam, E.

    1992-01-01

    The application of the wavelet transform in the determination of peak intensities in flow-injection analysis was studied with regard to its properties of minimizing the effects of noise and baseline drift. The results indicate that for white noise and a favourable peak shape a signal-to-noise ratio
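
    An off-the-shelf analogue of this wavelet-based approach (not the authors' code) is SciPy's ridge-line CWT peak finder, which is likewise robust to white noise and slow baseline drift; the flow-injection trace below is synthetic.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

rng = np.random.default_rng(1)

# Synthetic flow-injection trace: one Gaussian peak on a drifting baseline.
t = np.arange(1000)
trace = (5.0 * np.exp(-((t - 400) / 30.0) ** 2)   # injection peak near index 400
         + 0.002 * t                              # slow baseline drift
         + rng.normal(0.0, 0.3, t.size))          # white noise

# Ridge lines across a range of wavelet widths suppress noise and drift.
peak_idx = find_peaks_cwt(trace, widths=np.arange(10, 60))
print(peak_idx)   # should include an index near 400
```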

  2. Multi-Scale Peak and Trough Detection Optimised for Periodic and Quasi-Periodic Neuroscience Data.

    Science.gov (United States)

    Bishop, Steven M; Ercole, Ari

    2018-01-01

    The reliable detection of peaks and troughs in physiological signals is essential to many investigative techniques in medicine and computational biology. Analysis of the intracranial pressure (ICP) waveform is a particular challenge due to multi-scale features, a changing morphology over time and signal-to-noise limitations. Here we present an efficient peak and trough detection algorithm that extends the scalogram approach of Scholkmann et al., and results in greatly improved algorithm runtime performance. Our improved algorithm (modified Scholkmann) was developed and analysed in MATLAB R2015b. Synthesised waveforms (periodic, quasi-periodic and chirp sinusoids) were degraded with white Gaussian noise to achieve signal-to-noise ratios down to 5 dB and were used to compare the performance of the original Scholkmann and modified Scholkmann algorithms. The modified Scholkmann algorithm has false-positive (0%) and false-negative (0%) detection rates identical to the original Scholkmann when applied to our test suite. Actual compute time for a 200-run Monte Carlo simulation over a multicomponent noisy test signal was 40.96 ± 0.020 s (mean ± 95% CI) for the original Scholkmann and 1.81 ± 0.003 s (mean ± 95% CI) for the modified Scholkmann, demonstrating the expected improvement in runtime complexity from O(n²) to O(n). The accurate interpretation of waveform data to identify peaks and troughs is crucial in signal parameterisation, feature extraction and waveform identification tasks. Modification of a standard scalogram technique has produced a robust algorithm with linear computational complexity that is particularly suited to the challenges presented by large, noisy physiological datasets. The algorithm is optimised through a single parameter and can identify sub-waveform features with minimal additional overhead, and is easily adapted to run in real time on commodity hardware.
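
    A compact, simplified sketch of the original scalogram (AMPD-style) approach that the modified algorithm improves upon; it uses a boolean local-maxima scalogram and runs in the slower O(nL) form, so it illustrates the technique rather than the linear-time modification described above.

```python
import numpy as np

def ampd_peaks(x: np.ndarray) -> np.ndarray:
    """Simplified multiscale peak detection in the spirit of Scholkmann et al.
    Builds a boolean local-maxima scalogram over window scales, picks the scale
    with the most maxima, and keeps points that are maxima at every scale up
    to it. O(n*L) time, i.e. the original (slow) formulation."""
    n = x.size
    L = n // 2 - 1
    lms = np.zeros((L, n), dtype=bool)
    for k in range(1, L + 1):                      # scale k = comparison distance
        for i in range(k, n - k):
            lms[k - 1, i] = x[i] > x[i - k] and x[i] > x[i + k]
    gamma = lms.sum(axis=1)                        # maxima count per scale
    lam = int(np.argmax(gamma)) + 1                # dominant scale
    peaks = np.where(lms[:lam, :].all(axis=0))[0]  # maxima at all scales <= lam
    return peaks

x = np.sin(np.linspace(0, 6 * np.pi, 300)) + 0.1 * np.random.default_rng(2).normal(size=300)
print(ampd_peaks(x))   # indices near the sinusoid crests
```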

  3. Effects of exposure equalization on image signal-to-noise ratios in digital mammography: A simulation study with an anthropomorphic breast phantom

    Energy Technology Data Exchange (ETDEWEB)

    Liu Xinming; Lai Chaojen; Whitman, Gary J.; Geiser, William R.; Shen Youtao; Yi Ying; Shaw, Chris C. [Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States); Department of Diagnostic Radiology, University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States); Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2011-12-15

    Purpose: The scan equalization digital mammography (SEDM) technique combines slot scanning and exposure equalization to improve low-contrast performance of digital mammography in dense tissue areas. In this study, full-field digital mammography (FFDM) images of an anthropomorphic breast phantom acquired with an anti-scatter grid at various exposure levels were superimposed to simulate SEDM images and investigate the improvement of low-contrast performance as quantified by primary signal-to-noise ratios (PSNRs). Methods: We imaged an anthropomorphic breast phantom (Gammex 169 "Rachel," Gammex RMI, Middleton, WI) at various exposure levels using a FFDM system (Senographe 2000D, GE Medical Systems, Milwaukee, WI). The exposure equalization factors were computed based on a standard FFDM image acquired in the automatic exposure control (AEC) mode. The equalized image was simulated and constructed by superimposing a selected set of FFDM images acquired at 2, 1, 1/2, 1/4, 1/8, 1/16, and 1/32 times of exposure levels to the standard AEC timed technique (125 mAs) using the equalization factors computed for each region. Finally, the equalized image was renormalized regionally with the exposure equalization factors to result in an appearance similar to that with standard digital mammography. Two sets of FFDM images were acquired to allow for two identically, but independently, formed equalized images to be subtracted from each other to estimate the noise levels. Similarly, two identically but independently acquired standard FFDM images were subtracted to estimate the noise levels. Corrections were applied to remove the excess system noise accumulated during image superimposition in forming the equalized image. PSNRs over the compressed area of breast phantom were computed and used to quantitatively study the effects of exposure equalization on low-contrast performance in digital mammography. Results: We found that the highest achievable PSNR improvement

  4. Effects of exposure equalization on image signal-to-noise ratios in digital mammography: A simulation study with an anthropomorphic breast phantom

    International Nuclear Information System (INIS)

    Liu Xinming; Lai Chaojen; Whitman, Gary J.; Geiser, William R.; Shen Youtao; Yi Ying; Shaw, Chris C.

    2011-01-01

    Purpose: The scan equalization digital mammography (SEDM) technique combines slot scanning and exposure equalization to improve low-contrast performance of digital mammography in dense tissue areas. In this study, full-field digital mammography (FFDM) images of an anthropomorphic breast phantom acquired with an anti-scatter grid at various exposure levels were superimposed to simulate SEDM images and investigate the improvement of low-contrast performance as quantified by primary signal-to-noise ratios (PSNRs). Methods: We imaged an anthropomorphic breast phantom (Gammex 169 "Rachel," Gammex RMI, Middleton, WI) at various exposure levels using a FFDM system (Senographe 2000D, GE Medical Systems, Milwaukee, WI). The exposure equalization factors were computed based on a standard FFDM image acquired in the automatic exposure control (AEC) mode. The equalized image was simulated and constructed by superimposing a selected set of FFDM images acquired at 2, 1, 1/2, 1/4, 1/8, 1/16, and 1/32 times of exposure levels to the standard AEC timed technique (125 mAs) using the equalization factors computed for each region. Finally, the equalized image was renormalized regionally with the exposure equalization factors to result in an appearance similar to that with standard digital mammography. Two sets of FFDM images were acquired to allow for two identically, but independently, formed equalized images to be subtracted from each other to estimate the noise levels. Similarly, two identically but independently acquired standard FFDM images were subtracted to estimate the noise levels. Corrections were applied to remove the excess system noise accumulated during image superimposition in forming the equalized image. PSNRs over the compressed area of breast phantom were computed and used to quantitatively study the effects of exposure equalization on low-contrast performance in digital mammography. Results: We found that the highest achievable PSNR improvement factor was 1.89 for

  5. [A new peak detection algorithm of Raman spectra].

    Science.gov (United States)

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors proposed a new Raman peak recognition method named bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method through MATLAB, and then tested the algorithm with real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the algorithm, while it is 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy with the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and the standard deviations of the peak position identification error of the algorithm are both less than that of the continuous wavelet transform method. Simulation analysis and experimental verification prove that the new algorithm possesses the following advantages: no needs of human intervention, no needs of de-noising and background removal operation, higher recognition speed and higher recognition accuracy. The proposed algorithm is operable in Raman peak identification.

  6. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.
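
    The article points readers to open-source tooling (the authors work in the R ecosystem); as a language-neutral illustration, one classic internal-consistency estimate, Cronbach's alpha, is small enough to compute by hand.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-item test taken by 6 respondents.
scores = np.array([
    [4, 5, 4, 4, 5],
    [2, 3, 2, 3, 2],
    [5, 5, 4, 5, 5],
    [3, 3, 3, 2, 3],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```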

  7. Habitat-induced degradation of sound signals: Quantifying the effects of communication sounds and bird location on blur ratio, excess attenuation, and signal-to-noise ratio in blackbird song

    DEFF Research Database (Denmark)

    Dabelsteen, T.; Larsen, O N; Pedersen, Simon Boel

    1993-01-01

    The habitat-induced degradation of the full song of the blackbird (Turdus merula) was quantified by measuring excess attenuation, reduction of the signal-to-noise ratio, and blur ratio, the latter measure representing the degree of blurring of amplitude and frequency patterns over time. All three measures were calculated from changes of the amplitude functions (i.e., envelopes) of the degraded songs using a new technique which allowed a compensation for the contribution of the background noise to the amplitude values. Representative songs were broadcast in a deciduous forest without leaves...

  8. Signal to Noise Ratio (SNR) Enhancement Comparison of Impulse-, Coding- and Novel Linear-Frequency-Chirp-Based Optical Time Domain Reflectometry (OTDR) for Passive Optical Network (PON) Monitoring Based on Unique Combinations of Wavelength Selective Mirrors

    Directory of Open Access Journals (Sweden)

    Christopher M. Bentz

    2014-03-01

    Full Text Available We compare optical time domain reflectometry (OTDR) techniques based on conventional single impulses, coding and linear frequency chirps concerning their signal to noise ratio (SNR) enhancements, by measurements in a passive optical network (PON) with a maximum one-way attenuation of 36.6 dB. A total of six subscribers, each represented by a unique mirror pair with narrow reflection bandwidths, are installed within a distance of 14 m. The spatial resolution of the OTDR set-up is 3.0 m.

  9. Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data

    International Nuclear Information System (INIS)

    Kacprzak, T.; Kirk, D.; Friedrich, O.; Amara, A.; Refregier, A.

    2016-01-01

    Shear peak statistics has gained a lot of attention recently as a practical alternative to two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg² field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4. Peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. Lastly, we discuss prospects for future peak statistics analysis with upcoming DES data.
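
    The core measurement, counting aperture-mass peaks per signal-to-noise bin, can be sketched as follows; the 3x3 local-maximum convention and the noise-only map are illustrative, and the DES pipeline details are not reproduced.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peak_abundance(snr_map: np.ndarray, bins: np.ndarray) -> np.ndarray:
    """Histogram of map peaks by S/N, where a 'peak' is a pixel equal to the
    maximum of its 3x3 neighbourhood (a common convention, assumed here)."""
    is_peak = snr_map == maximum_filter(snr_map, size=3)
    peak_snr = snr_map[is_peak]
    counts, _ = np.histogram(peak_snr, bins=bins)
    return counts

rng = np.random.default_rng(3)
snr_map = rng.normal(0.0, 1.0, (512, 512))        # noise-only toy map
print(peak_abundance(snr_map, bins=np.arange(0.0, 4.5, 0.5)))
```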

  10. Explaining the price of oil 1971–2014: The need to use reliable data on oil discovery and to account for ‘mid-point’ peak

    International Nuclear Information System (INIS)

    Bentley, Roger; Bentley, Yongmei

    2015-01-01

    This paper explains, in broad terms, the price of oil from 1971 to 2014 and focuses on the large price increases after 1973 and 2004. The explanation for these increases includes the quantity of conventional oil (i.e. oil in fields) discovered, combined with the decline in production of this oil that occurs typically once ‘mid-point’ is passed. Many past explanations of oil price have overlooked these two constraints, and hence provided insufficient explanations of oil price. Reliable data on conventional oil discovery cannot come from public-domain proved (‘1P’) oil reserves, as such data are very misleading. Instead oil industry backdated proved-plus-probable (‘2P’) data must be used. It is recognised that accessing 2P data can be expensive, or difficult. The ‘mid-point’ peak of conventional oil production results from a region's field-size distribution, its fall-off in oil discovery, and the physics of field decline. In terms of the future price of oil, estimates of the global recoverable resource of conventional oil show that the oil price will remain high on average, unless dramatic changes occur in the volume of production and cost of non-conventional oils, or if the overall demand for oil were to decline. The paper concludes with policy recommendations. - Highlights: • We show that understanding the oil price is assisted by reliable data on oil discovery. • These data need to be combined with the ‘peak at mid-point’ concept. • Results show that the world has probably entered an era of constrained oil supply. • Oil price stays high unless non-conventional supply, or demand, change significantly.

  11. Assessment of peak oxygen uptake during handcycling: Test-retest reliability and comparison of a ramp-incremented and perceptually-regulated exercise test.

    Science.gov (United States)

    Hutchinson, Michael J; Paulson, Thomas A W; Eston, Roger; Goosey-Tolfrey, Victoria L

    2017-01-01

    To examine the reliability of a perceptually-regulated maximal exercise test (PRETmax) to measure peak oxygen uptake (VO2peak) during handcycle exercise, and to compare peak responses to those derived from a ramp-incremented protocol (RAMP). Twenty recreationally active individuals (14 male, 6 female) completed four trials across a 2-week period, using a randomised, counterbalanced design. Participants completed two RAMP protocols (20 W·min⁻¹) in week 1, followed by two PRETmax in week 2, or vice versa. The PRETmax comprised five 2-min stages clamped at Ratings of Perceived Exertion (RPE) 11, 13, 15, 17 and 20. Participants changed power output (PO) as often as required to maintain the target RPE. Gas exchange variables (oxygen uptake, carbon dioxide production, minute ventilation), heart rate (HR) and PO were collected throughout. Differentiated RPE were collected at the end of each stage throughout trials. For relative VO2peak, the coefficient of variation (CV) was equal to 4.1% and 4.8%, with ICC(3,1) of 0.92 and 0.85 for repeated measures from PRETmax and RAMP, respectively. Measurement error was 0.15 L·min⁻¹ and 2.11 ml·kg⁻¹·min⁻¹ in PRETmax and 0.16 L·min⁻¹ and 2.29 ml·kg⁻¹·min⁻¹ during RAMP for determining absolute and relative VO2peak, respectively. The difference in VO2peak between PRETmax and RAMP tended towards statistical significance (26.2 ± 5.1 versus 24.3 ± 4.0 ml·kg⁻¹·min⁻¹, P = 0.055). The 95% LoA were −1.9 ± 4.1 (−9.9 to 6.2) ml·kg⁻¹·min⁻¹. The PRETmax can be used as a reliable test to measure VO2peak during handcycle exercise in recreationally active participants. Whilst PRETmax tended towards significantly greater VO2peak values than RAMP, the difference is smaller than the measurement error of determining VO2peak from PRETmax and RAMP.
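
    The two reliability statistics quoted above can be computed from an (n subjects × k trials) matrix as follows; the data are invented, and the study's exact CV definition (a typical-error form is assumed) is not reproduced.

```python
import numpy as np

def icc_3_1(x: np.ndarray) -> float:
    """ICC(3,1): two-way mixed model, single measures, consistency."""
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_trial = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ms_subj = ss_subj / (n - 1)
    ms_err = (ss_total - ss_subj - ss_trial) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

def cv_percent(x: np.ndarray) -> float:
    """Typical-error CV for two trials: SD(diff)/sqrt(2) as % of the grand mean."""
    diff = x[:, 1] - x[:, 0]
    return diff.std(ddof=1) / np.sqrt(2) / x.mean() * 100.0

# Hypothetical relative VO2peak (ml/kg/min), two PRETmax trials per subject.
vo2 = np.array([[26.1, 25.4], [31.0, 30.2], [22.5, 23.8], [28.3, 27.9], [24.0, 24.9]])
print(f"CV = {cv_percent(vo2):.1f}%, ICC(3,1) = {icc_3_1(vo2):.2f}")
```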

  12. Reliability of perfusion MR imaging in symptomatic carotid occlusive disease. Cerebral blood volume, mean transit time and time-to-peak

    International Nuclear Information System (INIS)

    Kim, J.H.; Lee, E.J.; Lee, S.J.; Choi, N.C.; Lim, B.H.; Shin, T.

    2002-01-01

    Purpose: Perfusion MR imaging offers an easy quantitative evaluation of relative regional cerebral blood volume (rrCBV), relative mean transit time (rMTT) and time-to-peak (TTP). The purpose of this study was to investigate the reliability of these parameters in assessing the hemodynamic disturbance of carotid occlusive disease in comparison with normative data. Material and Methods: Dynamic contrast-enhanced T2*-weighted perfusion MR imaging was performed in 19 patients with symptomatic unilateral internal carotid artery occlusion and 20 control subjects. The three parameters were calculated from the concentration-time curve fitted by a gamma-variate function. Lesion-to-contralateral ratios of each parameter were compared between patients and control subjects. Results: Mean±SD of rrCBV, rMTT and TTP ratios of patients were 1.089±0.118, 1.054±0.031 and 1.062±0.039, respectively, and those of control subjects were 1.002±0.045, 1.000±0.006, 1.001±0.006, respectively. The rMTT and TTP ratios of all patients were greater than 2 SDs of the control data, whereas in only 6 patients (32%) were the rrCBV ratios greater than 2 SDs of the control data. The three parameter ratios of the patients were significantly high compared with those of the control subjects (p<0.01 for rrCBV ratios, p<0.0001 for rMTT ratios, and p<0.0001 for TTP ratios). Conclusion: Our results indicate that rMTT and TTP of patients, in contrast to rrCBV, are distributed in narrow ranges that overlap minimally with the control data. The rMTT and TTP could be more reliable parameters than rrCBV in assessing the hemodynamic disturbance in carotid occlusive disease
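
    The curve-fitting step can be sketched with a standard gamma-variate model; the parameter values are invented, and the moment-based surrogates for rrCBV, rMTT and TTP below are common simplifications rather than the study's exact definitions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def gamma_variate(t, k, t0, alpha, beta):
    """Gamma-variate bolus model: C(t) = k*(t-t0)**alpha * exp(-(t-t0)/beta) for t > t0."""
    dt = np.clip(t - t0, 0.0, None)
    return k * dt**alpha * np.exp(-dt / beta)

# Hypothetical concentration-time samples (a.u.) at 1 s spacing.
t = np.arange(0.0, 40.0, 1.0)
true_curve = gamma_variate(t, 5.0, 8.0, 2.0, 3.0)
noisy = true_curve + np.random.default_rng(4).normal(0.0, 0.5, t.size)

popt, _ = curve_fit(gamma_variate, t, noisy,
                    p0=(1.0, 5.0, 1.5, 2.0), bounds=(0.0, np.inf))
fit = gamma_variate(t, *popt)

rr_cbv = trapezoid(fit, t)                       # rrCBV ~ area under the curve
ttp = t[np.argmax(fit)]                          # time-to-peak
mtt = trapezoid(t * fit, t) / rr_cbv - popt[1]   # rMTT ~ first moment past arrival
print(f"rrCBV = {rr_cbv:.1f} a.u., TTP = {ttp:.0f} s, rMTT = {mtt:.1f} s")
```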

  13. Calculations of B1 Distribution, Specific Energy Absorption Rate, and Intrinsic Signal-to-Noise Ratio for a Body-Size Birdcage Coil Loaded with Different Human Subjects at 64 and 128 MHz.

    Science.gov (United States)

    Liu, W; Collins, C M; Smith, M B

    2005-03-01

    A numerical model of a female body is developed to study the effects of different body types with different coil drive methods on radio-frequency magnetic (B1) field distribution, specific energy absorption rate (SAR), and intrinsic signal-to-noise ratio (ISNR) for a body-size birdcage coil at 64 and 128 MHz. The coil is loaded with either a larger, more muscular male body model (subject 1) or a newly developed female body model (subject 2), and driven with two-port (quadrature), four-port, or many (ideal) sources. Loading the coil with subject 1 results in a significantly less homogeneous B1 field, higher SAR, and lower ISNR than those for subject 2 at both frequencies. This dependence of MR performance and safety measures on body type indicates a need for a variety of numerical models representative of a diverse population for future calculations. The different drive methods result in similar B1 field patterns, SAR, and ISNR in all cases.

  14. Combination of highly nonlinear fiber, an optical bandpass filter, and a Fabry-Perot filter to improve the signal-to-noise ratio of a supercontinuum continuous-wave optical source.

    Science.gov (United States)

    Nan, Yinbo; Huo, Li; Lou, Caiyun

    2005-05-20

    We present a theoretical study of a supercontinuum (SC) continuous-wave (cw) optical source generation in highly nonlinear fiber and its noise properties through numerical simulations based on the nonlinear Schrödinger equation. Fluctuations of pump pulses generate substructures between the longitudinal modes that result in the generation of white noise and then in degradation of coherence and in a decrease of the modulation depths and the signal-to-noise ratio (SNR). A scheme for improvement of the SNR of a multiwavelength cw optical source based on a SC by use of the combination of a highly nonlinear fiber (HNLF), an optical bandpass filter, and a Fabry-Perot (FP) filter is presented. Numerical simulations show that the improvement in modulation depth is relative to the HNLF's length, the 3-dB bandwidth of the optical bandpass filter, and the reflection ratio of the FP filter and that the average improvement in modulation depth is 13.7 dB under specified conditions.

  15. Analysis on frequency response of trans-impedance amplifier (TIA) for signal-to-noise ratio (SNR) enhancement in optical signal detection system using lock-in amplifier (LIA)

    Science.gov (United States)

    Kim, Ji-Hoon; Jeon, Su-Jin; Ji, Myung-Gi; Park, Jun-Hee; Choi, Young-Wan

    2017-02-01

    Lock-in amplifiers (LIAs) have been widely used in optical signal detection systems because they can measure small signals under high noise levels. Generally, the LIA used in an optical signal detection system is composed of a transimpedance amplifier (TIA), a phase sensitive detector (PSD) and a low pass filter (LPF). Commercial LIAs using an LPF are, however, affected by flicker noise; to avoid it, a 2ω-detection LIA using a band pass filter (BPF) can be employed. To improve the dynamic reserve (DR) of the 2ω LIA, the signal-to-noise ratio (SNR) of the TIA should be improved. According to the analysis of the frequency response of the TIA, the noise gain can be minimized in a specific frequency range by proper choices of the input capacitor (Ci) and the feedback network of the TIA. In this work, we have studied how the SNR of the TIA can be improved by a proper choice of this frequency range, and we have analyzed how the range can be controlled through changes of the passive components in the TIA. The results show that varying the passive components of the TIA can shift the frequency range in which the noise gain is minimized within the uniform-gain region of the TIA.
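
    The noise-gain argument can be made concrete for a resistive-feedback TIA with input capacitance Ci and a feedback network of Rf in parallel with Cf; the component values below are illustrative, not taken from the paper.

```python
import numpy as np

# Noise gain of a resistive-feedback TIA: NG = 1 + Zf/Zi, with Zi = 1/(jw*Ci)
# and Zf = Rf / (1 + jw*Rf*Cf), i.e. NG(jw) = 1 + jw*Rf*Ci / (1 + jw*Rf*Cf).
Rf, Cf, Ci = 1e6, 2e-12, 30e-12     # 1 Mohm, 2 pF, 30 pF (illustrative)

f = np.logspace(2, 7, 200)          # 100 Hz to 10 MHz
w = 2 * np.pi * f
ng = np.abs(1 + 1j * w * Rf * Ci / (1 + 1j * w * Rf * Cf))

# NG is ~1 at low frequency and plateaus near 1 + Ci/Cf at high frequency, so
# placing the lock-in reference where NG is still low improves the SNR.
f_low = f[ng < 2.0]
print(f"NG < 2 up to ~{f_low.max():.3g} Hz; high-frequency plateau ~ {1 + Ci/Cf:.0f}")
```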

  16. Post-Synapse Model Cell for Synaptic Glutamate Receptor (GluR)-Based Biosensing: Strategy and Engineering to Maximize Ligand-Gated Ion-Flux Achieving High Signal-to-Noise Ratio

    Directory of Open Access Journals (Sweden)

    Tetsuya Haruyama

    2012-01-01

    Cell-based biosensing is a “smart” way to obtain efficacy information on the effect of an applied chemical on a cellular biological cascade. We have proposed an engineered post-synapse model cell-based biosensor to investigate the effects of chemicals on the ionotropic glutamate receptor (GluR), which is a focus of attention as a molecular target for clinical neural drug discovery. The engineered model cell has several advantages over native cells, including improved ease of handling and better reproducibility in the application of cell-based biosensors. However, in general, cell-based biosensors often have low signal-to-noise (S/N) ratios due to the low level of cellular responses. In order to obtain a higher S/N ratio in model cells, we have attempted to design a model cell with an elevated cellular response. We have revealed that increasing the GluR expression level is not directly connected to the amplification of cellular responses, because the surface expression of GluR saturates, leading to a limit on the total ion influx. Furthermore, coexpression of GluR with a voltage-gated potassium channel increased Ca2+ ion influx beyond the levels obtained with saturating amounts of GluR alone. The construction of model cells based on the strategy of amplifying the ion flux per individual receptor can be used to perform smart cell-based biosensing with an improved S/N ratio.

  17. The Dependence of Signal-To-Noise Ratio (S/N) Between Star Brightness and Background on the Filter Used in Images Taken by the Vulcan Photometric Planet Search Camera

    Science.gov (United States)

    Mena-Werth, Jose

    1998-01-01

    The Vulcan Photometric Planet Search is the ground-based counterpart of the Kepler Mission proposal. The Kepler proposal calls for the launch of a telescope to look intently at a small patch of sky for four years. The mission is designed to look for extra-solar planets that transit sun-like stars. The Kepler Mission should be able to detect Earth-size planets. This goal requires an instrument and software capable of detecting photometric changes of several parts per hundred thousand in the flux of a star. The goal also requires the continuous monitoring of about a hundred thousand stars. The Kepler Mission is a NASA Discovery Class proposal similar in cost to the Lunar Prospector. The Vulcan Search is also a NASA project but based at Lick Observatory. A small wide-field telescope monitors various star fields successively during the year. Dozens of images, each containing tens of thousands of stars, are taken any night that weather permits. The images are then monitored for photometric changes of the order of one part in a thousand. These changes would reveal the transit of an inner-orbit Jupiter-size planet similar to those discovered recently in spectroscopic searches. In order to achieve one part in one thousand photometric precision, even the choice of the filter used in taking an exposure can be critical. The ultimate purpose of a filter is to increase the signal-to-noise ratio (S/N) of one's observations. Ideally, filters reduce the sky glow caused by street lights and thereby make the star images more distinct. The higher the S/N, the higher the chance to observe a transit signal that indicates the presence of a new planet. It is, therefore, important to select the filter that maximizes the S/N.
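
    The trade-off described here is captured by the standard CCD signal-to-noise estimate, SNR = S/√(S + n_pix(B + D + RN²)), where S is the star signal, B the sky background per pixel, D the dark current and RN the read noise. The numbers below are illustrative assumptions, not Vulcan values; they show how a filter that suppresses sky glow more than starlight can raise the S/N even while cutting the star's flux.

```python
import numpy as np

def ccd_snr(star, sky_per_pix, n_pix, dark_per_pix=0.0, read_noise=10.0):
    """Classic CCD equation; all quantities in electrons per exposure."""
    noise = np.sqrt(star + n_pix * (sky_per_pix + dark_per_pix + read_noise**2))
    return star / noise

# Unfiltered: bright sky. Filtered: 20% star loss, much darker sky (assumed).
print(ccd_snr(star=5.0e5, sky_per_pix=2000.0, n_pix=100))   # ~593
print(ccd_snr(star=4.0e5, sky_per_pix=300.0,  n_pix=100))   # ~603 (higher S/N)
```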

  18. Simultaneous multi-slice echo planar diffusion weighted imaging of the liver and the pancreas: Optimization of signal-to-noise ratio and acquisition time and application to intravoxel incoherent motion analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boss, Andreas, E-mail: andreas.boss@usz.ch [Institute of Diagnostic and Interventional Radiology, University Hospital Zurich (Switzerland); Barth, Borna; Filli, Lukas; Kenkel, David; Wurnig, Moritz C. [Institute of Diagnostic and Interventional Radiology, University Hospital Zurich (Switzerland); Piccirelli, Marco [Institute of Neuroradiology, University Hospital of Zurich (Switzerland); Reiner, Caecilia S. [Institute of Diagnostic and Interventional Radiology, University Hospital Zurich (Switzerland)

    2016-11-15

    Purpose: To optimize and test a diffusion-weighted imaging (DWI) echo-planar imaging (EPI) sequence with simultaneous multi-slice (SMS) excitation in the liver and pancreas regarding acquisition time (TA), number of slices, signal-to-noise ratio (SNR), image quality (IQ), apparent diffusion coefficient (ADC) quantitation accuracy, and feasibility of intravoxel incoherent motion (IVIM) analysis. Materials and methods: Ten healthy volunteers underwent DWI of the upper abdomen at 3T. A SMS DWI sequence with CAIPIRINHA unaliasing technique (acceleration factors 2/3, denoted AF2/3) was compared to standard DWI-EPI (AF1). Four schemes were evaluated: (i) reducing TA, (ii) keeping TA identical with increasing number of averages, (iii) increasing number of slices with identical TA, (iv) increasing number of b-values for IVIM. Acquisition schemes i–iii were evaluated qualitatively (reader score) and quantitatively (ADC values, SNR). Results: In scheme (i), no differences in SNR were observed (p = 0.321−0.038) despite the reduced TA (AF2: 75.6% increase in SNR/time; AF3: 102.4%). No SNR improvement was obtained in scheme (ii). The increased SNR/time could be invested in the acquisition of more and thinner slices or a higher number of b-values. Image quality scores were stable for AF2 but decreased for AF3. Only for AF3 were liver ADC values systematically lower. Conclusion: SMS-DWI of the liver and pancreas provides substantially higher SNR/time, which may be used for shorter scan times, higher slice resolution or IVIM measurements.

  19. Simultaneous multi-slice echo planar diffusion weighted imaging of the liver and the pancreas: Optimization of signal-to-noise ratio and acquisition time and application to intravoxel incoherent motion analysis

    International Nuclear Information System (INIS)

    Boss, Andreas; Barth, Borna; Filli, Lukas; Kenkel, David; Wurnig, Moritz C.; Piccirelli, Marco; Reiner, Caecilia S.

    2016-01-01

    Purpose: To optimize and test a diffusion-weighted imaging (DWI) echo-planar imaging (EPI) sequence with simultaneous multi-slice (SMS) excitation in the liver and pancreas regarding acquisition time (TA), number of slices, signal-to-noise ratio (SNR), image quality (IQ), apparent diffusion coefficient (ADC) quantitation accuracy, and feasibility of intravoxel incoherent motion (IVIM) analysis. Materials and methods: Ten healthy volunteers underwent DWI of the upper abdomen at 3T. A SMS DWI sequence with CAIPIRINHA unaliasing technique (acceleration factors 2/3, denoted AF2/3) was compared to standard DWI-EPI (AF1). Four schemes were evaluated: (i) reducing TA, (ii) keeping TA identical with increasing number of averages, (iii) increasing number of slices with identical TA, (iv) increasing number of b-values for IVIM. Acquisition schemes i–iii were evaluated qualitatively (reader score) and quantitatively (ADC values, SNR). Results: In scheme (i), no differences in SNR were observed (p = 0.321−0.038) despite the reduced TA (AF2: 75.6% increase in SNR/time; AF3: 102.4%). No SNR improvement was obtained in scheme (ii). The increased SNR/time could be invested in the acquisition of more and thinner slices or a higher number of b-values. Image quality scores were stable for AF2 but decreased for AF3. Only for AF3 were liver ADC values systematically lower. Conclusion: SMS-DWI of the liver and pancreas provides substantially higher SNR/time, which may be used for shorter scan times, higher slice resolution or IVIM measurements.

  20. Image fusion in dual energy computed tomography for detection of various anatomic structures - Effect on contrast enhancement, contrast-to-noise ratio, signal-to-noise ratio and image quality

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Jijo, E-mail: jijopaul1980@gmail.com [Department of Diagnostic Radiology, Goethe University Hospital, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Department of Biophysics, Goethe University, Max von Laue-Str.1, 60438 Frankfurt am Main (Germany); Bauer, Ralf W. [Department of Diagnostic Radiology, Goethe University Hospital, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Maentele, Werner [Department of Biophysics, Goethe University, Max von Laue-Str.1, 60438 Frankfurt am Main (Germany); Vogl, Thomas J. [Department of Diagnostic Radiology, Goethe University Hospital, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany)

    2011-11-15

    Objective: The purpose of this study was to evaluate image fusion in dual energy computed tomography for detecting various anatomic structures based on the effect on contrast enhancement, contrast-to-noise ratio, signal-to-noise ratio and image quality. Material and methods: Forty patients underwent a CT of the neck in dual energy mode (DECT) on a Somatom Definition Flash dual-source CT scanner (Siemens, Forchheim, Germany). Tube voltage: 80 kV and Sn140 kV; tube current: 110 and 290 mA s; collimation: 2 × 32 × 0.6 mm. Raw data were reconstructed using a soft convolution kernel (D30f). Fused images were calculated using a spectrum of weighting factors (0.0, 0.3, 0.6, 0.8 and 1.0) generating different ratios between the 80- and Sn140-kV images (e.g. factor 0.6 corresponds to 60% of the information from the 80-kV image and 40% from the Sn140-kV image). CT values and SNRs were measured in the ascending aorta, thyroid gland, fat, muscle, CSF, spinal cord, bone marrow and brain. In addition, CNR values were calculated for the aorta, thyroid, muscle and brain. Subjective image quality was evaluated using a 5-point grading scale. Results were compared using paired t-tests and the nonparametric paired Wilcoxon-Wilcox test. Results: Statistically significant increases in mean CT values were noted in anatomic structures when increasing weighting factors were used (all P ≤ 0.001). For example, mean CT values derived from the contrast-enhanced aorta were 149.2 ± 12.8 Hounsfield units (HU), 204.8 ± 14.4 HU, 267.5 ± 18.6 HU, 311.9 ± 22.3 HU and 347.3 ± 24.7 HU when the weighting factors 0.0, 0.3, 0.6, 0.8 and 1.0 were used. The highest SNR and CNR values were found when the weighting factor 0.6 was used. The difference in CNR between the weighting factors 0.6 and 0.3 was statistically significant in the contrast-enhanced aorta and thyroid gland (P = 0.012 and P = 0.016, respectively). Visual image assessment for image quality showed the highest score for the data reconstructed using the

  1. Dose modulated retrospective ECG-gated versus non-gated 64-row CT angiography of the aorta at the same radiation dose: Comparison of motion artifacts, diagnostic confidence and signal-to-noise-ratios

    International Nuclear Information System (INIS)

    Schernthaner, Ruediger E.; Stadler, Alfred; Beitzke, Dietrich; Homolka, Peter; Weber, Michael; Lammer, Johannes; Czerny, Martin; Loewe, Christian

    2012-01-01

    Purpose: To compare ECG-gated and non-gated CT angiography of the aorta at the same radiation dose, with regard to motion artifacts (MA), diagnostic confidence (DC) and signal-to-noise ratios (SNRs). Materials and methods: Sixty consecutive patients, prospectively randomized into two groups, underwent 64-row CT angiography of the entire aorta, with or without dose-modulated ECG-gating, because of various pathologies of the ascending aorta. MA and DC were both assessed using a four-point scale. SNRs were calculated by dividing the mean enhancement by the standard deviation. The dose-length product (DLP) of each examination was recorded and the effective dose was estimated. Results: Dose-modulated ECG-gating showed statistically significant advantages over non-gated CT angiography with regard to MA (p < 0.001) and DC (p < 0.001) at the aortic valve, at the origin of the coronary arteries, and at the dissection membrane, with a significant correlation (p < 0.001) between MA and DC. At the aortic wall, however, ECG-gated CT angiography showed significantly fewer MA (p < 0.001), but not a significantly higher DC (p = 0.137), compared to non-gated CT angiography. At the supra-aortic vessels and the descending aorta, ECG-triggering showed no statistically significant differences with regard to MA (p = 0.861 and 0.526, respectively) and DC (p = 1.88 and 0.728, respectively). The effective dose of ECG-gated CT angiography (23.24 mSv; range, 18.43–25.94 mSv) did not differ significantly (p = 0.051) from that of non-gated CT angiography (24.28 mSv; range, 19.37–29.27 mSv). Conclusion: ECG-gated CT angiography of the entire aorta reduces MA and results in a higher DC with the same SNR, compared to non-gated CT angiography at the same radiation dose.

  2. Image fusion in dual energy computed tomography for detection of various anatomic structures - Effect on contrast enhancement, contrast-to-noise ratio, signal-to-noise ratio and image quality

    International Nuclear Information System (INIS)

    Paul, Jijo; Bauer, Ralf W.; Maentele, Werner; Vogl, Thomas J.

    2011-01-01

    Objective: The purpose of this study was to evaluate image fusion in dual energy computed tomography for detecting various anatomic structures based on the effect on contrast enhancement, contrast-to-noise ratio, signal-to-noise ratio and image quality. Material and methods: Forty patients underwent a CT of the neck in dual energy mode (DECT) on a Somatom Definition Flash dual-source CT scanner (Siemens, Forchheim, Germany). Tube voltage: 80 kV and Sn140 kV; tube current: 110 and 290 mA s; collimation: 2 × 32 × 0.6 mm. Raw data were reconstructed using a soft convolution kernel (D30f). Fused images were calculated using a spectrum of weighting factors (0.0, 0.3, 0.6, 0.8 and 1.0) generating different ratios between the 80- and Sn140-kV images (e.g. factor 0.6 corresponds to 60% of the information from the 80-kV image and 40% from the Sn140-kV image). CT values and SNRs were measured in the ascending aorta, thyroid gland, fat, muscle, CSF, spinal cord, bone marrow and brain. In addition, CNR values were calculated for the aorta, thyroid, muscle and brain. Subjective image quality was evaluated using a 5-point grading scale. Results were compared using paired t-tests and the nonparametric paired Wilcoxon-Wilcox test. Results: Statistically significant increases in mean CT values were noted in anatomic structures when increasing weighting factors were used (all P ≤ 0.001). For example, mean CT values derived from the contrast-enhanced aorta were 149.2 ± 12.8 Hounsfield units (HU), 204.8 ± 14.4 HU, 267.5 ± 18.6 HU, 311.9 ± 22.3 HU and 347.3 ± 24.7 HU when the weighting factors 0.0, 0.3, 0.6, 0.8 and 1.0 were used. The highest SNR and CNR values were found when the weighting factor 0.6 was used. The difference in CNR between the weighting factors 0.6 and 0.3 was statistically significant in the contrast-enhanced aorta and thyroid gland (P = 0.012 and P = 0.016, respectively). Visual image assessment for image quality showed the highest score for the data reconstructed using the weighting factor 0

  3. Comparison of different cardiac MRI sequences at 1.5T/3.0T with respect to signal-to-noise and contrast-to-noise ratios - initial experience

    International Nuclear Information System (INIS)

    Gutberlet, M.; Spors, B.; Grothoff, M.; Freyhardt, P.; Schwinge, K.; Plotkin, M.; Amthauer, H.; Felix, R.

    2004-01-01

    Purpose: To compare image quality, signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of different MRI sequences for cardiac imaging at 1.5 T and 3.0 T in volunteers. Material and Methods: 10 volunteers (5 male, 5 female) with a mean age of 33 years (±8) without any history of cardiac disease were examined on a GE Signa 3.0 T and a GE Signa 1.5 T TwinSpeed Excite (GE Medical Systems, Milwaukee, WI, USA) scanner using a 4-element phased-array surface coil (same design) on the same day. For tissue characterization, ECG-gated fast spin-echo (FSE) T1- (double IR), T1-STIR (triple IR) and T2-weighted sequences in transverse orientation were used. For functional analysis, a steady-state free precession (SSFP-FIESTA) sequence was performed in the 4-chamber, 2-chamber long-axis and short-axis views. The flip angle used for the SSFP sequence at 3.0 T was reduced from 45° to 30° to keep TR short while staying within the pre-defined SAR limitations. All other sequence parameters were kept constant. Results: All acquisitions could be completed successfully for the 10 volunteers. The mean SNR at 3.0 T compared to 1.5 T was remarkably increased (p < 0.05) for the T2- (160% SNR increase), the STIR-T1- (123%) and the T1- (91%) weighted FSE sequences. Similar results were found comparing CNR at 3.0 T and 1.5 T. The mean SNR achieved using the SSFP sequences was more than doubled at 3.0 T (150% increase), but this did not have any significant effect on the CNR. The image quality at 3.0 T did not appear to be improved, and was considered significantly worse when using SSFP sequences. Artefacts such as shading in the area of the right ventricle (RV) were more prevalent at 3.0 T using FSE sequences. After a localized shim had been performed in 5/10 volunteers at the infero-lateral wall of the left ventricle (LV) with the SSFP sequences at 3.0 T, no significant increase in artefacts could be detected. (orig.)

  4. On dealing with multiple correlation peaks in PIV

    Science.gov (United States)

    Masullo, A.; Theunissen, R.

    2018-05-01

    A novel algorithm to analyse PIV images in the presence of strong in-plane displacement gradients and to reduce sub-grid filtering is proposed in this paper. Interrogation windows subjected to strong in-plane displacement gradients often produce correlation maps presenting multiple peaks. Standard multi-grid procedures discard such ambiguous correlation windows using a signal-to-noise ratio (SNR) filter. The proposed algorithm improves the standard multi-grid algorithm by allowing the detection of splintered peaks in a correlation map through an automatic threshold, producing multiple displacement vectors for each correlation area. Vector locations are chosen by translating images according to the peak displacements and selecting the areas with the strongest match. The method is assessed on synthetic images of a boundary layer of varying intensity and a sinusoidal displacement field of changing wavelength. An experimental case of a flow exhibiting strong velocity gradients is also provided to show the improvements brought by this technique.
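
    The core of such a scheme is finding every local correlation maximum above an automatic threshold rather than only the global one. A minimal sketch follows; the mean-plus-k-sigma threshold rule and the neighborhood size are illustrative assumptions, not the authors' exact criterion.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_correlation_peaks(corr, k=3.0, neighborhood=5):
    """Return (row, col) of all local maxima above an automatic threshold."""
    is_local_max = maximum_filter(corr, size=neighborhood) == corr
    threshold = corr.mean() + k * corr.std()   # assumed threshold rule
    return np.argwhere(is_local_max & (corr > threshold))
```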

  5. Reliability

    African Journals Online (AJOL)

    eobe


  6. 'Peak oil' or 'peak demand'?

    International Nuclear Information System (INIS)

    Chevallier, Bruno; Moncomble, Jean-Eudes; Sigonney, Pierre; Vially, Rolland; Bosseboeuf, Didier; Chateau, Bertrand

    2012-01-01

    This article reports on a workshop that addressed several energy issues, such as the objectives and constraints of energy mix scenarios, the differences between the approaches of different countries, the cost of the new technologies implemented for this purpose, how these technologies will be developed and marketed, and the environmental and societal acceptability of these technical choices. Several aspects were discussed in more detail: the peak oil, the development of shale gases and their cost (will non-conventional hydrocarbons shift the peak oil, and will they be socially accepted?), energy efficiency (its benefits, its reality in France and other countries, and its place in the challenge of the energy transition), and strategies in the transport sector (challenges for mobility, evolution towards a model of sustainable mobility).

  7. Automated asteroseismic peak detections

    Science.gov (United States)

    García Saravia Ortiz de Montellano, Andrés; Hekker, S.; Themeßl, N.

    2018-05-01

    Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible in a power density spectrum. Identification of oscillation modes is usually done by visual inspection that is time-consuming and has a degree of subjectivity. Here, we present a peak-detection algorithm especially suited for the detection of solar-like oscillations. It reliably characterizes the solar-like oscillations in a power density spectrum and estimates their parameters without human intervention. Furthermore, we provide a metric to characterize the false positive and false negative rates to provide further information about the reliability of a detected oscillation mode or the significance of a lack of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler.

  8. Toward a more comprehensive understanding of the impact of masker type and signal-to-noise ratio on the pupillary response while performing a speech-in-noise test

    DEFF Research Database (Denmark)

    Wendt, Dorothea; Koelewijn, Thomas; Książek, Patrycja

    2018-01-01

    intelligibility. In a second experiment, effects of SNR on listening effort were examined while presenting the HINT sentences across a broad range of fixed SNRs corresponding to intelligibility scores ranging from 100 % to 0 % correct performance. A peak pupil dilation (PPD) was calculated and a Growth Curve...... Analysis (GCA) was performed to examine listening effort involved in speech recognition as a function of SNR. The results of two experiments showed that the pupil dilation response is highly affected by both masker type and SNR when performing the HINT. The PPD was highest, suggesting the highest level...... strongly varied as a function of SNRs. Listening effort was highest for intermediate SNRs with performance accuracies ranging between 30 % -70 % correct. GCA revealed time-dependent effects of the SNR on the pupillary response that were not reflected in the PPD....

  9. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas

    2015-09-07

    The capacity of the intensity-modulation direct-detection (IM-DD) free-space optical channel with both average and peak intensity constraints is studied. A new capacity lower bound is derived by using a truncated-Gaussian input distribution. Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high-SNR asymptotic capacity of the channel under either a peak or an average constraint is small. This leads to a simple approximation of the high SNR capacity. Additionally, a new capacity upper bound is derived using sphere-packing arguments. This bound is tight at high SNR for a channel with a dominant peak constraint.

  10. Improvement of the XANAM System and Acquisition of a Peak Signal with a High S/N ratio

    International Nuclear Information System (INIS)

    Suzuki, S; Nakamura, M; Kinoshita, K; Koike, Y; Fujikawa, K; Matsudaira, N; Chun, W-J; Nomura, M; Asakura, K

    2007-01-01

    We have made remarkable progress in detecting X-ray-induced frequency shift signals, which will promote the development of a chemically sensitive NC-AFM. A high-performance controller provides a tenfold higher signal-to-noise ratio than that previously reported. We confirmed that the dependence of the frequency shift, or of the complementary Z-feedback signal, on X-ray energy has a peak. An important feature of the signal is that it does not follow the absorption spectrum of a surface element. These new findings are important for elucidating this novel X-ray-induced phenomenon.

  11. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with nonlinear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...
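
    For reference, the linear MNF transform that KMNF generalizes can be written as a generalized eigenproblem between the total and noise covariances, with the noise covariance commonly estimated from differences of neighboring pixels. A minimal sketch under those standard assumptions (not the authors' kernel implementation):

```python
import numpy as np
from scipy.linalg import eigh

def mnf(cube):
    """Linear MNF for an image cube of shape (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    # Noise estimated from horizontal neighbor differences (common heuristic)
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands) / np.sqrt(2)
    sigma = np.cov(X, rowvar=False)          # total (signal+noise) covariance
    sigma_n = np.cov(noise, rowvar=False)    # noise covariance estimate
    # Maximize w' sigma w / w' sigma_n w  ->  generalized eigenproblem
    vals, vecs = eigh(sigma, sigma_n)
    order = np.argsort(vals)[::-1]           # highest-SNR components first
    return (X - X.mean(axis=0)) @ vecs[:, order], vals[order]
```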

  12. Signal-to-noise limitations in white light holography

    Science.gov (United States)

    Ribak, Erez; Breckinridge, James B.; Roddier, Claude; Roddier, Francois

    1988-01-01

    A simple derivation is given for the SNR in images reconstructed from incoherent holograms. The SNR is shown to depend on the hologram SNR, the object complexity, and the number of pixels in the detector. Reconstruction of complex objects becomes possible with high-dynamic-range detectors such as CCDs. White-light holograms have been produced by means of a rotational shear interferometer combined with a chromatic corrector. A digital inverse transform recreated the object.

  13. Weak Lensing Peaks in Simulated Light-Cones: Investigating the Coupling between Dark Matter and Dark Energy

    Science.gov (United States)

    Giocoli, Carlo; Moscardini, Lauro; Baldi, Marco; Meneghetti, Massimo; Metcalf, Robert B.

    2018-05-01

    In this paper, we study the statistical properties of weak lensing peaks in light-cones generated from cosmological simulations. In order to assess the prospects of this observable as a cosmological probe, we consider simulations that include interacting Dark Energy (hereafter DE) models with a coupling term between DE and Dark Matter. Cosmological models that produce a larger population of massive clusters have more numerous high signal-to-noise peaks; among models with comparable numbers of clusters, those with more concentrated haloes produce more peaks. The most extreme model under investigation shows a difference in peak counts of about 20% with respect to the reference ΛCDM model. We find that peak statistics can be used to distinguish a coupled DE model from a reference one with the same power spectrum normalisation. The differences in the expansion history and the growth rate of structure formation are reflected in their halo counts, non-linear scale features and, through them, in the properties of the lensing peaks. For a source redshift distribution consistent with the expectations of future space-based wide-field surveys, we find that typically seventy percent of the cluster population contributes to weak-lensing peaks with signal-to-noise ratios larger than two, and that the fraction of clusters in peaks approaches one hundred percent for haloes with redshift z ≤ 0.5. Our analysis demonstrates that peak statistics are an important tool for disentangling DE models by accurately tracing the structure formation processes as a function of cosmic time.

  14. KiDS-450: cosmological constraints from weak-lensing peak statistics - II: Inference from shear peaks using N-body simulations

    Science.gov (United States)

    Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko

    2018-02-01

    We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg² of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with that of simulations for various cosmologies to constrain the cosmological parameter S8 = σ8√(Ωm/0.3), which probes the (Ωm, σ8) plane perpendicularly to its main degeneracy. We estimate S8 = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ~25 per cent tighter than the constraints from the high-significance peaks alone (3 ≤ S/N ≤ 4), which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including S/N KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ~20 per cent improvement in the uncertainty on S8 compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.

  15. A study of filtering problems of background noise in nuclear spectrometry, improvement of signal-to-noise ratio, and of pulse characteristics produced by the optimum predictor device

    Energy Technology Data Exchange (ETDEWEB)

    Benda, J [Commissariat a l' Energie Atomique, 91 - Saclay (France). Centre d' Etudes Nucleaires

    1967-05-01

    The purpose of nuclear spectrometry is the precise measurement of particle energies. The resolving power of a spectrometry chain is an important characteristic. Two main phenomena limit this resolving power: the statistical fluctuations of the detector itself and the background noise. For a given noise, filter theory enables the calculation of networks specially designed to improve the signal-to-noise ratio. The proposed system leads to an improvement of 10.5 per cent in this ratio under optimal conditions. Experiments have confirmed this theoretical estimation. The predictor device also makes it possible to obtain shortened pulses. (author)

  16. Improved peak detection in mass spectrum by incorporating continuous wavelet transform-based pattern matching.

    Science.gov (United States)

    Du, Pan; Kibbe, Warren A; Lin, Simon M

    2006-09-01

    A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and the aggressiveness of the baseline correction, which contribute to making peak detection results inconsistent between runs, instrumentation and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and a shape-matching function that yields a 'goodness of fit' coefficient should give a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. By transforming the spectrum into wavelet space, the pattern-matching problem is simplified, and a powerful technique is obtained for identifying and separating the signal from spike noise and colored noise. This transformation, together with the additional information provided by the 2D CWT coefficients, can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, which improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated with SELDI-TOF spectra with known polypeptide positions. Comparisons with two other popular algorithms were performed. The results show the CWT-based algorithm can identify both strong and weak peaks while keeping the false positive rate low. The algorithm is implemented in R and will be
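
    SciPy ships a detector in the same spirit (ridge lines in CWT space plus an SNR criterion, with no prior baseline removal), which gives a feel for how such an algorithm is used; the synthetic spectrum and parameters below are illustrative, not from the paper.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 2000)
spectrum = (np.exp(-(x - 30.0)**2 / 2.0)           # strong, narrow peak
            + 0.3 * np.exp(-(x - 60.0)**2 / 8.0)   # weak, broader peak
            + 0.05 * x / 100.0                     # slowly varying baseline
            + 0.02 * rng.standard_normal(x.size))  # noise

peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 60), min_snr=2)
print(x[peak_idx])   # should land near x = 30 and x = 60
```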

  17. Magnitude and Peak Amplitude Relationship for Microseismicity Induced by a Hydraulic Fracture Experiment

    Science.gov (United States)

    Smith, T.; Arce, A. C.; Ji, C.

    2016-12-01

    The waveform cross-correlation technique is widely used to improve the detection of small-magnitude events induced by hydraulic fracturing. However, once events are detected, assigning a reliable magnitude is a challenging task, especially considering their small signal amplitude and the high background noise during injections. In this study, we adopt the Match & Locate algorithm (M&L, Zhang and Wen, 2015) to analyze seven hours of continuous seismic observations from a hydraulic fracturing experiment in Central California. The site of the stimulated region is only 300-400 m away from a 16-receiver vertical-borehole array which spans 230 m. The sampling rate is 4000 Hz. Both the injection sites and the borehole array are more than 1.7 km below the surface. This dataset has previously been studied by an industry group, producing a catalog of 1134 events with moment magnitudes (Mw) ranging from -3.1 to -0.9. In this study, we select 202 events from this catalog with high signal-to-noise ratios to use as templates. Our M&L analysis produces a new catalog that contains 2119 events, which is 10 times more detections than the number of templates and about two times the original catalog. Using these two catalogs, we investigate the relationship of the moment magnitude difference (ΔMW) and the local magnitude difference (ΔML) between each detected event and the corresponding template event. ΔML is computed using the peak amplitude ratio between the detected and template event for each channel. Our analysis yields an empirical relationship of ΔMW = 0.64-0.65 ΔML with an R² of 0.99. The coefficient of about 2/3 suggests that the information on the event's corner frequency is entirely lost (Hanks and Boore, 1984). The cause might not be unique, which implies that the Earth's attenuation at this depth range (>1.7 km) is significant, or that the 4000 Hz sampling rate is not sufficient. This relationship is crucial to estimate the b-value of the microseismicity induced by hydraulic fracture experiments. The analysis

  18. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data.

    Science.gov (United States)

    Gregoire, John M; Dale, Darren; van Dover, R Bruce

    2011-01-01

    Peak detection is ubiquitous in the analysis of spectral data. While many noise-filtering algorithms and peak identification algorithms have been developed, recent work [P. Du, W. Kibbe, and S. Lin, Bioinformatics 22, 2059 (2006); A. Wee, D. Grayden, Y. Zhu, K. Petkovic-Duran, and D. Smith, Electrophoresis 29, 4215 (2008)] has demonstrated that both of these tasks are efficiently performed through analysis of the wavelet transform of the data. In this paper, we present a wavelet-based peak detection algorithm with user-defined parameters that can be readily applied to any spectral data. Particular attention is given to the algorithm's resolution of overlapping peaks. The algorithm is implemented for the analysis of powder diffraction data, and successful detection of Bragg peaks is demonstrated both for low signal-to-noise data from theta-theta diffraction of nanoparticles and for combinatorial x-ray diffraction data from a composition-spread thin film. These datasets have different types of background signals, which are effectively removed in the wavelet-based method, and the results demonstrate that the algorithm provides a robust method for automated peak detection.

  19. A non-parametric peak calling algorithm for DamID-Seq.

    Directory of Open Access Journals (Sweden)

    Renhua Li

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of double sex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.

  20. A non-parametric peak calling algorithm for DamID-Seq.

    Science.gov (United States)

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of double sex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequence data is estimating the averaged behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) reads resampling; 2) reads scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by use of ChIP-Seq on S2 cells, in terms of peak number, location, and peak width.
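
    A heavily simplified sketch of steps 1–4 on binned read counts is given below; the pseudocounts, the bootstrap-based null distribution for fold changes, and the quantile cutoff are illustrative assumptions rather than the published NPPC parameters.

```python
import numpy as np
rng = np.random.default_rng(0)

def nppc_like_call(fusion, control, n_boot=1000, q=0.999, min_reads=10):
    """fusion: Dam-fusion counts per bin; control: Dam-only counts per bin."""
    # 2) scaling: normalize the control to the fusion library depth
    control = control * fusion.sum() / control.sum()
    fold = (fusion + 1.0) / (control + 1.0)       # signal-to-noise fold change
    # 1) bootstrap resampling of control bins to model background fold changes
    boot = rng.choice(control, size=(n_boot, control.size), replace=True)
    null_fold = (boot + 1.0) / (control + 1.0)
    cutoff = np.quantile(null_fold, q)
    # 3) filter sparsely covered bins, 4) call peaks above the null quantile
    return np.flatnonzero((fusion >= min_reads) & (fold > cutoff))
```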

  1. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems and the definition and use of reliability. It also deals with probability and the calculation of reliability; the reliability function and failure rate; probability distributions of reliability; estimation of MTBF; processes of probability distributions; downtime, maintainability and availability; breakdown maintenance and preventive maintenance; reliability design, including design for prediction and statistics; reliability testing; and reliability data and the design and management of reliability.

  2. Peak Experience Project

    Science.gov (United States)

    Scott, Daniel G.; Evans, Jessica

    2010-01-01

    This paper emerges from the continued analysis of data collected in a series of international studies concerning Childhood Peak Experiences (CPEs) based on developments in understanding peak experiences in Maslow's hierarchy of needs initiated by Dr Edward Hoffman. Bridging from the series of studies, Canadian researchers explore collected…

  3. Automated Peak Picking and Peak Integration in Macromolecular NMR Spectra Using AUTOPSY

    Science.gov (United States)

    Koradi, Reto; Billeter, Martin; Engeli, Max; Güntert, Peter; Wüthrich, Kurt

    1998-12-01

    A new approach for automated peak picking of multidimensional protein NMR spectra with strong overlap is introduced, which makes use of the program AUTOPSY (automated peak picking for NMR spectroscopy). The main elements of this program are a novel function for local noise level calculation, the use of symmetry considerations, and the use of lineshapes extracted from well-separated peaks for resolving groups of strongly overlapping peaks. The algorithm generates peak lists with precise chemical shifts and integral intensities, and a reliability measure for the recognition of each peak. The results of automated peak picking of NOESY spectra with AUTOPSY were tested in combination with the combined automated NOESY cross-peak assignment and structure calculation routine NOAH implemented in the program DYANA. The quality of the resulting structures was found to be comparable with that of corresponding data obtained with manual peak picking.

  4. Peak-interviewet

    DEFF Research Database (Denmark)

    Raalskov, Jesper; Warming-Rasmussen, Bent

    The peak interview is a particularly effective method for making unconscious human resources conscious. The focus person (the interviewee) is interviewed about a self-chosen personal success experience. The therapist/coach (the interviewer) asks about the process that led to this success. This uncovers … which the focus person wishes to take up (new goals or new processes). The present working paper describes what is meant by a peak interview, the theoretical foundation of the peak interview, and the methodology for conducting a trusting and effective peak interview.

  5. Improved Peak Detection and Deconvolution of Native Electrospray Mass Spectra from Large Protein Complexes.

    Science.gov (United States)

    Lu, Jonathan; Trnka, Michael J; Roh, Soung-Hun; Robinson, Philip J J; Shiau, Carrie; Fujimori, Danica Galonic; Chiu, Wah; Burlingame, Alma L; Guan, Shenheng

    2015-12-01

    Native electrospray-ionization mass spectrometry (native MS) measures biomolecules under conditions that preserve most aspects of protein tertiary and quaternary structure, enabling direct characterization of large intact protein assemblies. However, native spectra derived from these assemblies are often partially obscured by low signal-to-noise as well as broad peak shapes because of residual solvation and adduction after the electrospray process. The wide peak widths together with the fact that sequential charge state series from highly charged ions are closely spaced means that native spectra containing multiple species often suffer from high degrees of peak overlap or else contain highly interleaved charge envelopes. This situation presents a challenge for peak detection, correct charge state and charge envelope assignment, and ultimately extraction of the relevant underlying mass values of the noncovalent assemblages being investigated. In this report, we describe a comprehensive algorithm developed for addressing peak detection, peak overlap, and charge state assignment in native mass spectra, called PeakSeeker. Overlapped peaks are detected by examination of the second derivative of the raw mass spectrum. Charge state distributions of the molecular species are determined by fitting linear combinations of charge envelopes to the overall experimental mass spectrum. This software is capable of deconvoluting heterogeneous, complex, and noisy native mass spectra of large protein assemblies as demonstrated by analysis of (1) synthetic mononucleosomes containing severely overlapping peaks, (2) an RNA polymerase II/α-amanitin complex with many closely interleaved ion signals, and (3) human TriC complex containing high levels of background noise.

  6. Peak power ratio generator

    Science.gov (United States)

    Moyer, R.D.

    A peak power ratio generator is described for measuring, in combination with a conventional power meter, the peak power level of extremely narrow pulses in the gigahertz radio frequency bands. The present invention in a preferred embodiment utilizes a tunnel diode and back diode combination in a detector circuit as the only high-speed elements. The high-speed tunnel diode provides a bistable signal and serves as a memory device of the input pulses for the remaining, slower components. A hybrid digital and analog loop maintains the peak power level of a reference channel at a known amount. Thus, by measuring the average power levels of the reference signal and the source signal, the peak power level of the source signal can be determined.

  7. Peak-locking centroid bias in Shack-Hartmann wavefront sensing

    Science.gov (United States)

    Anugu, Narsireddy; Garcia, Paulo J. V.; Correia, Carlos M.

    2018-05-01

    Shack-Hartmann wavefront sensing relies on accurate spot centre measurement. Several algorithms were developed with this aim, mostly focused on precision, i.e. minimizing random errors. In the solar and extended-scene community, the importance of the accuracy (bias error due to peak-locking, quantization, or sampling) of the centroid determination was identified and solutions were proposed, but these solutions allow only partial bias correction. To date, no systematic study of the bias error has been conducted. This article bridges the gap by quantifying the bias error for different correlation peak-finding algorithms and types of sub-aperture images, and by proposing a practical solution to minimize its effects. Four classes of sub-aperture images (point source, elongated laser guide star, crowded field, and solar extended scene) together with five types of peak-finding algorithms (1D parabola, centre of gravity, Gaussian, 2D quadratic polynomial, and pyramid) are considered, in a variety of signal-to-noise conditions. The best-performing peak-finding algorithm depends on the sub-aperture image type, but none is satisfactory with respect to both bias and random errors. A practical solution is proposed that relies on the antisymmetric response of the bias to the sub-pixel position of the true centre. The solution decreases the bias by a factor of ~7 to values of ≲ 0.02 pix. The computational cost is typically twice that of current cross-correlation algorithms.
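
    Of the peak-finding algorithms listed, the 1D parabola is the simplest to write down and already exhibits the peak-locking bias in question: the estimated sub-pixel offset is exact only at offsets of 0 and ±0.5 pixel and is biased in between, with the antisymmetric error curve that the proposed correction exploits. A generic three-point sketch (not the authors' implementation):

```python
import numpy as np

def parabola_subpixel(y, i):
    """Sub-pixel peak position from samples y[i-1], y[i], y[i+1]."""
    denom = y[i - 1] - 2.0 * y[i] + y[i + 1]    # curvature (negative at a peak)
    delta = 0.5 * (y[i - 1] - y[i + 1]) / denom # sub-pixel offset estimate
    return i + delta                            # biased estimate of true centre
```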

  8. Leveraging probabilistic peak detection to estimate baseline drift in complex chromatographic samples.

    Science.gov (United States)

    Lopatka, Martin; Barcaru, Andrei; Sjerps, Marjan J; Vivó-Truyols, Gabriel

    2016-01-29

    Accurate analysis of chromatographic data often requires the removal of baseline drift. A frequently employed strategy strives to determine asymmetric weights in order to fit a baseline model by regression. Unfortunately, chromatograms characterized by a very high peak saturation pose a significant challenge to such algorithms, and a low signal-to-noise ratio (s/n) compounds the difficulty. In this work, baseline estimation is achieved by leveraging a probabilistic peak detection algorithm. A posterior probability of being affected by a peak is computed for each point in the chromatogram, leading to a set of weights that allow non-iterative calculation of a baseline estimate. For extremely saturated chromatograms, the peak-weighted (PW) method demonstrates notable improvement compared to the other methods examined. However, in chromatograms characterized by low noise and well-resolved peaks, the asymmetric least squares (ALS) and the more sophisticated Mixture Model (MM) approaches achieve superior results in significantly less time. We evaluate the performance of these three baseline correction methods over a range of chromatographic conditions to demonstrate the cases in which each method is most appropriate. Copyright © 2016 Elsevier B.V. All rights reserved.
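
    The ALS baseline mentioned as a comparison method is compact enough to sketch: iteratively reweighted smoothing in which points above the current baseline (likely peaks) get weight p and points below get 1−p. This follows the widely used Eilers-style formulation; the default parameters are assumed, not taken from the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e6, p=0.01, n_iter=10):
    """Asymmetric least squares baseline estimate (Eilers-style)."""
    n = len(y)
    # Second-difference penalty matrix D so that D @ D.T smooths the baseline
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(n, n - 2))
    w = np.ones(n)
    z = y
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve(W + lam * D @ D.T, w * y)   # penalized weighted smoother
        w = p * (y > z) + (1.0 - p) * (y < z)   # asymmetric reweighting
    return z
```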

  9. Peak Oil, Peak Coal and Climate Change

    Science.gov (United States)

    Murray, J. W.

    2009-05-01

    Research on future climate change is driven by the family of scenarios developed for the IPCC assessment reports. These scenarios create projections of future energy demand using different story lines consisting of government policies, population projections, and economic models. None of these scenarios considers resources to be limiting. In many of these scenarios oil production is still increasing to 2100. Resource limitation (in a geological sense) is a real possibility that needs more serious consideration. The concept of 'Peak Oil' has been discussed since M. King Hubbert proposed in 1956 that US oil production would peak in 1970. His prediction was accurate. This concept is about production rate, not reserves. For many oil-producing countries (and all OPEC countries) reserves are closely guarded state secrets and appear to be overstated. Claims that the reserves are 'proven' cannot be independently verified. Hubbert's linearization model can be used to predict when half the ultimate oil will have been produced and what the ultimate total cumulative production (Qt) will be. US oil production can be used as an example. This conceptual model shows that 90% of the ultimate US oil production (Qt = 225 billion barrels) will have occurred by 2011. The same approach suggests that total global production will be about 2200 billion barrels and that the halfway point will be reached by about 2010. This amount is about 5 to 7 times smaller than assumed in the IPCC scenarios. The decline of non-OPEC oil production appears to have started in 2004. Of the OPEC countries, only Saudi Arabia may have spare capacity, but even that is uncertain because of the lack of data transparency. The concept of 'Peak Coal' is more controversial, but even the US National Academy report in 2007 concluded that only a small fraction of previously estimated reserves in the US are actually minable reserves and that US reserves should be reassessed using modern methods. British coal production can be

  10. Peak regulation right

    International Nuclear Information System (INIS)

    Gao, Z.; Ren, Z.; Li, Z.; Zhu, R.

    2005-01-01

    A peak regulation right concept and a corresponding transaction mechanism for an electricity market were presented. The market was based on a power pool and independent system operator (ISO) model. A peak regulation right (PRR) was defined as a downward-regulation capacity purchase option which allowed PRR owners to buy certain quantities of peak regulation capacity (PRC) at a specific price during a specified period from suppliers. The PRR owner also had the right to decide whether or not to buy PRC from suppliers. It was the power pool's responsibility to provide competitive and fair peak regulation trading markets to participants. The introduction of PRR allowed for unit capacity regulation. The PRR and PRC were rated by the supplier, and transactions proceeded through a bidding process. PRR suppliers obtained profits by selling PRR and PRC, and obtained downward regulation fees regardless of whether purchases were made. It was concluded that the peak regulation mechanism reduced the total cost of the generating system and increased the social surplus. 6 refs., 1 tab., 3 figs

  11. Make peak flow a habit

    Science.gov (United States)

    Asthma - make peak flow a habit; Reactive airway disease - peak flow; Bronchial asthma - peak flow

  12. Automated asteroseismic peak detections

    DEFF Research Database (Denmark)

    de Montellano, Andres Garcia Saravia Ortiz; Hekker, S.; Themessl, N.

    2018-01-01

    Space observatories such as Kepler have provided data that can potentially revolutionize our understanding of stars. Through detailed asteroseismic analyses we are capable of determining fundamental stellar parameters and reveal the stellar internal structure with unprecedented accuracy. However......, such detailed analyses, known as peak bagging, have so far been obtained for only a small percentage of the observed stars while most of the scientific potential of the available data remains unexplored. One of the major challenges in peak bagging is identifying how many solar-like oscillation modes are visible...... of detected oscillation modes. The algorithm presented here opens the possibility for detailed and automated peak bagging of the thousands of solar-like oscillators observed by Kepler....

  13. Using elastic peak electron spectroscopy for enhanced depth resolution in sputter profiling

    International Nuclear Information System (INIS)

    Hofmann, S.; Kesler, V.

    2002-01-01

    Elastic peak electron spectroscopy (EPES) is an alternative to AES in sputter depth profiling of thin-film structures. In contrast to AES, EPES depth profiling is not influenced by chemical effects. The high count rate ensures a good signal-to-noise ratio, i.e. lower measurement times and/or higher precision. In addition, because the elastically scattered electrons travel twice through the sample, the effective escape depth is reduced, an important factor for the depth resolution function; thus, the depth resolution is increased. EPES depth profiling was successfully applied to a Ge/Si multilayer structure. For an elastic peak energy of 1.0 keV the information depth is considerably lower (0.8 nm) than for the Ge (LMM, 1147 eV) peak (1.6 nm) used in AES depth profiling, resulting in a correspondingly improved depth resolution for EPES profiling under otherwise similar profiling conditions. EPES depth profiling was also successfully applied to measure small diffusion lengths, of the order of 1 nm, at Ge/Si interfaces. (Authors)

  14. A novel fast phase correlation algorithm for peak wavelength detection of Fiber Bragg Grating sensors.

    Science.gov (United States)

    Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F

    2014-03-24

    Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC proved to be about 50 times faster than cross-correlation. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
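
    Generic phase correlation, of which the published FPC demodulator is a fast variant, recovers a shift from the phase of the normalized cross-power spectrum. A minimal integer-sample sketch is shown below; the FBG-like Gaussian spectrum is an assumption, and the published algorithm additionally reaches sub-sample resolution, which this simplified version would need peak interpolation to match.

```python
import numpy as np

def phase_corr_shift(ref, meas):
    """Integer-sample shift of `meas` relative to `ref` via phase correlation."""
    R = np.conj(np.fft.fft(ref)) * np.fft.fft(meas)
    R /= np.abs(R) + 1e-15                 # whiten: keep only the phase
    corr = np.fft.ifft(R).real             # delta-like peak at the shift
    s = int(np.argmax(corr))
    return s - len(ref) if s > len(ref) // 2 else s

# FBG-like reflection spectrum (assumed Gaussian) shifted by 7 samples
x = np.arange(1024)
ref = np.exp(-((x - 512) / 20.0) ** 2)
print(phase_corr_shift(ref, np.roll(ref, 7)))   # -> 7
```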

  15. Maximally reliable Markov chains under energy constraints.

    Science.gov (United States)

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
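
    The central claim, that an irreversible linear chain becomes arbitrarily reliable as states are added, can be checked numerically: the total signal-generation time of an n-state chain with equal per-state rates is a sum of n exponential dwell times (an Erlang distribution), so its coefficient of variation falls as 1/sqrt(n). A minimal sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (1, 4, 16, 64):
    # n i.i.d. exponential dwell times, scaled so the mean total time is 1
    dwell = rng.exponential(scale=1.0 / n, size=(200000, n))
    total = dwell.sum(axis=1)
    cv = total.std() / total.mean()
    print(f"n={n:3d}  mean={total.mean():.3f}  CV={cv:.3f}  1/sqrt(n)={1/np.sqrt(n):.3f}")
```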

  16. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It covers the definition and importance of reliability; the development of reliability engineering; the failure rate and the failure probability density function and their types; constant failure rate (CFR) and the exponential distribution; increasing failure rate (IFR), the normal distribution and the Weibull distribution; maintainability and availability; reliability testing and reliability estimation for the exponential, normal and Weibull distribution types; reliability sampling tests; system reliability; reliability design; and failure analysis by FTA.

  17. Statistical analysis of uncertainties of gamma-peak identification and area calculation in particulate air-filter environment radionuclide measurements using the results of a Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) organized intercomparison, Part I: Assessment of reliability and uncertainties of isotope detection and energy precision using artificial spiked test spectra, Part II: Assessment of the true type I error rate and the quality of peak area estimators in relation to type II errors using large numbers of natural spectra

    International Nuclear Information System (INIS)

    Zhang, W.; Zaehringer, M.; Ungar, K.; Hoffman, I.

    2008-01-01

    In this paper, the uncertainties of gamma-ray small peak analysis have been examined. As the intensity of a gamma-ray peak approaches its detection decision limit, derived parameters such as centroid channel energy, peak area, peak area uncertainty, baseline determination, and peak significance are statistically sensitive. The intercomparison exercise organized by the CTBTO provided an excellent opportunity for this to be studied. Near background levels, the false-positive and false-negative peak identification frequencies in artificial test spectra have been compared to statistically predictable limiting values. In addition, naturally occurring radon progeny were used to compare observed variance against nominal uncertainties. The results indicate that the applied fit algorithms do not always represent the best estimator. Understanding the statistically predicted peak-finding limit is important for data evaluation and analysis assessment. Furthermore, these results are useful to optimize analytical procedures to achieve the best results.

  18. Multiscale peak detection in wavelet space.

    Science.gov (United States)

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-07

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, as it can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD), which takes full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It achieves a high accuracy by thresholding each detected peak with the maximum of its ridge, which makes it particularly suitable for detecting peaks in high-throughput analytical signals. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset as well as the Romanian database of Raman spectra. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than the MassSpecWavelet and MALDIquant methods. Superior results on Raman spectra suggest that MSPD is a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at .
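
    MSPD itself is distributed as a Python package; as a rough stand-in for the idea of identifying peaks across scales in wavelet space, SciPy's CWT-based detector can be sketched as follows (the synthetic spectrum and width range are arbitrary choices, not the paper's settings):

```python
import numpy as np
from scipy.signal import find_peaks_cwt

# synthetic spectrum: two peaks on a sloping baseline with random noise
x = np.linspace(0.0, 100.0, 2000)
clean = np.exp(-(x - 30.0)**2 / 2.0) + 0.6 * np.exp(-(x - 60.0)**2 / 8.0)
noisy = clean + 0.2 * x / 100.0 + np.random.normal(0.0, 0.03, x.size)

# a peak must persist over a range of widths (scales) to be accepted,
# which implicitly suppresses both the noise and the baseline
peak_idx = find_peaks_cwt(noisy, widths=np.arange(5, 40))
print("detected peak positions:", x[peak_idx])
```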

  19. Peak reading detector circuit

    International Nuclear Information System (INIS)

    Courtin, E.; Grund, K.; Traub, S.; Zeeb, H.

    1975-01-01

    The peak reading detector circuit serves to pick up the instants at which peaks of a given polarity occur in sequences of signals in which the extreme values, their time intervals, and the curve shape of the signals vary. Such signal sequences appear in measuring the foetal heart beat frequency from amplitude-modulated ultrasonic, electrocardiogram, and blood pressure signals. In order to prevent undesired emission of output signals caused by, e.g., disturbing intermediate extreme values, the circuit consists of the series connection of a circuit simulating an ideal diode, a storage unit, a discriminator for the direction of the charging current, a time-delay circuit, and an electronic switch lying in the discharging circuit of the storage unit. The time-delay circuit causes a preliminary maximum value to be stored and used for the emission of the output signal only after a certain time delay. If a larger extreme value occurs during the delay time, the preliminary maximum value is cleared and the delay time starts running anew. (DG/PB) [de

  20. Automation of peak-tracking analysis of stepwise perturbed NMR spectra

    Energy Technology Data Exchange (ETDEWEB)

    Banelli, Tommaso; Vuano, Marco [Università di Udine, Dipartimento di Area Medica (Italy); Fogolari, Federico [INBB (Italy); Fusiello, Andrea [Università di Udine, Dipartimento Politecnico di Ingegneria e Architettura (Italy); Esposito, Gennaro [INBB (Italy); Corazza, Alessandra, E-mail: alessandra.corazza@uniud.it [Università di Udine, Dipartimento di Area Medica (Italy)

    2017-02-15

    We describe a new algorithmic approach able to automatically pick and track the NMR resonances of a large number of 2D NMR spectra acquired during a stepwise variation of a physical parameter. The method has been named Trace in Track (TinT), referring to the idea that a Gaussian decomposition traces peaks within the tracks recognised through 3D mathematical morphology. It is capable of determining the evolution of the chemical shifts, intensity and linewidths of each tracked peak. The performance obtained in terms of track reconstruction and correct assignment on realistic synthetic spectra was above 90% when a noise level similar to that of experimental data was considered. TinT was applied successfully to several protein systems during a temperature ramp in isotope exchange experiments. A comparison with a state-of-the-art algorithm showed promising results for large numbers of spectra and low signal to noise ratios, when the graduality of the perturbation is appropriate. TinT can be applied to different kinds of high-throughput chemical shift mapping experiments, with quasi-continuous variations, in which a quantitative automated recognition is crucial.

  1. Derivation from first principles of the statistical distribution of the mass peak intensities of MS data.

    Science.gov (United States)

    Ipsen, Andreas

    2015-02-03

    Despite the widespread use of mass spectrometry (MS) in a broad range of disciplines, the nature of MS data remains very poorly understood, and this places important constraints on the quality of MS data analysis as well as on the effectiveness of MS instrument design. In the following, a procedure for calculating the statistical distribution of the mass peak intensity for MS instruments that use analog-to-digital converters (ADCs) and electron multipliers is presented. It is demonstrated that the physical processes underlying the data-generation process, from the generation of the ions to the signal induced at the detector, and on to the digitization of the resulting voltage pulse, result in data that can be well-approximated by a Gaussian distribution whose mean and variance are determined by physically meaningful instrumental parameters. This allows for a very precise understanding of the signal-to-noise ratio of mass peak intensities and suggests novel ways of improving it. Moreover, it is a prerequisite for being able to address virtually all data analytical problems in downstream analyses in a statistically rigorous manner. The model is validated with experimental data.

  2. Time-frequency peak filtering for random noise attenuation of magnetic resonance sounding signal

    Science.gov (United States)

    Lin, Tingting; Zhang, Yang; Yi, Xiaofeng; Fan, Tiehu; Wan, Ling

    2018-05-01

    When measuring in a geomagnetic field, the method of magnetic resonance sounding (MRS) is often limited by its notably low signal-to-noise ratio (SNR). Most current studies focus on discarding spiky noise and on cancelling power-line harmonic noise. However, the effects of random noise should not be underestimated. The common method for random noise attenuation is stacking, but collecting multiple recordings merely to suppress random noise is time-consuming. Moreover, stacking is insufficient to suppress high-level random noise. Here, we propose the use of time-frequency peak filtering for random noise attenuation, performed after the traditional de-spiking and power-line harmonic removal. By encoding the noisy signal with frequency modulation and estimating the instantaneous frequency using the peak of the time-frequency representation of the encoded signal, the desired MRS signal can be acquired from only one stack. The performance of the proposed method is tested on synthetic envelope signals and field data from different surveys. Good estimations of the signal parameters are obtained at different SNRs. Moreover, an attempt to use the proposed method to handle a single recording provides better results compared to 16 stacks. Our results suggest that the number of stacks can be appropriately reduced to shorten the measurement time and improve the measurement efficiency.
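
    The encode-then-ridge step can be sketched in a few lines (a bare-bones illustration of the idea, not the authors' implementation; the modulation index and window length are arbitrary): the noisy envelope is frequency-encoded into a unit-amplitude analytic signal, and the ridge of its time-frequency representation gives back a noise-reduced amplitude estimate.

```python
import numpy as np
from scipy.signal import stft

# time-frequency peak filtering (TFPF), bare-bones sketch
n = 1000
t = np.arange(n) / 1000.0
clean = np.exp(-t / 0.3)                        # MRS-like decaying envelope
noisy = clean + np.random.normal(0.0, 0.2, n)

mu = 0.25                                       # modulation index: IF = mu * x[n]
z = np.exp(2j * np.pi * mu * np.cumsum(noisy))  # frequency-modulation encoding

# the ridge (peak) of the time-frequency representation tracks the
# instantaneous frequency, which encodes the signal amplitude
f, tt, Z = stft(z, fs=1.0, nperseg=64, return_onesided=False)
ridge = f[np.argmax(np.abs(Z), axis=0)]         # cycles/sample per time slice
estimate = ridge / mu                           # decode back to amplitude units
print(np.round(estimate[:8], 2))
```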

  3. Mask effects on cosmological studies with weak-lensing peak statistics

    International Nuclear Information System (INIS)

    Liu, Xiangkun; Pan, Chuzhong; Fan, Zuhui; Wang, Qiao

    2014-01-01

    With numerical simulations, we analyze in detail how the bad data removal, i.e., the mask effect, can influence the peak statistics of the weak-lensing convergence field reconstructed from the shear measurement of background galaxies. It is found that high peak fractions are systematically enhanced because of the presence of masks; the larger the masked area is, the higher the enhancement is. In the case where the total masked area is about 13% of the survey area, the fraction of peaks with signal-to-noise ratio ν ≥ 3 is ∼11% of the total number of peaks, compared with ∼7% in the mask-free case for the cosmological model we consider. This can have significant effects on cosmological studies with weak-lensing convergence peak statistics, inducing a large bias in the parameter constraints if the effects are not taken into account properly. Even for a survey area of 9 deg², the bias in (Ω_m, σ_8) is already intolerably large and close to 3σ. It is noted that most of the affected peaks are close to the masked regions. Therefore, excluding peaks in those regions in the peak statistics can reduce the bias effect, but at the expense of losing usable survey area. Further investigations find that the enhancement of the number of high peaks around the masked regions can be largely attributed to the smaller number of galaxies usable in the weak-lensing convergence reconstruction, leading to higher noise than in the areas away from the masks. We thus develop a model in which we exclude only those very large masks with radius larger than 3' but keep all the other masked regions in the peak counting statistics. For the remaining part, we treat the areas close to and away from the masked regions separately, with different noise levels. It is shown that this two-noise-level model can account for the mask effect on peak statistics very well, and the bias in cosmological parameters is significantly reduced if this model is applied in the parameter fitting.
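
    The basic bookkeeping, counting local maxima of the convergence signal-to-noise field above a threshold with a higher noise level near masked regions, can be illustrated with a toy field (this is not the paper's simulation pipeline; the grid size and noise factor are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
noise = rng.normal(0.0, 1.0, (n, n))          # nu = kappa/sigma, pure-noise field
mask = np.zeros((n, n), dtype=bool)
mask[200:300, 200:300] = True                 # region near a mask: fewer galaxies
field = np.where(mask, noise * 1.5, noise)    # hence 50% larger noise there

def count_peaks(f, nu=3.0):
    # local maxima above the threshold (interior pixels, 4-neighbour test)
    c = f[1:-1, 1:-1]
    is_max = ((c > f[:-2, 1:-1]) & (c > f[2:, 1:-1]) &
              (c > f[1:-1, :-2]) & (c > f[1:-1, 2:]))
    return int(np.sum(is_max & (c >= nu)))

# the field with the noisy "masked" region yields more high peaks
print(count_peaks(noise), count_peaks(field))
```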

  4. Reliability of stellar inclination estimated from asteroseismology: analytical criteria, mock simulations and Kepler data analysis

    Science.gov (United States)

    Kamiaka, Shoya; Benomar, Othman; Suto, Yasushi

    2018-05-01

    Advances in asteroseismology of solar-like stars now provide a unique method to estimate the stellar inclination i⋆. This enables evaluation of the spin-orbit angle of transiting planetary systems, in a complementary fashion to the Rossiter-McLaughlin effect, a well-established method to estimate the projected spin-orbit angle λ. Although the asteroseismic method has been broadly applied to the Kepler data, its reliability has yet to be assessed intensively. In this work, we evaluate the accuracy of i⋆ from asteroseismology of solar-like stars using 3000 simulated power spectra. We find that the low signal-to-noise ratio of the power spectra induces a systematic under-estimate (over-estimate) bias for stars with high (low) inclinations. We derive analytical criteria for a reliable asteroseismic estimate, which indicate that reliable measurements are possible in the range of 20° ≲ i⋆ ≲ 80° only for stars with a high signal-to-noise ratio. We also analyse and measure the stellar inclination of 94 Kepler main-sequence solar-like stars, among which 33 are planetary hosts. According to our reliability criteria, a third of them (9 with planets, 22 without) have accurate stellar inclinations. Comparison of our asteroseismic estimates of v sin i⋆ against spectroscopic measurements indicates that the latter suffer from a large uncertainty, possibly due to the modelling of macro-turbulence, especially for stars with projected rotation speed v sin i⋆ ≲ 5 km/s. This reinforces earlier claims, and the stellar inclination estimated from the combination of measurements from spectroscopy and photometric variation for slowly rotating stars needs to be interpreted with caution.

  5. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Science.gov (United States)

    Corsaro, Enrico; De Ridder, Joris

    2015-09-01

    The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role for studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes for single oscillation modes, together with acoustic glitches, are therefore presented.
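
    The detection criterion rests on comparing Bayesian evidences. A toy version for a single candidate mode, a Lorentzian profile plus flat background against background alone, with the evidence integral done by brute force over uniform priors, might look like this (the exponential likelihood is the standard one for power-spectrum bins; the grid, priors and parameter values are illustrative, and the background is held fixed for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
freq = np.linspace(-5.0, 5.0, 200)              # muHz around a candidate mode

def model(h, w, b):
    return h / (1 + (2 * freq / w)**2) + b      # Lorentzian mode + background

true = model(4.0, 1.0, 1.0)
power = rng.exponential(true)                   # chi^2 (2 d.o.f.) spectrum noise

def loglike(m):
    # exponential likelihood of the observed power about the limit spectrum m
    return np.sum(-np.log(m) - power / m)

# evidence of "mode + background" via brute-force average over uniform priors
hs = np.linspace(0.1, 10.0, 60)
ws = np.linspace(0.2, 3.0, 40)
logL1 = [loglike(model(h, w, 1.0)) for h in hs for w in ws]
logZ1 = np.logaddexp.reduce(logL1) - np.log(len(logL1))
logZ0 = loglike(model(0.0, 1.0, 1.0))           # background only, no free params

print("ln Bayes factor (mode vs. no mode):", logZ1 - logZ0)
```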

  6. Peak Bagging of red giant stars observed by Kepler: first results with a new method based on Bayesian nested sampling

    Directory of Open Access Journals (Sweden)

    Corsaro Enrico

    2015-01-01

    Full Text Available The peak bagging analysis, namely the fitting and identification of single oscillation modes in stars' power spectra, coupled to the very high-quality light curves of red giant stars observed by Kepler, can play a crucial role for studying stellar oscillations of different flavor with an unprecedented level of detail. A thorough study of stellar oscillations would thus allow for deeper testing of stellar structure models and new insights in stellar evolution theory. However, peak bagging inferences are in general very challenging problems due to the large number of observed oscillation modes, hence of free parameters that can be involved in the fitting models. Efficiency and robustness in performing the analysis is what may be needed to proceed further. For this purpose, we developed a new code implementing the Nested Sampling Monte Carlo (NSMC) algorithm, a powerful statistical method well suited for Bayesian analyses of complex problems. In this talk we show the peak bagging of a sample of high signal-to-noise red giant stars by exploiting recent Kepler datasets and a new criterion for the detection of an oscillation mode based on the computation of the Bayesian evidence. Preliminary results for frequencies and lifetimes for single oscillation modes, together with acoustic glitches, are therefore presented.

  7. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  8. Robust Peak Recognition in Intracranial Pressure Signals

    Directory of Open Access Journals (Sweden)

    Bergsneider Marvin

    2010-10-01

    Full Text Available Background: The waveform morphology of intracranial pressure (ICP) pulses is an essential indicator for monitoring and forecasting critical intracranial and cerebrovascular pathophysiological variations. While current ICP pulse analysis frameworks offer satisfying results on most pulses, we observed that the performance of several of them deteriorates significantly on abnormal, or simply more challenging, pulses. Methods: This paper provides two contributions to this problem. First, it introduces MOCAIP++, a generic ICP pulse processing framework that generalizes MOCAIP (Morphological Clustering and Analysis of ICP Pulse). Its strength is to integrate several peak recognition methods to describe ICP morphology, and to exploit different ICP features to improve peak recognition. Second, it investigates the effect of incorporating automatically identified challenging pulses into the training set of peak recognition models. Results: Experiments on a large dataset of ICP signals, as well as on a representative collection of sampled challenging ICP pulses, demonstrate that both contributions are complementary and significantly improve peak recognition performance in clinical conditions. Conclusion: The proposed framework allows more reliable statistics about the ICP waveform morphology to be extracted from challenging pulses, in order to investigate the predictive power of these pulses on the condition of the patient.

  9. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. Here, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  10. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...

  11. NOISY WEAK-LENSING CONVERGENCE PEAK STATISTICS NEAR CLUSTERS OF GALAXIES AND BEYOND

    International Nuclear Information System (INIS)

    Fan Zuhui; Shan Huanyuan; Liu Jiayi

    2010-01-01

    Taking into account noise from intrinsic ellipticities of source galaxies, in this paper, we study the peak statistics in weak-lensing convergence maps around clusters of galaxies and beyond. We emphasize how the noise peak statistics is affected by the density distribution of nearby clusters, and also how cluster-peak signals are changed by the existence of noise. These are the important aspects to be thoroughly understood in weak-lensing analyses for individual clusters as well as in cosmological applications of weak-lensing cluster statistics. We adopt Gaussian smoothing with the smoothing scale θ_G = 0.5 arcmin in our analyses. It is found that the noise peak distribution near a cluster of galaxies sensitively depends on the density profile of the cluster. For a cored isothermal cluster with the core radius R_c, the inner region with R ≤ R_c appears noisy, containing on average ∼2.4 peaks with ν ≥ 5 for R_c = 1.7 arcmin and the true peak height of the cluster ν = 5.6, where ν denotes the convergence signal-to-noise ratio. For a Navarro-Frenk-White (NFW) cluster of the same mass and the same central ν, the average number of peaks with ν ≥ 5 within R ≤ R_c is ∼1.6. Thus a high peak corresponding to the main cluster can be identified more cleanly in the NFW case. In the outer region with R > R_c, the number of high noise peaks is considerably enhanced in comparison with that of the pure noise case without the nearby cluster. For ν ≥ 4, depending on the treatment of the mass-sheet degeneracy in weak-lensing analyses, the enhancement factor f is in the range of ∼5 to ∼55 for both clusters, as their outer density profiles are similar. The properties of the main-cluster peak identified in convergence maps are also significantly affected by the presence of noise. Scatters as well as a systematic shift in the peak height are present. The height distribution is peaked at ν ∼ 6.6, rather than at ν = 5.6, corresponding to a shift of Δν ∼ 1

  12. Conductive graphene as passive saturable absorber with high instantaneous peak power and pulse energy in Q-switched regime

    Science.gov (United States)

    Zuikafly, Siti Nur Fatin; Khalifa, Ali; Ahmad, Fauzan; Shafie, Suhaidi; Harun, Sulaiman Wadi

    2018-06-01

    The Q-switched pulse regime is demonstrated by integrating conductive graphene as a passive saturable absorber, producing relatively high instantaneous peak power and pulse energy. The fabricated conductive graphene is investigated using Raman spectroscopy. The single wavelength Q-switching operates at 1558.28 nm at a maximum input pump power of 151.47 mW. As the pump power is increased from the threshold power of 51.6 mW to 151.47 mW, the pulse train repetition rate increases proportionally from 47.94 kHz to 67.8 kHz while the pulse width is reduced from 9.58 μs to 6.02 μs. The generated stable pulse produced a maximum peak power and pulse energy of 32 mW and 206 nJ, respectively. The first beat note of the measured signal-to-noise ratio is about 62 dB, indicating high pulse stability.
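
    As a consistency check, the peak power of a Q-switched pulse can be approximated as pulse energy divided by pulse width (an approximation that ignores the exact pulse shape):

```python
pulse_energy = 206e-9   # J, reported maximum pulse energy
pulse_width = 6.02e-6   # s, minimum pulse width at maximum pump power
peak_power = pulse_energy / pulse_width
print(peak_power)       # ~0.034 W, consistent with the reported 32 mW
```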

  13. Prediction of peak overlap in NMR spectra

    International Nuclear Information System (INIS)

    Hefke, Frederik; Schmucki, Roland; Güntert, Peter

    2013-01-01

    Peak overlap is one of the major factors complicating the analysis of biomolecular NMR spectra. We present a general method for predicting the extent of peak overlap in multidimensional NMR spectra and its validation using both experimental data sets and Monte Carlo simulation. The method is based on knowledge of the magnetization transfer pathways of the NMR experiments and chemical shift statistics from the Biological Magnetic Resonance Data Bank. Assuming a normal distribution with characteristic mean value and standard deviation for the chemical shift of each observable atom, an analytic expression was derived for the expected overlap probability of the cross peaks. The analytical approach was verified to agree with the average peak overlap in a large number of individual peak lists simulated using the same chemical shift statistics. The method was applied to eight proteins, including an intrinsically disordered one, for which the prediction results could be compared with the actual overlap based on the experimentally measured chemical shifts. The extent of overlap predicted using only statistical chemical shift information was in good agreement with the overlap that was observed when the measured shifts were used in the virtual spectrum, except for the intrinsically disordered protein. Since the spectral complexity of a protein NMR spectrum is a crucial factor for protein structure determination, analytical overlap prediction can be used to identify potentially difficult proteins before conducting NMR experiments. Overlap predictions can be tailored to particular classes of proteins by preparing statistics from corresponding protein databases. The method is also suitable for optimizing recording parameters and labeling schemes for NMR experiments and improving the reliability of automated spectra analysis and protein structure determination.
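
    For a pair of cross peaks in one dimension, the analytic idea reduces to a normal-distribution calculation (a simplified sketch of the assumed form; the shift statistics and resolution limit below are illustrative, not BMRB values): if two shifts are independently normal, their difference is normal too, and the probability that they fall within a resolution limit follows from the normal CDF.

```python
import numpy as np
from scipy.stats import norm

def overlap_probability(mu1, sd1, mu2, sd2, tol):
    """P(|x1 - x2| < tol) for independent normal shifts x1 and x2."""
    d = norm(loc=mu1 - mu2, scale=np.hypot(sd1, sd2))
    return d.cdf(tol) - d.cdf(-tol)

# e.g. two amide protons with database-like statistics (values illustrative)
print(overlap_probability(8.3, 0.6, 8.1, 0.6, tol=0.02))
```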

  14. Peak globalization. Climate change, oil depletion and global trade

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Fred [Department of Economics, Drew University, Madison, NJ 07940 (United States)

    2009-12-15

    The global trade in goods depends upon reliable, inexpensive transportation of freight along complex and long-distance supply chains. Global warming and peak oil undermine globalization by their effects on both transportation costs and the reliable movement of freight. Countering the current geographic pattern of comparative advantage with higher transportation costs, climate change and peak oil will thus result in peak globalization, after which the volume of exports will decline as measured by ton-miles of freight. Policies designed to mitigate climate change and peak oil are very unlikely to change this result due to their late implementation, contradictory effects and insufficient magnitude. The implication is that supply chains will become shorter for most products and that production of goods will be located closer to where they are consumed. (author)

  15. Peak globalization. Climate change, oil depletion and global trade

    International Nuclear Information System (INIS)

    Curtis, Fred

    2009-01-01

    The global trade in goods depends upon reliable, inexpensive transportation of freight along complex and long-distance supply chains. Global warming and peak oil undermine globalization by their effects on both transportation costs and the reliable movement of freight. Countering the current geographic pattern of comparative advantage with higher transportation costs, climate change and peak oil will thus result in peak globalization, after which the volume of exports will decline as measured by ton-miles of freight. Policies designed to mitigate climate change and peak oil are very unlikely to change this result due to their late implementation, contradictory effects and insufficient magnitude. The implication is that supply chains will become shorter for most products and that production of goods will be located closer to where they are consumed. (author)

  16. Upper limit of peak area

    International Nuclear Information System (INIS)

    Helene, O.A.M.

    1982-08-01

    The determination of the upper limit of the peak area in multi-channel spectra, with a known significance level, is discussed. This problem is especially important when the peak area is masked by the statistical fluctuations of the background. The problem is solved exactly and, thus, the results are valid in experiments with a small number of events. The results are submitted to a Monte Carlo test and applied to the 92Nb beta decay. (Author) [pt

  17. Accuracy and Reliability of the Kinect Version 2 for Clinical Measurement of Motor Function.

    Directory of Open Access Journals (Sweden)

    Karen Otte

    Full Text Available The introduction of low cost optical 3D motion tracking sensors provides new options for effective quantification of motor dysfunction. The present study aimed to evaluate the Kinect V2 sensor against a gold standard motion capture system with respect to accuracy of tracked landmark movements and accuracy and repeatability of derived clinical parameters. Nineteen healthy subjects were concurrently recorded with a Kinect V2 sensor and an optical motion tracking system (Vicon). Six different movement tasks were recorded with 3D full-body kinematics from both systems. Tasks included walking in different conditions, balance and adaptive postural control. After temporal and spatial alignment, agreement of movement signals was described by Pearson's correlation coefficient and signal to noise ratios per dimension. From these movement signals, 45 clinical parameters were calculated, including ranges of motion, torso sway, movement velocities and cadence. Accuracy of parameters was described as absolute agreement, consistency agreement and limits of agreement. Intra-session reliability of 3 to 5 measurement repetitions was described as the repeatability coefficient and standard error of measurement for each system. Accuracy of Kinect V2 landmark movements was moderate to excellent and depended on movement dimension, landmark location and performed task. The signal to noise ratio provided information about Kinect V2 landmark stability and indicated larger noise in feet and ankles. Most of the derived clinical parameters showed good to excellent absolute agreement (30 parameters showed ICC(3,1) > 0.7) and consistency (38 parameters showed r > 0.7) between both systems. Given that this system is low-cost, portable and does not require any sensors to be attached to the body, it could provide numerous advantages when compared to established marker- or wearable-sensor based systems. The Kinect V2 has the potential to be used as a reliable and valid clinical

  18. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

    This book gives a practical guide for designers and users in Information and Communication Technology context. In particular, in the first Section, the definition of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing the laboratory tests, puts in evidence the reliability concept from the experimental point of view. In ICT context, the failure rate for a given system can be

  19. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  20. Peak Oil and other threatening peaks-Chimeras without substance

    International Nuclear Information System (INIS)

    Radetzki, Marian

    2010-01-01

    The Peak Oil movement has widely spread its message about an impending peak in global oil production, caused by an inadequate resource base. On closer scrutiny, the underlying analysis is inconsistent, void of a theoretical foundation and without support in empirical observations. Global oil resources are huge and expanding, and pose no threat to continuing output growth within an extended time horizon. In contrast, temporary or prolonged supply crunches are indeed plausible, even likely, on account of growing resource nationalism denying access to efficient exploitation of the existing resource wealth.

  1. Electricity Portfolio Management: Optimal Peak / Off-Peak Allocations

    OpenAIRE

    Huisman, Ronald; Mahieu, Ronald; Schlichter, Felix

    2007-01-01

    Electricity purchasers manage a portfolio of contracts in order to purchase the expected future electricity consumption profile of a company or a pool of clients. This paper proposes a mean-variance framework to address the concept of structuring the portfolio and focuses on how to allocate optimal positions in peak and off-peak forward contracts. It is shown that the optimal allocations are based on the difference in risk premiums per unit of day-ahead risk as a measure of relati...

  2. Ultrasonic Transducer Peak-to-Peak Optical Measurement

    Directory of Open Access Journals (Sweden)

    Pavel Skarvada

    2012-01-01

    Full Text Available Possible optical setups for measurement of the peak-to-peak value of an ultrasonic transducer are described in this work. A Michelson interferometer with a calibrated nanopositioner in the reference path and a laser Doppler vibrometer were used for the basic measurement of vibration displacement. A Langevin-type ultrasonic transducer is used for the purposes of Electro-Ultrasonic Nonlinear Spectroscopy (EUNS). The parameters of the produced mechanical vibration have to be well known for EUNS. Moreover, monitoring of the mechanical vibration frequency shift with mass load and sample-transducer coupling is important for EUNS measurement.

  3. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)

  4. Peaking-factor of PWR

    International Nuclear Information System (INIS)

    Morioka, Noboru; Kato, Yasuji; Yokoi, M.

    1975-01-01

    The output peaking factor often plays an important role in the safety and operation of nuclear reactors. The peaking factor of PWRs has two aspects: the peaking factor realized in the core (FQ-core) and the peaking factor derived from accident analysis (FQ-limit). FQ-core is the actual peaking factor in the core during normal operation, while FQ-limit is evaluated from the loss of coolant accident and other abnormal conditions. If FQ-core is lower than FQ-limit, the reactor may be operated at full load; if FQ-core exceeds FQ-limit, the reactor output should be reduced accordingly. FQ-core takes two kinds of values: one based on the nuclear design, named FQ-core-design, and one actually measured during reactor operation, named FQ-core-measured. FQ-core-design is evaluated as follows: the three-dimensional value is synthesized from the horizontal (X-Y) FQ-core value, calculated with the ASSY-CORE code, and the vertical FQ-core value, calculated with a one-dimensional diffusion code. FQ-core-measured is evaluated from on-site data from the reactor instrumentation or from off-site data. (Iwase, T.)
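
    The synthesis described above amounts to multiplying the horizontal and axial factors and comparing the result against the accident-analysis limit. A schematic with made-up numbers (none of these values come from the abstract):

```python
# Illustrative synthesis of the design peaking factor (values hypothetical):
# FQ-core-design is the product of the horizontal (X-Y) factor from the
# assembly/core calculation and the axial factor from a 1-D diffusion run.
f_xy = 1.55        # hypothetical horizontal peaking factor
f_z = 1.40         # hypothetical axial peaking factor
fq_core_design = f_xy * f_z
fq_limit = 2.32    # hypothetical accident-analysis limit

# full-load operation is acceptable only if the design value stays below
# the accident-analysis limit
print(fq_core_design, fq_core_design <= fq_limit)
```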

  5. How to use your peak flow meter

    Science.gov (United States)

    Alternative names: Peak flow meter - how to use; Asthma - peak flow meter; Reactive airway disease - peak flow meter; Bronchial asthma - peak flow meter. References: ... 2014: chap 55; National Asthma Education and Prevention Program website, How to use a peak flow meter.

  6. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology, whereas this has yet to be fully achieved for large scale structures. Structural loading variations over the lifetime of the plant are considered to be more difficult to analyse than those for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions which enter this problem are considered. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  7. Achieving high signal-to-noise performance for a velocity-map imaging experiment

    International Nuclear Information System (INIS)

    Roberts, E.H.; Cavanagh, S.J.; Gibson, S.T.; Lewis, B.R.; Dedman, C.J.; Picker, G.J.

    2005-01-01

    Since the publication of the pioneering paper on velocity-map imaging in 1997, by Eppink and Parker [A.T.J.B. Eppink, D.H. Parker, Rev. Sci. Instrum. 68 (1997) 3477], numerous groups have applied this method in a variety of ways and to various targets. However, despite this interest, little attention has been given to the inherent difficulties and problems associated with this method. In implementing a velocity-map imaging system for photoelectron spectroscopy for the photo-detachment of anion radicals, we have developed a coaxial velocity-map imaging spectrometer. Examined are the advantages and disadvantages of such a system, in particular the sources of noise and the methods used to reduce it

  8. Signal to noise : listening for democracy and environment in climate change discourse

    Energy Technology Data Exchange (ETDEWEB)

    Glover, L. [Delaware Univ., Newark, DE (United States)

    2000-06-01

    This paper discussed the importance of active involvement by civic society in achieving long term greenhouse gas (GHG) emission reduction targets to stabilize atmospheric GHG gas concentrations. On the basis of the attempted GHG reductions by Annex I nations in the first reporting period under the UN Framework Convention on Climate Change (FCCC), climate change policy was generally a failure. Few developed nations managed to return annual emissions to anywhere near 1990 levels. This paper focused on the failures in national climate change policy in the United States and Australia in reducing GHG emissions. The author stated that the cause of these failures was not due to communication inadequacies between governments and the general public. National policy formulation processes have been characterized by minimal community input and low discourse over the ethical and practical implications of ecological justice. It was emphasized that civic society should be engaged in longer-term policy formulations to effectively overcome the limitations currently imposed by liberal-democratic nation states and ecological modernisation policy approaches. It was cautioned that until civic society is involved, progress will be bound by the contradictions of seeking to create ecologically-minded communities through governance that fails to explain the relationships between social behaviour and global ecology. 45 refs.

  9. Speed of response, pile-up and signal to noise ratio in liquid ionization calorimeters

    International Nuclear Information System (INIS)

    Colas, J.

    1989-11-01

    Although liquid ionization calorimeters have been mostly used up to now with slow readout, their signals have a fast rise time. However, it is not easy to get this fast component of the pulse out of the calorimeter. For this purpose a new connection scheme of the electrodes, the electrostatic transformer, is presented and discussed. This technique reduces the detector capacitance while keeping the number of channels at an acceptable level. It also allows the use of transmission lines to bring signals from the electrodes to the preamplifiers, which could be located in an accessible area. With room temperature liquids the length of these cables can be short, keeping the added noise at a reasonable level. Contributions to the error in the energy measurement from pile-up and electronic noise are studied in detail. Even on this issue, room temperature liquids (TMP/TMS) are found to be competitive with cold liquid argon at the expense of a moderately higher gap voltage

  10. Phased array technique for low signal-to-noise ratio wind tunnels, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Closed wind tunnel beamforming for aeroacoustics has become more and more prevalent in recent years. Still, there are major drawbacks as current microphone arrays...

  11. Signal to noise ratio enhancement for Eddy Current testing of steam generator tubes in PWR's

    International Nuclear Information System (INIS)

    Georgel, B.

    1985-01-01

    Noise reduction is a compulsory task when we try to recognize and characterize flaws. The signals we deal with come from Eddy Current testing of steam generator steel tubes in PWRs. We point out the need for a spectral invariant in the digital spectral analysis of two-component signals. We make clear the pros and cons of classical passband filtering and suggest the use of a new noise cancellation method first discussed by Moriwaki and Tlusty. We generalize this technique and prove that it is a special case of the well-known Wiener filter. In that sense the M-T method is shown to be optimal. 6 refs

  12. Signal-to-noise ratio comparison of angular signal radiography and phase stepping method

    Science.gov (United States)

    Faiz, Wali; Zhu, Peiping; Hu, Renfang; Gao, Kun; Wu, Zhao; Bao, Yuan; Tian, Yangchao

    2017-12-01

    Not available. Project supported by the National Research and Development Project for Key Scientific Instruments (Grant No. CZBZDYZ20140002), the National Natural Science Foundation of China (Grant Nos. 11535015, 11305173, and 11375225), the project supported by Institute of High Energy Physics, Chinese Academy of Sciences (Grant No. Y4545320Y2), and the Fundamental Research Funds for the Central Universities (Grant No. WK2310000065). The author, Wali Faiz, acknowledges and wishes to thank the Chinese Academy of Sciences and The World Academy of Sciences (CAS-TWAS) President's Fellowship Program for generous financial support.

  13. Speed of response, pile-up, and signal to noise ratio in liquid ionization calorimeters

    International Nuclear Information System (INIS)

    Colas, J.

    1989-06-01

    Although liquid ionization calorimeters have been mostly used up to now with slow readout, their signals have a fast rise time. However, it is not easy to get this fast component of the pulse out of the calorimeter. For this purpose a new connection scheme of the electrodes, the "electrostatic transformer," is presented. This technique reduces the detector capacitance while keeping the number of channels at an acceptable level. It also allows the use of transmission lines to bring signals from the electrodes to the preamplifiers, which could be located in an accessible area. With room temperature liquids the length of these cables can be short, keeping the added noise at a reasonable level. Contributions to the error in the energy measurement from pile-up and electronic noise are studied in detail. Even on this issue, room temperature liquids (TMP/TMS) are found to be competitive with cold liquid argon at the expense of a moderately higher gap voltage. 5 refs., 9 figs., 2 tabs

  14. Exponential signaling gain at the receptor level enhances signal-to-noise ratio in bacterial chemotaxis.

    Directory of Open Access Journals (Sweden)

    Silke Neumann

    Full Text Available Cellular signaling systems show astonishing precision in their response to external stimuli despite strong fluctuations in the molecular components that determine pathway activity. To control the effects of noise on signaling most efficiently, living cells employ compensatory mechanisms that reach from simple negative feedback loops to robustly designed signaling architectures. Here, we report on a novel control mechanism that allows living cells to keep precision in their signaling characteristics - stationary pathway output, response amplitude, and relaxation time - in the presence of strong intracellular perturbations. The concept relies on the surprising fact that for systems showing perfect adaptation an exponential signal amplification at the receptor level suffices to eliminate slowly varying multiplicative noise. To show this mechanism at work in living systems, we quantified the response dynamics of the E. coli chemotaxis network after genetically perturbing the information flux between upstream and downstream signaling components. We give strong evidence that this signaling system results in dynamic invariance of the activated response regulator against multiplicative intracellular noise. We further demonstrate that for environmental conditions, for which precision in chemosensing is crucial, the invariant response behavior results in highest chemotactic efficiency. Our results resolve several puzzling features of the chemotaxis pathway that are widely conserved across prokaryotes but so far could not be attributed any functional role.

  15. Measuring the effect of signal-to-noise ratio on listening fatigue

    NARCIS (Netherlands)

    Baselmans, R.; Van Schijndel, N.H.; Duisters, R.P.N.

    2010-01-01

    Nowadays, by using modern means of communication, people tend to talk longer with each other. This may result in listening fatigue, which hampers cognitive performance. Twenty-four subjects listened to English texts, and answered questions about the text, in two sessions. There was background babble

  16. Analysis of Signal-to-Noise Ratio of the Laser Doppler Velocimeter

    DEFF Research Database (Denmark)

    Lading, Lars

    1973-01-01

    The signal-to-shot-noise ratio of the photocurrent of a laser Doppler anemometer is calculated as a function of the parameters which describe the system. It is found that the S/N is generally a growing function of receiver area, that few large particles are better than many small ones, and that g...

  17. Metallicity in galactic clusters from high signal-to-noise spectroscopy

    International Nuclear Information System (INIS)

    Boesgaard, A.M.

    1989-01-01

    High-quality spectroscopic data on selected F dwarfs in six Galactic clusters are used to determine global (Fe/H) values for the clusters. For the two youngest clusters, Pleiades and Alpha Per, the (Fe/H) values are solar: 0.017 ± 0.055. The Hyades and Praesepe are slightly metal-enhanced at (Fe/H) = +0.125 ± 0.032, even though they are an order of magnitude older than the Pleiades. Coma and the UMa Group, at the age of the Hyades, are slightly metal-deficient with (Fe/H) = -0.082 ± 0.039. The lack of an age-metallicity relationship indicates that the enrichment and mixing in the Galactic disk have not been uniform on time scales less than a billion years. 39 references

  18. Signal to noise comparison of metabolic imaging methods on a clinical 3T MRI

    DEFF Research Database (Denmark)

    Müller, C. A.; Hansen, Rie Beck; Skinner, J. G.

    MRI with hyperpolarized tracers has enabled new diagnostic applications, e.g. metabolic imaging in cancer research. However, the acquisition of the transient, hyperpolarized signal with spatial and frequency resolution requires dedicated imaging methods. Here, we compare three promising candidate...... for 2D MR spectroscopic imaging (MRSI): (i) multi-echo balanced steady-state free precession (me-bSSFP), (ii) the echo planar spectroscopic imaging (EPSI) sequence and (iii) phase-encoded, pulse-acquisition chemical-shift imaging (CSI)

  19. Spectrophotometry of white dwarfs as observed at high signal-to-noise ratio. II

    International Nuclear Information System (INIS)

    Greenstein, J.L.; Liebert, J.W.

    1990-01-01

    CCD spectrophotometry is presented of 140 white dwarfs at high SNR and is analyzed in detail. Energy distributions at 14,000 A are given at bandpasses from 3571 to 8300 A, and equivalent widths of lines of H, He I, metals, and atomic and molecular carbon are given as functions of color for DB, DQ, DZ, and DA stars. New forbidden H I transitions at 6068 A and 6632 A are found in at least the two hottest DB stars, new metallic features are found in cooler DZ stars, and the presence of Ca I in vMa 2 is confirmed. The spectrum of the hot DQAB star G227-5 and the pressure-shifted carbon bands seen in 0038-226 are discussed in detail. Comparison of the optical energy distribution of the latter with published IR fluxes shows that the 1-2 micron region is strongly depressed, with extensive blanketing. Equivalent widths, central depths, and width parameters are presented for H-alpha in 73 DA stars in the sample, and their dependences on color are studied. 64 refs

  20. Peak effect in twinned superconductors

    International Nuclear Information System (INIS)

    Larkin, A.I.; Marchetti, M.C.; Vinokur, V.M.

    1995-01-01

    A sharp maximum in the critical current J_c as a function of temperature just below the melting point of the Abrikosov flux lattice has recently been observed in both low- and high-temperature superconductors. This peak effect is strongest in twinned crystals for fields aligned with the twin planes. We propose that this peak signals the breakdown of the collective pinning regime and the crossover to strong pinning of single vortices on the twin boundaries. This crossover is very sharp and can account for the steep drop of the differential resistivity observed in experiments.

  1. Hubbert's Peak -- A Physicist's View

    Science.gov (United States)

    McDonald, Richard

    2011-04-01

    Oil, as used in agriculture and transportation, is the lifeblood of modern society. It is finite in quantity and will someday be exhausted. In 1956, Hubbert proposed a theory of resource production and applied it successfully to predict peak U.S. oil production in 1970. Bartlett extended this work in publications and lectures on the finite nature of oil and its production peak and depletion. Both Hubbert and Bartlett place peak world oil production at a similar time, essentially now. Central to these analyses are estimates of the total ``oil in place'' obtained from engineering studies of oil reservoirs, as this quantity determines the area under Hubbert's Peak. Knowing the production history and the total oil in place allows us to make estimates of reserves, and therefore future oil availability. We will then examine reserves data for various countries, in particular OPEC countries, and see if these data tell us anything about the future availability of oil. Finally, we will comment on synthetic oil and the possibility of carbon-neutral synthetic oil for a sustainable future.
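
    Hubbert's model in its standard logistic form (textbook material rather than anything specific to this talk; the parameter values are invented) makes the role of total oil in place explicit: it is the area under the production-rate curve.

```python
import numpy as np

def hubbert_rate(t, q_total, k, t_peak):
    """Production rate dQ/dt for logistic cumulative production
    Q(t) = q_total / (1 + exp(-k*(t - t_peak)))."""
    u = np.exp(-k * (t - t_peak))
    return q_total * k * u / (1 + u)**2

t = np.arange(1900, 2100)
# illustrative parameters: 2000 Gb total, growth constant 0.06/yr, peak 2005
rate = hubbert_rate(t, q_total=2000.0, k=0.06, t_peak=2005.0)

print("peak year:", t[np.argmax(rate)])
print("area under curve (≈ q_total):", np.trapz(rate, t))
```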

  2. Human reliability

    International Nuclear Information System (INIS)

    Bubb, H.

    1992-01-01

    This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, representatives of research institutes, of technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomy dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.) [de

  3. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

    [Only fragments of this record survive.] The figure list mentions inverters connected in a chain and a typical graph showing frequency versus the square root of a (truncated) quantity. The text describes developing an experimental reliability-estimating methodology that could illuminate the lifetime reliability of advanced devices and circuits, giving an accurate estimate of device lifetime and thus a failure-in-time (FIT) rate for the device.

  4. Robust peak-shaving for a neighborhood with electric vehicles

    NARCIS (Netherlands)

    Gerards, Marco Egbertus Theodorus; Hurink, Johann L.

    2016-01-01

    Demand Side Management (DSM) is a popular approach for grid-aware peak-shaving. The most commonly used DSM methods either have no look ahead feature and risk deploying flexibility too early, or they plan ahead using predictions, which are in general not very reliable. To counter this, a DSM approach

  5. SPANISH PEAKS PRIMITIVE AREA, MONTANA.

    Science.gov (United States)

    Calkins, James A.; Pattee, Eldon C.

    1984-01-01

    A mineral survey of the Spanish Peaks Primitive Area, Montana, disclosed a small low-grade deposit of demonstrated chromite and asbestos resources. The chances for discovery of additional chrome resources are uncertain and the area has little promise for the occurrence of other mineral or energy resources. A reevaluation, sampling at depth, and testing for possible extensions of the Table Mountain asbestos and chromium deposit should be undertaken in the light of recent interpretations regarding its geologic setting.

  6. Neurofeedback training for peak performance

    OpenAIRE

    Marek Graczyk; Maria Pąchalska; Artur Ziółkowski; Grzegorz Mańko; Beata Łukaszewska; Kazimierz Kochanowicz; Andrzej Mirski; Iurii D. Kropotov

    2014-01-01

    [b]aim[/b]. One of the applications of the Neurofeedback methodology is peak performance in sport. The protocols of the neurofeedback are usually based on an assessment of the spectral parameters of spontaneous EEG in resting state conditions. The aim of the paper was to study whether the intensive neurofeedback training of a well-functioning Olympic athlete who has lost his performance confidence after injury in sport, could change the brain functioning reflected in changes in spontaneou...

  7. Evaluation of concurrent peak responses

    International Nuclear Information System (INIS)

    Wang, P.C.; Curreri, J.; Reich, M.

    1983-01-01

    This report deals with the problem of combining two or more concurrent responses which are induced by dynamic loads acting on nuclear power plant structures. Specifically, the acceptability of using the square root of the sum of the squares (SRSS) value of peak values as the combined response is investigated. Emphasis is placed on the establishment of a simplified criterion that is convenient and relatively easy to use by design engineers
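
    A minimal sketch of the SRSS combination examined in the report; the numbers are illustrative.

    ```python
    import math

    def srss(peaks):
        """Combine concurrent peak responses by the square root of the
        sum of the squares (SRSS) of the individual peak values."""
        return math.sqrt(sum(p * p for p in peaks))

    # Two concurrent responses with peaks 3.0 and 4.0 (arbitrary units)
    # combine to 5.0, below the absolute-sum bound of 7.0 that would
    # apply only if both peaks occurred at the same instant.
    print(srss([3.0, 4.0]))  # 5.0
    ```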

  8. Finding two-dimensional peaks

    International Nuclear Information System (INIS)

    Silagadze, Z.K.

    2007-01-01

    Two-dimensional generalization of the original peak finding algorithm suggested earlier is given. The ideology of the algorithm emerged from the well-known quantum mechanical tunneling property which enables small bodies to penetrate through narrow potential barriers. We merge this 'quantum' ideology with the philosophy of Particle Swarm Optimization to get the global optimization algorithm which can be called Quantum Swarm Optimization. The functionality of the newborn algorithm is tested on some benchmark optimization problems

  9. Evaluation of different time domain peak models using extreme learning machine-based peak detection for EEG signal.

    Science.gov (United States)

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan

    2016-01-01

    Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one that gives the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than the Acir and Liu models, which were in the range 37-52%, while showing no significant difference from the Dumpala model.
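
    As a concrete illustration of the kind of time-domain peak model the study compares, the sketch below extracts amplitude, width, and slope features for candidate EEG peaks; the feature definitions are generic stand-ins, not the exact Dumpala, Acir, Liu, or Dingle parameterizations.

    ```python
    import numpy as np

    def peak_features(sig, fs):
        """Return (amplitude, width, slope) per candidate peak: the type of
        signal-parameter set a time-domain peak model would feed to a
        classifier such as the ELM used in the study."""
        feats = []
        # candidate peaks: samples strictly larger than both neighbours
        idx = np.where((sig[1:-1] > sig[:-2]) & (sig[1:-1] > sig[2:]))[0] + 1
        for i in idx:
            l = i
            while l > 0 and sig[l - 1] < sig[l]:
                l -= 1                              # walk down to the left valley
            r = i
            while r < len(sig) - 1 and sig[r + 1] < sig[r]:
                r += 1                              # walk down to the right valley
            amp = sig[i] - max(sig[l], sig[r])      # height above the higher valley
            width = (r - l) / fs                    # valley-to-valley width, seconds
            slope = (sig[i] - sig[l]) * fs / max(i - l, 1)  # mean rising slope
            feats.append((amp, width, slope))
        return np.array(feats)
    ```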

  10. Drivers of peak sales for pharmaceutical brands

    NARCIS (Netherlands)

    Fischer, Marc; Leeflang, Peter S. H.; Verhoef, Peter C.

    2010-01-01

    Peak sales are an important metric in the pharmaceutical industry. Specifically, managers are focused on the height of peak sales and the time required to achieve peak sales. We analyze how order of entry and quality affect the level of peak sales and the time-to-peak-sales of pharmaceutical brands.

  11. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

    Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas

  12. Planetary Candidates Observed by Kepler. VIII. A Fully Automated Catalog with Measured Completeness and Reliability Based on Data Release 25

    DEFF Research Database (Denmark)

    Thompson, Susan E.; Coughlin, Jeffrey L.; Hoffman, Kelsey

    2018-01-01

    We present the Kepler Object of Interest (KOI) catalog of transiting exoplanets based on searching 4 yr of Kepler time series photometry (Data Release 25, Q1-Q17). The catalog contains 8054 KOIs, of which 4034 are planet candidates with periods between 0.25 and 632 days. Of these candidates, 219...... simulated data sets and measured how well it was able to separate TCEs caused by noise from those caused by low signal-to-noise transits. We discuss the Robovetter and the metrics it uses to sort TCEs. For orbital periods less than 100 days the Robovetter completeness (the fraction of simulated transits...... FGK-dwarf stars, the Robovetter is 76.7% complete and the catalog is 50.5% reliable. The KOI catalog, the transit fits, and all of the simulated data used to characterize this catalog are available at the NASA Exoplanet Archive....

  13. Spatial peak-load pricing

    International Nuclear Information System (INIS)

    Arellano, M. Soledad; Serra, Pablo

    2007-01-01

    This article extends the traditional electricity peak-load pricing model to include transmission costs. In the context of a two-node, two-technology electric power system, where suppliers face inelastic demand, we show that when the marginal plant is located at the energy-importing center, generators located away from that center should pay the marginal capacity transmission cost; otherwise, consumers should bear this cost through capacity payments. Since electric power transmission is a natural monopoly, marginal-cost pricing does not fully cover costs. We propose distributing the revenue deficit among users in proportion to the surplus they derive from the service priced at marginal cost. (Author)

  14. Economic effects of peak oil

    International Nuclear Information System (INIS)

    Lutz, Christian; Lehr, Ulrike; Wiebe, Kirsten S.

    2012-01-01

    Assuming that global oil production has peaked, this paper uses scenario analysis to show the economic effects of a possible supply shortage and corresponding rise in oil prices in the next decade on different sectors in Germany and other major economies such as the US, Japan, China, the OPEC or Russia. Due to the price-inelasticity of oil demand, the supply shortage leads to a sharp increase in oil prices in the second scenario, with effects on GDP comparable in magnitude to the global financial crisis of 2008/09. Oil exporting countries benefit from high oil prices, whereas oil importing countries are negatively affected. Generally, the effects in the third scenario are significantly smaller than in the second, showing that energy efficiency measures and the switch to renewable energy sources decrease the countries' dependence on oil imports and hence reduce their vulnerability to oil price shocks on the world market. - Highlights: ► National and sectoral economic effects of peak oil until 2020 are modelled. ► The price elasticity of oil demand is low resulting in high price fluctuations. ► Oil shortage strongly affects transport and indirectly all other sectors. ► Global macroeconomic effects are comparable to the 2008/2009 crisis. ► Country effects depend on oil imports and productivity, and economic structures.

  15. Free-space optical communications with peak and average constraints: High SNR capacity approximation

    KAUST Repository

    Chaaban, Anas; Morvan, Jean-Marie; Alouini, Mohamed-Slim

    2015-01-01

    Numerical evaluation shows that this capacity lower bound is nearly tight at high signal-to-noise ratio (SNR), while it is shown analytically that the gap to capacity upper bounds is a small constant at high SNR. In particular, the gap to the high

  16. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability, covering the definition of and need for reliability, the system life cycle, and failure rates: reliability characteristics, chance failures, failure rates that change over time, failure modes, and replacement. It also treats reliability in engineering design, reliability testing under failure-rate assumptions, plotting of reliability data, prediction of system reliability, system maintenance, failure and failure relay, and the analysis of system safety.

  17. 110 C thermoluminescence glow peak of quartz – A brief review

    Indian Academy of Sciences (India)

    ...sensitization property. Various aspects of the peak, like its nature, the defect centres involved, ... sensitivity, reliability, versatility and compatibility with the detection system ...

  18. Establishment of peak bone mass.

    Science.gov (United States)

    Mora, Stefano; Gilsanz, Vicente

    2003-03-01

    Among the main areas of progress in osteoporosis research during the last decade or so are the general recognition that this condition, which is the cause of so much pain in the elderly population, has its antecedents in childhood and the identification of the structural basis accounting for much of the differences in bone strength among humans. Nevertheless, current understanding of the bone mineral accrual process is far from complete. The search for genes that regulate bone mass acquisition is ongoing, and current results are not sufficient to identify subjects at risk. However, there is solid evidence that BMD measurements can be helpful for the selection of subjects that presumably would benefit from preventive interventions. The questions regarding the type of preventive interventions, their magnitude, and duration remain unanswered. Carefully designed controlled trials are needed. Nevertheless, previous experience indicates that weight-bearing activity and possibly calcium supplements are beneficial if they are begun during childhood and preferably before the onset of puberty. Modification of unhealthy lifestyles and increases in exercise or calcium intake are logical interventions that should be implemented to improve bone mass gains in all children and adolescents who are at risk of failing to achieve an optimal peak bone mass.

  19. Neurofeedback training for peak performance

    Directory of Open Access Journals (Sweden)

    Marek Graczyk

    2014-11-01

    Full Text Available [b]aim[/b]. One of the applications of the Neurofeedback methodology is peak performance in sport. The protocols of the neurofeedback are usually based on an assessment of the spectral parameters of spontaneous EEG in resting state conditions. The aim of the paper was to study whether the intensive neurofeedback training of a well-functioning Olympic athlete who has lost his performance confidence after injury in sport, could change the brain functioning reflected in changes in spontaneous EEG and event related potentials (ERPs. [b]case study[/b]. The case is presented of an Olympic athlete who has lost his performance confidence after injury in sport. He wanted to resume his activities by means of neurofeedback training. His QEEG/ERP parameters were assessed before and after 4 intensive sessions of neurotherapy. Dramatic and statistically significant changes that could not be explained by error measurement were observed in the patient. [b]conclusion[/b]. Neurofeedback training in the subject under study increased the amplitude of the monitoring component of ERPs generated in the anterior cingulate cortex, accompanied by an increase in beta activity over the medial prefrontal cortex. Taking these changes together, it can be concluded that even a few sessions of neurofeedback in a high performance brain can significantly activate the prefrontal cortical areas associated with increasing confidence in sport performance.

  20. Reactor power peaking information display

    International Nuclear Information System (INIS)

    Book, T.L.; Kochendarfer, R.A.

    1986-01-01

    This patent describes a system for monitoring operating conditions within a nuclear reactor. The system consists of a method for measuring the operating parameters within the nuclear reactor, including the positions of the axial power shaping rods and the regulating control rods; a method for determining from the operating parameters the operating limits before a power peaking condition exists within the nuclear reactor; and a method for displaying the operating limits, namely a visual display permitting continuous monitoring of the operating conditions within the nuclear reactor as a graph of shaping rod position versus regulating rod position, divided into a permissible area and a restricted area. The permissible area is further divided to mark a recommended operating area for steady-state operation, and a cursor located on the graph indicates the present operating condition of the nuclear reactor, allowing an operator to see the need for corrective action when the cursor moves out of the recommended operating area and to take corrective transient action within the permissible area

  1. Neurofeedback training for peak performance.

    Science.gov (United States)

    Graczyk, Marek; Pąchalska, Maria; Ziółkowski, Artur; Mańko, Grzegorz; Łukaszewska, Beata; Kochanowicz, Kazimierz; Mirski, Andrzej; Kropotov, Iurii D

    2014-01-01

    One of the applications of the Neurofeedback methodology is peak performance in sport. The protocols of the neurofeedback are usually based on an assessment of the spectral parameters of spontaneous EEG in resting state conditions. The aim of the paper was to study whether the intensive neurofeedback training of a well-functioning Olympic athlete who has lost his performance confidence after injury in sport, could change the brain functioning reflected in changes in spontaneous EEG and event related potentials (ERPs). The case is presented of an Olympic athlete who has lost his performance confidence after injury in sport. He wanted to resume his activities by means of neurofeedback training. His QEEG/ERP parameters were assessed before and after 4 intensive sessions of neurotherapy. Dramatic and statistically significant changes that could not be explained by error measurement were observed in the patient. Neurofeedback training in the subject under study increased the amplitude of the monitoring component of ERPs generated in the anterior cingulate cortex, accompanied by an increase in beta activity over the medial prefrontal cortex. Taking these changes together, it can be concluded that even a few sessions of neurofeedback in a high performance brain can significantly activate the prefrontal cortical areas associated with increasing confidence in sport performance.

  2. Light, Alpha, and Fe-peak Element Abundances in the Galactic Bulge

    Science.gov (United States)

    Johnson, Christian I.; Rich, R. Michael; Kobayashi, Chiaki; Kunder, Andrea; Koch, Andreas

    2014-10-01

    We present radial velocities and chemical abundances of O, Na, Mg, Al, Si, Ca, Cr, Fe, Co, Ni, and Cu for a sample of 156 red giant branch stars in two Galactic bulge fields centered near (l, b) = (+5.25,-3.02) and (0,-12). The (+5.25,-3.02) field also includes observations of the bulge globular cluster NGC 6553. The results are based on high-resolution (R ~ 20,000), high signal-to-noise ratio (S/N >~ 70) FLAMES-GIRAFFE spectra obtained through the European Southern Observatory archive. However, we only selected a subset of the original observations that included spectra with both high S/N and that did not show strong TiO absorption bands. This work extends previous analyses of this data set beyond Fe and the α-elements Mg, Si, Ca, and Ti. While we find reasonable agreement with past work, the data presented here indicate that the bulge may exhibit a different chemical composition than the local thick disk, especially at [Fe/H] >~ -0.5. In particular, the bulge [α/Fe] ratios may remain enhanced to a slightly higher [Fe/H] than the thick disk, and the Fe-peak elements Co, Ni, and Cu appear enhanced compared to the disk. There is also some evidence that the [Na/Fe] (but not [Al/Fe]) trends between the bulge and local disk may be different at low and high metallicity. We also find that the velocity dispersion decreases as a function of increasing [Fe/H] for both fields, and do not detect any significant cold, high-velocity populations. A comparison with chemical enrichment models indicates that a significant fraction of hypernovae may be required to explain the bulge abundance trends, and that initial mass functions that are steep, top-heavy (and do not include strong outflow), or truncated to avoid including contributions from stars >40 M ⊙ are ruled out, in particular because of disagreement with the Fe-peak abundance data. For most elements, the NGC 6553 stars exhibit abundance trends nearly identical to comparable metallicity bulge field stars. However, the

  3. Light, alpha, and Fe-peak element abundances in the galactic bulge

    International Nuclear Information System (INIS)

    Johnson, Christian I.; Rich, R. Michael; Kobayashi, Chiaki; Kunder, Andrea; Koch, Andreas

    2014-01-01

    We present radial velocities and chemical abundances of O, Na, Mg, Al, Si, Ca, Cr, Fe, Co, Ni, and Cu for a sample of 156 red giant branch stars in two Galactic bulge fields centered near (l, b) = (+5.25,–3.02) and (0,–12). The (+5.25,–3.02) field also includes observations of the bulge globular cluster NGC 6553. The results are based on high-resolution (R ∼ 20,000), high signal-to-noise ratio (S/N ≳ 70) FLAMES-GIRAFFE spectra obtained through the European Southern Observatory archive. However, we only selected a subset of the original observations that included spectra with both high S/N and that did not show strong TiO absorption bands. This work extends previous analyses of this data set beyond Fe and the α-elements Mg, Si, Ca, and Ti. While we find reasonable agreement with past work, the data presented here indicate that the bulge may exhibit a different chemical composition than the local thick disk, especially at [Fe/H] ≳ –0.5. In particular, the bulge [α/Fe] ratios may remain enhanced to a slightly higher [Fe/H] than the thick disk, and the Fe-peak elements Co, Ni, and Cu appear enhanced compared to the disk. There is also some evidence that the [Na/Fe] (but not [Al/Fe]) trends between the bulge and local disk may be different at low and high metallicity. We also find that the velocity dispersion decreases as a function of increasing [Fe/H] for both fields, and do not detect any significant cold, high-velocity populations. A comparison with chemical enrichment models indicates that a significant fraction of hypernovae may be required to explain the bulge abundance trends, and that initial mass functions that are steep, top-heavy (and do not include strong outflow), or truncated to avoid including contributions from stars >40 M ☉ are ruled out, in particular because of disagreement with the Fe-peak abundance data. For most elements, the NGC 6553 stars exhibit abundance trends nearly identical to comparable metallicity bulge field stars

  4. Light, alpha, and Fe-peak element abundances in the galactic bulge

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Christian I. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, MS-15, Cambridge, MA 02138 (United States); Rich, R. Michael [Department of Physics and Astronomy, UCLA, 430 Portola Plaza, Box 951547, Los Angeles, CA 90095-1547 (United States); Kobayashi, Chiaki [Centre for Astrophysics Research, University of Hertfordshire, Hatfield AL10 9AB (United Kingdom); Kunder, Andrea [Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, D-14482, Potsdam (Germany); Koch, Andreas, E-mail: cjohnson@cfa.harvard.edu, E-mail: rmr@astro.ucla.edu, E-mail: c.kobayashi@herts.ac.uk, E-mail: akunder@aip.de, E-mail: akoch@lsw.uni-heidelberg.de [Zentrum für Astronomie der Universität Heidelberg, Landessternwarte, Königstuhl 12, Heidelberg (Germany)

    2014-10-01

    We present radial velocities and chemical abundances of O, Na, Mg, Al, Si, Ca, Cr, Fe, Co, Ni, and Cu for a sample of 156 red giant branch stars in two Galactic bulge fields centered near (l, b) = (+5.25,–3.02) and (0,–12). The (+5.25,–3.02) field also includes observations of the bulge globular cluster NGC 6553. The results are based on high-resolution (R ∼ 20,000), high signal-to-noise ratio (S/N ≳ 70) FLAMES-GIRAFFE spectra obtained through the European Southern Observatory archive. However, we only selected a subset of the original observations that included spectra with both high S/N and that did not show strong TiO absorption bands. This work extends previous analyses of this data set beyond Fe and the α-elements Mg, Si, Ca, and Ti. While we find reasonable agreement with past work, the data presented here indicate that the bulge may exhibit a different chemical composition than the local thick disk, especially at [Fe/H] ≳ –0.5. In particular, the bulge [α/Fe] ratios may remain enhanced to a slightly higher [Fe/H] than the thick disk, and the Fe-peak elements Co, Ni, and Cu appear enhanced compared to the disk. There is also some evidence that the [Na/Fe] (but not [Al/Fe]) trends between the bulge and local disk may be different at low and high metallicity. We also find that the velocity dispersion decreases as a function of increasing [Fe/H] for both fields, and do not detect any significant cold, high-velocity populations. A comparison with chemical enrichment models indicates that a significant fraction of hypernovae may be required to explain the bulge abundance trends, and that initial mass functions that are steep, top-heavy (and do not include strong outflow), or truncated to avoid including contributions from stars >40 M {sub ☉} are ruled out, in particular because of disagreement with the Fe-peak abundance data. For most elements, the NGC 6553 stars exhibit abundance trends nearly identical to comparable metallicity bulge field

  5. flowPeaks: a fast unsupervised clustering for flow cytometry data via K-means and density peak finding.

    Science.gov (United States)

    Ge, Yongchao; Sealfon, Stuart C

    2012-08-01

    For flow cytometry data, there are two common approaches to the unsupervised clustering problem: one is based on the finite mixture model and the other on spatial exploration of the histograms. The former is computationally slow and has difficulty identifying clusters of irregular shapes. The latter approach cannot be applied directly to high-dimensional data as the computational time and memory become unmanageable and the estimated histogram is unreliable. An algorithm without these two problems would be very useful. In this article, we combine ideas from the finite mixture model and histogram spatial exploration. This new algorithm, which we call flowPeaks, can be applied directly to high-dimensional data and identify irregular shape clusters. The algorithm first uses the K-means algorithm with a large K to partition the cell population into many small clusters. These partitioned data allow the generation of a smoothed density function using the finite mixture model. All local peaks are exhaustively searched by exploring the density function and the cells are clustered by the associated local peak. The algorithm flowPeaks is automatic, fast, reliable, and robust to cluster shape and outliers. This algorithm has been applied to flow cytometry data and it has been compared with state-of-the-art algorithms, including Misty Mountain, FLOCK, flowMeans, flowMerge and FLAME. The R package flowPeaks is available at https://github.com/yongchao/flowPeaks. yongchao.ge@mssm.edu Supplementary data are available at Bioinformatics online.
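
    A simplified sketch of the two-stage idea described above (it is not the R package itself): over-partition with K-means, then merge partitions whose centers belong to the same local density peak. The density estimate, linking rule, and radius below are illustrative simplifications.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.stats import gaussian_kde

    def flowpeaks_like(X, k=50, link_radius=None, seed=0):
        """Two-stage clustering in the spirit of flowPeaks: (1) over-partition
        with K-means; (2) link each center to its nearest higher-density
        center within `link_radius`, so chains of links share one peak."""
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        c = km.cluster_centers_
        dens = gaussian_kde(X.T)(c.T)             # smoothed density at each center
        d = np.linalg.norm(c[:, None] - c[None], axis=-1)
        if link_radius is None:
            link_radius = np.median(d)
        parent = np.arange(k)
        for i in np.argsort(dens):                # process low-density centers first
            higher = np.where(dens > dens[i])[0]
            near = higher[d[i, higher] < link_radius]
            if near.size:
                parent[i] = near[np.argmin(d[i, near])]

        def root(i):                              # follow links up to the peak
            while parent[i] != i:
                i = parent[i]
            return i

        peak = np.array([root(i) for i in range(k)])
        return peak[km.labels_]                   # merged cluster label per cell

    # Usage: X is an (n_cells, n_markers) array; labels = flowpeaks_like(X)
    ```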

  6. Simultaneous collection method of on-peak window image and off-peak window image in Tl-201 imaging

    International Nuclear Information System (INIS)

    Murakami, Tomonori; Noguchi, Yasushi; Kojima, Akihiro; Takagi, Akihiro; Matsumoto, Masanori

    2007-01-01

    Tl-201 imaging detects the photopeak (71 keV, in the on-peak window) of the characteristic X-rays of Hg-201 formed by Tl-201 decay. The peak is derived from 4 rays of different energy and emission intensity and does not follow a Gaussian distribution. In the present study, the authors devised the method named in the title to attain more effective single-session imaging, examined its accuracy and reliability with phantoms, and applied it clinically to Tl-201 scintigraphy in a patient. The authors applied the triple energy window method for data acquisition: energy windows were set on the Hg-201 X-ray photopeak as lower (3%, L), main (72 keV, M) and upper (14%, U) windows on a gamma camera with two detectors (Toshiba E.CAM/ICON). The L, M and U images obtained simultaneously were then combined into images corresponding to on-peak (L+M, mock on-peak) and off-peak (M+U) window settings for evaluation. A line-source phantom (a swab containing Tl-201) and a multi-defect phantom (an acrylic plate containing Tl-201 solution) were imaged in water. A female patient with thyroid cancer underwent preoperative scintigraphy under the defined conditions. The mock on-peak and off-peak images were found to be equivalent to the true (ordinary, clinical) on-peak and off-peak ones, and the present method was judged usable for evaluating the usefulness of off-peak window data. (R.T.)
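
    The construction itself is a pixel-wise sum of the simultaneously acquired window images; a minimal sketch follows, with synthetic Poisson counts standing in for real acquisitions.

    ```python
    import numpy as np

    # Simultaneously acquired window images (counts): L = lower (3%),
    # M = main (72 keV), U = upper (14%); random data stands in for real images.
    rng = np.random.default_rng(0)
    L, M, U = (rng.poisson(lam, size=(64, 64)) for lam in (5.0, 50.0, 8.0))

    mock_on_peak = L + M   # equivalent of the ordinary on-peak window image
    off_peak = M + U       # off-peak window image for comparison
    ```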

  7. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  8. Hippocampal MRI volumetry at 3 Tesla: reliability and practical guidance.

    Science.gov (United States)

    Jeukens, Cécile R L P N; Vlooswijk, Mariëlle C G; Majoie, H J Marian; de Krom, Marc C T F M; Aldenkamp, Albert P; Hofman, Paul A M; Jansen, Jacobus F A; Backes, Walter H

    2009-09-01

    ...was similar to the literature values obtained at 1.5 Tesla; hippocampal border definition is argued to be easier and more reliable because of the improved signal-to-noise characteristics.

  9. The Performance of the Robo-AO Laser Guide Star Adaptive Optics System at the Kitt Peak 2.1 m Telescope

    Science.gov (United States)

    Jensen-Clem, Rebecca; Duev, Dmitry A.; Riddle, Reed; Salama, Maïssa; Baranec, Christoph; Law, Nicholas M.; Kulkarni, S. R.; Ramprakash, A. N.

    2018-01-01

    Robo-AO is an autonomous laser guide star adaptive optics (AO) system recently commissioned at the Kitt Peak 2.1 m telescope. With the ability to observe every clear night, Robo-AO at the 2.1 m telescope is the first dedicated AO observatory. This paper presents the imaging performance of the AO system in its first 18 months of operations. For a median seeing value of 1.″44, the average Strehl ratio is 4% in the i′ band. After post processing, the contrast ratio under sub-arcsecond seeing for a 2 ≤ i′ ≤ 16 primary star is five and seven magnitudes at radial offsets of 0.″5 and 1.″0, respectively. The data processing and archiving pipelines run automatically at the end of each night. The first stage of the processing pipeline shifts and adds the rapid frame rate data using techniques optimized for different signal-to-noise ratios. The second “high-contrast” stage of the pipeline is eponymously well suited to finding faint stellar companions. Currently, a range of scientific programs, including the synthetic tracking of near-Earth asteroids, the binarity of stars in young clusters, and weather on solar system planets are being undertaken with Robo-AO.

  10. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    DEFF Research Database (Denmark)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna

    2013-01-01

    machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods

  11. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering: quality and reliability; reliability data; the importance of reliability engineering; reliability measures; the Poisson process, including goodness-of-fit tests and the Poisson arrival model; reliability estimation, for example under the exponential distribution; reliability of systems; availability; preventive maintenance, such as replacement policies, minimal repair policies, shock models, spares, group maintenance and periodic inspection; analysis of common cause failures; and models of repair effects.
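
    For one of the topics listed, reliability estimation under the exponential distribution, the standard computation is short enough to sketch; the failure times below are made-up example data.

    ```python
    import numpy as np

    def exponential_reliability(failure_times, t):
        """For a complete exponential sample, the MLE of the failure rate is
        the number of failures divided by the total time on test; the
        reliability is then R(t) = exp(-lambda * t)."""
        lam = len(failure_times) / np.sum(failure_times)
        return lam, np.exp(-lam * t)

    lam, r = exponential_reliability([120.0, 340.0, 95.0, 410.0], t=100.0)
    print(f"failure rate {lam:.4f} per hour, R(100 h) = {r:.3f}")
    ```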

  12. Molecular network topology and reliability for multipurpose diagnosis

    Directory of Open Access Journals (Sweden)

    Jalil MA

    2011-10-01

    Full Text Available MA Jalil1, N Moongfangklang2,3, K Innate4, S Mitatha3, J Ali5, PP Yupapin4 1Ibnu Sina Institute of Fundamental Science Studies, Nanotechnology Research Alliance, University of Technology Malaysia, Johor Bahru, Malaysia; 2School of Information and Communication Technology, Phayao University, Phayao, Thailand; 3Hybrid Computing Research Laboratory, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand; 4Nanoscale Science and Engineering Research Alliance, Advanced Research Center for Photonics, Faculty of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand; 5Institute of Advanced Photonics Science, Nanotechnology Research Alliance, University of Technology Malaysia, Johor Bahru, Malaysia. Abstract: This investigation proposes the use of molecular network topology for drug delivery and diagnosis network design. Three modules of molecular network topologies, such as bus, star, and ring networks, are designed and manipulated based on a micro- and nanoring resonator system. The transportation of the trapping molecules by light in the network is described and the theoretical background is reviewed. The quality of the network is analyzed and calculated in terms of signal transmission (i.e., signal-to-noise ratio and crosstalk effects). Results obtained show that a bus network has advantages over star and ring networks, where the use of mesh networks is possible. In application, a thin film network can be fabricated in the form of a waveguide and embedded in artificial bone, which can be connected to the required drug targets. The particular drug/nutrient can be transported to the required targets via the particular network used.Keywords: molecular network, network reliability, network topology, drug network, multi-access network

  13. Passive radio frequency peak power multiplier

    Science.gov (United States)

    Farkas, Zoltan D.; Wilson, Perry B.

    1977-01-01

    Peak power multiplication of a radio frequency source is achieved by simultaneously charging two high-Q resonant microwave cavities: the source output is applied through a directional coupler to the cavities, and the phase of the source power at the coupler is then reversed, permitting the power in the cavities to discharge simultaneously through the coupler to the load, in combination with power from the source, so that the peak power applied to the load is a multiple of the source peak power.

  14. Practical load management - Peak shaving using photovoltaics

    International Nuclear Information System (INIS)

    Berger, W.

    2009-01-01

    This article takes a look at how photovoltaic (PV) power generation can be used in a practical way to meet peak demands for electricity. Advice is provided on how photovoltaics can provide peak load 'shaving' through the correlation between its production and the peak loads encountered during the day. The situation regarding feed-in tariffs in Italy is discussed, as are further examples of installations in Germany and Austria. Further, an initiative of the American utility Southern California Edison is discussed, which foresees the installation of large PV plants on the roofs of commercial premises to provide local generation of peak energy and thus relieve demands on the power transmission network.

  15. The geomorphic structure of the runoff peak

    Directory of Open Access Journals (Sweden)

    R. Rigon

    2011-06-01

    Full Text Available This paper develops a theoretical framework to investigate the core dependence of peak flows on the geomorphic properties of river basins. Based on the theory of transport by travel times, and simple hydrodynamic characterization of floods, this new framework invokes the linearity and invariance of the hydrologic response to provide analytical and semi-analytical expressions for peak flow, time to peak, and area contributing to the peak runoff. These results are obtained for the case of a constant-intensity hyetograph using the Intensity-Duration-Frequency (IDF) curves to estimate extreme flow values as a function of the rainfall return period. Results show that, with constant-intensity hyetographs, the time-to-peak is greater than the rainfall duration and usually shorter than the basin concentration time. Moreover, the critical storm duration is shown to be independent of the rainfall return period, as is the area contributing to the flow peak. The same results are found when the effects of hydrodynamic dispersion are accounted for. Further, it is shown that, when the effects of hydrodynamic dispersion are negligible, the basin area contributing to the peak discharge does not depend on the channel velocity, but is a geomorphic property of the basin. As an example, this framework is applied to three watersheds. In particular, the runoff peak, the critical rainfall durations and the time to peak are calculated for all links within a network to assess how they increase with basin area.

  16. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    Science.gov (United States)

    Ou, Linjun; Cao, Jian

    2014-09-01

    In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative method requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving-average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized by using multiple iterations of the moving average of the signal curves and square-wave curves to determine the optimal value of the normalized peak identification parameters, combined with absolute peak retention times and a peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width, or peak shape changes. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
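
    A sketch of the approach as described, iterated moving-average smoothing followed by normalized thresholding; the iteration count, window length, and threshold are illustrative values, and the paper's square-wave comparison and retention-time windows are omitted for brevity.

    ```python
    import numpy as np

    def moving_average(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")

    def find_peaks_normalized(signal, iters=3, w=5, threshold=0.05):
        """Iterate a moving average to suppress noise, normalize the smoothed
        curve to [0, 1], and keep local maxima above a normalized threshold."""
        s = signal.astype(float)
        for _ in range(iters):
            s = moving_average(s, w)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalized analysis
        is_max = (s[1:-1] > s[:-2]) & (s[1:-1] >= s[2:]) & (s[1:-1] > threshold)
        return np.where(is_max)[0] + 1                    # indices of retained peaks
    ```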

  17. In vivo dissolution measurement with indium-111 summation peak ratios

    International Nuclear Information System (INIS)

    Jay, M.; Woodward, M.A.; Brouwer, K.R.

    1985-01-01

    Dissolution of [ 111 In]labeled tablets was measured in vivo in a totally noninvasive manner by using a modification of the perturbed angular correlation technique known as the summation peak ratio method. This method, which requires the incorporation of only 10-12 microCi into the dosage form, provided reliable dissolution data after oral administration of [ 111 In]lactose tablets. These results were supported by in vitro experiments which demonstrated that the dissolution rate as measured by the summation peak ratio method was in close agreement with the dissolution rate of salicylic acid in a [ 111 In]salicylic acid tablet. The method has the advantages of using only one detector, thereby avoiding the need for complex coincidence counting systems, requiring less radioactivity, and being potentially applicable to a gamma camera imaging system

  18. Power peak in vicinity of WWER-440 control rod

    International Nuclear Information System (INIS)

    Mikus, J.

    2003-01-01

    The measurements of the axial power (fission density) distribution were performed by gamma activity determination (the gamma scanning method) of the irradiated fuel pins, detecting gamma quanta in the La peak area (1596.5 keV) from selected 20 mm long segments over the axial coordinate range 50-950 mm in 10 mm steps, using a rectangular collimator (dimensions 20x10 mm). Two NaI(Tl) scintillation crystals (one as a monitor) with a diameter of 40 mm were used, each of them in Pb shielding (thickness 150 mm). The obtained results enlarge the available 'power peaking database' and enable validation of the codes also in the important case of zero boron concentration, which corresponds to the end of the WWER-440 fuel cycle. This validation can improve the reliability of the calculated power distributions in WWER-440 cores, re-loading schemes, etc.

  19. Employer Attitudes towards Peak Hour Avoidance

    NARCIS (Netherlands)

    Vonk Noordegraaf, D.M.; Annema, J.A.

    2012-01-01

    Peak Hour Avoidance is a relatively new Dutch mobility management measure. To reduce congestion frequent car drivers are given a financial reward for reducing the proportion of trips that they make during peak hours on a specific motorway section. Although previous studies show that employers are

  20. Employer attitudes towards peak hour avoidance

    NARCIS (Netherlands)

    Noordegraaf, D.M.V.; Annema, J.A.

    2012-01-01

    Peak Hour Avoidance is a relatively new Dutch mobility management measure. To reduce congestion frequent car drivers are given a financial reward for reducing the proportion of trips that they make during peak hours on a specific motorway section. Although previous studies show that employers are

  1. Peak load pricing lowers generation costs

    International Nuclear Information System (INIS)

    Lande, R.H.

    1980-01-01

    Before a utility implements peak load pricing for different classes of consumers, the costs and the benefits should be compared. The methodology described enables a utility to determine whether peak load pricing should be introduced for specific users. Cost-benefit analyses for domestic consumers and commercial/industrial consumers, showing break-even points are presented. (author)

  2. Peak Shaving Considering Streamflow Uncertainties | Iwuagwu ...

    African Journals Online (AJOL)

    The main thrust of this paper is peak shaving with a stochastic hydro model. In peak shaving, the amount of hydro energy scheduled may be a minimum, but it serves to replace less efficient thermal units. The sample system is the Kainji hydro plant and the thermal units of the National Electric Power Authority. The random ...

  3. Systematic evaluation of commercially available ultra-high performance liquid chromatography columns for drug metabolite profiling: optimization of chromatographic peak capacity.

    Science.gov (United States)

    Dubbelman, Anne-Charlotte; Cuyckens, Filip; Dillen, Lieve; Gross, Gerhard; Hankemeier, Thomas; Vreeken, Rob J

    2014-12-29

    The present study investigated the practical use of modern ultra-high performance liquid chromatography (UHPLC) separation techniques for drug metabolite profiling, aiming to develop a widely applicable, high-throughput, easy-to-use chromatographic method, with a high chromatographic resolution to accommodate simultaneous qualitative and quantitative analysis of small-molecule drugs and metabolites in biological matrices. To this end, first the UHPLC system volume and variance were evaluated. Then, a mixture of 17 drugs and various metabolites (molecular mass of 151-749Da, logP of -1.04 to 6.7), was injected on six sub-2μm particle columns. Five newest generation core shell technology columns were compared and tested against one column packed with porous particles. Two aqueous (pH 2.7 and 6.8) and two organic mobile phases were evaluated, first with the same flow and temperature and subsequently at each column's individual limit of temperature and pressure. The results demonstrated that pre-column dead volume had negligible influence on the peak capacity and shape. In contrast, a decrease in post-column volume of 57% resulted in a substantial (47%) increase in median peak capacity and significantly improved peak shape. When the various combinations of stationary and mobile phases were used at the same flow rate (0.5mL/min) and temperature (45°C), limited differences were observed between the median peak capacities, with a maximum of 26%. At higher flow though (up to 0.9mL/min), a maximum difference of almost 40% in median peak capacity was found between columns. The finally selected combination of solid-core particle column and mobile phase composition was chosen for its selectivity, peak capacity, wide applicability and peak shape. The developed method was applied to rat hepatocyte samples incubated with the drug buspirone and demonstrated to provide a similar chromatographic resolution, but a 6 times higher signal-to-noise ratio than a more traditional UHPLC

  4. The peak in neutron powder diffraction

    International Nuclear Information System (INIS)

    Laar, B. van; Yelon, W.B.

    1984-01-01

    For the application of Rietveld profile analysis to neutron powder diffraction data a precise knowledge of the peak profile, in both shape and position, is required. The method now in use employs a Gaussian shaped profile with a semi-empirical asymmetry correction for low-angle peaks. The integrated intensity is taken to be proportional to the classical Lorentz factor calculated for the X-ray case. In this paper an exact expression is given for the peak profile based upon the geometrical dimensions of the diffractometer. It is shown that the asymmetry of observed peaks is well reproduced by this expression. The angular displacement of the experimental profile with respect to the nominal Bragg angle value is larger than expected. Values for the correction to the classical Lorentz factor for the integrated intensity are given. The exact peak profile expression has been incorporated into a Rietveld profile analysis refinement program. (Auth.)

  5. Demand Side Management: An approach to peak load smoothing

    Science.gov (United States)

    Gupta, Prachi

    A preliminary national-level analysis was conducted to determine whether Demand Side Management (DSM) programs introduced by electric utilities since 1992 have made any progress towards their stated goal of reducing peak load demand. Estimates implied that DSM has a very small effect on peak load reduction and there is substantial regional and end-user variability. A limited scholarly literature on DSM also provides evidence in support of a positive effect of demand response programs. Yet, none of these studies examine the question of how DSM affects peak load at the micro-level by influencing end-users' response to prices. After nearly three decades of experience with DSM, controversy remains over how effective these programs have been. This dissertation considers regional analyses that explore both demand-side solutions and supply-side interventions. On the demand side, models are estimated to provide in-depth evidence of end-user consumption patterns for each North American Electric Reliability Corporation (NERC) region, helping to identify sectors in regions that have made a substantial contribution to peak load reduction. The empirical evidence supports the initial hypothesis that there is substantial regional and end-user variability of reductions in peak demand. These results are quite robust in rapidly-urbanizing regions, where air conditioning and lighting load is substantially higher, and regions where the summer peak is more pronounced than the winter peak. It is also evident from the regional experiences that active government involvement, as shaped by state regulations in the last few years, has been successful in promoting DSM programs, and perhaps for the same reason we witness an uptick in peak load reductions in the years 2008 and 2009. On the supply side, we estimate the effectiveness of DSM programs by analyzing the growth of capacity margin with the introduction of DSM programs. The results indicate that DSM has been successful in offsetting the

  6. Peak tree: a new tool for multiscale hierarchical representation and peak detection of mass spectrometry data.

    Science.gov (United States)

    Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo

    2011-01-01

    Peak detection is one of the most important steps in mass spectrometry (MS) analysis. However, the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce the peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment on a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived and loopily refined from the density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiments show that our peak detection method can better resist spectrum variations and provide higher sensitivity and lower false detection rates than conventional methods. The benefits from our peak-tree-based system for MS disease analysis are also demonstrated on real SELDI data.
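
    A sketch of how such a multiscale peak tree can be represented: peaks detected at a ladder of smoothing scales, each linked to the nearest surviving peak at the next coarser scale. The scale ladder and nearest-peak parent rule are simplifying assumptions, not the paper's exact construction.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import argrelmax

    def peak_tree(spectrum, scales=(1, 2, 4, 8, 16)):
        """Nodes are peak judgments at a range of smoothing scales; each
        node's parent is the nearest peak surviving at the next coarser
        scale. Any cut through the tree is a candidate peak-detection
        result, which is what makes the representation revisable."""
        levels = []
        for s in scales:
            sm = gaussian_filter1d(spectrum.astype(float), s)
            levels.append(argrelmax(sm)[0])       # peak positions at this scale
        tree = {}                                 # (scale_idx, pos) -> parent or None
        for li, peaks in enumerate(levels):
            coarser = levels[li + 1] if li + 1 < len(levels) else None
            for p in peaks:
                if coarser is not None and coarser.size:
                    q = coarser[np.argmin(np.abs(coarser - p))]
                    tree[(li, int(p))] = (li + 1, int(q))
                else:
                    tree[(li, int(p))] = None     # root: coarsest surviving peak
        return tree
    ```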

  7. AMSAA Reliability Growth Guide

    National Research Council Canada - National Science Library

    Broemm, William

    2000-01-01

    ... has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.

  8. Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset

    Science.gov (United States)

    Eyre, T.; Van der Baan, M.

    2016-12-01

    Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained due to the acquisition geometry. Likewise non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide an insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data is usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms including those with high tensile components. The ability of each geometry to constrain the true source mechanisms is therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both inversions, both P- and S-wave amplitudes recorded on three component sensors need to be included to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to a greater sampling of the focal sphere, but in reality tend to record signals with a low signal to noise ratio. Borehole arrays can produce acceptable results, however the reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well. Source locations and
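
    The non-double-couple components discussed here are conventionally quantified by decomposing the moment tensor into isotropic, double-couple (DC), and CLVD parts; a standard version of that decomposition is sketched below (generic practice, not the authors' inversion code).

    ```python
    import numpy as np

    def decompose_moment_tensor(M):
        """Split a symmetric moment tensor into isotropic and deviatoric
        parts, then split the deviatoric part into DC and CLVD fractions
        via the epsilon ratio of the deviatoric eigenvalues."""
        iso = np.trace(M) / 3.0
        dev = M - iso * np.eye(3)
        ev = np.sort(np.linalg.eigvalsh(dev))      # m1 <= m2 <= m3, sum ~ 0
        m_max = max(abs(ev[0]), abs(ev[2])) or 1.0
        eps = ev[1] / m_max                        # CLVD measure in [-0.5, 0.5]
        clvd = 2.0 * abs(eps)                      # CLVD fraction of deviatoric part
        return iso, 1.0 - clvd, clvd               # (isotropic, DC, CLVD)

    # A pure double couple has zero trace and deviatoric eigenvalues
    # (-m, 0, m), so it returns (0.0, 1.0, 0.0).
    print(decompose_moment_tensor(np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0.0]])))
    ```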

  9. Isotope resolution of the iron peak

    International Nuclear Information System (INIS)

    Henke, R.P.; Benton, E.V.

    1977-01-01

    A stack of Lexan detectors from the Apollo 17 mission has been analyzed to obtain Z measurements of sufficient accuracy to resolve the iron peak into its isotopic components. Within this distribution several peaks are present. With the centrally located, most populated peak assumed to be 56 Fe, the measurements imply that the abundances of 54 Fe and 58 Fe are appreciable fractions of the 56 Fe abundance. This result is in agreement with those of Webber et al. and Siegman et al. but in disagreement with the predictions of Tsao et al. (Auth.)

  10. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)
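
    The record does not spell out the RDL syntax, but the kind of structured computation such a program would compile to can be sketched as a Monte Carlo simulation of a series/parallel system; the mini-format, component names, and failure rates below are illustrative assumptions only.

    ```python
    import random

    def simulate_system(structure, rates, horizon, trials=20_000, seed=1):
        """Monte Carlo estimate of system reliability over `horizon` hours.
        `structure` is a series chain of parallel groups; `rates` maps each
        component to an exponential failure rate. This mini-format stands in
        for whatever formal description an RDL program would provide."""
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            # draw a time-to-failure per component; alive if it outlasts horizon
            alive = {c: rng.expovariate(r) > horizon for c, r in rates.items()}
            # system works if every group keeps at least one component alive
            if all(any(alive[c] for c in group) for group in structure):
                ok += 1
        return ok / trials

    # Two redundant pumps in series with a single controller, 1000 h horizon.
    rates = {"pumpA": 1e-3, "pumpB": 1e-3, "ctrl": 1e-4}
    print(simulate_system([("pumpA", "pumpB"), ("ctrl",)], rates, horizon=1000.0))
    ```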

  11. Peak load arrangements : Assessment of Nordel guidelines

    Energy Technology Data Exchange (ETDEWEB)

    2009-07-01

    Two Nordic countries, Sweden and Finland, have legislation that empowers the TSO to acquire designated peak load resources to mitigate the risk for shortage situations during the winter. In Denmark, the system operator procures resources to maintain a satisfactory level of security of supply. In Norway the TSO has set up a Regulation Power Option Market (RKOM) to secure a satisfactory level of operational reserves at all times, also in winter with high load demand. Only the arrangements in Finland and Sweden fall under the heading of Peak Load Arrangements defined in Nordel Guidelines. NordREG has been invited by the Electricity Market Group (EMG) to evaluate Nordel's proposal for 'Guidelines for transitional Peak Load Arrangements'. The EMG has also financed a study made by EC Group to support NordREG in the evaluation of the proposal. The study has been taken into account in NordREG's evaluation. In parallel to the EMG task, the Swedish regulator, the Energy Markets Inspectorate, has been given the task by the Swedish government to investigate a long term solution of the peak load issue. The Swedish and Finnish TSOs have together with Nord Pool Spot worked on finding a harmonized solution for activation of the peak load reserves in the market. An agreement accepted by the relevant authorities was reached in early January 2009, and the arrangement has been implemented since 19th January 2009. NordREG views that the proposed Nordel guidelines have served as a starting point for the presently agreed procedure. However, NordREG does not see any need to further develop the Nordel guidelines for peak load arrangements. NordREG agrees with Nordel that the market should be designed to solve peak load problems through proper incentives to market players. NordREG presumes that the relevant authorities in each country will take decisions on the need for any peak load arrangement to ensure security of supply. NordREG proposes that such decisions should be

  12. Sensitivity enhancement by chromatographic peak concentration with ultra-high performance liquid chromatography-nuclear magnetic resonance spectroscopy for minor impurity analysis.

    Science.gov (United States)

    Tokunaga, Takashi; Akagi, Ken-Ichi; Okamoto, Masahiko

    2017-07-28

    High performance liquid chromatography can be coupled with nuclear magnetic resonance (NMR) spectroscopy to give a powerful analytical method known as liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy, which can be used to determine the chemical structures of the components of complex mixtures. However, intrinsic limitations in the sensitivity of NMR spectroscopy have restricted the scope of this procedure, and resolving these limitations remains a critical problem for analysis. In this study, we coupled ultra-high performance liquid chromatography (UHPLC) with NMR to give a simple and versatile analytical method with higher sensitivity than conventional LC-NMR. UHPLC separation enabled the concentration of individual peaks to give a volume similar to that of the NMR flow cell, thereby maximizing the sensitivity to the theoretical upper limit. The UHPLC concentration of compound peaks present at typical impurity levels (5.0-13.1 nmol) in a mixture led to up to a three-fold increase in the signal-to-noise ratio compared with LC-NMR. Furthermore, we demonstrated the use of UHPLC-NMR for obtaining structural information on a minor impurity in a reaction mixture in actual laboratory-scale development of a synthetic process. Using UHPLC-NMR, the experimental run times for chromatography and NMR were greatly reduced compared with LC-NMR. UHPLC-NMR successfully overcomes the difficulties associated with analyses of minor components in a complex mixture by LC-NMR, which are problematic even when an ultra-high field magnet and cryogenic probe are used. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Visual reliability and information rate in the retina of a nocturnal bee.

    Science.gov (United States)

    Frederiksen, Rikard; Wcislo, William T; Warrant, Eric J

    2008-03-11

    Nocturnal animals relying on vision typically have eyes that are optically and morphologically adapted for both increased sensitivity and greater information capacity in dim light. Here, we investigate whether adaptations for increased sensitivity also are found in their photoreceptors by using closely related and fast-flying nocturnal and diurnal bees as model animals. The nocturnal bee Megalopta genalis is capable of foraging and homing by using visually discriminated landmarks at starlight intensities. Megalopta's near relative, Lasioglossum leucozonium, performs these tasks only in bright sunshine. By recording intracellular responses to Gaussian white-noise stimuli, we show that photoreceptors in Megalopta actually code less information at most light levels than those in Lasioglossum. However, as in several other nocturnal arthropods, Megalopta's photoreceptors possess a much greater gain of transduction, indicating that nocturnal photoreceptors trade information capacity for sensitivity. By sacrificing photoreceptor signal-to-noise ratio and information capacity in dim light for an increased gain and, thus, an increased sensitivity, this strategy can benefit nocturnal insects that use neural summation to improve visual reliability at night.

  14. Bayesian Peak Picking for NMR Spectra

    KAUST Repository

    Cheng, Yichen

    2014-02-01

    Protein structure determination is a very important topic in structural genomics, which helps people to understand varieties of biological functions such as protein-protein interactions, protein-DNA interactions and so on. Nowadays, nuclear magnetic resonance (NMR) has often been used to determine the three-dimensional structures of proteins in vivo. This study aims to automate the peak picking step, the most important and tricky step in NMR structure determination. We propose to model the NMR spectrum by a mixture of bivariate Gaussian densities and use the stochastic approximation Monte Carlo algorithm as the computational tool to solve the problem. Under the Bayesian framework, the peak picking problem is cast as a variable selection problem. The proposed method can automatically distinguish true peaks from false ones without preprocessing the data. To the best of our knowledge, this is the first effort in the literature that tackles the peak picking problem for NMR spectrum data using a Bayesian method.
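
    A minimal sketch of the forward model just described, assuming Python/NumPy: the spectrum is represented as a weighted mixture of bivariate Gaussian densities plus noise. The stochastic approximation Monte Carlo sampler and the Bayesian variable-selection step are omitted, and all names and values are illustrative.

```python
import numpy as np

def mixture_spectrum(grid_x, grid_y, centers, widths, weights):
    """Evaluate a mixture of isotropic bivariate Gaussians on a 2D grid."""
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    spectrum = np.zeros_like(X)
    for (cx, cy), s, w in zip(centers, widths, weights):
        spectrum += w * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2.0 * s ** 2))
    return spectrum

x = np.linspace(0.0, 10.0, 128)
y = np.linspace(0.0, 10.0, 128)
clean = mixture_spectrum(x, y, centers=[(3.0, 4.0), (7.0, 6.5)],
                         widths=[0.3, 0.4], weights=[1.0, 0.6])
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(clean.shape)
# Peak picking then amounts to selecting which candidate centers
# ("variables") to retain so that the mixture explains `noisy`.
```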

  15. Peak-Seeking Control for Trim Optimization

    Data.gov (United States)

    National Aeronautics and Space Administration — Innovators have developed a peak-seeking algorithm that can reduce drag and improve performance and fuel efficiency by optimizing aircraft trim in real time. The...

  16. Modeling, implementation, and validation of arterial travel time reliability : [summary].

    Science.gov (United States)

    2013-11-01

    Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...

  17. Instream flow needs below peaking hydroelectric projects

    International Nuclear Information System (INIS)

    Milhous, R.T.

    1991-01-01

    This paper reports on a method developed to assist in the determination of instream flow needs below hydroelectric projects operated in a peaking mode. Peaking hydroelectric projects significantly change streamflow over a short period of time; consequently, any instream flow methodology must consider the dual flows associated with peaking projects. The dual flows are the lowest flow and the maximum generation flow of a peaking cycle. The methodology is based on elements of the Physical Habitat Simulation System of the U.S. Fish and Wildlife Service and uses habitat, rather than fish numbers or biomass, as the basic response variable. All aquatic animals are subject to the rapid changes in streamflow, which cause rapid swings in habitat quality. Some aquatic organisms are relatively fixed in location in the stream while others can move when flows change. The habitat available from a project operated in peaking mode is considered to be the minimum habitat occurring during a cycle of habitat change. The methodology takes into consideration that some aquatic animals can move and others cannot during a peaking cycle

  18. Limitation of peak fitting and peak shape methods for determination of activation energy of thermoluminescence glow peaks

    CERN Document Server

    Sunta, C M; Piters, T M; Watanabe, S

    1999-01-01

    This paper shows the limitation of general order peak fitting and peak shape methods for determining the activation energy of thermoluminescence glow peaks in cases in which the retrapping probability is much higher than the recombination probability and the traps are filled up to near the saturation level. Correct values can be obtained when the trap occupancy is reduced by using small doses or by post-irradiation partial bleaching. This limitation in the application of these methods has not been indicated earlier. In view of the unknown nature of the kinetics in experimental samples, it is recommended that these methods of activation energy determination be applied only at doses well below the saturation dose.

  19. Security and reliability analysis of diversity combining techniques in SIMO mixed RF/FSO with multiple users

    KAUST Repository

    Abd El-Malek, Ahmed H.; Salhab, Anas M.; Zummo, Salam A.; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we investigate the impact of different diversity combining techniques on the security and reliability of a single-input multiple-output (SIMO) mixed radio frequency (RF)/free space optical (FSO) relay network with opportunistic multiuser scheduling. In this model, the user with the best channel among multiple users communicates with a multiple-antenna relay node over an RF link, and the relay node then employs the amplify-and-forward (AF) protocol to retransmit the user data to the destination over an FSO link. Moreover, the authorized transmission is assumed to be attacked by a single passive RF eavesdropper equipped with multiple antennas. Therefore, the system security-reliability trade-off is investigated. Closed-form expressions for the system outage probability and the system intercept probability are derived. The newly derived expressions are then simplified to their asymptotic formulas in the high signal-to-noise ratio (SNR) region. Numerical results are presented to validate the exact and asymptotic results and to illustrate the impact of various system parameters on the system performance. © 2016 IEEE.

  20. A fast and reliable method for simultaneous waveform, amplitude and latency estimation of single-trial EEG/MEG data.

    Directory of Open Access Journals (Sweden)

    Wouter D Weeda

    Full Text Available The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
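
    A hedged sketch of the core idea only: latency is a nonlinear parameter searched over a grid, while the amplitude at each candidate latency has a closed-form linear least-squares solution. The basis function, sizes, and the single-component restriction are illustrative assumptions, not the paper's full method.

```python
import numpy as np

def estimate_trial(trial, basis, max_shift):
    """Return (amplitude, latency) minimizing ||trial - a * shift(basis)||^2."""
    best = (0.0, 0, np.inf)
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(basis, shift)              # circular shift, for brevity
        a = (shifted @ trial) / (shifted @ shifted)  # closed-form LS amplitude
        resid = np.sum((trial - a * shifted) ** 2)
        if resid < best[2]:
            best = (a, shift, resid)
    return best[0], best[1]

rng = np.random.default_rng(1)
t = np.linspace(-1.0, 1.0, 200)
basis = np.exp(-t ** 2 / 0.02)                       # one component waveform
trial = 2.5 * np.roll(basis, 7) + 0.3 * rng.standard_normal(t.size)
amp, lat = estimate_trial(trial, basis, max_shift=20)   # recovers ~2.5 and 7
```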

  1. Security and reliability analysis of diversity combining techniques in SIMO mixed RF/FSO with multiple users

    KAUST Repository

    Abd El-Malek, Ahmed H.

    2016-07-26

    In this paper, we investigate the impact of different diversity combining techniques on the security and reliability of a single-input multiple-output (SIMO) mixed radio frequency (RF)/free space optical (FSO) relay network with opportunistic multiuser scheduling. In this model, the user with the best channel among multiple users communicates with a multiple-antenna relay node over an RF link, and the relay node then employs the amplify-and-forward (AF) protocol to retransmit the user data to the destination over an FSO link. Moreover, the authorized transmission is assumed to be attacked by a single passive RF eavesdropper equipped with multiple antennas. Therefore, the system security-reliability trade-off is investigated. Closed-form expressions for the system outage probability and the system intercept probability are derived. The newly derived expressions are then simplified to their asymptotic formulas in the high signal-to-noise ratio (SNR) region. Numerical results are presented to validate the exact and asymptotic results and to illustrate the impact of various system parameters on the system performance. © 2016 IEEE.
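
    A hedged Monte Carlo sketch of the outage metric analyzed above, approximating the end-to-end AF SNR by the minimum of the two hops, with selection combining over i.i.d. Rayleigh RF branches and a lognormal stand-in for FSO turbulence; the distributions and parameters are simplifying assumptions, not the paper's exact channel model.

```python
import numpy as np

rng = np.random.default_rng(10)
n, L = 200_000, 4
# Selection combining over L antennas; i.i.d. Rayleigh fading -> exponential SNR.
snr_rf = 10.0 * rng.exponential(size=(n, L)).max(axis=1)
# Unit-mean lognormal fading as a crude stand-in for FSO turbulence.
snr_fso = 8.0 * rng.lognormal(mean=-0.125, sigma=0.5, size=n)

snr_e2e = np.minimum(snr_rf, snr_fso)       # tight upper bound on the AF SNR
outage = np.mean(snr_e2e < 5.0)             # P(end-to-end SNR below threshold)
print(outage)
```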

  2. Reliability data banks

    International Nuclear Information System (INIS)

    Cannon, A.G.; Bendell, A.

    1991-01-01

    Following an introductory chapter on reliability (what it is, why it is needed, and how it is achieved and measured), the principles of reliability data bases and analysis methodologies are the subject of the next two chapters. The next chapter mentions achievements due to the development of data banks in different industries. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and the IAEA's experience in compiling a generic component reliability data base are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks - friend, foe or a waste of time?' and future developments. (UK)

  3. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  4. Statistics of peaks of Gaussian random fields

    International Nuclear Information System (INIS)

    Bardeen, J.M.; Bond, J.R.; Kaiser, N.; Szalay, A.S.; Stanford Univ., CA; California Univ., Berkeley; Cambridge Univ., England; Fermi National Accelerator Lab., Batavia, IL)

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima is examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of the heights of the maxima, and the average density of upcrossing points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima. 67 references
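
    A 1D worked example of the kind of peak-statistics formula derived here: by Rice's formula, the mean density of local maxima of a stationary Gaussian process is sqrt(m4/m2)/(2*pi), with m_n the n-th spectral moment. The flat band-limited spectrum below is an illustrative assumption.

```python
import numpy as np

def maxima_density(freqs, spectrum):
    """Mean number of local maxima per unit length (1D Rice formula)."""
    dw = freqs[1] - freqs[0]
    m2 = np.sum(freqs ** 2 * spectrum) * dw   # second spectral moment
    m4 = np.sum(freqs ** 4 * spectrum) * dw   # fourth spectral moment
    return np.sqrt(m4 / m2) / (2.0 * np.pi)

w = np.linspace(0.0, 1.0, 2001)     # angular frequency, band-limited at 1
S = np.ones_like(w)                 # flat spectrum on [0, 1]
print(maxima_density(w, S))         # analytic value sqrt(3/5)/(2*pi) ~ 0.123
```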

  5. Peak Oil, threat or energy worlds' phantasm?

    International Nuclear Information System (INIS)

    Favennec, Jean-Pierre

    2011-01-01

    The concept of Peak Oil is based on the work of King Hubbert, a petroleum geologist who worked for Shell in the USA in the 1960s. Based on the fact that discoveries in America reached a maximum in the 1930s, he announced that American production would reach a maximum in 1969, which did actually occur. Geologists who are members of the Association for the Study of Peak Oil have extrapolated this result to a worldwide scale and, since oil discoveries reached a peak in the 1960s, argue that production will peak in the very near future. It is clear that hydrocarbon reserves are finite and therefore exhaustible, but little is known regarding the level of ultimate (i.e. total existing) reserves. There are probably very large reserves of non-conventional oil in addition to the reserves of conventional oil. An increasing number of specialists put maximum production at less than 100 Mb/d, more for geopolitical than physical reasons. Attainable peak production will probably vary from year to year and will depend on how crude oil prices develop

  6. Electric peak power forecasting by year 2025

    International Nuclear Information System (INIS)

    Alsayegh, O.A.; Al-Matar, O.A.; Fairouz, F.A.; Al-Mulla Ali, A.

    2005-01-01

    Peak power demand in Kuwait up to the year 2025 was predicted using an artificial neural network (ANN) model. The aim of the study was to investigate the effect of air conditioning (A/C) units on long-term power demand. Five socio-economic factors were selected as inputs for the simulation: (1) gross national product, (2) population, (3) number of buildings, (4) imports of A/C units, and (5) index of industrial production. The study used socio-economic data from 1978 to 2000. Historical data of the first 10 years of the studied time period were used to train the ANN. The electrical network was then simulated to forecast peak power for the following 11 years. The calculated error was then used for years in which power consumption data were not available. The study demonstrated that average peak power rates increased by 4100 MW every 5 years. Various scenarios related to changes in population, the number of buildings, and the quantity of A/C units were then modelled to estimate long-term peak power demand. Results of the study demonstrated that population had the strongest impact on future power demand, while the number of buildings had the smallest impact. It was concluded that peak power growth can be controlled through the use of different immigration policies, increased A/C efficiency, and the use of vertical housing. 7 refs., 2 tabs., 6 figs

  7. Electric peak power forecasting by year 2025

    Energy Technology Data Exchange (ETDEWEB)

    Alsayegh, O.A.; Al-Matar, O.A.; Fairouz, F.A.; Al-Mulla Ali, A. [Kuwait Inst. for Scientific Research, Kuwait City (Kuwait). Div. of Environment and Urban Development

    2005-07-01

    Peak power demand in Kuwait up to the year 2025 was predicted using an artificial neural network (ANN) model. The aim of the study was to investigate the effect of air conditioning (A/C) units on long-term power demand. Five socio-economic factors were selected as inputs for the simulation: (1) gross national product, (2) population, (3) number of buildings, (4) imports of A/C units, and (5) index of industrial production. The study used socio-economic data from 1978 to 2000. Historical data of the first 10 years of the studied time period were used to train the ANN. The electrical network was then simulated to forecast peak power for the following 11 years. The calculated error was then used for years in which power consumption data were not available. The study demonstrated that average peak power rates increased by 4100 MW every 5 years. Various scenarios related to changes in population, the number of buildings, and the quantity of A/C units were then modelled to estimate long-term peak power demand. Results of the study demonstrated that population had the strongest impact on future power demand, while the number of buildings had the smallest impact. It was concluded that peak power growth can be controlled through the use of different immigration policies, increased A/C efficiency, and the use of vertical housing. 7 refs., 2 tabs., 6 figs.
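
    A hedged sketch of the forecasting setup described in this record: a small feed-forward network mapping the five socio-economic indicators to peak demand, trained on the first 10 years and used to forecast the rest. scikit-learn is a convenient stand-in; the study's actual network architecture is not specified here, and the data below are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: GNP, population, buildings, A/C imports, industrial production;
# one row per year 1978-2000, pre-scaled to [0, 1]. Targets are synthetic.
X = rng.uniform(0.0, 1.0, size=(23, 5))
y = X @ np.array([0.5, 1.5, 0.3, 0.8, 0.4]) + 0.05 * rng.standard_normal(23)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=20000,
                     random_state=0).fit(X[:10], y[:10])   # first 10 years
forecast = model.predict(X[10:])                           # remaining years
```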

  8. A note on errors and signal to noise ratio of binary cross-correlation measurements of system impulse response

    International Nuclear Information System (INIS)

    Cummins, J.D.

    1964-02-01

    The sources of error in the measurement of system impulse response using test signals of a discrete interval binary nature are considered. Methods of correcting for the errors due to theoretical imperfections are given and the variance of the estimate of the system impulse response due to random noise is determined. Several topics related to the main topic are considered e.g. determination of a theoretical model from experimental results. General conclusions about the magnitude of the errors due to the theoretical imperfections are made. (author)
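
    A hedged sketch of the measurement principle: drive the system with a discrete-interval binary (pseudo-random) test signal and estimate the impulse response from the input-output cross-correlation, since for a white binary input E[u(n-k)y(n)] = h(k)·E[u²]. The first-order test system and lengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 20_000, 50
u = rng.choice([-1.0, 1.0], size=N)            # discrete-interval binary signal

h_true = 0.3 * 0.8 ** np.arange(M)             # "unknown" first-order system
y = np.convolve(u, h_true)[:N] + 0.1 * rng.standard_normal(N)

# Since u is white with u^2 = 1, E[u(n-k) y(n)] = h(k).
h_est = np.array([np.mean(u[: N - k] * y[k:]) for k in range(M)])
```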

  9. Signal to noise ratio (SNR) and image uniformity: an estimate of performance of magnetic resonance imaging (MRI) system

    International Nuclear Information System (INIS)

    Narayan, P.; Suri, S.; Choudhary, S.R.

    2001-01-01

    In the most general definition, noise in an image is any variation that represents a deviation from truth. Noise sources in MRI can be systematic, or random and statistical in nature. Data processing algorithms that smooth and enhance the edges by non-linear intensity assignments, among other factors, can affect the distribution of statistical noise. The SNR and image uniformity depend on various parameters of the NMR imaging system (viz. general system calibration, gain and coil tuning, RF shielding, coil loading, image processing, and scan parameters like TE, TR, interslice distance, slice thickness, pixel size and matrix size). A study of SNR and image uniformity has been performed using a standard head RF coil with different TR values, and estimates of their variation are presented. A comparison between different techniques has also been evaluated using a standard protocol on the Siemens Magnetom Vision Plus MRI system
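
    A minimal sketch of the two phantom measures discussed above, following common NEMA-style definitions (an assumption; the exact protocol on the Siemens system may differ): SNR from a signal ROI and a background ROI, and integral uniformity from the max/min signal within the phantom.

```python
import numpy as np

def snr(image, signal_roi, background_roi):
    """ROI-based SNR: mean signal over the SD of background (air) noise."""
    return image[signal_roi].mean() / image[background_roi].std(ddof=1)

def integral_uniformity(image, phantom_roi):
    """Percent uniformity from the max/min signal inside the phantom ROI."""
    s = image[phantom_roi]
    return 100.0 * (1.0 - (s.max() - s.min()) / (s.max() + s.min()))

img = np.random.default_rng(3).normal(100.0, 5.0, (256, 256))   # fake image
roi_sig = (slice(100, 156), slice(100, 156))    # centre of the phantom
roi_bg = (slice(0, 32), slice(0, 32))           # air, outside the phantom
print(snr(img, roi_sig, roi_bg), integral_uniformity(img, roi_sig))
```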

  10. Prediction of speech masking release for fluctuating interferers based on the envelope power signal-to-noise ratio

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2012-01-01

    normal-hearing listeners in conditions with additive stationary noise, reverberation, and nonlinear processing with spectral subtraction. The latter condition represents a case in which the standardized speech intelligibility index and speech transmission index fail. However, the sEPSM is limited to conditions ... for the stationary and non-stationary interferers, demonstrating further that the envelope SNR is crucial for speech comprehension.

  11. Optimizing Taq polymerase concentration for improved signal-to-noise in the broad range detection of low abundance bacteria.

    Directory of Open Access Journals (Sweden)

    Rudolph Spangler

    Full Text Available BACKGROUND: PCR in principle can detect a single target molecule in a reaction mixture. Contaminating bacterial DNA in reagents creates a practical limit on the use of PCR to detect dilute bacterial DNA in environmental or public health samples. The most pernicious source of contamination is microbial DNA in DNA polymerase preparations; importantly, all commercial Taq polymerase preparations inevitably contain contaminating microbial DNA, and removal of DNA from an enzyme preparation is problematical. METHODOLOGY/PRINCIPAL FINDINGS: This report demonstrates that the background of contaminating DNA detected by quantitative PCR with broad host range primers can be decreased more than 10-fold through the simple expedient of Taq enzyme dilution, without altering detection of target microbes in samples. The general method is: for any thermostable polymerase used for high-sensitivity detection, perform a dilution series of the polymerase crossed with a dilution series of DNA or bacteria that work well with the test primers. For further work, use the concentration of polymerase that gave the least signal in its negative control (H2O) while also not changing the threshold cycle for dilutions of spiked DNA or bacteria compared with higher concentrations of Taq polymerase. CONCLUSIONS/SIGNIFICANCE: It is clear from the studies shown in this report that a straightforward procedure of optimizing the Taq polymerase concentration achieved "treatment-free" attenuation of interference by contaminating bacterial DNA in Taq polymerase preparations. This procedure should facilitate detection and quantification with broad host range primers of a small number of bona fide bacteria (as few as one) in a sample.
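
    A hedged sketch of the stated selection rule: across a dilution grid, pick the lowest polymerase concentration whose water control shows the least background signal (highest Ct) while the Ct of a spiked standard stays unchanged relative to full-strength enzyme. The Ct table below is invented for illustration.

```python
taq_dilution = [1.0, 0.5, 0.25, 0.125]      # fraction of full-strength enzyme
ct_water    = [32.1, 34.0, 36.5, 36.6]      # no-template control (higher = cleaner)
ct_spiked   = [21.0, 21.1, 21.0, 23.5]      # spiked standard (must stay unchanged)

ref = ct_spiked[0]                          # full-strength reference Ct
ok = [i for i, ct in enumerate(ct_spiked) if abs(ct - ref) < 0.5]
best = max(ok, key=lambda i: ct_water[i])   # cleanest acceptable condition
print(taq_dilution[best])                   # -> 0.25 for this invented table
```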

  12. Signal-to-Noise Contribution of Principal Component Loads in Reconstructed Near-Infrared Raman Tissue Spectra

    NARCIS (Netherlands)

    Grimbergen, M. C. M.; van Swol, C. F. P.; Kendall, C.; Verdaasdonk, R. M.; Stone, N.; Bosch, J. L. H. R.

    The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device

  13. Dopamine modulates persistent synaptic activity and enhances the signal-to-noise ratio in the prefrontal cortex.

    Directory of Open Access Journals (Sweden)

    Sven Kroener

    2009-08-01

    Full Text Available The importance of dopamine (DA) for prefrontal cortical (PFC) cognitive functions is widely recognized, but its mechanisms of action remain controversial. DA is thought to increase signal gain in active networks according to an inverted-U dose-response curve, and these effects may depend on both tonic and phasic release of DA from midbrain ventral tegmental area (VTA) neurons. We used patch-clamp recordings in organotypic co-cultures of the PFC, hippocampus and VTA to study DA modulation of spontaneous network activity in the form of Up-states and signals in the form of synchronous EPSP trains. These cultures possessed a tonic DA level, and stimulation of the VTA evoked DA transients within the PFC. The addition of high (> or = 1 microM) concentrations of exogenous DA to the cultures reduced Up-states and diminished excitatory synaptic inputs (EPSPs) evoked during the Down-state. Increasing endogenous DA via bath application of cocaine also reduced Up-states. Lower concentrations of exogenous DA (0.1 microM) had no effect on the Up-state itself, but they selectively increased the efficiency of a train of EPSPs to evoke spikes during the Up-state. When the background DA was eliminated by depleting DA with reserpine and alpha-methyl-p-tyrosine, or by preparing corticolimbic co-cultures without the VTA slice, Up-states could be enhanced by low concentrations (0.1-1 microM) of DA that had no effect in the VTA-containing cultures. Finally, in spite of the concentration-dependent effects on Up-states, exogenous DA at all but the lowest concentrations increased intracellular current-pulse-evoked firing in all cultures, underlining the complexity of DA's effects in an active network. Taken together, these data show concentration-dependent effects of DA on global PFC network activity and they demonstrate a mechanism through which optimal levels of DA can modulate signal gain to support cognitive functioning.

  14. A note on errors and signal to noise ratio of binary cross-correlation measurements of system impulse response

    Energy Technology Data Exchange (ETDEWEB)

    Cummins, J D [Dynamics Group, Control and Instrumentation Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1964-02-15

    The sources of error in the measurement of system impulse response using test signals of a discrete interval binary nature are considered. Methods of correcting for the errors due to theoretical imperfections are given and the variance of the estimate of the system impulse response due to random noise is determined. Several topics related to the main topic are considered e.g. determination of a theoretical model from experimental results. General conclusions about the magnitude of the errors due to the theoretical imperfections are made. (author)

  15. Predicting speech intelligibility based on the signal-to-noise envelope power ratio after modulation-frequency selective processing

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2011-01-01

    A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a similar structure as the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNRenv, at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech ... process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America.
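
    A minimal sketch of the central metric only, the envelope power signal-to-noise ratio SNRenv = P_env,speech / P_env,noise, computed here from Hilbert envelopes in a single broad modulation band; the full sEPSM uses a modulation filterbank and an ideal-observer back end, and the test signals below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def norm_envelope_power(x):
    """AC power of the Hilbert envelope, normalized by its DC power."""
    env = np.abs(hilbert(x))
    ac = env - env.mean()
    return np.mean(ac ** 2) / env.mean() ** 2

fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(4)
# 4 Hz amplitude-modulated noise as a crude speech-like carrier.
speechlike = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)         # unmodulated masker
snr_env = norm_envelope_power(speechlike) / norm_envelope_power(noise)
```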

  16. Real-time determination of the signal-to-noise ratio of partly coherent seismic time series

    DEFF Research Database (Denmark)

    Kjeldsen, Peter Møller

    1994-01-01

    Since it is of great practical interest to be able to monitor the S/N while the traces are recorded, an approach for fast real-time determination of the S/N of seismic time series is proposed. The described method is based on an iterative procedure utilizing the trace-to-trace coherence, but unlike procedures known so far it uses calculated initial guesses and stopping criteria. This significantly reduces the computational burden of the procedure, so that real-time capabilities are obtained
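
    A hedged sketch of the identity such coherence-based estimators exploit: for two traces sharing the same signal with independent, equal-power noise, the trace-to-trace correlation c satisfies c = S/(S+N), so S/N = c/(1-c). The paper's iterative scheme with calculated initial guesses is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
signal = np.sin(2 * np.pi * 0.02 * np.arange(2000))      # common signal
t1 = signal + 0.7 * rng.standard_normal(2000)            # two repeated traces
t2 = signal + 0.7 * rng.standard_normal(2000)

c = np.corrcoef(t1, t2)[0, 1]
snr_est = c / (1.0 - c)                  # estimated signal power / noise power
snr_true = np.var(signal) / 0.7 ** 2     # ~1.02; the estimate should be close
```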

  17. Predicting binaural speech intelligibility using the signal-to-noise ratio in the envelope power spectrum domain

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; MacDonald, Ewen; Dau, Torsten

    2016-01-01

    time difference of the target and masker. The Pearson correlation coefficient between the simulated speech reception thresholds and the data across all experiments was 0.91. A model version that considered only BE processing performed similarly (correlation coefficient of 0.86) to the complete model...

  18. Enhancement of signal-to-noise ratio of ultracold polar NaCs molecular spectra by phase locking detection

    Science.gov (United States)

    Wang, Wenhao; Liu, Wenliang; Wu, Jizhou; Li, Yuqing; Wang, Xiaofeng; Liu, Yanyan; Ma, Jie; Xiao, Liantuan; Jia, Suotang

    2017-12-01

    Not Available. Project supported by the National Key Research and Development Program of China (Grant No. 2017YFA0304203), the ChangJiang Scholars and Innovative Research Team in the University of the Ministry of Education of China (Grant No. IRT13076), the National Natural Science Foundation of China (Grant Nos. 91436108, 61378014, 61675121, 61705123, and 61722507), the Fund for Shanxi “1331 Project” Key Subjects Construction, China, and the Foundation for Outstanding Young Scholars of Shanxi Province, China (Grant No. 201601D021001).

  19. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

    A high reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its usage in the community.

  20. SPANISH PEAKS WILDERNESS STUDY AREA, COLORADO.

    Science.gov (United States)

    Budding, Karin E.; Kluender, Steven E.

    1984-01-01

    A geologic and geochemical investigation and a survey of mines and prospects were conducted to evaluate the mineral-resource potential of the Spanish Peaks Wilderness Study Area, Huerfano and Las Animas Counties, in south-central Colorado. Anomalous gold, silver, copper, lead, and zinc concentrations in rocks and in stream sediments from drainage basins in the vicinity of the old mines and prospects on West Spanish Peak indicate a substantiated mineral-resource potential for base and precious metals in the area surrounding this peak; however, the mineralized veins are sparse, small in size, and generally low in grade. There is a possibility that coal may underlie the study area, but it would be at great depth and it is unlikely that it would have survived the intense igneous activity in the area. There is little likelihood for the occurrence of oil and gas because of the lack of structural traps and the igneous activity.

  1. Analysis of fuel end-temperature peaking

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Z.; Jiang, Q.; Lai, L.; Shams, M. [CANDU Energy Inc., Fuel Engineering Dept., Mississauga, Ontario (Canada)

    2013-07-01

    During normal operation and refuelling of CANDU® fuel, fuel temperatures near bundle ends will increase due to a phenomenon called end flux peaking. A similar phenomenon would also be expected to occur during a postulated large-break LOCA event. The end flux peaking in a CANDU fuel element is due to the fact that the neutron flux is higher near a bundle end, in contact with a neighbouring bundle or close to the heavy water coolant, than at the bundle mid-plane, because Zircaloy and heavy water absorb fewer thermal neutrons than the UO{sub 2} material. This paper describes Candu Energy's experience in analysing bundle behaviour due to end flux peaking using the fuel codes FEAT, ELESTRES and ELOCA. (author)

  2. Resolving overlapping peaks in ARXPS data: The effect of noise and fitting method

    International Nuclear Information System (INIS)

    Muñoz-Flores, Jaime; Herrera-Gomez, Alberto

    2012-01-01

    Highlights: ► Noise is an important factor affecting the fitting of overlapping peaks in XPS data. ► The combined information in ARXPS data can be used to improve fitting reliability. ► The error in the estimation of the peak parameters depends on the peak-fitting method. ► The simultaneous fitting method is much more robust against noise than sequential fitting. ► The estimation of the error range is better done with ARXPS data than with XPS data. - Abstract: Peak-fitting of X-ray photoelectron spectroscopy (XPS) data can be very sensitive to noise when the difference in binding energy among the peaks is smaller than the width of the peaks. This sensitivity depends on the fitting algorithm. Angle-resolved XPS (ARXPS) analysis offers the opportunity of employing the combined information contained in the data at the various angles to reduce the sensitivity to noise. The assumption of shared peak parameters (center and width) among the spectra for the different angles, and how it is introduced into the analysis, plays a basic role. Sequential fitting is the usual practice in ARXPS data peak-fitting. It consists of first estimating the center and width of the peaks from the data acquired at one of the angles, and then using those parameters as a starting approximation for fitting the data for each of the remaining angles. An improvement of this method consists of averaging the centers and widths of the peaks obtained at the different angles, and then employing these values to assess the areas of the peaks for each angle. Another strategy for using the combined information is to assess the peak parameters from the sum of the experimental data. The complete use of the combined information contained in the data set is optimized by the simultaneous fitting method, which consists of assessing the center and width of the peaks by fitting the data at all the angles simultaneously. Computer-generated data was employed to compare the sensitivity with respect
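
    A hedged sketch of the simultaneous fitting method, assuming SciPy: peak centers and widths are shared across all emission angles while peak areas are free per angle. Two overlapping Gaussians on three synthetic angle-resolved spectra; all values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

E = np.linspace(282.0, 292.0, 300)                 # binding-energy axis (eV)
rng = np.random.default_rng(7)

def gauss(E, area, c, w):
    return area * np.exp(-0.5 * ((E - c) / w) ** 2) / (w * np.sqrt(2.0 * np.pi))

areas = [(1.0, 0.5), (0.8, 0.9), (0.4, 1.2)]       # per-angle areas, 3 angles
spectra = np.array([gauss(E, a1, 286.0, 0.8) + gauss(E, a2, 287.2, 0.8)
                    + 0.01 * rng.standard_normal(E.size) for a1, a2 in areas])

def residuals(p):
    c1, w1, c2, w2 = p[:4]                         # shared across all angles
    return np.concatenate([
        spectra[k]
        - gauss(E, p[4 + 2 * k], c1, w1)           # free area, peak 1
        - gauss(E, p[5 + 2 * k], c2, w2)           # free area, peak 2
        for k in range(len(spectra))])

p0 = np.array([285.5, 1.0, 287.5, 1.0] + [0.5] * 6)
fit = least_squares(residuals, p0)                 # fit.x[:4] -> shared c, w
```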

  3. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  4. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.

  5. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long-term operating behavior. (HP) [de]

  6. Climate change and peak demand for electricity: Evaluating policies for reducing peak demand under different climate change scenarios

    Science.gov (United States)

    Anthony, Abigail Walker

    This research focuses on the relative advantages and disadvantages of using price-based and quantity-based controls for electricity markets. It also presents a detailed analysis of one specific approach to quantity based controls: the SmartAC program implemented in Stockton, California. Finally, the research forecasts electricity demand under various climate scenarios, and estimates potential cost savings that could result from a direct quantity control program over the next 50 years in each scenario. The traditional approach to dealing with the problem of peak demand for electricity is to invest in a large stock of excess capital that is rarely used, thereby greatly increasing production costs. Because this approach has proved so expensive, there has been a focus on identifying alternative approaches for dealing with peak demand problems. This research focuses on two approaches: price based approaches, such as real time pricing, and quantity based approaches, whereby the utility directly controls at least some elements of electricity used by consumers. This research suggests that well-designed policies for reducing peak demand might include both price and quantity controls. In theory, sufficiently high peak prices occurring during periods of peak demand and/or low supply can cause the quantity of electricity demanded to decline until demand is in balance with system capacity, potentially reducing the total amount of generation capacity needed to meet demand and helping meet electricity demand at the lowest cost. However, consumers need to be well informed about real-time prices for the pricing strategy to work as well as theory suggests. While this might be an appropriate assumption for large industrial and commercial users who have potentially large economic incentives, there is not yet enough research on whether households will fully understand and respond to real-time prices. Thus, while real-time pricing can be an effective tool for addressing the peak load

  7. Reliable Design Versus Trust

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  8. Pocket Handbook on Reliability

    Science.gov (United States)

    1975-09-01

    exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, O.C. curves, Bayesian analysis. ... An introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. ... includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future time.

  9. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated, and, as an example, the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown.
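
    A minimal worked example of the two concepts named above: for a linear limit state g = R - S with independent normal resistance and load, the reliability index is beta = (mu_R - mu_S)/sqrt(sigma_R^2 + sigma_S^2) and the failure probability is Pf = Phi(-beta). The numbers are illustrative.

```python
from math import sqrt
from statistics import NormalDist

mu_R, sd_R = 900.0, 90.0     # resistance, e.g. kNm
mu_S, sd_S = 500.0, 100.0    # load effect
beta = (mu_R - mu_S) / sqrt(sd_R ** 2 + sd_S ** 2)
pf = NormalDist().cdf(-beta)
print(beta, pf)              # beta ~ 2.97, Pf ~ 1.5e-3
```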

  10. Reliability in engineering '87

    International Nuclear Information System (INIS)

    Tuma, M.

    1987-01-01

    The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)

  11. Osteoporosis: Peak Bone Mass in Women

    Science.gov (United States)

    ... bone density are seen even during childhood and adolescence. Hormonal factors. The hormone estrogen has an effect on peak bone mass. For example, women who had their first menstrual cycle at an early age and those who use oral contraceptives, which contain estrogen, often have high bone mineral ...

  12. Facility Location with Double-peaked Preferences

    DEFF Research Database (Denmark)

    Filos-Ratsikas, Aris; Li, Minming; Zhang, Jie

    2015-01-01

    ; this makes the problem essentially more challenging. As our main contribution, we present a simple truthful-in-expectation mechanism that achieves an approximation ratio of 1 + b/c for both the social and the maximum cost, where b is the distance of the agent from the peak and c is the minimum cost

  13. Liquid waste processing at Comanche Peak

    International Nuclear Information System (INIS)

    Hughes-Edwards, L.M.; Edwards, J.M.

    1996-01-01

    This article describes the radioactive waste processing at Comanche Peak Steam Electric Station. Topics covered are the following: Reduction of liquid radioactive discharges (system leakage, outage planning); reduction of waste resin generation (waste stream segregation, processing methodology); reduction of activity released and off-site dose. 8 figs., 2 tabs

  14. Avoiding the False Peaks in Correlation Discrimination

    International Nuclear Information System (INIS)

    Awwal, A.S.

    2009-01-01

    Fiducials imprinted on laser beams are used to perform video-image-based alignment of the 192 laser beams in the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory. In many video images, matched filtering is used to detect the location of these fiducials. Generally, the highest correlation peak is used to determine the position of the fiducials. However, when the signal to be detected is very weak compared to the noise, this approach breaks down completely: the highest peaks act as traps for false detection. The active target images used for automatic alignment in the National Ignition Facility are examples of such images. In these images, the fiducials of interest exhibit extremely low intensity and contrast and are surrounded by high-intensity reflections from metallic objects. Consequently, the highest correlation peaks are caused by these bright objects. In this work, we show how the shape of the correlation peak is exploited to isolate the valid matches from hundreds of invalid correlation peaks, and therefore identify extremely faint fiducials under very challenging imaging conditions
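
    A hedged sketch of the shape-based idea, not the NIF implementation: rank matched-filter correlation peaks, then accept only those whose local width matches the template's autocorrelation, so that tall but misshapen clutter peaks are rejected while a faint, correctly shaped fiducial survives. Signals and thresholds are illustrative.

```python
import numpy as np

def peak_width(corr, idx, rel=0.5):
    """Width of the peak around idx at a fraction `rel` of its height."""
    level = rel * corr[idx]
    lo = idx
    while lo > 0 and corr[lo - 1] > level:
        lo -= 1
    hi = idx
    while hi < corr.size - 1 and corr[hi + 1] > level:
        hi += 1
    return hi - lo

rng = np.random.default_rng(8)
template = np.exp(-np.linspace(-3.0, 3.0, 31) ** 2)   # model of the fiducial
x = 0.05 * rng.standard_normal(1000)
x[400:431] += 0.2 * template                          # faint fiducial near 415
x[650:750] += 2.0                                     # broad bright reflection

corr = np.correlate(x, template, mode="same")
auto = np.correlate(template, template, mode="same")
expected = peak_width(auto, int(np.argmax(auto)))     # shape of a true match

maxima = [i for i in range(1, corr.size - 1)
          if corr[i] > corr[i - 1] and corr[i] >= corr[i + 1]]
maxima.sort(key=lambda i: corr[i], reverse=True)
valid = [i for i in maxima[:20]
         if 0.5 * expected <= peak_width(corr, i) <= 1.5 * expected]
best = max(valid, key=lambda i: corr[i])   # lands near the faint fiducial,
                                           # while np.argmax(corr) is in clutter
```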

  15. Hubbert's Peak: the Impending World oil Shortage

    Science.gov (United States)

    Deffeyes, K. S.

    2004-12-01

    Global oil production will probably reach a peak sometime during this decade. After the peak, the world's production of crude oil will fall, never to rise again. The world will not run out of energy, but developing alternative energy sources on a large scale will take at least 10 years. The slowdown in oil production may already be beginning; the current price fluctuations for crude oil and natural gas may be the preamble to a major crisis. In 1956, the geologist M. King Hubbert predicted that U.S. oil production would peak in the early 1970s. Almost everyone, inside and outside the oil industry, rejected Hubbert's analysis. The controversy raged until 1970, when the U.S. production of crude oil started to fall. Hubbert was right. Around 1995, several analysts began applying Hubbert's method to world oil production, and most of them estimate that the peak year for world oil will be between 2004 and 2008. These analyses were reported in some of the most widely circulated sources: Nature, Science, and Scientific American. None of our political leaders seem to be paying attention. If the predictions are correct, there will be enormous effects on the world economy. Even the poorest nations need fuel to run irrigation pumps. The industrialized nations will be bidding against one another for the dwindling oil supply. The good news is that we will put less carbon dioxide into the atmosphere. The bad news is that my pickup truck has a 25-gallon tank.

  16. Reliability benefits of dispersed wind resource development

    International Nuclear Information System (INIS)

    Milligan, M.; Artig, R.

    1998-05-01

    Generating capacity that is available during the utility peak period is worth more than off-peak capacity. Wind power from a single location might not be available during enough of the peak period to provide sufficient value. However, if the wind power plant is developed over geographically dispersed locations, the timing and availability of wind power from these multiple sources could provide a better match with the utility's peak load than a single site. There are other issues that arise when considering dispersed wind plant development. Singular development can result in economies of scale and might reduce the costs of obtaining multiple permits and multiple interconnections. However, dispersed development can result in cost efficiencies if interconnection can be accomplished at lower voltages or at locations closer to load centers. Several wind plants are in various stages of planning or development in the US. Although some of these are small-scale demonstration projects, significant wind capacity has been developed in Minnesota, with additional developments planned in Wyoming, Iowa and Texas. As these and other projects are planned and developed, there is a need to analyze the value of geographically dispersed sites for the reliability of the overall wind plant. This paper uses a production-cost/reliability model to analyze the reliability of several wind sites in the state of Minnesota. The analysis finds that the use of a model with traditional reliability measures does not produce consistent, robust results. An approach based on fuzzy set theory is applied in this paper, with improved results. Using such a model, the authors find that system reliability can be optimized with a mix of dispersed wind sites

  17. Analysis of the same day of the week increases in peak electricity ...

    African Journals Online (AJOL)

    Modelling same-day-of-the-week increases in peak electricity demand improves the reliability of a power network if an accurate assessment of the level and frequency of future extreme load forecasts is carried out. Key words: Gibbs sampling, generalized single Pareto, generalized Pareto distribution, Pareto quantile ...

  18. The peak in anomalous magnetic viscosity

    International Nuclear Information System (INIS)

    Collocott, S.J.; Watterson, P.A.; Tan, X.H.; Xu, H.

    2014-01-01

    Anomalous magnetic viscosity, where the magnetization as a function of time exhibits non-monotonic behaviour, being seen to increase, reach a peak, and then decrease, is observed on recoil lines in bulk amorphous ferromagnets, for certain magnetic prehistories. A simple geometrical approach based on the motion of the state line on the Preisach plane gives a theoretical framework for interpreting non-monotonic behaviour and explains the origin of the peak. This approach gives an expression for the time taken to reach the peak as a function of the applied (or holding) field. The theory is applied to experimental data for bulk amorphous ferromagnet alloys of composition Nd 60−x Fe 30 Al 10 Dy x , x = 0, 1, 2, 3 and 4, and it gives a reasonable description of the observed behaviour. The role played by other key magnetic parameters, such as the intrinsic coercivity and fluctuation field, is also discussed. When the non-monotonic behaviour of the magnetization of a number of alloys is viewed in the context of the model, features of universal behaviour emerge, that are independent of alloy composition. - Highlights: • Development of a simple geometrical model based on the Preisach model which gives a complete explanation of the peak in the magnetic viscosity. • Geometrical approach is extended by considering equations that govern the motion of the state line. • The model is used to deduce the relationship between the holding field and the time it takes to reach the peak. • The model is tested with experimental results for a range of Nd–Fe–Al–Dy bulk amorphous ferromagnets. • There is good agreement between the model and the experimental data

  19. The spatial resolution of epidemic peaks.

    Directory of Open Access Journals (Sweden)

    Harriet L Mills

    2014-04-01

    Full Text Available The emergence of novel respiratory pathogens can challenge the capacity of key health care resources, such as intensive care units, that are constrained to serve only specific geographical populations. An ability to predict the magnitude and timing of peak incidence at the scale of a single large population would help to accurately assess the value of interventions designed to reduce that peak. However, current disease-dynamic theory does not provide a clear understanding of the relationship between: epidemic trajectories at the scale of interest (e.g. city); population mobility; and higher resolution spatial effects (e.g. transmission within small neighbourhoods). Here, we used a spatially-explicit stochastic meta-population model of arbitrary spatial resolution to determine the effect of resolution on model-derived epidemic trajectories. We simulated an influenza-like pathogen spreading across theoretical and actual population densities and varied our assumptions about mobility using Latin-Hypercube sampling. Even though, by design, cumulative attack rates were the same for all resolutions and mobilities, peak incidences were different. Clear thresholds existed for all tested populations, such that models with resolutions lower than the threshold substantially overestimated population-wide peak incidence. The effect of resolution was most important in populations which were of lower density and lower mobility. With the expectation of accurate spatial incidence datasets in the near future, our objective was to provide a framework for how to use these data correctly in a spatial meta-population model. Our results suggest that there is a fundamental spatial resolution for any pathogen-population pair. If underlying interactions between pathogens and spatially heterogeneous populations are represented at this resolution or higher, accurate predictions of peak incidence for city-scale epidemics are feasible.
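
    A hedged sketch of the resolution effect: the same total population run as one well-mixed patch versus several weakly coupled patches, using a deterministic discrete-time SIR for brevity (the paper's model is stochastic and spatially explicit); all parameters are illustrative assumptions.

```python
import numpy as np

def peak_incidence(n_patches, coupling, beta=0.3, gamma=0.1, days=400):
    N = 1e6 / n_patches                      # people per patch
    S = np.full(n_patches, N)
    I = np.zeros(n_patches); I[0] = 10.0     # seed one patch
    peak = 0.0
    for _ in range(days):
        # Force of infection mixes local contacts and imported contacts.
        lam = beta * ((1 - coupling) * I / N + coupling * I.sum() / (N * n_patches))
        new_inf = np.minimum(lam * S, S)
        S = S - new_inf
        I = I + new_inf - gamma * I
        peak = max(peak, new_inf.sum())
    return peak

# A single well-mixed patch tends to show a higher epidemic peak than the
# same population subdivided into weakly coupled patches.
print(peak_incidence(1, 0.0), peak_incidence(16, 0.05))
```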

  20. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  1. Human factor reliability program

    International Nuclear Information System (INIS)

    Knoblochova, L.

    2017-01-01

    The human factor reliability program was introduced at Slovenske elektrarne, a.s. (SE) nuclear power plants as one of the components of the Initiatives of Excellent Performance in 2011. The initiative's goal was to increase the reliability of both people and facilities, in response to three major areas of improvement: the need to improve results, troubleshooting support, and supporting the achievement of the company's goals. In practice, the human factor reliability program included: tools to prevent human error; managerial observation and coaching; human factor analysis; prompt information about events involving the human factor; human reliability timelines and performance indicators; and basic, periodic and extraordinary training in human factor reliability. (authors)

  2. GRB physics and cosmology with peak energy-intensity correlations

    Energy Technology Data Exchange (ETDEWEB)

    Sawant, Disha, E-mail: sawant@fe.infn.it [University of Ferrara, Via Saragat-1, Block C, Ferrara 44122 (Italy); University of Nice, 28 Avenue Valrose, Nice 06103 (France); IRAP Erasmus PhD Program, European Union and INAF - IASF Bologna, Via P. Gobetti 101, Bologna 41125 (Italy); Amati, Lorenzo, E-mail: amati@iasfbo.inaf.it [INAF - IASF Bologna, Via P. Gobetti 101, Bologna 41125 (Italy); ICRANet, Piazzale Aldo Moro-5, Rome 00185 (Italy)

    2015-12-17

    Gamma Ray Bursts (GRBs) are immensely energetic explosions radiating up to 10{sup 54} erg of energy isotropically (E{sub iso}), and they are observed within a wide range of redshift (from ∼ 0.01 up to ∼ 9). Such enormous power and high redshift make these phenomena highly favorable for investigating the history and evolution of our universe. The major obstacle to their application as cosmological tools is finding a way to standardize GRBs, similar, for instance, to SNe Ia. With respect to this goal, the correlation between the spectral peak energy (E{sub p,i}) and the “intensity” is a particularly useful and well-investigated criterion. Moreover, it has been demonstrated that, through the E{sub p,i} – E{sub iso} correlation, the current data set of GRBs can already contribute independent evidence for a matter density Ω{sub M} of ∼ 0.3 in a flat universe scenario. We inspect and compare the correlations of E{sub p,i} with different intensity indicators (e.g., radiated energy, average and peak luminosity, bolometric vs. monochromatic quantities, etc.), both in terms of intrinsic dispersion and precision of the estimate of Ω{sub M}. The outcomes of such studies are further analyzed to verify the reliability of the correlations for both GRB physics and the standardization of GRBs for cosmology.

  3. Stereotactic Bragg peak proton radiosurgery method

    International Nuclear Information System (INIS)

    Kjellberg, R.N.

    1979-01-01

    A brief description of the technical aspects of a stereotactic Bragg peak proton radiosurgical method for the head is presented. The preparatory radiographic studies are outlined, and the stereotactic instrument and positioning of the patient are described. The instrument is calibrated so that, after corrections for soft tissue and bone thickness, the Bragg peak superimposes upon the intracranial target. The head is rotated at specific intervals to allow predetermined portals of access for the beam path, all of which converge on the intracranial target. Normally, portals are arranged to oppose and overlap from both sides of the head. Using a number of beams in sequence on both sides of the head, the target dose is far greater than the path dose. The procedure normally takes 1.5-2 hours, following which the patient can walk away. (Auth./C.F.)

  4. Central peaking of magnetized gas discharges

    International Nuclear Information System (INIS)

    Chen, Francis F.; Curreli, Davide

    2013-01-01

    Partially ionized gas discharges used in industry are often driven by radiofrequency (rf) power applied at the periphery of a cylinder. It is found that the plasma density n is usually flat or peaked on axis even if the skin depth of the rf field is thin compared with the chamber radius a. Previous attempts at explaining this did not account for the finite length of the discharge and the boundary conditions at the endplates. A simple 1D model is used to focus on the basic mechanism: the short-circuit effect. It is found that a strong electric field (E-field), scaled to the electron temperature Te, drives the ions inward. The resulting density profile is peaked on axis and has a shape independent of pressure or discharge radius. This “universal” profile is not affected by a dc magnetic field (B-field) as long as the ion Larmor radius is larger than a

  5. Peak Oil, Food Systems, and Public Health

    Science.gov (United States)

    Parker, Cindy L.; Kirschenmann, Frederick L.; Tinch, Jennifer; Lawrence, Robert S.

    2011-01-01

    Peak oil is the phenomenon whereby global oil supplies will peak, then decline, with extraction growing increasingly costly. Today's globalized industrial food system depends on oil for fueling farm machinery, producing pesticides, and transporting goods. Biofuels production links oil prices to food prices. We examined food system vulnerability to rising oil prices and the public health consequences. In the short term, high food prices harm food security and equity. Over time, high prices will force the entire food system to adapt. Strong preparation and advance investment may mitigate the extent of dislocation and hunger. Certain social and policy changes could smooth adaptation; public health has an essential role in promoting a proactive, smart, and equitable transition that increases resilience and enables adequate food for all. PMID:21778492

  6. WaVPeak: Picking NMR peaks through wavelet-based smoothing and volume-based filtering

    KAUST Repository

    Liu, Zhi

    2012-02-10

    Motivation: Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. Results: We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smoothes the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY. The Author(s) 2012. Published by Oxford University Press.
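
    A minimal 1D illustration of the two WaVPeak ingredients named above — wavelet smoothing that discards no data points, local-maxima picking, and volume-based filtering — might look as follows. It is a sketch, not the published implementation: the wavelet choice, thresholds and window sizes are our assumptions, and the real tool operates on 2D/3D NMR spectra.

```python
import numpy as np
import pywt  # PyWavelets
from scipy.signal import argrelmax

def smooth_wavelet(spectrum, wavelet="sym6", level=4):
    """Denoise by soft-thresholding detail coefficients; no data points removed."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(spectrum)))     # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(spectrum)]

def pick_peaks(spectrum, half_width=5, volume_quantile=0.8):
    """Local maxima of the smoothed trace, filtered by peak 'volume' (area)."""
    smooth = smooth_wavelet(spectrum)
    idx = argrelmax(smooth, order=half_width)[0]
    volumes = np.array([smooth[max(i - half_width, 0): i + half_width + 1].sum()
                        for i in idx])
    keep = volumes >= np.quantile(volumes, volume_quantile)
    return idx[keep], volumes[keep]

# Synthetic trace: two peaks of different heights buried in noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024)
trace = (np.exp(-((x - 0.3) / 0.01) ** 2)
         + 0.3 * np.exp(-((x - 0.7) / 0.01) ** 2)
         + rng.normal(0, 0.05, x.size))
peaks, vols = pick_peaks(trace)
print("peak indices:", peaks)
```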

  7. Hanford Site peak gust wind speeds

    International Nuclear Information System (INIS)

    Ramsdell, J.V.

    1998-01-01

    Peak gust wind data collected at the Hanford Site since 1945 are analyzed to estimate maximum wind speeds for use in structural design. The results are compared with design wind speeds proposed for the Hanford Site. These comparisons indicate that design wind speeds contained in a January 1998 advisory changing DOE-STD-1020-94 are excessive for the Hanford Site and that the design wind speeds in effect prior to the changes are still appropriate for the Hanford Site
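
    The abstract does not spell out the statistical model, so the sketch below shows a generic technique commonly used for this task: fitting a Gumbel (extreme value type I) distribution to annual peak gusts and reading off design speeds at chosen return periods. The data are invented, and the method is not claimed to be the one used in the report.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak gusts [mph]; the report's data are not reproduced here.
annual_peaks = np.array([55, 61, 58, 72, 66, 59, 63, 80, 57, 69, 74, 62])

loc, scale = stats.gumbel_r.fit(annual_peaks)

for t in (50, 100, 1000):                            # return periods [yr]
    speed = stats.gumbel_r.ppf(1 - 1 / t, loc, scale)
    print(f"{t:5d}-yr design gust: {speed:5.1f} mph")
```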

  8. Commodity hydrogen from off-peak electricity

    Energy Technology Data Exchange (ETDEWEB)

    Darrow, K.; Biederman, N.; Konopka, A.

    1977-01-01

    This paper considers the use of off-peak electrical power as an energy source for the electrolytic production of hydrogen. The present industrial uses for hydrogen are examined to determine if hydrogen produced in this fashion would be competitive with the industry's onsite production or existing hydrogen prices. The paper presents a technical and economic feasibility analysis of the various components required and of the operation of the system as a whole including production, transmission, storage, and markets.

  9. Some practical aspects of peak kilovoltage measurements

    International Nuclear Information System (INIS)

    Irfan, A.Y.; Pugh, V.I.; Jeffery, C.D.

    1985-01-01

    The peak kilovoltage (kVp) across the X-ray tube electrodes in diagnostic X-ray machines is a most important parameter, affecting both radiation output and beam quality. Four commercially available non-invasive devices used for kVp measurement were tested using a selection of generator waveforms. The majority of the devices provided satisfactory measurements of the kVp to within approximately ± kV, provided certain operating conditions were observed. (U.K.)

  10. A two-step method for fast and reliable EUV mask metrology

    Science.gov (United States)

    Helfenstein, Patrick; Mochi, Iacopo; Rajendran, Rajeev; Yoshitake, Shusuke; Ekinci, Yasin

    2017-03-01

    One of the major obstacles towards the implementation of extreme ultraviolet lithography for upcoming technology nodes in the semiconductor industry remains the realization of a fast and reliable method for detecting patterned mask defects. We are developing a reflective EUV mask-scanning lensless imaging tool (RESCAN), installed at the Swiss Light Source synchrotron at the Paul Scherrer Institut. Our system is based on a two-step defect inspection method. In the first step, a low-resolution defect map is generated by die-to-die comparison of the diffraction patterns from areas with programmed defects to those from areas that are known to be defect-free on our test sample. In a later stage, a die-to-database comparison will be implemented, in which the measured diffraction patterns will be compared to those calculated directly from the mask layout. This Scattering Scanning Contrast Microscopy technique operates purely in the Fourier domain without the need to obtain the aerial image and, given a sufficient signal-to-noise ratio, finds defects in a fast and reliable way, albeit with a location accuracy limited by the spot size of the incident illumination. Having thus identified rough locations for the defects, a fine scan is carried out in the vicinity of these locations. Since our source delivers coherent illumination, we can use an iterative phase-retrieval method to reconstruct the aerial image of the scanned area with - in principle - diffraction-limited resolution, without the need of an objective lens. Here, we focus on the aerial image reconstruction technique and give a few examples to illustrate the capability of the method.
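
    The fine-scan stage relies on iterative phase retrieval. A textbook error-reduction loop in the Gerchberg-Saxton/Fienup family, sketched below on a toy object, conveys the core idea: alternate between imposing the measured Fourier modulus and a real-space support/positivity constraint. This is a generic illustration, not the actual RESCAN reconstruction code.

```python
import numpy as np

def error_reduction(fourier_magnitude, support, n_iter=200, seed=0):
    """Fienup-style error reduction: recover an image from the modulus of its
    far-field diffraction pattern plus a known support constraint."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(fourier_magnitude.shape))
    g = np.fft.ifft2(fourier_magnitude * phase)      # random starting guess
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = fourier_magnitude * np.exp(1j * np.angle(G))  # impose measured modulus
        g = np.fft.ifft2(G)
        g = np.where(support, np.real(g).clip(min=0), 0)  # impose support/positivity
    return g

# Toy demonstration: a rectangular test object and its noiseless diffraction modulus.
obj = np.zeros((64, 64)); obj[24:40, 28:36] = 1.0
support = np.zeros_like(obj, dtype=bool); support[16:48, 16:48] = True
magnitude = np.abs(np.fft.fft2(obj))
recon = error_reduction(magnitude, support)
print("mean absolute reconstruction error:", np.abs(recon - obj).mean())
```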

  11. METing SUSY on the Z peak

    Energy Technology Data Exchange (ETDEWEB)

    Barenboim, G.; Bernabeu, J.; Vives, O. [Universitat de Valencia, Departament de Fisica Teorica, Burjassot (Spain); Universitat de Valencia-CSIC, Parc Cientific U.V., IFIC, Paterna (Spain); Mitsou, V.A.; Romero, E. [Universitat de Valencia-CSIC, Parc Cientific U.V., IFIC, Paterna (Spain)

    2016-02-15

    Recently the ATLAS experiment announced a 3 σ excess at the Z-peak consisting of 29 pairs of leptons together with two or more jets, E{sub T}{sup miss} > 225 GeV and HT > 600 GeV, to be compared with 10.6 ± 3.2 expected lepton pairs in the Standard Model. No excess outside the Z-peak was observed. Trying to explain this signal with SUSY, we find that only relatively light gluinos, m{sub g} ∼ 400 GeV or lighter, decaying predominantly to a Z-boson plus a light gravitino, such that nearly every gluino produces at least one Z-boson in its decay chain, could reproduce the excess. We construct an explicit general gauge mediation model able to reproduce the observed signal while overcoming all the experimental limits. Needless to say, more sophisticated models could also reproduce the signal; however, any such model would have to exhibit the following features: light gluinos, or heavy particles with a strong production cross section, producing at least one Z-boson in their decay chains. The implications of our findings for Run II at the LHC, with the scaling on the Z peak as well as the direct search for gluinos and other SUSY particles, are pointed out. (orig.)

  12. Acquisition of peak responding: what is learned?

    Science.gov (United States)

    Balci, Fuat; Gallistel, Charles R; Allen, Brian D; Frank, Krystal M; Gibson, Jacqueline M; Brunner, Daniela

    2009-01-01

    We investigated how the common measures of timing performance behaved in the course of training on the peak procedure in C3H mice. Following fixed interval (FI) pre-training, mice received 16 days of training in the peak procedure. The peak time and spread were derived from the average response rates while the start and stop times and their relative variability were derived from a single-trial analysis. Temporal precision (response spread) appeared to improve in the course of training. This apparent improvement in precision was, however, an averaging artifact; it was mediated by the staggered appearance of timed stops, rather than by the delayed occurrence of start times. Trial-by-trial analysis of the stop times for individual subjects revealed that stops appeared abruptly after three to five sessions and their timing did not change as training was prolonged. Start times and the precision of start and stop times were generally stable throughout training. Our results show that subjects do not gradually learn to time their start or stop of responding. Instead, they learn the duration of the FI, with robust temporal control over the start of the response; the control over the stop of response appears abruptly later.

  13. METing SUSY on the Z peak

    International Nuclear Information System (INIS)

    Barenboim, G.; Bernabeu, J.; Vives, O.; Mitsou, V.A.; Romero, E.

    2016-01-01

    Recently the ATLAS experiment announced a 3 σ excess at the Z-peak consisting of 29 pairs of leptons together with two or more jets, E T miss > 225 GeV and HT > 600 GeV, to be compared with 10.6 ± 3.2 expected lepton pairs in the Standard Model. No excess outside the Z-peak was observed. Trying to explain this signal with SUSY, we find that only relatively light gluinos, m g ∼ 400 GeV or lighter, decaying predominantly to a Z-boson plus a light gravitino, such that nearly every gluino produces at least one Z-boson in its decay chain, could reproduce the excess. We construct an explicit general gauge mediation model able to reproduce the observed signal while overcoming all the experimental limits. Needless to say, more sophisticated models could also reproduce the signal; however, any such model would have to exhibit the following features: light gluinos, or heavy particles with a strong production cross section, producing at least one Z-boson in their decay chains. The implications of our findings for Run II at the LHC, with the scaling on the Z peak as well as the direct search for gluinos and other SUSY particles, are pointed out. (orig.)

  14. Monitoring device for local power peaking coefficients

    International Nuclear Information System (INIS)

    Mihashi, Ishi

    1987-01-01

    Purpose: To determine and monitor the local power peaking coefficients by a method not depending on the combination of fuel types. Constitution: Representative values for the local power distribution are obtained by determining the corresponding burn-up degrees, based on the burn-up degree of each fuel assembly segment from a power distribution monitor, and by interpolation and extrapolation of the void coefficients. These representative values are multiplied, in a calculation device, by compensation coefficients for the control rod effect and coefficients compensating for the effect of adjacent fuel assemblies, to obtain representative values for the present local power distribution compensated for all of these effects. The calculation device then compares them and takes the maximum value as the local power peaking coefficient of interest. According to the present invention, since the local power peaking coefficients can be determined independently of the combination of fuel types, the amount of computation does not increase when the number of fuel assembly combinations grows upon fuel change. (Kamimura, M.)
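
    Schematically, the monitoring device described above reduces to a table lookup with interpolation, multiplication by compensation coefficients, and a maximum. The toy Python sketch below makes that pipeline concrete; the grids, tabulated values and coefficients are all invented for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical representative local-power values tabulated on a
# (burn-up, void fraction) grid for a handful of rod positions.
burnup = np.array([0.0, 10.0, 20.0, 30.0])   # GWd/t
void = np.array([0.0, 0.4, 0.7])             # void fraction
table = np.random.default_rng(1).uniform(1.0, 1.3, (4, 3, 5))  # last axis: rods

interp = RegularGridInterpolator((burnup, void), table)

def local_peaking(bu, vd, ctrl_comp=1.02, neighbor_comp=1.01):
    """Interpolated representative values, compensated for control-rod and
    adjacent-assembly effects; the peaking coefficient is their maximum."""
    rep = interp([[bu, vd]])[0]              # one value per rod position
    rep = rep * ctrl_comp * neighbor_comp
    return rep.max()

print(local_peaking(12.5, 0.55))
```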

  15. Chinese emissions peak: Not when, but how

    International Nuclear Information System (INIS)

    Spencer, Thomas; Colombier, Michel; Wang, Xin; Sartor, Oliver; Waisman, Henri

    2016-07-01

    It seems highly likely that China will overachieve its 2020 and 2030 targets, and peak its emissions before 2030, possibly at a lower level than often assumed. This paper argues that the debate on the timing of the peak is misplaced: what matters is not when, but why. For the peak to be seen as a harbinger of deep transformation, it needs to be based on significant macro-economic reform and restructuring, with attendant improvement in energy intensity. The Chinese economic model has been extraordinarily investment- and resource-intensive, and has driven the growth in GHG emissions. That model is no longer economically or environmentally sustainable. Chinese policy-makers are therefore faced with a trade-off between slower short-term growth with economic reform, versus supporting short-term growth while slowing economic reform. The outcome will be crucial for the transition to a low-carbon economy. Overall, the 13th FYP (2016-2020) gives the impression of a cautious reflection of the new normal paradigm on the economic front, and a somewhat conservative translation of this shift into the energy and climate targets. Nonetheless, the 13th FYP targets set China well on the way to overachieving its 2020 pledge undertaken at COP15 in Copenhagen, and to potentially overachieving its INDC. It thus seems likely that China will achieve its emissions peak before 2030. However, the crucial question is not when China peaks, but whether the underlying transformation of the Chinese economy and energy system lays the basis for deep decarbonization thereafter. Thorough assessments of the implications of the 'new normal' for Chinese emissions and energy system trajectories, taking into account the link with the Chinese macro-economy, are needed. Scenarios provide a useful framework and should focus on a number of short-term uncertainties. Most energy system and emissions scenarios published today assume a continuity of trends between 2010-2015 and 2015-2020, which is at odds with clear

  16. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology used in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk-informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment, including dynamic system modeling and uncertainty management. Cas...

  17. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The book provides a history of human reliability analysis and includes examples of the application of the systems approach.

  18. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques have been developed in response to the needs of the various engineering disciplines, although some would argue that much work was done on reliability before the word itself was used in its current sense. The military, space and nuclear industries were the first to become involved in the topic, but this quiet revolution in the reliability of industrial products has not remained confined to those fields; it has spread to industry as a whole. Mass production, characteristic of modern industry, led four decades ago to a fall in the reliability of its products, partly because of mass production itself and partly because of newly introduced and not yet stabilized industrial techniques. Industry had to change in line with these two new requirements, creating products of medium complexity while assuring a reliability appropriate to production costs and controls. Reliability became an integral part of the manufactured product. With this philosophy, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, supplying a unifying scientific basis for the entire subject. It consists of eight chapters plus a number of statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience, serving as a textbook for courses ranging from academic to industrial. (author)

  19. Operational safety reliability research

    International Nuclear Information System (INIS)

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant

  20. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  1. Description of Anomalous Noise Events for Reliable Dynamic Traffic Noise Mapping in Real-Life Urban and Suburban Soundscapes

    Directory of Open Access Journals (Sweden)

    Francesc Alías

    2017-02-01

    Full Text Available Traffic noise is one of the main pollutants in urban and suburban areas. European authorities have driven several initiatives to study, prevent and reduce the effects of exposure of population to traffic. Recent technological advances have allowed the dynamic computation of noise levels by means of Wireless Acoustic Sensor Networks (WASN such as that developed within the European LIFE DYNAMAP project. Those WASN should be capable of detecting and discarding non-desired sound sources from road traffic noise, denoted as anomalous noise events (ANE, in order to generate reliable noise level maps. Due to the local, occasional and diverse nature of ANE, some works have opted to artificially build ANE databases at the cost of misrepresentation. This work presents the production and analysis of a real-life environmental audio database in two urban and suburban areas specifically conceived for anomalous noise events’ collection. A total of 9 h 8 min of labelled audio data is obtained differentiating among road traffic noise, background city noise and ANE. After delimiting their boundaries manually, the acoustic salience of the ANE samples is automatically computed as a contextual signal-to-noise ratio (SNR. The analysis of the real-life environmental database shows high diversity of ANEs in terms of occurrences, durations and SNRs, as well as confirming both the expected differences between the urban and suburban soundscapes in terms of occurrences and SNRs, and the rare nature of ANE.
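
    The contextual signal-to-noise ratio used above to quantify ANE salience can be read as the ratio of the event's power to the power of the immediately surrounding background, expressed in dB. The sketch below encodes that reading; the project's exact definition and window lengths may differ.

```python
import numpy as np

def contextual_snr_db(signal, event_slice, context_pad):
    """Contextual SNR of a labelled anomalous noise event: event power over the
    power of the immediately surrounding background, in dB."""
    event = signal[event_slice]
    left = signal[max(event_slice.start - context_pad, 0): event_slice.start]
    right = signal[event_slice.stop: event_slice.stop + context_pad]
    background = np.concatenate([left, right])
    p_event = np.mean(event ** 2)
    p_background = np.mean(background ** 2)
    return 10 * np.log10(p_event / p_background)

# Stand-in audio: background 'traffic' noise with one injected event.
rng = np.random.default_rng(3)
audio = rng.normal(0, 0.05, 48000)
audio[20000:22000] += rng.normal(0, 0.4, 2000)
print(contextual_snr_db(audio, slice(20000, 22000), context_pad=4000))
```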

  2. Historical changes in annual peak flows in Maine and implications for flood-frequency analyses

    Science.gov (United States)

    Hodgkins, Glenn A.

    2010-01-01

    It is difficult, therefore, to determine which approach will produce the most reliable future estimates of peak flows for selected recurrence intervals: using only recent years of record, or the traditional method using the entire historical period. One possible conservative approach to computing peak flows of selected recurrence intervals would be to compute peak flows using both the recent annual peak flows and the entire period of record, then choose the higher computed value. Whether recent or entire periods of record are used to compute peak flows of selected recurrence intervals, the results of this study highlight the importance of using recent data in the computation of the peak flows. The use of older records alone could result in underestimation of peak flows, particularly peak flows with short recurrence intervals, such as the 5-year peak flows.

  3. Particle creation by peak electric field

    Energy Technology Data Exchange (ETDEWEB)

    Adorno, T.C. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); Gavrilov, S.P. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); Herzen State Pedagogical University of Russia, Department of General and Experimental Physics, St. Petersburg (Russian Federation); Gitman, D.M. [Tomsk State University, Department of Physics, Tomsk (Russian Federation); P. N. Lebedev Physical Institute, Moscow (Russian Federation); University of Sao Paulo, Institute of Physics, CP 66318, Sao Paulo, SP (Brazil)

    2016-08-15

    The particle creation by the so-called peak electric field is considered. The latter field is a combination of two exponential parts, one exponentially increasing and the other exponentially decreasing. We find exact solutions of the Dirac equation with the field under consideration, with appropriate asymptotic conditions, and calculate all the characteristics of the particle creation effect, in particular the differential mean numbers of created particles, the total number of created particles, and the probability for a vacuum to remain a vacuum. Characteristic asymptotic regimes are discussed in detail and a comparison with the purely asymptotically decaying field is considered. (orig.)
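
    As described, the peak field consists of an exponentially increasing branch glued at its maximum to an exponentially decreasing one; in our notation (E_0, k_1 and k_2 are not taken from the paper) it can be written as

```latex
E(t) =
\begin{cases}
  E_0\, e^{\,k_1 t},  & t \le 0, \\
  E_0\, e^{-k_2 t},   & t > 0,
\end{cases}
\qquad k_1,\, k_2 > 0,
```

    so that the field peaks at t = 0, and a purely decaying field is recovered in the limit of a very rapidly rising branch (k_1 → ∞).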

  4. Octant vectorcardiography - the evaluation by peaks.

    Science.gov (United States)

    Laufberger, V

    1982-01-01

    From the Frank lead potentials a computer prints out an elementary table. Therein, the electrical space of left ventricle depolarization is divided into eight spatial parts labelled by the numbers 1-8 and called octants. Within these octants six peaks are determined, labelled with the letters A, L, P, R, I and S. Their localization is described by six-digit topograms characteristic for each patient. From 300 cases of patients after myocardial infarction, three data bases were compiled enabling every case to be classified into classes, subclasses and types. The follow-up of patients according to these principles gives an objective and detailed picture of the progression of coronary artery disease.

  5. Energy peaks: A high energy physics outlook

    Science.gov (United States)

    Franceschini, Roberto

    2017-12-01

    Energy distributions of decay products carry information on the kinematics of the decay in ways that are at the same time straightforward and quite hidden. I will review these properties and discuss their early historical applications, as well as more recent ones in the context of (i) methods for the measurement of the masses of new physics particles with semi-invisible decays, (ii) the characterization of Dark Matter particles produced at colliders, and (iii) precision mass measurements of Standard Model particles, in particular of the top quark. Finally, I will give an outlook on further developments and applications of energy peak methods for high energy physics at colliders and beyond.

  6. Method and apparatus for current-output peak detection

    Science.gov (United States)

    De Geronimo, Gianluigi

    2017-01-24

    A method and apparatus for a current-output peak detector. A current-output peak detector circuit is disclosed and works in two phases. The peak detector circuit includes switches to switch the peak detector circuit from the first phase to the second phase upon detection of the peak voltage of an input voltage signal. The peak detector generates a current output with a high degree of accuracy in the second phase.

  7. A Framework for Understanding and Generating Integrated Solutions for Residential Peak Energy Demand

    Science.gov (United States)

    Buys, Laurie; Vine, Desley; Ledwich, Gerard; Bell, John; Mengersen, Kerrie; Morris, Peter; Lewis, Jim

    2015-01-01

    Supplying peak energy demand in a cost-effective, reliable manner is a critical focus for utilities internationally. Successfully addressing peak energy concerns requires understanding of all the factors that affect electricity demand, especially at peak times. This paper builds on past attempts to propose models designed to aid our understanding of the influences on residential peak energy demand in a systematic and comprehensive way. Our model has been developed through a group model building process as a systems framework of the problem situation, to model the complexity within and between systems and indicate how changes in one element might flow on to others. It comprises themes (social, technical and change management options) networked together in a way that captures their influence and association with each other, and also their influence, association and impact on appliance usage and residential peak energy demand. The real value of the model is in creating awareness, understanding and insight into the complexity of residential peak energy demand, and in working with this complexity to identify and integrate the social, technical and change management option themes and their impact on appliance usage and residential energy demand at peak times. PMID:25807384

  8. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    Science.gov (United States)

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains depending on various peak features from several models. However, there is no study that provides the importance of every peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameters estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate from the results in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and reliable classification rate as compared to standard PSO as it produces low variance model.
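
    A compact sketch of the framework's core loop — binary PSO over feature masks, with a classifier's accuracy as the fitness — is given below. The classifier (nearest centroid), the synthetic "EEG features" and all PSO constants are our assumptions; the study itself uses different classifiers and real EEG data.

```python
import numpy as np

rng = np.random.default_rng(7)

def centroid_accuracy(Xtr, ytr, Xte, yte, mask):
    """Fitness: nearest-centroid accuracy on the selected feature subset."""
    sel = mask.astype(bool)
    if not sel.any():
        return 0.0
    c0 = Xtr[ytr == 0][:, sel].mean(axis=0)
    c1 = Xtr[ytr == 1][:, sel].mean(axis=0)
    d0 = ((Xte[:, sel] - c0) ** 2).sum(axis=1)
    d1 = ((Xte[:, sel] - c1) ** 2).sum(axis=1)
    return float(np.mean((d1 < d0).astype(int) == yte))

def binary_pso(fitness, n_feat, n_part=20, n_iter=40, w=0.7, c1=1.5, c2=1.5):
    """Standard binary PSO: positions are 0/1 feature masks; velocities are
    squashed through a sigmoid to give bit-set probabilities."""
    X = (rng.random((n_part, n_feat)) < 0.5).astype(float)
    V = rng.normal(0.0, 1.0, (n_part, n_feat))
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(float)
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest.astype(bool), pbest_f.max()

# Synthetic stand-in for per-window EEG features; columns 0 and 3 informative.
X = rng.normal(0.0, 1.0, (200, 12))
y = (rng.random(200) < 0.5).astype(int)
X[y == 1, 0] += 1.5
X[y == 1, 3] += 1.0
fit = lambda m: centroid_accuracy(X[:120], y[:120], X[120:], y[120:], m)
mask, acc = binary_pso(fit, n_feat=12)
print("selected features:", np.where(mask)[0], "held-out accuracy:", acc)
```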

  9. Peak fitting and identification software library for high resolution gamma-ray spectra

    International Nuclear Information System (INIS)

    Uher, Josef; Roach, Greg; Tickner, James

    2010-01-01

    A new gamma-ray spectral analysis software package is under development in our laboratory. It can be operated as a stand-alone program or called as a software library from Java, C, C++ and MATLAB™ environments. It provides an advanced graphical user interface for data acquisition, spectral analysis and radioisotope identification. The code uses a peak-fitting function that includes peak asymmetry, Compton continuum and flexible background terms. The peak-fitting function parameters can be calibrated as functions of energy, and each parameter can be constrained to improve the fitting of overlapping peaks. All of these features can be adjusted by the user. To assist with peak identification, the code can automatically measure half-lives of single or multiple overlapping peaks from a time series of spectra. It implements library-based peak identification, with options for restricting the search based on radioisotope half-lives and reaction types. The software also improves the reliability of isotope identification by utilizing Monte Carlo simulation results.
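
    A reduced version of such a peak-fitting function — Gaussian photopeak, smoothed step for the Compton shelf, linear background — can be fit with standard least squares. The sketch below uses invented parameters and synthetic counts; the package's actual fit function, with its asymmetry terms, is richer.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def peak_model(x, area, mu, sigma, step, b0, b1):
    """Gaussian photopeak + smoothed step (Compton shelf) + linear background.
    A simplified stand-in for the fit function described above."""
    gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    shelf = step * erfc((x - mu) / (np.sqrt(2) * sigma)) / 2
    return gauss + shelf + b0 + b1 * x

# Synthetic spectrum segment around a single peak (e.g. near 662 keV).
rng = np.random.default_rng(5)
x = np.linspace(640, 680, 200)
truth = peak_model(x, 5000, 661.7, 1.2, 30, 50, -0.05)
counts = rng.poisson(np.clip(truth, 0, None))

p0 = [counts.max() * 3.0, x[np.argmax(counts)], 1.5, 10.0, counts.min(), 0.0]
popt, pcov = curve_fit(peak_model, x, counts, p0=p0)
print("centroid = %.2f keV, FWHM = %.2f keV" % (popt[1], 2.355 * abs(popt[2])))
```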

  10. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  11. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  12. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  13. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  14. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...

  15. Reliability of neural encoding

    DEFF Research Database (Denmark)

    Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl

    2002-01-01

    The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results f...

  16. Design reliability engineering

    International Nuclear Information System (INIS)

    Buden, D.; Hunt, R.N.M.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig

  17. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
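
    The simplest concrete instance of the Bayesian reliability machinery listed above is conjugate Beta-Binomial updating of a failure-on-demand probability. A minimal sketch with invented prior and data:

```python
from scipy import stats

# Prior Beta(1, 19): mean failure probability 0.05 (an assumed elicitation).
a0, b0 = 1.0, 19.0
failures, demands = 2, 100          # invented observed data

# Conjugate update: posterior is Beta(a0 + failures, b0 + successes).
a, b = a0 + failures, b0 + demands - failures
post = stats.beta(a, b)
print("posterior mean: %.4f" % post.mean())
print("90%% credible interval: (%.4f, %.4f)" % (post.ppf(0.05), post.ppf(0.95)))
```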

  18. Computation of peak discharge at culverts

    Science.gov (United States)

    Carter, Rolland William

    1957-01-01

    Methods for computing peak flood flow through culverts on the basis of a field survey of high-water marks and culvert geometry are presented. These methods are derived from investigations of culvert flow reported in the literature and from extensive laboratory studies of culvert flow. For convenience in computation, culvert flow has been classified into six types, according to the location of the control section and the relative heights of the headwater and tailwater levels. The type of flow which occurred at any site can be determined from the field data and the criteria given in this report. A discharge equation has been developed for each flow type by combining the energy and continuity equations for the distance between an approach section upstream from the culvert and a terminal section within the culvert barrel. The discharge coefficient applicable to each flow type is listed for the more common entrance geometries. Procedures for computing peak discharge through culverts are outlined in detail for each of the six flow types.
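
    Combining the energy and continuity equations between the approach section (subscript 1) and the terminal section (subscript 2), as described above, yields a discharge relation of the generic form below (our notation; the report tabulates the coefficient C for each flow type and entrance geometry):

```latex
\frac{V_1^{2}}{2g} + h_1 = \frac{V_2^{2}}{2g} + h_2 + h_f,
\qquad Q = A_1 V_1 = A_2 V_2
\;\;\Longrightarrow\;\;
Q = C\, A_2 \sqrt{\,2g\!\left(h_1 - h_2 + \frac{V_1^{2}}{2g} - h_f\right)} ,
```

    where h denotes the water-surface elevation, h_f the friction loss over the reach, and C the flow-type-dependent discharge coefficient.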

  19. Comparison of five portable peak flow meters.

    Science.gov (United States)

    Takara, Glaucia Nency; Ruas, Gualberto; Pessoa, Bruna Varanda; Jamami, Luciana Kawakami; Di Lorenzo, Valéria Amorim Pires; Jamami, Mauricio

    2010-05-01

    To compare the measurements of spirometric peak expiratory flow (PEF) from five different PEF meters and to determine if their values are in agreement. Inaccurate equipment may result in incorrect diagnoses of asthma and inappropriate treatments. Sixty-eight healthy, sedentary and insufficiently active subjects, aged from 19 to 40 years, performed PEF measurements using Air Zone, Assess, Galemed, Personal Best and Vitalograph peak flow meters. The highest value recorded for each subject for each device was compared to the corresponding spirometric values using Friedman's test with Dunn's post-hoc (p<0.05), Spearman's correlation test and Bland-Altman's agreement test. The median and interquartile ranges for the spirometric values and the five meters were 428 (263-688 L/min), 450 (350-800 L/min), 420 (310-720 L/min), 380 (300-735 L/min), 400 (310-685 L/min) and 415 (335-610 L/min), respectively. Significant differences were found when the spirometric values were compared to those recorded by the Air Zone (p<0.001) and Galemed (p<0.01) meters. There was no agreement between the spirometric values and the five PEF meters. The results suggest that the values recorded from Galemed meters may underestimate the actual value, which could lead to unnecessary interventions, and that Air Zone meters overestimate spirometric values, which could obfuscate the need for intervention. These findings must be taken into account when interpreting both devices' results in younger people. These differences should also be considered when directly comparing values from different types of PEF meters.

  20. Monitoring device for local power peaking coefficient

    International Nuclear Information System (INIS)

    Mitsuhashi, Ishi

    1987-01-01

    Purpose: To monitor the local power peaking coefficients obtained by a method not depending on the combination of fuel types. Method: A plurality of representative values for the local power distribution, determined by the nuclear constant calculation for one fuel assembly, are memorized for each burn-up degree and void coefficient, at every position and fuel type in the fuel assemblies. These representative values are then compensated by a compensation coefficient considering the effect of adjacent segments and by a control rod compensation coefficient considering the effect of control rod insertion. The maximum value among them is selected to determine the local power peaking coefficient at each time and each segment, which is then monitored. According to this system, the calculation and work otherwise required for fitting to each combination of fuel types are no longer needed at all, which also facilitates maintenance. (Horiuchi, T.)

  1. Peak capacity and peak capacity per unit time in capillary and microchip zone electrophoresis.

    Science.gov (United States)

    Foley, Joe P; Blackney, Donna M; Ennis, Erin J

    2017-11-10

    The origins of the peak capacity concept are described and the important contributions to the development of that concept in chromatography and electrophoresis are reviewed. Whereas numerous quantitative expressions have been reported for one- and two-dimensional separations, most are focused on chromatographic separations and few, if any, quantitative unbiased expressions have been developed for capillary or microchip zone electrophoresis. Making the common assumption that longitudinal diffusion is the predominant source of zone broadening in capillary electrophoresis, analytical expressions for the peak capacity are derived, first in terms of migration time, diffusion coefficient, migration distance, and desired resolution, and then in terms of the remaining underlying fundamental parameters (electric field, electroosmotic and electrophoretic mobilities) that determine the migration time. The latter expressions clearly illustrate the direct square root dependence of peak capacity on electric field and migration distance and the inverse square root dependence on solute diffusion coefficient. Conditions that result in a high peak capacity will result in a low peak capacity per unit time and vice-versa. For a given symmetrical range of relative electrophoretic mobilities for co- and counter-electroosmotic species (cations and anions), the peak capacity increases with the square root of the electric field even as the temporal window narrows considerably, resulting in a significant reduction in analysis time. Over a broad relative electrophoretic mobility interval [-0.9, 0.9], an approximately two-fold greater amount of peak capacity can be generated for counter-electroosmotic species although it takes about five-fold longer to do so, consistent with the well-known bias in migration time and resolving power for co- and counter-electroosmotic species. The optimum lower bound of the relative electrophoretic mobility interval [μ r,Z , μ r,A ] that provides the maximum
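
    Under the stated diffusion-only assumption, a zone's variance grows as σ² = 2D t_m, with migration time t_m = L/(μ_app E), where μ_app is the apparent (electrophoretic plus electroosmotic) mobility. For peaks resolved at resolution R_s this gives a peak capacity of the form below (our notation, consistent with the square-root dependences quoted above):

```latex
n_c \;\simeq\; \frac{L}{4 R_s \sigma}
      \;=\; \frac{1}{4 R_s}\sqrt{\frac{\mu_{\mathrm{app}}\, E\, L}{2D}} ,
```

    which makes the direct square-root dependence on the electric field E and migration distance L, and the inverse square-root dependence on the diffusion coefficient D, explicit.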

  2. Peak Wind Tool for General Forecasting

    Science.gov (United States)

    Barrett, Joe H., III

    2010-01-01

    The expected peak wind speed of the day is an important forecast element in the 45th Weather Squadron's (45 WS) daily 24-Hour and Weekly Planning Forecasts. The forecasts are used for ground and space launch operations at the Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS). The 45 WS also issues wind advisories for KSC/CCAFS when they expect wind gusts to meet or exceed 25 kt, 35 kt and 50 kt thresholds at any level from the surface to 300 ft. The 45 WS forecasters have indicated peak wind speeds are challenging to forecast, particularly in the cool season months of October - April. In Phase I of this task, the Applied Meteorology Unit (AMU) developed a tool to help the 45 WS forecast non-convective winds at KSC/CCAFS for the 24-hour period of 0800 to 0800 local time. The tool was delivered as a Microsoft Excel graphical user interface (GUI). The GUI displayed the forecast of peak wind speed, 5-minute average wind speed at the time of the peak wind, timing of the peak wind, and the probability that the peak speed would meet or exceed 25 kt, 35 kt and 50 kt. For the current task (Phase II), the 45 WS requested that additional observations be used for the creation of the forecast equations by expanding the period of record (POR). Additional parameters were evaluated as predictors, including wind speeds between 500 ft and 3000 ft, static stability classification, Bulk Richardson Number, mixing depth, vertical wind shear, temperature inversion strength and depth, and wind direction. Using a verification data set, the AMU compared the performance of the Phase I and II prediction methods. Just as in Phase I, the tool was delivered as a Microsoft Excel GUI. The 45 WS requested that the tool also be made available in the Meteorological Interactive Data Display System (MIDDS). The AMU first expanded the POR by two years by adding tower observations, surface observations and CCAFS (XMR) soundings for the cool season months of March 2007 to April 2009. The POR was expanded

  3. Emissions Scenarios and Fossil-fuel Peaking

    Science.gov (United States)

    Brecha, R.

    2008-12-01

    Intergovernmental Panel on Climate Change (IPCC) emissions scenarios are based on detailed energy system models in which demographics, technology and economics are used to generate projections of future world energy consumption, and therefore, of greenhouse gas emissions. Built into the assumptions for these scenarios are estimates for ultimately recoverable resources of various fossil fuels. There is a growing chorus of critics who believe that the true extent of recoverable fossil resources is much smaller than the amounts taken as a baseline for the IPCC scenarios. In a climate optimist camp are those who contend that "peak oil" will lead to a switch to renewable energy sources, while others point out that high prices for oil caused by supply limitations could very well lead to a transition to liquid fuels that actually increase total carbon emissions. We examine a third scenario in which high energy prices, which are correlated with increasing infrastructure, exploration and development costs, conspire to limit the potential for making a switch to coal or natural gas for liquid fuels. In addition, the same increasing costs limit the potential for expansion of tar sand and shale oil recovery. In our qualitative model of the energy system, backed by data from short- and medium-term trends, we have a useful way to gain a sense of potential carbon emission bounds. A bound for 21st century emissions is investigated based on two assumptions: first, that extractable fossil-fuel resources follow the trends assumed by "peak oil" adherents, and second, that little is done in the way of climate mitigation policies. If resources, and perhaps more importantly, extraction rates, of fossil fuels are limited compared to assumptions in the emissions scenarios, a situation can arise in which emissions are supply-driven. However, we show that even in this "peak fossil-fuel" limit, carbon emissions are high enough to surpass 550 ppm or 2°C climate protection guardrails. Some

  4. Peak Electric Load Relief in Northern Manhattan

    Directory of Open Access Journals (Sweden)

    Hildegaard D. Link

    2014-08-01

    Full Text Available The aphorism “Think globally, act locally,” attributed to René Dubos, reflects the vision that the solution to global environmental problems must begin with efforts within our communities. PlaNYC 2030, the New York City sustainability plan, is the starting point for this study. Results include (a) a case study based on the City College of New York (CCNY) energy audit, in which we model the impacts of green roofs on campus energy demand, and (b) a case study of energy use at the neighborhood scale. We find that reducing the urban heat island effect can reduce building cooling requirements, peak electricity loads and stress on the local electricity grid, and improve urban livability.

  5. Tim Peake and Britain's road to space

    CERN Document Server

    Seedhouse, Erik

    2017-01-01

    This book puts the reader in the flight suit of Britain’s first male astronaut, Tim Peake. It chronicles his life, along with the Principia mission and the down-to-the-last-bolt descriptions of life aboard the ISS, by way of the hurdles placed by the British government and the rigors of training at Russia’s Star City military base. In addition, this book discusses the learning curves required in astronaut and mission training and the complexity of the technologies required to launch an astronaut and keep them alive for months on end. This book underscores the fact that technology and training, unlike space, do not exist in a vacuum; complex technical systems, like the ISS, interact with the variables of human personality, and the cultural background of the astronauts.

  6. Complex behavior of elevators in peak traffic

    Science.gov (United States)

    Nagatani, Takashi

    2003-08-01

    We study the dynamical behavior of elevators in the morning peak traffic. We present a stochastic model of the elevators to take into account the interactions between elevators through passengers. The dynamics of the elevators is expressed in terms of a coupled nonlinear map with noises. The number of passengers carried by an elevator and the time-headway between elevators exhibit the complex behavior with varying elevator trips. It is found that the behavior of elevators exhibits a deterministic chaos even if there are no noises. The chaotic motion depends on the loading parameter, the maximum capacity of an elevator, and the number of elevators. When the loading parameter is superior to the threshold, each elevator carries a full load of passengers throughout its trip. The dependence of the threshold (transition point) on the elevator capacity is clarified.

  7. Equivalence principle and the baryon acoustic peak

    Science.gov (United States)

    Baldauf, Tobias; Mirbabayi, Mehrdad; Simonović, Marko; Zaldarriaga, Matias

    2015-08-01

    We study the dominant effect of a long wavelength density perturbation δ(λL) on short distance physics. In the nonrelativistic limit, the result is a uniform acceleration, fixed by the equivalence principle, and typically has no effect on statistical averages due to translational invariance. This same reasoning has been formalized to obtain a "consistency condition" on the cosmological correlation functions. In the presence of a feature, such as the acoustic peak at ℓBAO, this naive expectation breaks down for λL < ℓBAO. The result is explicitly applied to the one-loop calculation of the power spectrum. Finally, the success of baryon acoustic oscillation reconstruction schemes is argued to be another empirical evidence for the validity of the results.

  8. Reliability of construction materials

    International Nuclear Information System (INIS)

    Merz, H.

    1976-01-01

    One can also speak of reliability with respect to materials. While for the reliability of components the MTBF (mean time between failures) is regarded as the main criterion, for materials it is replaced by possible failure mechanisms such as physical/chemical reaction mechanisms, disturbances of physical or chemical equilibrium, or other interactions or changes of the system. The main tasks of the reliability analysis of materials are therefore the prediction of the various failure causes, the identification of interactions, and the development of nondestructive testing methods. (RW) [de

  9. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  10. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    Many results in mechanical design are obtained from a modelling of physical reality and from a numerical solution leading to an evaluation of demands and resources. The goal of reliability analysis is to evaluate the confidence that can be granted to the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: sensitivity analysis and reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS).

  11. RTE - 2013 Reliability Report

    International Nuclear Information System (INIS)

    Denis, Anne-Marie

    2014-01-01

    RTE publishes a yearly reliability report based on a standard model, to facilitate comparisons and highlight long-term trends. The 2013 report does not merely record the facts of the Significant System Events (ESS); it also underlines the main elements bearing on the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)

  12. Peak detection method evaluation for ion mobility spectrometry by using machine learning approaches.

    Science.gov (United States)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-04-16

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs), with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors' results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using PME as the peak-calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications.

  13. Comparison of five portable peak flow meters

    Directory of Open Access Journals (Sweden)

    Glaucia Nency Takara

    2010-01-01

    Full Text Available OBJECTIVE: To compare the measurements of spirometric peak expiratory flow (PEF) from five different PEF meters and to determine if their values are in agreement. Inaccurate equipment may result in incorrect diagnoses of asthma and inappropriate treatments. METHODS: Sixty-eight healthy, sedentary and insufficiently active subjects, aged from 19 to 40 years, performed PEF measurements using Air Zone®, Assess®, Galemed®, Personal Best® and Vitalograph® peak flow meters. The highest value recorded for each subject for each device was compared to the corresponding spirometric values using Friedman's test with Dunn's post-hoc (p<0.05), Spearman's correlation test and Bland-Altman's agreement test. RESULTS: The median and interquartile ranges for the spirometric values and the Air Zone®, Assess®, Galemed®, Personal Best® and Vitalograph® meters were 428 (263-688 L/min), 450 (350-800 L/min), 420 (310-720 L/min), 380 (300-735 L/min), 400 (310-685 L/min) and 415 (335-610 L/min), respectively. Significant differences were found when the spirometric values were compared to those recorded by the Air Zone® (p<0.001) and Galemed® (p<0.01) meters. There was no agreement between the spirometric values and the five PEF meters. CONCLUSIONS: The results suggest that the values recorded from Galemed® meters may underestimate the actual value, which could lead to unnecessary interventions, and that Air Zone® meters overestimate spirometric values, which could obfuscate the need for intervention. These findings must be taken into account when interpreting both devices' results in younger people. These differences should also be considered when directly comparing values from different types of PEF meters.

  14. Approach to reliability assessment

    International Nuclear Information System (INIS)

    Green, A.E.; Bourne, A.J.

    1975-01-01

    Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches to and techniques for such assessments, which have been outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipments to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on the understanding of different types of variational behavior. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behavior in all the system dimensions.

  15. Technical Potential for Peak Load Management Programs in New Jersey

    Energy Technology Data Exchange (ETDEWEB)

    Kirby, B.J.

    2002-12-13

    Restructuring is attempting to bring the economic efficiency of competitive markets to the electric power industry. To at least some extent it is succeeding. New generation is being built in most areas of the country reversing the decades-long trend of declining reserve margins. Competition among generators is typically robust, holding down wholesale energy prices. Generators have shown that they are very responsive to price signals in both the short and long term. But a market that is responsive only on the supply side is only half a market. Demand response (elasticity) is necessary to gain the full economic advantages that restructuring can offer. Electricity is a form of energy that is difficult to store economically in large quantities. However, loads often have some ability to (1) conveniently store thermal energy and (2) defer electricity consumption. These inherent storage and control capabilities can be exploited to help reduce peak electric system consumption. In some cases they can also be used to provide system reliability reserves. Fortunately too, technology is helping. Advances in communications and control technologies are making it possible for loads ranging from residential through commercial and industrial to respond to economic signals. When we buy bananas, we don't simply take a dozen and wait a month to find out what the price was. We always ask about the price before we decide how many bananas we want. Technology is beginning to allow at least some customers to think about their electricity consumption the same way they think about most of their other purchases. And power system operators and regulators are beginning to understand that customers need to remain in control of their own destinies. Many customers (residential through industrial) are willing to respond to price signals. Most customers are not able to commit to specific responses months or years in advance. Electricity is a fluid market commodity with a volatile value to both

  16. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula, as sketched below. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
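
    The Spearman-Brown prophecy formula mentioned in the Results is simple enough to state directly; this one-liner is a generic restatement of the formula, not code from the described utility.

```python
# Reliability of an average of k ratings, given single-rating reliability r.
def spearman_brown(r, k):
    return k * r / (1 + (k - 1) * r)

print(spearman_brown(0.4, 4))  # averaging four judges lifts 0.40 to about 0.73
```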

  17. Structural systems reliability analysis

    International Nuclear Information System (INIS)

    Frangopol, D.

    1975-01-01

    For an exact evaluation of the reliability of a structure it appears necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long activity period. Where such studies are missing, the statistical properties formulated here give upper and lower bounds on the reliability. (orig./HP) [de]

  18. Reliability and maintainability

    International Nuclear Information System (INIS)

    1994-01-01

    Several communications in this conference are concerned with nuclear plant reliability and maintainability; their titles are: maintenance optimization of stand-by Diesels of 900 MW nuclear power plants; CLAIRE: an event-based simulation tool for software testing; reliability as one important issue within the periodic safety review of nuclear power plants; design of nuclear building ventilation by the means of functional analysis; operation characteristic analysis for a power industry plant park, as a function of influence parameters

  19. Reliability data book

    International Nuclear Information System (INIS)

    Bento, J.P.; Boerje, S.; Ericsson, G.; Hasler, A.; Lyden, C.O.; Wallin, L.; Poern, K.; Aakerlund, O.

    1985-01-01

    The main objective for the report is to improve failure data for reliability calculations as parts of safety analyses for Swedish nuclear power plants. The work is based primarily on evaluations of failure reports as well as information provided by the operation and maintenance staff of each plant. In the report are presented charts of reliability data for: pumps, valves, control rods/rod drives, electrical components, and instruments. (L.E.)

  20. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  1. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

    This book covers the analysis and application of reliability, including the definition, importance and historical background of reliability; the reliability function and failure rate; life distributions and reliability assumptions; reliability of unrepaired systems; reliability of repairable systems; reliability sampling tests; failure analysis, such as analysis by FMEA and FTA, with cases; accelerated life testing, including basic concepts, acceleration and acceleration factors, and analysis of accelerated life testing data; and maintenance policies for replacement and inspection.

  2. Forward Capacity Markets: Maintaining Grid Reliability in Europe

    OpenAIRE

    Chaigneau, Matthieu

    2012-01-01

    The liberalization process of the electricity industry in many countries leads to new rules and new challenges for grid management. System reliability is a major concern, mainly because of (a) the high penetration of renewable energy sources and (b) growing peak load and environmental regulations. In most electricity markets, peak resources operate only during a short period, and at a high operating cost, jeopardizing their return on investment, while low-cost base resources make...

  3. Improvement of Reliability of Diffusion Tensor Metrics in Thigh Skeletal Muscles.

    Science.gov (United States)

    Keller, Sarah; Chhabra, Avneesh; Ahmed, Shaheen; Kim, Anne C; Chia, Jonathan M; Yamamura, Jin; Wang, Zhiyue J

    2018-05-01

    Quantitative diffusion tensor imaging (DTI) of skeletal muscles is challenging due to the bias in DTI metrics, such as fractional anisotropy (FA) and mean diffusivity (MD), related to insufficient signal-to-noise ratio (SNR). This study compares the bias of DTI metrics in skeletal muscles via pixel-based and region-of-interest (ROI)-based analysis. DTI of the thigh muscles was conducted on a 3.0-T system in N = 11 volunteers using a fat-suppressed single-shot spin-echo echo planar imaging (SS SE-EPI) sequence with eight repetitions (number of signal averages (NSA) = 4 or 8 for each repeat). The SNR was calculated for different NSAs and estimated for the composite images combining all data (effective NSA = 48) as standard reference. The biases of MD and FA derived by pixel-based and ROI-based quantification were compared at different NSAs. An "intra-ROI diffusion direction dispersion angle (IRDDDA)" was calculated to assess the uniformity of diffusion within the ROI. Using our standard reference image with NSA = 48, the ROI-based and pixel-based measurements agreed for FA and MD. Larger disagreements were observed for the pixel-based quantification at NSA = 4. MD was less sensitive than FA to the noise level. The IRDDDA decreased with higher NSA. At NSA = 4, ROI-based FA showed a lower average bias (0.9% vs. 37.4%) and narrower 95% limits of agreement compared to the pixel-based method. The ROI-based estimation of FA is less prone to bias than the pixel-based estimations when SNR is low. The IRDDDA can be applied as a quantitative quality measure to assess reliability of ROI-based DTI metrics. Copyright © 2018 Elsevier B.V. All rights reserved.
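
    For reference, the two metrics compared in this record are standard invariants of the diffusion-tensor eigenvalues; the sketch below restates their textbook definitions (the example eigenvalues are made up, not study data).

```python
# Mean diffusivity (MD) and fractional anisotropy (FA) from the three
# eigenvalues of the diffusion tensor (standard definitions).
import numpy as np

def md_fa(evals):
    lam = np.asarray(evals, float)
    md = lam.mean()
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())
    return md, fa

print(md_fa([1.8e-3, 1.2e-3, 0.9e-3]))  # illustrative eigenvalues in mm^2/s
```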

  4. Peptide Peak Detection for Low Resolution MALDI-TOF Mass Spectrometry.

    Science.gov (United States)

    Yao, Jingwen; Utsunomiya, Shin-Ichi; Kajihara, Shigeki; Tabata, Tsuyoshi; Aoshima, Ken; Oda, Yoshiya; Tanaka, Koichi

    2014-01-01

    A new peak detection method has been developed for rapid selection of peptide and fragment ion peaks for protein identification using tandem mass spectrometry. The algorithm classifies the peak intensities present in a defined mass range to determine the noise level; a threshold is then applied to select ion peaks according to the noise level determined in each mass range. The algorithm was initially designed for peak detection in low-resolution peptide mass spectra, such as matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectra, but it can also be applied to other types of mass spectra. The method achieves a good ratio of real ion peaks to noise even for poorly fragmented peptide spectra. Peak lists generated by this method produce improved protein scores in database search results, and the reliability of the protein identifications is increased by finding more peptide identifications. This software tool is freely available at the Mass++ home page (http://www.first-ms3d.jp/english/achievement/software/).
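
    The windowed noise-threshold idea can be sketched briefly. This is an illustrative outline under assumed parameters (window width, threshold multiplier, median as the noise statistic), not the published implementation.

```python
# Keep peaks whose intensity exceeds k times a per-window noise level.
import numpy as np

def pick_peaks(mz, intensity, window=100.0, k=3.0):
    mz = np.asarray(mz, float)
    intensity = np.asarray(intensity, float)
    keep = np.zeros(mz.size, dtype=bool)
    for lo in np.arange(mz.min(), mz.max() + window, window):
        sel = (mz >= lo) & (mz < lo + window)
        if sel.any():
            noise = np.median(intensity[sel])      # crude per-window noise level
            keep |= sel & (intensity > k * noise)  # window-specific threshold
    return np.flatnonzero(keep)                    # indices of retained peaks
```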

  5. Norwegian hydropower a valuable peak power source

    Energy Technology Data Exchange (ETDEWEB)

    Brekke, Hermod

    2010-07-01

    An overview is given of a possible increase in Norwegian hydropower peak power production to meet the growing European demand for peak power caused by the increasing non-stationary production from wind power and ocean energy from waves and sea currents. The building of reversible pump-turbine power plants is also discussed, even though approximately 10% of the power is consumed by losses in the pumping phase compared to direct use of the water from reservoirs. (Author)

  6. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity

    Science.gov (United States)

    Loria, Tristan; de Grosbois, John; Tremblay, Luc

    2016-01-01

    Purpose: At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighing of sensory modalities has been shown to be altered. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study…

  7. OccuPeak: ChIP-Seq peak calling based on internal background modelling

    NARCIS (Netherlands)

    de Boer, Bouke A.; van Duijvenboden, Karel; van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.

    2014-01-01

    ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads to account for local sequencing and GC bias. However, the

  8. Prediction of iodine activity peak during refuelling

    International Nuclear Information System (INIS)

    Hozer, Z.; Vajda, N.

    2001-01-01

    The increase of fission product activities in the primary circuit of a nuclear power plant indicates the existence of defects in some fuel rods. A power change leads to the cooling down of the fuel and results in the fragmentation of the UO2 pellets, which facilitates the release of fission products from the intergranular regions. Furthermore, the injection of boric acid after shutdown will increase the primary activity, due to the dissolution of fission products deposited on the surfaces of the core components. The calculation of these phenomena is usually based on the evaluation of activity measurements and power plant data. The estimation of the iodine spiking peak during reactor transients is based on correlation with operating parameters, such as reactor power and primary pressure. The approach used in the present method was originally applied to CANDU reactors. The VVER-440 specific correlations were determined using the activity measurements of the Paks NPP and the data provided by the Russian fuel supplier. The present method is used for the evaluation of the iodine isotopes, as well as the noble gases. A numerical model has been developed for iodine spiking simulation and has been validated against several shutdown transients measured at Paks NPP. (R.P.)

  9. Fast clustering using adaptive density peak detection.

    Science.gov (United States)

    Wang, Xiao-Feng; Xu, Yifan

    2017-12-01

    Common limitations of clustering methods include slow algorithm convergence, the instability of the pre-specification of a number of intrinsic parameters, and the lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameter can then be calculated from equations with statistical theoretical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method runs in a single step without any iteration and is thus fast, with great potential for big data analysis. A user-friendly R package, ADPclust, is developed for public use.
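
    The density-peak idea underlying the method can be sketched as follows. This toy version (Gaussian KDE density, fixed number of centers, assignment to the nearest center) is a simplification for illustration; it is not the ADPclust package.

```python
# Density-peak clustering sketch: density via KDE, delta = distance to the
# nearest point of higher density, centers = points with largest density*delta.
import numpy as np
from scipy.stats import gaussian_kde
from scipy.spatial.distance import cdist

def density_peak_clusters(X, n_clusters=3):
    density = gaussian_kde(X.T)(X.T)      # nonparametric density estimate
    d = cdist(X, X)
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = density > density[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    centers = np.argsort(density * delta)[-n_clusters:]
    # Simplified assignment: nearest center (the full algorithm instead
    # follows each point's nearest higher-density neighbour).
    return np.argmin(d[:, centers], axis=1), centers
```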

  10. Safety and reliability criteria

    International Nuclear Information System (INIS)

    O'Neil, R.

    1978-01-01

    Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)

  11. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CERs and sometimes possible CTRs, in devising a suitable cost-effective policy.

  12. Research Opportunities at Storm Peak Laboratory

    Science.gov (United States)

    Hallar, A. G.; McCubbin, I. B.

    2006-12-01

    The Desert Research Institute (DRI) operates a high elevation facility, Storm Peak Laboratory (SPL), located on the west summit of Mt. Werner in the Park Range near Steamboat Springs, Colorado, at an elevation of 3210 m MSL (Borys and Wetzel, 1997). SPL provides an ideal location for long-term research on the interactions of atmospheric aerosol and gas-phase chemistry with cloud and natural radiation environments. The ridge-top location produces an almost daily transition from free-tropospheric to boundary-layer air near midday in both summer and winter seasons. Long-term observations at SPL document the role of orographically induced mixing and convection in vertical pollutant transport and dispersion. During winter, SPL is above cloud base 25% of the time, providing a unique capability for studying aerosol-cloud interactions (Borys and Wetzel, 1997). A comprehensive set of continuous aerosol measurements was initiated at SPL in 2002. SPL includes an office-type laboratory room for computer and instrumentation setup with outside air ports and cable access to the roof deck, a cold room for precipitation and cloud rime ice sample handling and ice crystal microphotography, a 150 m2 roof deck area for outside sampling equipment, a full kitchen and two bunk rooms with sleeping space for nine persons. The laboratory is currently well equipped for aerosol and cloud measurements. Particles are sampled from an insulated, 15 cm diameter manifold within approximately 1 m of its horizontal entry point through an outside wall. The 4 m high vertical section outside the building is capped with an inverted can to exclude large particles.

  13. Peak MSC—Are We There Yet?

    Directory of Open Access Journals (Sweden)

    Timothy R. Olsen

    2018-06-01

    Full Text Available Human mesenchymal stem cells (hMSCs are a critical raw material for many regenerative medicine products, including cell-based therapies, engineered tissues, or combination products, and are on the brink of radically changing how the world of medicine operates. Their unique characteristics, potential to treat many indications, and established safety profile in more than 800 clinical trials have contributed to their current consumption and will only fuel future demand. Given the large target patient populations with typical dose sizes of 10's to 100's of millions of cells per patient, and engineered tissues being constructed with 100's of millions to billions of cells, an unprecedented demand has been created for hMSCs. The fulfillment of this demand faces an uphill challenge in the limited availability of large quantities of pharmaceutical grade hMSCs for the industry—fueling the need for parallel rapid advancements in the biomanufacturing of this living critical raw material. Simply put, hMSCs are no different than technologies like transistors, as they are a highly technical and modular product that requires stringent control over manufacturing that can allow for high quality and consistent performance. As hMSC manufacturing processes are optimized, it predicts a future time of abundance for hMSCs, where scientists and researchers around the world will have access to a consistent and readily available supply of high quality, standardized, and economical pharmaceutical grade product to buy off the shelf for their applications and drive product development—this is “Peak MSC.”

  14. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  15. Issues in cognitive reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Hitchler, M.J.; Rumancik, J.A.

    1984-01-01

    This chapter examines some problems in current methods to assess reactor operator reliability at cognitive tasks and discusses new approaches to solve these problems. The two types of human failures are errors in the execution of an intention and errors in the formation/selection of an intention. Topics considered include the types of description, error correction, cognitive performance and response time, the speed-accuracy tradeoff function, function based task analysis, and cognitive task analysis. One problem of human reliability analysis (HRA) techniques in general is the question of what are the units of behavior whose reliability are to be determined. A second problem for HRA is that people often detect and correct their errors. The use of function based analysis, which maps the problem space for plant control, is recommended

  16. Reliability issues in PACS

    Science.gov (United States)

    Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.

    1991-07-01

    Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on the several classes of errors encountered during the pre-clinical release of the PACS during the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines of critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.

  17. Reliability Parts Derating Guidelines

    Science.gov (United States)

    1982-06-01

    "Reliability of GaAs Injection Lasers", De Loach, B. C., Jr., 1973 IEEE/OSA Conference on Laser Engineering and Applications; IEEE Transactions on Reliability, Vol. R-23, No. 4, pp. 226-230, October 1974. Components are mounted on a 4-inch square, 0.250-inch-thick aluminum alloy panel; this mounting technique should be taken into consideration.

  18. Peak-valley-peak pattern of histone modifications delineates active regulatory elements and their directionality

    DEFF Research Database (Denmark)

    Pundhir, Sachin; Bagger, Frederik Otzen; Lauridsen, Felicia Kathrine Bratt

    2016-01-01

    Formation of a nucleosome free region (NFR) accompanied by specific histone modifications at flanking nucleosomes is an important prerequisite for enhancer and promoter activity. Due to this process, active regulatory elements often exhibit a distinct shape of histone signal in the form of a peak-valley-peak (PVP) pattern. However, different features of PVP patterns and their robustness in predicting active regulatory elements have never been systematically analyzed. Here, we present PARE, a novel computational method that systematically analyzes the H3K4me1 or H3K4me3 PVP patterns to predict NFRs. Applying it to four ENCODE cell lines and four hematopoietic differentiation stages, we identified several enhancers whose regulatory activity is stage specific and correlates positively with the expression of proximal genes in a particular stage. In conclusion, our results demonstrate that PVP patterns delineate active regulatory elements and their directionality.
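
    As a rough illustration of the signal shape being exploited (this sketch is not the PARE algorithm), a PVP candidate is two neighbouring maxima of a smoothed histone track separated by a pronounced dip; the smoothing window, prominence and dip thresholds below are arbitrary assumptions.

```python
# Toy peak-valley-peak (PVP) detector on a smoothed coverage track.
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import uniform_filter1d

def pvp_candidates(signal, window=50, prominence=1.0):
    s = uniform_filter1d(np.asarray(signal, float), size=window)
    peaks, _ = find_peaks(s, prominence=prominence)
    hits = []
    for left, right in zip(peaks, peaks[1:]):
        valley = left + np.argmin(s[left:right + 1])
        if s[valley] < 0.5 * min(s[left], s[right]):  # dip well below both peaks
            hits.append((left, valley, right))
    return hits
```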

  19. A fast and reliable readout method for quantitative analysis of surface-enhanced Raman scattering nanoprobes on chip surface

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Hyejin; Jeong, Sinyoung; Ko, Eunbyeol; Jeong, Dae Hong [Department of Chemistry Education, Seoul National University, Seoul 151-742 (Korea, Republic of); Kang, Homan [Interdisciplinary Program in Nano-Science and Technology, Seoul National University, Seoul 151-742 (Korea, Republic of); Lee, Yoon-Sik [Interdisciplinary Program in Nano-Science and Technology and School of Chemical and Biological Engineering, Seoul National University, Seoul 151-742 (Korea, Republic of); Lee, Ho-Young [Department of Nuclear Medicine, Seoul National University Bundang Hospital, Seongnam 463-707 (Korea, Republic of)]

    2015-05-15

    Surface-enhanced Raman scattering techniques have been widely used for bioanalysis due to their high sensitivity and multiplex capacity. However, the point-scanning method using a micro-Raman system, which is the most common method in the literature, has the disadvantage of extremely long measurement times for on-chip immunoassays adopting a large chip area of approximately 1-mm scale and a confocal beam point of ca. 1-μm size. Alternative methods, such as a sampled spot scan with high confocality and a large-area scan with enlarged field of view and low confocality, have been utilized to minimize the measurement time in practice. In this study, we analyzed the two methods with respect to signal-to-noise ratio and sampling-led signal fluctuations to obtain insights into a fast and reliable readout strategy. On this basis, we propose a methodology for fast and reliable quantitative measurement of the whole chip area. The proposed method adopts a raster scan covering a full 100 μm × 100 μm area as a proof-of-concept experiment while accumulating signals in the CCD detector for a single spectrum per frame. One single 10 s scan over the 100 μm × 100 μm area yielded much higher sensitivity than sampled spot-scanning measurements and none of the signal fluctuations attributed to sampled spot scans. This readout method can serve as one of the key technologies that will bring quantitative multiplexed detection and analysis into practice.

  20. Columbus safety and reliability

    Science.gov (United States)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  1. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination, or reproducibility of results is very poor. On the other hand, a defect can be detected on each of several subsequent examinations, for high reliability, and the results can still reproduce poorly.

  2. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  3. Designing reliability into accelerators

    International Nuclear Information System (INIS)

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: concept; design; motivation; management techniques; and fault diagnosis.

  4. Proof tests on reliability

    International Nuclear Information System (INIS)

    Mishima, Yoshitsugu

    1983-01-01

    In order to obtain public understanding of nuclear power plants, tests should be carried out to prove the reliability and safety of present LWR plants. For example, the aseismicity of nuclear power plants must be verified by using a large-scale earthquake simulator. Reliability testing began in fiscal 1975, and the proof tests on steam generators and on PWR support and flexure pins against stress corrosion cracking have already been completed; the results have been highly appreciated internationally. The capacity factor of nuclear power plant operation in Japan rose to 80% in the summer of 1983, which, considering the periods of regular inspection, means operation at almost full capacity. Japanese LWR technology has now risen to the top place in the world after having overcome earlier defects. The significance of the reliability tests is to secure functioning until the age limit is reached, to confirm the correct forecast of deterioration processes, to confirm the effectiveness of remedies for defects, and to confirm the accuracy of predictions of the behavior of facilities. The reliability of nuclear valves, fuel assemblies, heat-affected zones in welding, reactor cooling pumps and electric instruments has been tested or is being tested. (Kako, I.)

  5. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  6. Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    1989-01-01

    In the paper it is shown how upper and lower bounds for the reliability of plastic slabs can be determined. For the fundamental case it is shown that optimal bounds of a deterministic and a stochastic analysis are obtained on the basis of the same failure mechanisms and the same stress fields.

  7. Reliability based structural design

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2014-01-01

    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A

  8. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  9. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  10. Parametric Mass Reliability Study

    Science.gov (United States)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  11. Characteristic of 120 degree C thermoluminescence peak of iceland spar

    International Nuclear Information System (INIS)

    Lu Xinwei; Han Jia

    2006-01-01

    The basic characteristics of the 120 degree C thermoluminescence peak of Iceland spar were studied. The experimental results indicate that the lifetime of the 120 degree C thermoluminescence peak of Iceland spar is about 2 h at 30 degree C. The thermoluminescence peak moves to higher temperature as the heating rate increases. The intensity of the 120 degree C thermoluminescence peak of Iceland spar is directly proportional to the radiation dose below 15 Gy. (authors)

  12. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    2018-03-05

    This paper presents a reliability analysis of a compressor system using reliability block diagrams (RBD). The same structure has been kept with the three subsystems: air flow, oil flow and ... Keywords: compressor system, reliability, reliability block diagram, RBD.

  13. Reliability and Validity Assessment of a Linear Position Transducer

    Science.gov (United States)

    Garnacho-Castaño, Manuel V.; López-Lastra, Silvia; Maté-Muñoz, José L.

    2015-01-01

    The objectives of the study were to determine the validity and reliability of peak velocity (PV), average velocity (AV), peak power (PP) and average power (AP) measurements made using a linear position transducer. Validity was assessed by comparing measurements simultaneously obtained using the Tendo Weightlifting Analyzer System and the T-Force Dynamic Measurement System (Ergotech, Murcia, Spain) during two resistance exercises, bench press (BP) and full back squat (BS), performed by 71 trained male subjects. For the reliability study, a further 32 men completed both lifts using the Tendo Weightlifting Analyzer System in two identical testing sessions one week apart (session 1 vs. session 2). Intraclass correlation coefficients (ICCs) indicating the validity of the Tendo Weightlifting Analyzer System were high, with values ranging from 0.853 to 0.989. Systematic biases and random errors were low to moderate for almost all variables, being higher in the case of PP (bias ±157.56 W; error ±131.84 W). Proportional biases were identified for almost all variables. Test-retest reliability was strong, with ICCs ranging from 0.922 to 0.988. Reliability results also showed minimal systematic biases and random errors, which were only significant for PP (bias -19.19 W; error ±67.57 W). Only PV recorded in the BS showed no significant proportional bias. The Tendo Weightlifting Analyzer System emerged as a reliable system for measuring movement velocity and estimating power in resistance exercises. The low biases and random errors observed here (mainly for AV and AP) make this device a useful tool for monitoring resistance training.
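
    For the test-retest part, a common choice is the two-way random, absolute-agreement, single-measures ICC(2,1); the snippet below restates the textbook formula for a subjects-by-sessions score matrix. It is a generic sketch, not the study's analysis code, and the study does not name the ICC variant used.

```python
# ICC(2,1) from a subjects x sessions matrix of scores.
import numpy as np

def icc_2_1(Y):
    n, k = Y.shape
    grand = Y.mean()
    rows, cols = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * ((rows - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((cols - grand) ** 2).sum() / (k - 1)   # between-sessions MS
    mse = ((Y - rows[:, None] - cols[None, :] + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```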

  14. Assessing peak aerobic capacity in Dutch law enforcement officers

    NARCIS (Netherlands)

    Wittink, Harriet; Takken, Tim; de Groot, Janke; Reneman, Michiel; Peters, Roelof; Vanhees, Luc

    2015-01-01

    Objectives: To cross-validate the existing peak rate of oxygen consumption (VO2peak) prediction equations in Dutch law enforcement officers and to determine whether these prediction equations can be used to predict VO2peak for groups and in a single individual. A further objective was to report

  15. Determination of the upper limit of a peak area

    International Nuclear Information System (INIS)

    Helene, O.

    1990-03-01

    This paper reports the procedure to extract an upper limit of a peak area in a multichannel spectrum. This procedure takes into account the finite shape of the peak and the uncertanties in the background and in the expected position of the peak. (author) [pt

  16. PEAK TRACKING WITH A NEURAL NETWORK FOR SPECTRAL RECOGNITION

    NARCIS (Netherlands)

    COENEGRACHT, PMJ; METTING, HJ; VANLOO, EM; SNOEIJER, GJ; DOORNBOS, DA

    1993-01-01

    A peak tracking method based on a simulated feed-forward neural network with back-propagation is presented. The network uses the normalized UV spectra and peak areas measured in one chromatogram for peak recognition. It suffices to train the network with only one set of spectra recorded in one

  17. Assessing peak aerobic capacity in Dutch law enforcement officers.

    NARCIS (Netherlands)

    Wittink, H.; Takken, T.; Groot, J.F. de; Reneman, M.; Peters, R.; Vanhees, L.

    2015-01-01

    Objectives: To cross-validate the existing peak rate of oxygen consumption (VO2peak) prediction equations in Dutch law enforcement officers and to determine whether these prediction equations can be used to predict VO2peak for groups and in a single individual. A further objective was to report

  18. 7 CFR 457.163 - Nursery peak inventory endorsement.

    Science.gov (United States)

    2010-01-01

    7 CFR 457.163 (2010), Federal Crop Insurance Corporation, Department of Agriculture, Common Crop Insurance Regulations: Nursery peak inventory endorsement. Nursery Crop Insurance Peak Inventory Endorsement. This endorsement is not continuous and must be...

  19. Bayesian approach for peak detection in two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.

    2012-01-01

    A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method starts with a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a sophisticated algorithm is constructed to decide which of the individual

  20. Determination of the upper limit of a peak area

    International Nuclear Information System (INIS)

    Helene, O.

    1991-01-01

    This article reports the procedure to extract an upper limit of a peak area in a multichannel spectrum. This procedure takes into account the finite shape of the peak and the uncertainties both in the background and in the expected position of the peak. (orig.)
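
    One common construction of such an upper limit, in the spirit of this procedure, treats the fitted net area as Gaussian and truncates the posterior at zero. The flat non-negative prior here is an assumption of this sketch, and the peak-shape and peak-position uncertainties handled by the full method are not modelled.

```python
# Upper limit for a non-negative peak area, given a fitted net area and
# its Gaussian uncertainty, at confidence level cl.
from scipy.stats import norm

def upper_limit(net_area, sigma, cl=0.95):
    p0 = norm.cdf(-net_area / sigma)   # prior probability mass below zero
    q = p0 + cl * (1.0 - p0)           # target quantile of the truncated posterior
    return net_area + sigma * norm.ppf(q)

print(upper_limit(net_area=-5.0, sigma=10.0))  # a small but positive limit
```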

  1. Fluctuations of the peak current of tunnel diodes in multi-junction solar cells

    International Nuclear Information System (INIS)

    Jandieri, K; Baranovskii, S D; Stolz, W; Gebhard, F; Guter, W; Hermle, M; Bett, A W

    2009-01-01

    Interband tunnel diodes are widely used to electrically interconnect the individual subcells in multi-junction solar cells. Tunnel diodes have to operate at high current densities and low voltages, especially when used in concentrator solar cells. They represent one of the most critical elements of multi-junction solar cells and the fluctuations of the peak current in the diodes have an essential impact on the performance and reliability of the devices. Recently we have found that GaAs tunnel diodes exhibit extremely high peak currents that can be explained by resonant tunnelling through defects homogeneously distributed in the junction. Experiments evidence rather large fluctuations of the peak current in the diodes fabricated from the same wafer. It is a challenging task to clarify the reason for such large fluctuations in order to improve the performance of the multi-junction solar cells. In this work we show that the large fluctuations of the peak current in tunnel diodes can be caused by relatively small fluctuations of the dopant concentration. We also show that the fluctuations of the peak current become smaller for deeper energy levels of the defects responsible for the resonant tunnelling.

  2. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines their most important reliability aspects. The Fog computing paradigm, as a non-trivial extension of the Cloud, is considered, and the reliability of networks of smart devices is discussed. Combining the reliability requirements of the grid and cloud paradigms with those of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  3. Peak-by-peak correction of Ge(Li) gamma-ray spectra for photopeaks from background

    International Nuclear Information System (INIS)

    Cutshall, N.H.; Larsen, I.L.

    1980-01-01

    Background photopeaks can interfere with accurate measurement of low levels of radionuclides by gamma-ray spectrometry. A flowchart for peak-by-peak correction of sample spectra to produce accurate results is presented. (orig.)
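
    A minimal numerical sketch of this kind of correction (an assumed workflow, not the authors' flowchart): subtract the background photopeak count rate, measured in a long blank run, from the sample photopeak rate, and propagate the Poisson counting uncertainties.

```python
# Net photopeak rate after peak-by-peak background subtraction.
import math

def corrected_rate(sample_counts, t_sample, bkg_counts, t_bkg):
    rate = sample_counts / t_sample - bkg_counts / t_bkg
    sigma = math.sqrt(sample_counts / t_sample**2 + bkg_counts / t_bkg**2)
    return rate, sigma

# e.g. a 1460.8 keV photopeak: 520 counts in 3600 s vs. 480 counts in an
# 86400 s detector-background run (illustrative numbers).
print(corrected_rate(520, 3600, 480, 86400))
```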

  4. Peak-by-peak correction of Ge(Li) gamma-ray spectra for photopeaks from background

    Energy Technology Data Exchange (ETDEWEB)

    Cutshall, N H; Larsen, I L [Oak Ridge National Lab., TN (USA)]

    1980-12-01

    Background photopeaks can interfere with accurate measurement of low levels of radionuclides by gamma-ray spectrometry. A flowchart for peak-by-peak correction of sample spectra to produce accurate results is presented.

  5. The effect of head size/shape, miscentering, and bowtie filter on peak patient tissue doses from modern brain perfusion 256-slice CT: How can we minimize the risk for deterministic effects?

    International Nuclear Information System (INIS)

    Perisinakis, Kostas; Seimenis, Ioannis; Tzedakis, Antonis; Papadakis, Antonios E.; Damilakis, John

    2013-01-01

    signal-to-noise ratio mainly to the peripheral region of the phantom. Conclusions: Despite typical peak doses to skin, eye lens, brain, and RBM from the standard low-dose brain perfusion 256-slice CT protocol are well below the corresponding thresholds for the induction of erythema, cataract, cerebrovascular disease, and depression of hematopoiesis, respectively, every effort should be made toward optimization of the procedure and minimization of dose received by these tissues. The current study provides evidence that the use of the narrower bowtie filter available may considerably reduce peak absorbed dose to all above radiosensitive tissues with minimal deterioration in image quality. Considerable reduction in peak eye-lens dose may also be achieved by positioning patient head center a few centimeters above isocenter during the exposure.

  6. RTE - Reliability report 2016

    International Nuclear Information System (INIS)

    2017-06-01

    Every year, RTE produces a reliability report for the past year. This document lays out the main factors that affected the electrical power system's operational reliability in 2016 and the initiatives currently under way intended to ensure its reliability in the future. Within a context of the energy transition, changes to the European interconnected network mean that RTE has to adapt on an on-going basis. These changes include the increase in the share of renewables injecting an intermittent power supply into networks, resulting in a need for flexibility, and a diversification in the numbers of stakeholders operating in the energy sector and changes in the ways in which they behave. These changes are dramatically changing the structure of the power system of tomorrow and the way in which it will operate - particularly the way in which voltage and frequency are controlled, as well as the distribution of flows, the power system's stability, the level of reserves needed to ensure supply-demand balance, network studies, assets' operating and control rules, the tools used and the expertise of operators. The results obtained in 2016 are evidence of a globally satisfactory level of reliability for RTE's operations in somewhat demanding circumstances: more complex supply-demand balance management, cross-border schedules at interconnections indicating operation that is closer to its limits and - most noteworthy - having to manage a cold spell just as several nuclear power plants had been shut down. In a drive to keep pace with the changes expected to occur in these circumstances, RTE implemented numerous initiatives to ensure high levels of reliability: - maintaining investment levels of euro 1.5 billion per year; - increasing cross-zonal capacity at borders with our neighbouring countries, thus bolstering the security of our electricity supply; - implementing new mechanisms (demand response, capacity mechanism, interruptibility, etc.); - involvement in tests or projects

  7. Analysis of operating reliability of WWER-1000 unit

    International Nuclear Information System (INIS)

    Bortlik, J.

    1985-01-01

    The nuclear power unit was divided into 33 technological units. Input data for the reliability analysis were surveys of operating results obtained from the IAEA information system and certain reliability indexes of the technological equipment determined using the Bayes formula. The missing reliability data for technological equipment were taken from the basic variant. The fault tree of the WWER-1000 unit was determined for the top event defined as the impossibility of reaching 100%, 75% and 50% of rated power. The periods of nuclear power plant operation with reduced output owing to defects, and the respective times needed for repairs of the equipment, were recorded. The calculation of the availability of the WWER-1000 unit was made for different variant situations. Certain indexes of the operating reliability of the WWER-1000 unit, which are the result of a detailed reliability analysis, are tabulated for selected variants. (E.S.)

  8. Waste package reliability analysis

    International Nuclear Information System (INIS)

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table

  9. Accelerator reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D

    2002-07-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  10. Human Reliability Program Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  11. Reliability and construction control

    Directory of Open Access Journals (Sweden)

    Sherif S. AbdelSalam

    2016-06-01

    Full Text Available The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. From the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  12. Scyllac equipment reliability analysis

    International Nuclear Information System (INIS)

    Gutscher, W.D.; Johnson, K.J.

    1975-01-01

    Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional downtime. The cable-tip problem may not be easy, or even desirable, to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much downtime, and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace.

  13. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental period. The voltage is measured in a wind power converter at a low fundamental frequency. The test method as well as the performance of the measurement circuit are also presented. The measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation.
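
    The sketch below illustrates the general idea of using the on-state collector-emitter voltage as a temperature-sensitive electrical parameter: calibrate V_ce against known junction temperatures at a fixed load current, then invert the fit on-line. The calibration numbers and the linear model are assumptions for illustration, not the authors' measurement circuit or data.

```python
import numpy as np

# Hypothetical calibration: on-state V_ce measured at one fixed load current
# for several known junction temperatures. For IGBTs at high current, V_ce
# typically rises roughly linearly with temperature.
calib_temp_c = np.array([25.0, 50.0, 75.0, 100.0, 125.0])   # deg C
calib_vce_v  = np.array([1.60, 1.66, 1.72, 1.78, 1.84])     # volts

# Fit the linear TSEP model V_ce = a * Tj + b at this current level.
a, b = np.polyfit(calib_temp_c, calib_vce_v, 1)

def junction_temperature(vce_measured: float) -> float:
    """Invert the calibration to estimate junction temperature in deg C."""
    return (vce_measured - b) / a

# Example: V_ce sampled on-line during converter operation.
print(f"Tj estimate: {junction_temperature(1.75):.1f} deg C")
```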

  15. Safety and reliability assessment

    International Nuclear Information System (INIS)

    1979-01-01

    This report contains the papers delivered at the course on safety and reliability assessment held at the CSIR Conference Centre, Scientia, Pretoria. The following topics were discussed: safety standards; licensing; biological effects of radiation; what is a PWR; safety principles in the design of a nuclear reactor; radio-release analysis; quality assurance; the staffing, organisation and training for a nuclear power plant project; event trees, fault trees and probability; Automatic Protective Systems; sources of failure-rate data; interpretation of failure data; synthesis and reliability; quantification of human error in man-machine systems; dispersion of noxious substances through the atmosphere; criticality aspects of enrichment and recovery plants; and risk and hazard analysis. Extensive examples are given, as well as case studies.

  16. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    Full Text Available We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs: the Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG) in 17 German-speaking samples (29 subsamples), grouped by self-report, other-report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%. Reliability estimates varied considerably, from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise worked effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.
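
    The abstract does not spell out the reliability formula, but one plausible reading of "only the axes variance component is used" is the axes share of total item variance. The sketch below illustrates that reading with made-up variance components; both the numbers and the ratio are assumptions, not the authors' estimator.

```python
# Hypothetical variance-component estimates from a tau-equivalent CFA
# decomposition of the kind described above (illustrative numbers only).
components = {
    "general_factor": 0.20,
    "axes":           0.25,
    "scale_specific": 0.15,
    "block_specific": 0.10,
    "item_specific":  0.30,   # includes random measurement error
}

total = sum(components.values())
# Assumed estimator: reliability of the circumplex axes taken as the
# proportion of total item variance carried by the axes component.
axes_reliability = components["axes"] / total
print(f"axes reliability estimate: {axes_reliability:.2f}")
```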

  17. The cost of reliability

    International Nuclear Information System (INIS)

    Ilic, M.

    1998-01-01

    In this article the restructuring process under way in the US power industry is revisited from the point of view of transmission system provision and reliability. While the cost of reliability was traditionally rolled into the average cost of electricity to all users, it is not so obvious how this cost is managed in the new industry. A new MIT approach to transmission pricing is suggested here as a possible solution.

  18. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
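
    A minimal sketch of the selection step follows: given candidate peaks converted to p-values (here via an assumed empirical noise model, since the paper's conversion is not reproduced in this abstract), the Benjamini-Hochberg rule keeps the largest k such that p_(k) <= (k/m) * FDR. The noise model and the score distribution are illustrative; only the B-H step itself follows the standard procedure.

```python
import numpy as np

def benjamini_hochberg_count(p_values, fdr=0.05):
    """Number of top predictions to keep: largest k with p_(k) <= (k/m)*fdr."""
    p = np.sort(np.asarray(p_values))
    m = p.size
    passing = np.nonzero(p <= (np.arange(1, m + 1) / m) * fdr)[0]
    return int(passing[-1]) + 1 if passing.size else 0

# Illustrative conversion of candidate peak intensities to p-values under an
# assumed empirical noise model (not the paper's conversion procedure).
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 10_000)       # assumed spectral noise samples
intensities = rng.normal(2.5, 1.5, 200)    # hypothetical candidate peaks
p_values = np.array([(noise >= x).mean() for x in intensities])

k = benjamini_hochberg_count(p_values, fdr=0.05)
print(f"keep the top {k} of {len(intensities)} candidate peaks")
```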

  20. Software reliability studies

    Science.gov (United States)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models that try to predict the future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies that control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold: we describe an experimental methodology using a data structure called the debugging graph, and we apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that it will perform well on a different path. Further, we observed bug interactions and noted their potential effects on the predictive process. We saw not only that different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which they are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
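
    To make the order dependence concrete, the toy sketch below simulates inter-failure times along every debugging path (permutation of fault removals) and scores a deliberately naive predictor on each path; the prediction error differs from path to path. The fault rates and the predictor are invented for illustration and are not the authors' models or data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-fault failure rates; each permutation of fault removals
# is one path through the debugging graph.
fault_rates = np.array([0.50, 0.30, 0.15, 0.05])

def simulate_path(order):
    """Inter-failure times along one path: before each fix, the program
    fails at the summed rate of the faults still present."""
    remaining = set(range(fault_rates.size))
    times = []
    for fault in order:
        total_rate = fault_rates[list(remaining)].sum()
        times.append(rng.exponential(1.0 / total_rate))
        remaining.remove(fault)
    return np.array(times)

def prediction_error(times):
    """Error of a naive predictor: next inter-failure time = mean so far."""
    preds = np.array([times[:i].mean() for i in range(1, times.size)])
    return float(np.mean(np.abs(preds - times[1:])))

errors = {order: prediction_error(simulate_path(order))
          for order in itertools.permutations(range(fault_rates.size))}
best, worst = min(errors, key=errors.get), max(errors, key=errors.get)
print(f"best path  {best}: mean abs error {errors[best]:.2f}")
print(f"worst path {worst}: mean abs error {errors[worst]:.2f}")
```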