Kakade, Rohan; Walker, John G.; Phillips, Andrew J.
2016-08-01
Confocal fluorescence microscopy (CFM) is widely used in the biological sciences because of its enhanced 3D resolution, which allows image sectioning and removal of out-of-focus blur. This is achieved by rejecting light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. From the whole set of recorded photon array data, an attempt is then made to recover the object; in this paper, maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.
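Maximum-likelihood recovery of an object from Poisson-distributed photon counts, as described above, is commonly iterated with the Richardson-Lucy update. A minimal 1-D sketch follows; the point-source object, blur kernel, and iteration count are hypothetical, not the paper's actual imaging model:

```python
import numpy as np

def richardson_lucy(data, psf, n_iter=50):
    """Iterative ML estimate for Poisson-noise data (Richardson-Lucy update)."""
    psf_flipped = psf[::-1]
    estimate = np.full_like(data, data.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = data / np.maximum(blurred, 1e-12)   # guard against divide-by-zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

rng = np.random.default_rng(0)
obj = np.zeros(64); obj[20] = 100.0; obj[40] = 60.0   # two point sources
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])          # hypothetical blur kernel
data = rng.poisson(np.convolve(obj, psf, mode="same")).astype(float)
est = richardson_lucy(data, psf)
print(int(np.argmax(est)))   # brightest recovered peak should lie near index 20
```

Each iteration multiplies the estimate by the back-projected ratio of data to re-blurred estimate, which preserves total flux when the PSF sums to one and converges toward the Poisson ML solution.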
Jirasek, A [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Matthews, Q [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Hilts, M [Medical Physics, BC Cancer Agency-Vancouver Island Centre, Victoria BC V8R 6V5 (Canada); Schulze, G [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Blades, M W [Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Turner, R F B [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver BC V6T 1Z4 (Canada)
2006-05-21
This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean-square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.
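The evaluation metrics named above (root-mean-square error, correlation coefficient, SNR enhancement factor) can be sketched for any denoiser. Here a plain moving-average filter stands in for TPMEM, which the abstract does not specify in code form; the synthetic signal and noise level are hypothetical:

```python
import numpy as np

def snr(signal, noisy):
    """RMS-signal to RMS-noise ratio, with noise taken as (noisy - signal)."""
    noise = noisy - signal
    return float(np.sqrt(np.mean(signal**2)) / np.sqrt(np.mean(noise**2)))

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, 4 * np.pi, 500))    # synthetic "dose" pattern
noisy = truth + rng.normal(0, 0.3, truth.size)
kernel = np.ones(21) / 21                          # stand-in smoother (not TPMEM)
filtered = np.convolve(noisy, kernel, mode="same")

rmse = float(np.sqrt(np.mean((filtered - truth) ** 2)))
corr = float(np.corrcoef(filtered, truth)[0, 1])
enhancement = snr(truth, filtered) / snr(truth, noisy)
print(f"RMSE={rmse:.3f}  r={corr:.3f}  SNR gain={enhancement:.1f}x")
```

The SNR enhancement factor is the ratio of post-filter to pre-filter SNR, the same figure of merit the abstract reports as exceeding 15 for TPMEM.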
Lashkari, Bahman; Mandelis, Andreas
2011-09-01
In this work, a detailed theoretical and experimental comparison between various key parameters of the pulsed and frequency-domain (FD) photoacoustic (PA) imaging modalities is developed. The signal-to-noise ratios (SNRs) of these methods are theoretically calculated in terms of transducer bandwidth, PA signal generation physics, and laser pulse or chirp parameters. Large differences between maximum (peak) SNRs were predicted. However, it is shown that in practice the SNR differences are much smaller. Typical experimental SNRs were 23.2 dB and 26.1 dB for FD-PA and time-domain (TD)-PA peak responses, respectively, from a subsurface black absorber. The SNR of the pulsed PA can be significantly improved with proper high-pass filtering of the signal, which minimizes but does not eliminate baseline oscillations. On the other hand, the SNR of the FD method can be enhanced substantially by increasing laser power and decreasing chirp duration (exposure) correspondingly, so as to remain within the maximum permissible exposure guidelines. The SNR crossover chirp duration is calculated as a function of transducer bandwidth and the conditions yielding higher SNR for the FD mode are established. Furthermore, it was demonstrated that the FD axial resolution is affected by both signal amplitude and limited chirp bandwidth. The axial resolution of the pulse is, in principle, superior due to its larger bandwidth; however, the bipolar shape of the signal is a drawback in this regard. Along with the absence of baseline oscillation in cross-correlation FD-PA, the FD phase signal can be combined with the amplitude signal to yield better axial resolution than pulsed PA, and without artifacts. The contrast of both methods is compared both in depth-wise (delay-time) and fixed delay time images. It was shown that the FD method possesses higher contrast, even after contrast enhancement of the pulsed response through filtering.
Peak signal-to-noise ratio revisited: Is simple beautiful?
Korhonen, Jari; You, Junyong
2012-01-01
Heavy criticism has been directed against using peak signal-to-noise ratio (PSNR) as a full reference quality metric for digitally processed images and video, since many studies have shown a weak correlation between subjective quality scores and the respective PSNR values. In this paper, we show...
Radar antenna pointing for optimized signal to noise ratio.
Doerry, Armin Walter; Marquette, Brandeis
2013-01-01
The Signal-to-Noise Ratio (SNR) of a radar echo signal will vary across a range swath, due to spherical wavefront spreading, atmospheric attenuation, and antenna beam illumination. The antenna beam illumination will depend on antenna pointing. Calculations of geometry are complicated by the curved earth and atmospheric refraction. This report investigates optimizing antenna pointing to maximize the minimum SNR across the range swath.
Graphene Nanogrids FET Immunosensor: Signal to Noise Ratio Enhancement
Jayeeta Basu
2016-10-01
Recently, a reproducible and scalable chemical method for fabricating smooth graphene nanogrids has been reported, addressing the challenges of graphene nanoribbons (GNR). These nanogrids have been found capable of attomolar detection of biomolecules in field-effect transistor (FET) mode. However, for detection of sub-femtomolar concentrations of a target molecule in complex mixtures with reasonable accuracy, it is not sufficient to explore only the steady-state sensitivities; it is also necessary to investigate the flicker noise, which dominates at frequencies below 100 kHz. This low-frequency noise depends on the exposure time of the graphene layer in the buffer solution and on the concentration of charged impurities at the surface. In this paper, the functionalization strategy of graphene nanogrids has been optimized with respect to the concentration and incubation time of the cross-linker for an enhancement in signal to noise ratio (SNR). Interestingly, because the sensitivity and noise power change at different rates with the functionalization parameters, the SNR does not vary monotonically but peaks at a particular parameter value. The optimized parameters improved the SNR by 50%, enabling detection of 0.05 fM Hep-B virus molecules with a sensitivity of around 30% and a standard deviation within 3%. Further, the SNR enhancement improved quantification accuracy five-fold and selectivity by two orders of magnitude.
Selection Relaying at Low Signal to Noise Ratios
Rajawat, Ketan
2007-01-01
Performance of cooperative diversity schemes at Low Signal to Noise Ratios (LSNR) was recently studied by Avestimehr et al. [1], who emphasized the importance of diversity gain over multiplexing gain at low SNRs. It has also been pointed out that continuous energy transfer to the channel is necessary for achieving the max-flow min-cut bound at LSNR. Motivated by this, we propose the use of Selection Decode and Forward (SDF) at LSNR and analyze its performance in terms of the outage probability. We also propose an energy optimization scheme which further brings down the outage probability.
Signal-to-noise ratio of Singer product apertures
Shutler, Paul M. E.; Byard, Kevin
2017-09-01
Formulae for the signal-to-noise ratio (SNR) of Singer product apertures are derived, allowing optimal Singer product apertures to be identified, and the CPU time required to decode them is quantified. This allows a systematic comparison to be made of the performance of Singer product apertures against both conventionally wrapped Singer apertures, and also conventional product apertures such as square uniformly redundant arrays. For very large images, equivalently for images at very high resolution, the SNR of Singer product apertures is asymptotically as good as the best conventional apertures, but Singer product apertures decode faster than any conventional aperture by at least a factor of ten for image sizes up to several megapixels. These theoretical predictions are verified using numerical simulations, demonstrating that coded aperture video is for the first time a realistic possibility.
Edge Detection Operators: Peak Signal to Noise Ratio Based Comparison
D. Poobathy
2014-09-01
Edge detection is a vital task in digital image processing. It makes image segmentation and pattern recognition more tractable, and it also aids object detection. Many edge detectors are available for pre-processing in computer vision, but Canny, Sobel, Laplacian of Gaussian (LoG), Roberts, and Prewitt are the most widely applied algorithms. This paper compares these operators by computing the Peak Signal to Noise Ratio (PSNR) and Mean Squared Error (MSE) of the resultant images, and evaluates the performance of each algorithm with Matlab and Java. A set of four standard test images is used for the experimentation. The PSNR and MSE results are numeric values from which the performance of each algorithm is identified; the time required for each algorithm to detect edges is also documented. After experimentation, the Canny operator was found to be the most accurate at edge detection.
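The PSNR/MSE comparison described above reduces to two formulas. A minimal sketch for 8-bit images, with a synthetic reference image and additive noise standing in for the edge-detector outputs being compared:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB, for images with the given peak value."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * np.log10(peak**2 / m)

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, (64, 64))                       # synthetic 8-bit image
degraded = np.clip(reference + rng.normal(0, 5, reference.shape), 0, 255)
print(f"MSE={mse(reference, degraded):.1f}  PSNR={psnr(reference, degraded):.1f} dB")
```

Higher PSNR (equivalently, lower MSE) against a common reference is the criterion by which the paper ranks the five operators.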
Signal-to-noise ratio in parametrically driven oscillators.
Batista, Adriano A; Moreira, Raoni S N
2011-12-01
We report a theoretical model based on Green's functions and averaging techniques that gives analytical estimates to the signal-to-noise ratio (SNR) near the first parametric instability zone in parametrically driven oscillators in the presence of added ac drive and added thermal noise. The signal term is given by the response of the parametrically driven oscillator to the added ac drive, while the noise term has two different measures: one is dc and the other is ac. The dc measure of noise is given by a time average of the statistically averaged fluctuations of the displacement from equilibrium in the parametric oscillator due to thermal noise. The ac measure of noise is given by the amplitude of the statistically averaged fluctuations at the frequency of the parametric pump. We observe a strong dependence of the SNR on the phase between the external drive and the parametric pump. For some range of the phase there is a high SNR, while for other values of phase the SNR remains flat or decreases with increasing pump amplitude. Very good agreement between analytical estimates and numerical results is achieved.
High signal-to-noise ratio quantum well bolometer materials
Wissmar, Stanley; Höglund, Linda; Andersson, Jan; Vieider, Christian; Savage, Susan; Ericsson, Per
2006-09-01
Novel single-crystalline high-performance temperature-sensing materials (quantum well structures) have been developed for the manufacture of uncooled infrared bolometers. SiGe/Si and AlGaAs/GaAs quantum wells are grown epitaxially on standard Si and GaAs substrates, respectively. The former use holes as charge carriers, utilizing the discontinuities in the valence band structure, whereas the latter operate in a similar manner with electrons in the conduction band. By optimizing parameters such as the barrier height (by varying the germanium/aluminium content, respectively) and the Fermi level E_f (by varying the quantum well width and doping level), these materials provide the potential to engineer layer structures with a very high temperature coefficient of resistance (TCR) compared with conventional thin-film materials such as vanadium oxide and amorphous silicon. In addition, the high-quality crystalline material promises very low 1/f-noise characteristics, promoting an outstanding signal to noise ratio and well-defined, uniform material properties. A comparison between the two (SiGe/Si and AlGaAs/GaAs) quantum well structures and their fundamental theoretical limits is discussed and compared to experimental results. TCRs of 2.0%/K and 4.5%/K have been obtained experimentally for SiGe/Si and AlGaAs/GaAs, respectively. The noise level for both materials is measured as being several orders of magnitude lower than that of a-Si and VOx. These uncooled thermistor materials can be hybridized with readout circuits by conventional flip-chip assembly or wafer-level adhesion bonding. The increased bolometer performance so obtained can be exploited either to increase imaging system performance, i.e. obtaining a low NETD, or to reduce the vacuum packaging requirements for low-cost applications (e.g. automotive).
Malik, R; Kumpera, A; Olsson, S L I; Andrekson, P A; Karlsson, M
2014-05-05
We investigate the beating of signal and idler waves with imbalanced signal to noise ratios in a phase-sensitive parametric amplifier. Imbalanced signal to noise ratios are achieved in two ways: first, by imbalanced noise loading; second, by varying the idler to signal input power ratio. In the case of imbalanced noise loading, the phase-sensitive amplifier improved the signal to noise ratio from 3 to 6 dB, and in the case of varying idler to signal input power ratio, the signal to noise ratio improved from 3 to in excess of 20 dB.
Approximation Formula for Easy Calculation of Signal-to-Noise Ratio of Sigma-Delta Modulators
2011-01-01
The signal-to-noise ratio (SNR) is one of the most significant measures of performance of sigma-delta modulators. An approximate formula for calculation of the signal-to-noise ratio of an arbitrary sigma-delta modulator (SDM) has been proposed. Our approach for signal-to-noise ratio computation does not require modulator modeling and simulation. The proposed formula is compared with SNR calculations based on output bitstream obtained by simulations, and the reasons for small discrepancies are...
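The abstract does not reproduce its formula, but the standard textbook approximation for an ideal L-th order sigma-delta modulator with an N-bit quantizer and oversampling ratio OSR is the usual baseline for such comparisons. A sketch of that generic formula (this is the textbook result, not necessarily the paper's proposed one):

```python
import math

def sdm_snr_db(order, bits, osr):
    """Ideal L-th order sigma-delta peak SNR (textbook approximation):
    SNR = 6.02N + 1.76 - 10*log10(pi^(2L)/(2L+1)) + (2L+1)*10*log10(OSR)."""
    return (6.02 * bits + 1.76
            - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
            + (2 * order + 1) * 10 * math.log10(osr))

# e.g. a 2nd-order, 1-bit modulator at OSR = 128
print(round(sdm_snr_db(2, 1, 128), 1))
```

Each doubling of the OSR buys (2L+1)*3 dB, which is why higher-order loops gain SNR from oversampling much faster than first-order ones.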
Fisher information vs. signal-to-noise ratio for a split detector
Knee, George C
2015-01-01
We study the problem of estimating the magnitude of a Gaussian beam displacement using a two-pixel or 'split' detector. We calculate the maximum likelihood estimator, and compute its asymptotic mean-squared error via the Fisher information. Although the signal-to-noise ratio is known to be simply related to the Fisher information under idealised detection, we find the two measures of precision differ markedly for a split detector. We show that a greater signal-to-noise ratio 'before' the detector leads to a greater information penalty, unless adaptive realignment is used. We find that with an initially balanced split detector, tuning the normalised difference in counts to 0.884753... gives the highest posterior Fisher information, and that this provides an improvement by at least a factor of about 2.5 over operating in the usual linear regime. We discuss the implications for weak-value amplification, a popular probabilistic signal amplification technique.
French, Doug [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States)], E-mail: french@purdue.edu; Huang Zun; Pao, H.-Y.; Jovanovic, Igor [School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907 (United States)
2009-03-02
A quantum phase amplifier operated in the spatial domain can improve the signal-to-noise ratio in imaging beyond the classical limit. The scaling of the signal-to-noise ratio with the gain of the quantum phase amplifier is derived from classical information theory.
Degradation of signal-to-noise ratio due to amplitude distortion
Sadr, Ramin; Shahshahani, Mehrdad; Hurd, William J.
1989-01-01
The effect of filtering on the signal-to-noise ratio (SNR) of a coherently demodulated band-limited signal is determined in the presence of worst-case amplitude ripple. The problem is formulated as an optimization in the Hilbert space L2. The form of the worst-case amplitude ripple is specified, and the degradation in the SNR is derived in closed form. It is shown that, when the maximum passband amplitude ripple is 2Delta (peak-to-peak), the SNR is degraded by at most (1-Delta-squared), even when the ripple is unknown or uncompensated. For example, an SNR loss of less than 0.01 dB due to amplitude ripple can be assured by keeping the amplitude ripple under 0.42 dB.
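The closed-form bound above can be checked numerically. Assuming the 2*Delta peak-to-peak ripple corresponds to the dB ratio between passband maximum and minimum (an interpretation adopted here, not spelled out in the abstract), the worst-case loss follows directly:

```python
import math

def ripple_snr_loss_db(ripple_pp_db):
    """Worst-case SNR loss (dB) for passband amplitude ripple of 2*delta
    peak-to-peak: the SNR is degraded by at most the factor (1 - delta**2)."""
    r = 10 ** (ripple_pp_db / 20)        # linear ratio (1 + delta) / (1 - delta)
    delta = (r - 1) / (r + 1)
    return -10 * math.log10(1 - delta ** 2)

loss = ripple_snr_loss_db(0.42)
print(f"{loss:.4f} dB")   # well under the 0.01 dB quoted in the abstract
```

Because the loss grows only quadratically in delta, even ripple of a few tenths of a dB costs a negligible fraction of a dB of SNR.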
The analysis of signal-to-noise ratio of airborne LIDAR system under state of motion
Hao, Huang; Lan, Tian; Zhang, Yingchao; Ni, Guoqiang
2010-11-01
This article gives an overview of airborne LIDAR (laser light detection and ranging) systems and their applications. By analyzing the transmission and reception of the laser signal, it constructs a model of the LIDAR echo signal and gives the basic formulas that determine the signal-to-noise ratio, such as the received power and the dark-noise power. It then carefully studies the impact of key parameters in these equations on the signal-to-noise ratio, such as the atmospheric transmittance coefficient and the working distance. MATLAB is used to simulate the detection environment and obtain a series of signal-to-noise ratio (SNR) values under different conditions (sunny day, cloudy day, day, night), and figures show how the SNR of the LIDAR system is influenced by the critical factors. From these values and figures, the SNR of the LIDAR system decreases as the distance increases, as the atmospheric transmittance coefficient falls in bad weather, and as the working temperature rises. These conclusions can help the LIDAR system work even better.
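The distance and atmospheric-transmittance dependence discussed above follows from the standard lidar range equation, with received power falling as the inverse square of range and attenuating exponentially through the atmosphere. A simplified sketch (all parameter values hypothetical, and detector noise reduced to a single constant):

```python
import math

def lidar_snr(range_m, atm_coeff_per_m, p_tx=1e3, aperture_m2=0.01,
              reflectivity=0.2, noise_w=1e-9):
    """Simplified lidar SNR: two-way atmospheric loss and 1/R^2 spreading."""
    t_atm = math.exp(-2 * atm_coeff_per_m * range_m)    # two-way transmittance
    p_rx = (p_tx * reflectivity * aperture_m2 * t_atm
            / (math.pi * range_m ** 2))                  # received echo power (W)
    return p_rx / noise_w

clear, hazy = 1e-4, 5e-4    # extinction coefficients (1/m), assumed values
for r in (500, 1000, 2000):
    print(r, round(lidar_snr(r, clear), 1), round(lidar_snr(r, hazy), 1))
```

Doubling the range costs more than the 6 dB of spreading loss alone, because the two-way atmospheric term also decays exponentially with range, matching the abstract's conclusion that SNR drops with distance and with bad weather.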
Tranter, W. H.; Turner, M. D.
1977-01-01
Techniques are developed to estimate power gain, delay, signal-to-noise ratio, and mean square error in digital computer simulations of lowpass and bandpass systems. The techniques are applied to analog and digital communications. The signal-to-noise ratio estimates are shown to be maximum likelihood estimates in additive white Gaussian noise. The methods are seen to be especially useful for digital communication systems where the mapping from the signal-to-noise ratio to the error probability can be obtained. Simulation results show the techniques developed to be accurate and quite versatile in evaluating the performance of many systems through digital computer simulation.
Liu, Jinzhen; Qiao, Xiaoyan; Wang, Mengjun; Zhang, Weibo; Li, Gang; Lin, Ling
2014-05-01
The stability and signal to noise ratio (SNR) of the current source circuit are important factors in enhancing the accuracy and sensitivity of a bioimpedance measurement system. In this paper we propose a new differential Howland topology current source and evaluate its output characteristics by simulation and actual measurement. The results show that (1) the output current and impedance at high frequencies are stabilized after compensation, and the stability of the output current in the differential current source circuit (DCSC) is 0.2%; (2) the output impedance of both circuits is above 1 MΩ below 200 kHz and still reaches 200 kΩ below 1 MHz, so that overall the output impedance of the DCSC is higher than that of the Howland current source circuit (HCSC); (3) the SNR of the DCSC is 85.64 dB in simulation and 65 dB in actual measurement at 10 kHz, which illustrates that the DCSC effectively eliminates common-mode interference; and (4) the maximum load of the DCSC is twice that of the HCSC. Lastly, a two-dimensional phantom electrical impedance tomography image is well reconstructed with the proposed circuit. The measured performance therefore shows that the DCSC can significantly improve the output impedance, stability, maximum load, and SNR of the measurement system.
Adaptive Filtering for FSCW Signal-to-noise Ratio Enhancement of SAW Interrogation Units
Díaz Luis
2016-01-01
A digital filter that improves the signal-to-noise ratio of the response of an FSCW (Frequency Stepped Continuous Wave) scheme is presented. An improvement in signal-to-noise ratio represents an enhanced readout distance. This work considers this architecture as an interrogation unit for SAW tags with time and phase encoding. The parameters of the proposed digital filter, a non-linear edge-preserving filter, were studied and tested for this specific application. An improvement of around 20 dB in the SNR level was achieved. This filter preserves the phase of the signal at the time position of the reflectors, which is critical for correct identification of the code in phase-encoding schemes.
Anggun Fitrian Isnawati
2010-11-01
ADSL (Asymmetric Digital Subscriber Line) is the technology most widely used to deliver broadband service; more than 60% of the world's broadband market uses it. ADSL is a robust technology, able to support multimedia applications such as voice, video, and data. Its configuration is also very simple, requiring only the existing local copper-cable network infrastructure. However, ADSL still has shortcomings, among them a reach of only approximately 5 km. In addition, a subscriber's distance from the exchange strongly affects the achievable download speed. This is because the longer the transmission medium, the greater the attenuation it introduces, which lowers the signal-to-noise ratio, here interpreted as signal strength, and thereby degrades download speed. On this basis, the paper compares how distance affects attenuation, signal-to-noise ratio, and download speed.
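The chain described in the ADSL abstract above, from line distance to attenuation to signal-to-noise ratio to download speed, can be illustrated with the Shannon capacity bound. All loop parameters below are hypothetical round numbers, not measured ADSL values:

```python
import math

def downstream_capacity_mbps(distance_km, bandwidth_hz=1.1e6,
                             tx_snr_db=60.0, loss_db_per_km=13.0):
    """Shannon bound C = B*log2(1 + SNR), with loop loss proportional to distance."""
    snr_db = tx_snr_db - loss_db_per_km * distance_km   # SNR falls with distance
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr) / 1e6

for d in (1, 3, 5):
    print(f"{d} km: {downstream_capacity_mbps(d):.1f} Mbit/s")
```

Because capacity is logarithmic in SNR while loop loss in dB is roughly linear in distance, the achievable rate collapses quickly toward the approximately 5 km reach limit the abstract mentions.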
Theoretical signal-to-noise ratio of a slotted surface coil for magnetic resonance imaging
Ocegueda, K; Solis, S E; Rodriguez, A O
2011-01-01
The analytical expression for the signal-to-noise ratio of a slotted surface coil with an arbitrary number of slots was derived using the quasi-static approach. This surface coil is based on the vane-type magnetron tube. To study the coil performance, the theoretical signal-to-noise ratio predictions of this coil design were computed for different numbers of slots. Results were also compared with theoretical results obtained for a circular coil with similar dimensions. The slotted surface coil's performance improves as the number of slots increases, and it outperforms the circular-shaped coil. This makes it a good candidate for other MRI applications involving coil array techniques.
Optimum signal-to-noise ratio in off-axis integrated cavity output spectroscopy.
Dyroff, Christoph
2011-04-01
The signal-to-noise ratio (SNR) in off-axis integrated cavity output spectroscopy (OA-ICOS) is investigated and compared to direct absorption spectroscopy using multipass absorption cells [tunable diode laser absorption spectroscopy (TDLAS)]. Applying measured noise characteristics of a near-IR tunable diode laser and detector, it is shown that the optimum SNR is not generally reached at the highest effective absorption path length. Simulations are used to determine the parameters for maximized SNR of OA-ICOS.
Estimation of a multivariate normal mean with a bounded signal to noise ratio
Kortbi, Othmane
2012-01-01
For normal canonical models with $X \sim N_p(\theta, \sigma^{2} I_{p})$ and $S^{2} \sim \sigma^{2}\chi^{2}_{k}$ independent, we consider the problem of estimating $\theta$ under scale-invariant squared error loss $\frac{\|d-\theta\|^{2}}{\sigma^{2}}$, when it is known that the signal-to-noise ratio $\frac{\|\theta\|}{\sigma}$ is bounded above by $m$. Risk analysis is achieved by making use of a conditional risk decomposition, and we obtain in particular sufficient conditions for an estimator to dominate either the unbiased estimator $\delta_{UB}(X)=X$, or the maximum likelihood estimator $\delta_{\mathrm{mle}}(X,S^2)$, or both of these benchmark procedures. The given developments bring into play the pivotal role of the boundary Bayes estimator $\delta_{BU}$ associated with a prior on $(\theta,\sigma)$ such that $\theta|\sigma$ is uniformly distributed on the (boundary) sphere of radius $m$ and a non-informative $\frac{1}{\sigma}$ prior measure is placed marginally on $\sigma$. With a series of technical re...
Enhanced signal-to-noise ratios in frog hearing can be achieved through amplitude death
Ahn, Kang-Hun
2013-01-01
In the ear, hair cells transform mechanical stimuli into neuronal signals with great sensitivity, relying on certain active processes. Individual hair cell bundles of non-mammals such as frogs and turtles are known to show spontaneous oscillation. However, hair bundles in vivo must be quiet in the absence of stimuli; otherwise, the signal is drowned in intrinsic noise. Thus, some mechanism must exist to suppress intrinsic noise. Here, through a model study of elastically coupled hair bundles of bullfrog sacculi, we show that a low stimulus threshold and a high signal-to-noise ratio (SNR) can be achieved through the amplitude death phenomenon (the cessation of spontaneous oscillations by coupling). This phenomenon occurs only when the coupled hair bundles have an inhomogeneous distribution, which is likely to be the case in biological systems. We show that the SNR has non-monotonic dependence on the mass of the overlying membrane, and find that the SNR has maximum value in the region of th...
Heterodyne detection with an injection laser. Part 2: Signal-to-noise ratio
Marcuse, D. (AT and T Bell Labs., Holmdel, NJ (USA))
1990-04-01
The authors previously presented a theory of the conversion efficiency of the self-heterodyne laser detector. In this device a light signal is passed into the resonant cavity of an actively oscillating injection laser, causing an electrical signal at the difference frequency between laser and signal to flow through the wire supplying the dc bias to the laser. In this paper they derive an expression for the signal-to-noise ratio of the self-heterodyne laser detector. The authors' result shows that in the limit of ideal operation (that is complete population inversion and no internal losses) the signal-to-noise ratio of the self-heterodyne laser detector reaches one-half of the quantum noise limit. To describe the signal-to-noise ratio in a realistic self-heterodyne laser detector, the authors introduce an excess noise factor and plot its value for a few representative examples. Excess noise is typically on the order of 10 to 20 dB.
Increasing signal-to-noise ratio of marine seismic data: A case study from offshore Korea
Kim, Taeyoun; Jang, Seonghyung
2016-11-01
Subsurface imaging is difficult without removing the multiples intrinsic to most marine seismic data. Choosing the right multiple-suppression method when working with marine data depends on the type of multiples and sometimes involves trial and error. A major portion of the multiple energy in seismic data is related to the large reflectivity of the surface. Surface-related multiple elimination (SRME) is effective for suppressing free-surface-related multiples. Although SRME has some limitations, it is widely used because it requires no assumptions about the subsurface velocities, positions, and reflection coefficients of the reflectors causing the multiples. The common reflection surface (CRS) stacking technique uses CRS reflectors rather than common mid-point (CMP) reflectors. It stacks more traces than conventional stacking methods and increases the signal-to-noise ratio. The purpose of this study is to present a processing workflow for multiple suppression with SRME and Radon filtering, and to increase the signal-to-noise ratio by using CRS stacking on seismic data from the eastern continental margin of Korea. SRME is applied to remove the free-surface multiples, and Radon filtering is then applied to attenuate the interbed multiples. Results obtained using synthetic data and field data show that the combination of SRME and Radon filtering is effective for suppressing free-surface multiples and peg-leg multiples. Applying CRS stacking to seismic data in which the multiples have been eliminated increases the signal-to-noise ratio for the area examined, which is being considered for carbon dioxide capture and storage.
Signal-to-Noise Ratio Analysis of a Phase-Sensitive Voltmeter for Electrical Impedance Tomography.
Murphy, Ethan K; Takhti, Mohammad; Skinner, Joseph; Halter, Ryan J; Odame, Kofi
2017-04-01
In this paper, a thorough analysis, along with mathematical derivations, of the matched filter for a voltmeter used in electrical impedance tomography (EIT) systems is presented. The effect of random noise in the system prior to the matched filter, generated by other components, is considered. Employing the presented equations allows system/circuit designers to find the maximum tolerable noise prior to the matched filter that leads to the target signal-to-noise ratio (SNR) of the voltmeter, without having to over-design internal components. A practical model was developed that should fall within 2 dB and 5 dB of the median SNR measurements of signal amplitude and phase, respectively. To validate these claims, simulation and experimental measurements were performed with an analog-to-digital converter (ADC) followed by a digital matched filter, with the noise of the whole system modeled as input-referred at the ADC input. The input signal was contaminated by a known level of additive white Gaussian noise (AWGN), and the noise level was swept from 3% to 75% of the least significant bit (LSB) of the ADC. Differences between experimental and both simulated and analytical SNR values were less than 0.59 and 0.35 dB for RMS values ≥ 20% of an LSB, and less than 1.45 and 2.58 dB for RMS values < 20% of an LSB, respectively. Overall, this study provides a practical model for circuit designers in EIT, and a more accurate error analysis that was previously missing in the EIT literature.
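The output SNR of a digital matched filter under input-referred AWGN can be checked with a toy simulation. All parameters below (sample count, excitation frequency, amplitude, noise level) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096                                   # samples per measurement (assumed)
cycles = 32                                # whole excitation cycles in the record
t = np.arange(n)
ref = np.sin(2 * np.pi * cycles * t / n)   # matched-filter template
amp = 0.5                                  # signal amplitude, LSB-normalized (assumed)
noise_rms = 0.2                            # input-referred noise RMS at the ADC (assumed)

# Many repeated measurements: project each noisy record onto the template.
trials = 500
x = amp * ref + rng.normal(0.0, noise_rms, (trials, n))
est = x @ ref / (ref @ ref)                # least-squares amplitude estimate per record

snr_db = 20 * np.log10(est.mean() / est.std())
# Theory: matched filtering leaves amplitude variance sigma^2 / (n/2)
theory_db = 20 * np.log10(amp * np.sqrt(n / 2) / noise_rms)
print(round(snr_db, 1), round(theory_db, 1))
```

The simulated SNR tracks the analytical value to within a fraction of a dB, which is the kind of agreement the abstract describes between measurement and model.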
Signal to noise ratio in water balance maps with different resolution
Yan, Ziqi; Gottschalk, Lars; Wang, Jianhua
2016-12-01
What is the best resolution for annual water balance maps that correctly balances the basic spatial signal in observations of precipitation, actual evapotranspiration and runoff across a larger drainage basin against the error in grid-cell estimates, so as to avoid giving a false impression of accuracy? To answer this question, an approach based on a signal to noise ratio is proposed, which allows finding the optimal resolution that maximizes the signal in the map. The approach is demonstrated on gauge data in the Huai River Basin, China. Stochastic interpolation methods were applied to create grid maps of long-term mean values, as well as to estimate variances of the three water balance components, in a range of scales from 5 × 5 km² to 200 × 200 km² grid cells. Interpolation algorithms using covariances of long-term means of data with different spatial support were developed. The optimal resolutions identified by the signal to noise ratio turned out to be very different - 10 × 10, 50 × 50, and 30 × 30 km² for precipitation, actual evapotranspiration, and runoff, respectively. These values are directly linked to the observation network densities. The magnitude of the signal to noise ratio shows similarly strong differences, with values of 34, 3.7, and 5.4, respectively. It gives a direct indication of the reliability of the map, which can be considered satisfactory only for precipitation for the data available in the present study. The critical factors for this magnitude are the parameters characterising the spatial covariance in the data and the network density.
Enhancement of the Signal-to-Noise Ratio in Sonic Logging Waveforms by Seismic Interferometry
Aldawood, Ali
2012-04-01
Sonic logs are essential tools for reliably identifying interval velocities, which, in turn, are used in many seismic processes. One problem that arises while logging is irregularities due to washout zones along the borehole surfaces, which scatter the transmitted energy and hence weaken the signal recorded at the receivers. To alleviate this problem, I have extended the theory of super-virtual refraction interferometry to enhance the signal-to-noise ratio (SNR) of sonic waveforms. Tests on synthetic and real data show noticeable SNR enhancements of refracted P-wave arrivals in the sonic waveforms. The theory of super-virtual interferometric stacking is composed of two redatuming steps followed by a stacking procedure. The first redatuming procedure is of correlation type, where traces are correlated together to obtain virtual traces with the sources datumed to the refractor. The second datuming step is of convolution type, where traces are convolved together to dedatum the sources back to their original positions. The stacking procedure following each step enhances the SNR of the refracted P-wave first arrivals. Datuming with correlation and convolution of traces introduces severe artifacts, denoted as correlation artifacts, in the super-virtual data. To overcome this problem, I replace the datuming-with-correlation step by datuming with deconvolution. Although the former datuming method is more robust, the latter reduces the artifacts significantly. Moreover, deconvolution can be a noise amplifier, which is why a regularization term is utilized, rendering the datuming with deconvolution more stable. Tests of datuming with deconvolution instead of correlation on synthetic and real data examples show a significant reduction of these artifacts, especially when compared with the conventional way of applying the super-virtual refraction interferometry method.
Signal-to-noise-ratio analysis for nonlinear N-ary phase filters.
Miller, Paul C
2007-09-01
The problem of recognizing targets in nonoverlapping clutter using nonlinear N-ary phase filters is addressed. Using mathematical analysis, expressions were derived for an N-ary phase filter and the intensity variance of an optical correlator output. The N-ary phase filter was shown to consist of an infinite sum of harmonic terms whose periodicity was determined by N. For the intensity variance, it was found that under certain conditions the variance was minimized due to a previously undiscovered phase quadrature effect. Comparison showed that optimal real filters produced greater signal-to-noise-ratio values than the continuous phase versions as a consequence of this effect.
Modeling the behavior of signal-to-noise ratio for repeated snapshot imaging
Li, Junhui; Yang, Dongyue; Wu, Guohua; Yin, Longfei; Guo, Hong
2016-01-01
For imaging of a static object by means of sequential repeated independent measurements, a theoretical model of the behavior of the signal-to-noise ratio (SNR) with varying numbers of measurements is developed, based on the information capacity of optical imaging systems. Experimental verification of imaging using a pseudo-thermal light source is implemented, both for the direct average of multiple measurements and for the image reconstructed by second-order fluctuation correlation (SFC), which is closely related to ghost imaging. Successful curve fitting of data measured under different conditions verifies the model.
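Any such model must reproduce the familiar square-root scaling of SNR with the number of independent snapshots for direct averaging. A toy numerical check, with an illustrative object and noise level (both assumed here, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative static "object" and per-snapshot additive noise level.
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 256))
sigma = 1.0

def snr_of_average(m, realizations=100):
    """Empirical SNR after directly averaging m independent noisy snapshots."""
    ratios = []
    for _ in range(realizations):
        snaps = signal + rng.normal(0.0, sigma, (m, signal.size))
        residual = snaps.mean(axis=0) - signal
        ratios.append(signal.std() / residual.std())
    return float(np.mean(ratios))

s1, s16 = snr_of_average(1), snr_of_average(16)
print(s16 / s1)   # independent noise predicts a factor of sqrt(16) = 4
```

The measured ratio comes out close to 4, as expected when the snapshot noises are statistically independent; the SFC reconstruction discussed in the abstract is where departures from this simple law become interesting.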
Calculation of mutual information for nonlinear communication channel at large signal-to-noise ratio
Terekhov, I. S.; Reznichenko, A. V.; Turitsyn, S. K.
2016-10-01
Using the path-integral technique we examine the mutual information for the communication channel modeled by the nonlinear Schrödinger equation with additive Gaussian noise. The nonlinear Schrödinger equation is one of the fundamental models in nonlinear physics, and it has a broad range of applications, including fiber optical communications—the backbone of the internet. At large signal-to-noise ratio we present the mutual information through the path-integral, which is convenient for the perturbative expansion in nonlinearity. In the limit of small noise and small nonlinearity we derive analytically the first nonzero nonlinear correction to the mutual information for the channel.
Generalized FMD Detection for Spectrum Sensing Under Low Signal-to-Noise Ratio
Lin, Feng; Hu, Zhen; Hou, Shujie; Browning, James P; Wicks, Michael C
2012-01-01
Spectrum sensing is a fundamental problem in cognitive radio. We propose a detection algorithm for spectrum sensing in cognitive radio networks based on a function of the covariance matrix. The monotonically increasing property of a function of a matrix involving the trace operation is the cornerstone of this algorithm. The advantage of the proposed algorithm is that it works under extremely low signal-to-noise ratio, lower than -30 dB, with limited sample data. Theoretical analysis of the threshold setting for the algorithm is discussed. A performance comparison between the proposed algorithm and other state-of-the-art methods is provided via simulation on a captured digital television (DTV) signal.
The selection of testing methods for biofuels using the Taguchi signal-to-noise ratio
Irusta, R. (Valladolid Univ. (Spain). Dept. de Ingenieria Quimica Centro Regional para la Promocion de la Calidad en Castilla y Leon (A.E.C.C.), Valladolid (Spain)); Antolin, G.; Velasco, E.; Garcia, J.C. (Valladolid Univ. (Spain). Dept. de Ingenieria Quimica)
1994-01-01
This paper describes a statistical criterion for evaluation and selection of different testing methods for solid biofuels taking into consideration accuracy, precision, sensitivity, reproducibility, repeatability, testing costs and testing time. The signal-to-noise ratio as suggested by Taguchi has been used in a way similar to a traditional method (analysis of variance, ANOVA) used for this purpose. Some simulated examples are described to illustrate the development of the proposed technique. Application to real situations can be made by treating experimental data in a similar way. (Author)
Study of signal-to-noise ratio driven by colored noise
ZHAO Jin
2016-01-01
This paper investigates the signal-to-noise ratio (SNR) driven by colored noise and weak input signals. Based on the Cauchy-Schwarz and Rayleigh-quotient inequalities, an analytical expression for the SNR is developed, and its upper bound is closely related to the Fisher information of the noise. To mimic the colored noise, we adopt a first-order moving-average model and propose the optimal input signal waveform. The stochastic resonance effect in threshold systems is demonstrated for Gaussian-mixture colored noise. The obtained results are of interest for improving nonlinear filter performance by adding noise to a weak signal corrupted by colored noise.
Signal to Noise Ratio Estimations for a Volcanic ASH Detection Lidar. Case Study: The Met Office
Georgoussis, George; Adam, Mariana; Avdikos, George
2016-06-01
In this paper we calculate the signal-to-noise ratio (SNR) of a 3-channel commercial (Raymetics) volcanic ash detection system (LR111-D300), already operating at the Met Office. The methodology for accurate estimation is presented for daytime and nighttime conditions. The results show that SNR values are higher than 10 for ranges up to 13 km in both nighttime and daytime conditions. This compares well with other values presented in the literature and shows that such a system is able to detect volcanic ash over a range of 20 km.
Dehé, Alfons
2017-06-01
After decades of research and more than ten years of successful production in very high volumes, silicon MEMS microphones are mature and unbeatable in form factor and robustness. Audio applications such as video, noise cancellation and speech recognition are key differentiators in smartphones. Microphones with low self-noise enable those functions. Backplate-free microphones reach signal-to-noise ratios above 70 dB(A). This talk will describe the state-of-the-art MEMS technology of Infineon Technologies. An outlook on future technologies, such as the comb-sensor microphone, will be given.
A. Muñoz-Acevedo
2012-06-01
This paper proposes a quiet-zone probing approach that deals with low-dynamic-range quiet-zone acquisitions. Lack of dynamic range is a feature of millimeter and sub-millimeter wavelength technologies. It is a consequence of the gradually smaller power generated by the instrumentation, which follows an f^α law with frequency, where α ≥ 1 depends on the signal source's technology. The proposed approach is based on an optimal data-reduction scenario that yields a maximum signal-to-noise ratio increase for the signal pattern with minimum information loss. After the theoretical formulation, practical applications of the technique are proposed.
The concept of signal-to-noise ratio in the modulation domain and speech intelligibility.
Dubbelboer, Finn; Houtgast, Tammo
2008-12-01
A new concept is proposed that relates to the intelligibility of speech in noise. The concept combines traditional estimations of signal-to-noise ratios (S/N) with elements from the modulation transfer function model, which results in the definition of the signal-to-noise ratio in the modulation domain: the (SN)mod. It is argued that this (SN)mod, quantifying the strength of speech modulations relative to a floor of spurious modulations arising from the speech-noise interaction, is the key factor in relation to speech intelligibility. It is shown that, by using a specific test signal, the strength of these spurious modulations can be measured, allowing an estimation of the (SN)mod for various conditions of additive noise, noise suppression, and amplitude compression. By relating these results to intelligibility data for these same conditions, the relevance of the (SN)mod as the key factor underlying speech intelligibility is clearly illustrated. For instance, it is shown that the commonly observed limited effect of noise suppression on speech intelligibility is correctly "predicted" by the (SN)mod, whereas traditional measures such as the speech transmission index, considering only the changes in the speech modulations, fall short in this respect. It is argued that the (SN)mod may provide a relevant tool in the design of successful noise-suppression systems.
Effects of manipulating the signal-to-noise envelope power ratio on speech intelligibility
Jørgensen, Søren; Decorsière, Remi Julien Blaise; Dau, Torsten
2015-01-01
Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475–1487] suggested a metric for speech intelligibility prediction based on the signal-to-noise envelope power ratio (SNRenv), calculated at the output of a modulation-frequency selective process. In the framework of the speech-based envelope power spectrum model (sEPSM), the SNRenv was demonstrated to account for speech intelligibility data in various conditions with linearly and nonlinearly processed noisy speech, as well as for conditions with stationary and fluctuating interferers. Here, the relation between the SNRenv and speech intelligibility was investigated further by systematically varying the modulation power of either the speech or the noise before mixing the two components, while keeping the overall power ratio of the two components constant. A good correspondence between the data and the corresponding sEPSM predictions...
Pilot Signal Design for Massive MIMO Systems: A Received Signal-To-Noise-Ratio-Based Approach
So, Jungho; Kim, Donggun; Lee, Yuni; Sung, Youngchul
2015-05-01
In this paper, the pilot signal design for massive MIMO systems to maximize the training-based received signal-to-noise ratio (SNR) is considered under two channel models: block Gauss-Markov and block independent and identically distributed (i.i.d.) channel models. First, it is shown that under the block Gauss-Markov channel model, the optimal pilot design problem reduces to a semi-definite programming (SDP) problem, which can be solved numerically by a standard convex optimization tool. Second, under the block i.i.d. channel model, an optimal solution is obtained in closed form. Numerical results show that the proposed method yields noticeably better performance than other existing pilot design methods in terms of received SNR.
Determination of signal-to-noise ratio on the base of information-entropic analysis
Zhanabaev, Z Zh; Kozhagulov, E T; Karibayev, B A
2016-01-01
In this paper we suggest a new algorithm for determining the signal-to-noise ratio (SNR). SNR is a quantitative measure widely used in science and engineering. Generally, methods for determining the SNR rely on an experimentally defined noise power level, or on some conditional noise criterion specified for signal processing. In the present work we describe a method for determining the SNR of chaotic and stochastic signals at unknown power levels of signal and noise. For this purpose we use information, defined as the difference between the unconditional and conditional entropy. Our theoretical results are confirmed by analysis of signals that can be described by nonlinear maps and represented as a superposition of harmonic and stochastic signals.
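The core idea, information as unconditional minus conditional entropy, can be illustrated on a toy additive Gaussian channel, where the mutual information I relates to SNR via SNR = e^(2I) − 1 (in nats). The histogram entropy estimator and all parameters below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def diff_entropy(samples, bins=64):
    """Plug-in histogram estimate of differential entropy, in nats."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    w = np.diff(edges)
    mask = p > 0
    return float(-np.sum(p[mask] * np.log(p[mask]) * w[mask]))

n_samp = 200_000
signal = rng.normal(0.0, 1.0, n_samp)   # signal, unit power (assumed)
noise = rng.normal(0.0, 0.5, n_samp)    # independent noise, power 0.25 (assumed)
y = signal + noise

# information = entropy of the output minus entropy of the noise
info = diff_entropy(y) - diff_entropy(noise)
snr_hat = np.exp(2.0 * info) - 1.0      # Gaussian-channel inversion
print(snr_hat)                          # true SNR here is 1.0 / 0.25 = 4
```

Because both entropies carry similar binning bias, the bias largely cancels in the difference, and the recovered SNR lands near the true value of 4 without the noise power ever being measured directly, which is the spirit of the abstract's approach.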
Xue, Zhenyu; Vlachos, Pavlos P
2014-01-01
In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise-ratio (SNR) strength governs the resulting PIV cross correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations. In addition, the notion of a valid measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct ...
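One widely used correlation-plane SNR metric of this kind is the primary peak ratio: the tallest correlation peak divided by the second tallest. A minimal sketch on an illustrative synthetic plane (the mask radius and peak layout are assumptions for the demo):

```python
import numpy as np

def primary_peak_ratio(corr, mask_radius=3):
    """Tallest correlation peak divided by the second-tallest peak."""
    c = corr.astype(np.float64).copy()
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    p1 = c[iy, ix]
    # mask out the neighborhood of the primary peak before finding the runner-up
    yy, xx = np.ogrid[:c.shape[0], :c.shape[1]]
    c[(yy - iy) ** 2 + (xx - ix) ** 2 <= mask_radius ** 2] = -np.inf
    return float(p1 / c.max())

# Synthetic correlation plane: a true displacement peak plus a weaker spurious one
y, x = np.mgrid[:64, :64]
plane = np.exp(-((y - 20) ** 2 + (x - 20) ** 2) / 2.0)          # amplitude 1.0
plane += 0.25 * np.exp(-((y - 50) ** 2 + (x - 45) ** 2) / 2.0)  # amplitude 0.25
print(round(primary_peak_ratio(plane), 2))   # ≈ 4.0
```

A higher ratio indicates a cleaner correlation, and mapping such metrics to measurement uncertainty is the step the paper develops.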
Castello, Marco [Nanobiophotonics, Nanophysics, Istituto Italiano di Tecnologia, Via Morego 30, Genoa, 16163 (Italy); DIBRIS, University of Genoa, Via Opera Pia 13, Genoa 16145 (Italy); Diaspro, Alberto [Nanobiophotonics, Nanophysics, Istituto Italiano di Tecnologia, Via Morego 30, Genoa, 16163 (Italy); Nikon Imaging Center, Via Morego 30, Genoa 16163 (Italy); Vicidomini, Giuseppe, E-mail: giuseppe.vicidomini@iit.it [Nanobiophotonics, Nanophysics, Istituto Italiano di Tecnologia, Via Morego 30, Genoa, 16163 (Italy)
2014-12-08
Time-gated detection, namely collecting only the fluorescence photons that arrive after a time delay from the excitation events, reduces the complexity, cost, and illumination intensity of a stimulated emission depletion (STED) microscope. In the gated continuous-wave (CW) STED implementation, the spatial resolution improves with increased time delay, but the signal-to-noise ratio (SNR) is reduced. Thus, in sub-optimal conditions, such as a low photon-budget regime, the SNR reduction can cancel out the expected gain in resolution. Here, we propose a method which does not discard photons, but instead collects all the photons in different time gates and recombines them through multi-image deconvolution. Our results, obtained on simulated and experimental data, show that the SNR of the restored image improves relative to the gated image, thereby improving the effective resolution.
Signal-to-noise ratio of Geiger-mode avalanche photodiode single-photon counting detectors
Kolb, Kimberly
2014-08-01
Geiger-mode avalanche photodiodes (GM-APDs) use the avalanche mechanism of semiconductors to amplify signals in individual pixels. With proper thresholding, a pixel will be either "on" (avalanching) or "off." This discrete detection scheme eliminates read noise, which makes these devices capable of counting single photons. Using these detectors for imaging applications requires a well-developed and comprehensive expression for the expected signal-to-noise ratio (SNR). This paper derives the expected SNR of a GM-APD detector in gated operation based on gate length, number of samples, signal flux, dark count rate, photon detection efficiency, and afterpulsing probability. To verify the theoretical results, carrier-level Monte Carlo simulation results are compared to the derived equations and found to be in good agreement.
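A stripped-down Monte Carlo of gated Geiger-mode counting conveys the shape of such a derivation (afterpulsing is ignored here, and all rates are assumed for illustration): each gate fires with probability 1 − e^(−λ), the flux estimate inverts that saturation, and the SNR follows from the spread over repeated experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

gates = 10_000    # gate samples per experiment (assumed)
pde = 0.4         # photon detection efficiency (assumed)
sig_ph = 0.1      # mean signal photons per gate (assumed)
dark = 0.02       # mean dark counts per gate (assumed)

lam = sig_ph * pde + dark        # mean detected events per gate
p_fire = 1.0 - np.exp(-lam)      # a gate is "on" if at least one avalanche occurs

def mc_snr(reps=2000):
    """SNR of the signal estimate over many repeated gated experiments."""
    fired = rng.binomial(gates, p_fire, reps)
    lam_hat = -np.log(1.0 - fired / gates)   # invert the 1 - exp(-lam) saturation
    sig_hat = lam_hat - dark                 # remove the dark-count floor
    return float(sig_hat.mean() / sig_hat.std())

print(round(mc_snr(), 1))
```

With these numbers the delta-method prediction is about 16, and the simulation agrees; the paper's closed-form expression additionally folds in gate length and afterpulsing probability.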
Reddy, V R; Reddy, T G; Reddy, P Y; Reddy, K R
2003-01-01
An AC modulation technique is described to convert stochastic signal variations into an amplitude variation and its retrieval through Fourier analysis. It is shown that this AC detection of signals of stochastic processes, when processed through auto- and cross-correlation techniques, improves the signal-to-noise ratio; the correlation techniques serve a similar purpose of frequency and phase filtering as that of phase-sensitive detection. A few model calculations applied to nuclear spectroscopy measurements such as angular correlations, Mossbauer spectroscopy and pulse-height analysis reveal considerable improvement in the sensitivity of signal detection. Experimental implementation of the technique is presented in terms of amplitude variations of harmonics representing the derivatives of normal spectra. Improved detection sensitivity to spectral variations is shown to be significant. These correlation techniques are general and can be made applicable to all fields of particle counting where measurements ar...
Influence of exchange on signal-to-noise ratio in [CoX/Pt]4 media
Zhao, Zhen; Li, Jiangnan; Wang, Longze; Wei, Dan
2017-05-01
In longitudinal hard disk drives, the medium signal-to-noise ratio (SNR) is higher with better grain segregation, i.e., lower inter-grain exchange. In current energy-assisted magnetic recording systems, multilayer perpendicular media are utilized; it is therefore important to study the influence of grain segregation on SNR, as well as the relevant percolation phenomenon, to guide recording media design. In this study, micromagnetic recording models of microwave-assisted magnetic recording (MAMR) are built to calculate the SNR and find optimized [CoX/Pt]4 media parameters, such as the inter-grain exchange Agb and the anisotropy orientation distribution αθ, for different field generation layer (FGL) saturation in the spin torque oscillator (STO). The constrained relationship between Agb and αθ in MAMR has been estimated, and the medium SNR is optimized in perpendicular [CoX/Pt]4 media with a proper, but not the lowest, exchange.
M. M. Kazi,
2011-01-01
In a fingerprint recognition system, the main goal of the fingerprint enhancement algorithm is to reduce the noise present in the image. Several factors affect the quality of the acquired fingerprint image, viz. the presence of scars, variations of the pressure between the finger and the acquisition sensor, worn artifacts, and environmental conditions during the acquisition process. An input fingerprint image is thereby transformed by the enhancement algorithm to reduce the noise present in the image. This paper shows the work performed on a new database of fingerprint images acquired with a 500 dpi optical sensor. Three different enhancement algorithms are applied to the images, and the qualities of the reconstructed images are compared using the mean-square error and the peak signal-to-noise ratio.
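The two comparison metrics named at the end are standard. A minimal sketch of both, assuming 8-bit images (peak value 255):

```python
import numpy as np

def mse(a, b):
    """Mean-square error between two images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(a, b)
    return float('inf') if m == 0 else float(10.0 * np.log10(peak * peak / m))

# Toy check: a uniform offset of 16 grey levels gives MSE = 16^2 = 256
clean = np.full((8, 8), 128, dtype=np.uint8)
noisy = clean + 16
print(mse(clean, noisy), round(psnr(clean, noisy), 2))   # 256.0, ~24.05 dB
```

A better enhancement algorithm drives the MSE down and the PSNR up relative to a chosen reference image.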
Telenkov, Sergey A; Alwi, Rudolf; Mandelis, Andreas
2013-10-01
Photoacoustic (PA) imaging of biological tissues using laser diodes instead of conventional Q-switched pulsed systems provides an attractive alternative for biomedical applications. However, the relatively low energy of laser diodes operating in the pulsed regime results in the generation of very weak acoustic waves and a low signal-to-noise ratio (SNR) of the detected signals. This problem can be addressed if the optical excitation is modulated using custom waveforms and correlation processing is employed to increase the SNR through signal compression. This work investigates the effect of the parameters of the modulation waveform on the resulting correlation signal and offers a practical means of optimizing PA signal detection. The advantage of coherent signal averaging is demonstrated using theoretical analysis and a numerical model of PA generation. It is shown that an additional 5-10 dB of SNR can be gained through waveform engineering by adjusting the parameters and profile of the optical modulation waveforms.
Signal-to-noise ratio application to seismic marker analysis and fracture detection
Xu Hui-Qun; Gui Zhi-Xian
2014-01-01
Seismic data with high signal-to-noise ratios (SNRs) are useful in reservoir exploration. To obtain high-SNR seismic data, significant effort is required to achieve noise attenuation in seismic data processing, which is costly in material, human, and financial resources. We introduce a method for improving the SNR of seismic data. The SNR is calculated using a frequency-domain method. Furthermore, we optimize and discuss the critical parameters and the calculation procedure. We applied the proposed method to real data and found that the SNR is high at the seismic marker and low in the fracture zone. Consequently, this can be used to extract detailed information about fracture zones that are inferred by structural analysis but not observed in conventional seismic data.
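The abstract does not spell out the frequency-domain SNR computation; one common variant estimates a flat noise floor from the spectrum outside an assumed signal band and compares band-limited powers. A hedged sketch with illustrative parameters (band, sample rate, and trace are all assumptions):

```python
import numpy as np

def snr_freq_db(trace, fs, band):
    """SNR estimate: in-band power over a flat noise floor fitted out of band."""
    f = np.fft.rfftfreq(trace.size, 1.0 / fs)
    p = np.abs(np.fft.rfft(trace)) ** 2
    in_band = (f >= band[0]) & (f <= band[1])
    noise_floor = p[~in_band].mean()                 # flat-noise assumption
    noise_power = noise_floor * in_band.sum()
    sig_power = max(p[in_band].sum() - noise_power, 1e-12)
    return float(10.0 * np.log10(sig_power / noise_power))

# Toy trace: a 30 Hz "reflection" plus white noise (illustrative values)
rng = np.random.default_rng(4)
fs, n = 500.0, 2048
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 30.0 * t)
print(snr_freq_db(clean + rng.normal(0.0, 0.1, n), fs, (25.0, 35.0)))
```

Computed trace by trace or window by window, such a map of SNR values is what lets high-SNR markers be distinguished from low-SNR fracture zones.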
B. Heese
2010-12-01
The potential of a new generation of ceilometer instruments for aerosol monitoring has been studied in the Ceilometer Lidar Comparison (CLIC) study. The ceilometer used was developed by Jenoptik, Germany, and is designed to find both thin cirrus clouds at tropopause level and aerosol layers at close ranges during day and night-time. The comparison study was performed to determine up to which altitude the ceilometers are capable of delivering particle backscatter coefficient profiles. For this, the derived ceilometer profiles are compared to simultaneously measured lidar profiles at the same wavelength. The lidar used for the comparison was the multi-wavelength Raman lidar Polly^{XT}. To demonstrate the capabilities and limits of ceilometers for the derivation of particle backscatter coefficient profiles, two examples of the comparison results are shown: a daytime case with high background noise and a less noisy night-time case. In both cases the ceilometer profiles compare well with the lidar profiles in atmospheric structures like aerosol layers or the boundary-layer top height. However, the determination of the correct magnitude of the particle backscatter coefficient requires a calibration of the ceilometer data with an independent measurement of the aerosol optical depth by a sun photometer. To characterize the ceilometer's signal performance with increasing altitude, a comprehensive signal-to-noise ratio study was performed. During daytime the signal-to-noise ratio is higher than 1 up to 4–5 km, depending on the aerosol content. In our night-time case the SNR is higher than 1 even up to 8.5 km, so that aerosol layers in the upper troposphere were also detected by the ceilometer.
Singh, Gurmeet; Nguyen, Thanh; Kressler, Bryan; Spincemaille, Pascal; Raj, Ashish; Zabih, Ramin; Wang, Yi
2006-01-01
High resolution 3D coronary artery MR angiography is time-consuming and can benefit from accelerated data acquisition provided by parallel imaging techniques without sacrificing spatial resolution. Currently, popular maximum likelihood based parallel imaging reconstruction techniques such as the SENSE algorithm offer this advantage at the cost of reduced signal-to-noise ratio (SNR). Maximum a posteriori (MAP) reconstruction techniques that incorporate globally smooth priors have been developed to recover this SNR loss, but they tend to blur sharp edges in the target image. The objective of this study is to demonstrate the feasibility of employing edge-preserving Markov random field priors in a MAP reconstruction framework, which can be solved efficiently using a graph cuts based optimization algorithm. The preliminary human study shows that our reconstruction provides significantly better SNR than the SENSE reconstruction performed by a commercially available scanner for navigator gated steady state free precession 3D coronary magnetic resonance angiography images (n = 4).
Intrinsic low pass filtering improves signal-to-noise ratio in critical-point flexure biosensors
Jain, Ankit; Alam, Muhammad Ashraful, E-mail: alam@purdue.edu [School of ECE, Purdue University, West Lafayette, Indiana 47906 (United States)
2014-08-25
A flexure biosensor consists of a suspended beam and a fixed bottom electrode. The adsorption of target biomolecules on the beam changes its stiffness and results in a change of the beam's deflection. It is now well established that the sensitivity of the sensor is maximized close to the pull-in instability point, where the effective stiffness of the beam vanishes. The question “Do the signal-to-noise ratio (SNR) and the limit-of-detection (LOD) also improve close to the instability point?”, however, remains unanswered. In this article, we systematically analyze the noise response to evaluate the SNR and establish the LOD of critical-point flexure sensors. We find that a flexure sensor acts like an effective low-pass filter close to the instability point due to its relatively small resonance frequency, and rejects high-frequency noise, leading to improved SNR and LOD. We believe that our conclusions should establish the uniqueness and the technological relevance of critical-point biosensors.
Sellar, R. Glenn; Deen, Robert G.; Huffman, William C.; Willson, Reginald G.
2016-09-01
Stereophotogrammetry typically employs a pair of cameras, or a single moving camera, to acquire pairs of images from different camera positions in order to create a three-dimensional `range map' of the area being observed. Applications of this technique for building three-dimensional shape models include aerial surveying, remote sensing, machine vision, and robotics. Factors that would be expected to affect the quality of the range maps include the projection function (distortion) of the lenses and the contrast (modulation) and signal-to-noise ratio (SNR) of the acquired image pairs. Basic models of the precision with which the range can be measured assume a pinhole-camera model of the geometry, i.e., that the lenses provide perspective projection with zero distortion. Very-wide-angle or `fisheye' lenses, however (e.g., those used by robotic vehicles), typically exhibit projection functions that differ significantly from this assumption. To predict the stereophotogrammetric range precision for such applications, we extend the model to the case of an equidistant lens projection function suitable for a very-wide-angle lens. To predict the effects of contrast and SNR on range precision, we perform numerical simulations using stereo image pairs acquired by a stereo camera pair on NASA's Mars rover Curiosity. Contrast is degraded and noise is added to these data in a controlled fashion, and the effects on the quality of the resulting range maps are assessed.
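The gap between the pinhole assumption and an equidistant fisheye projection is easy to quantify; the focal length and field angle below are illustrative values, not parameters from the study:

```python
import numpy as np

f = 8.0                       # focal length in mm (illustrative value)
theta = np.radians(60.0)      # off-axis field angle typical of a fisheye lens

r_pinhole = f * np.tan(theta)     # perspective (pinhole) projection: r = f*tan(theta)
r_equidistant = f * theta         # equidistant projection: r = f*theta
```

At small angles the two projections coincide, which is why the pinhole-based precision model only breaks down for very-wide-angle optics.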
Modeling high signal-to-noise ratio in a novel silicon MEMS microphone with comb readout
Manz, Johannes; Dehe, Alfons; Schrag, Gabriele
2017-05-01
Strong competition within the consumer market urges companies to constantly improve the quality of their devices. For silicon microphones, excellent sound quality is the key feature in this respect, which means that improving the signal-to-noise ratio (SNR), strongly correlated with the sound quality, is a major task to fulfill the growing demands of the market. MEMS microphones with conventional capacitive readout suffer from noise caused by viscous damping losses arising from perforations in the backplate [1]. Therefore, we conceived a novel microphone design based on capacitive readout via comb structures, which is expected to show reduced fluidic damping compared to conventional MEMS microphones. In order to evaluate the potential of the proposed design, we developed a fully energy-coupled, modular system-level model taking into account the mechanical motion, the slide-film damping between the comb fingers, the acoustic impact of the package, and the capacitive readout. All submodels are physically based and scale with all relevant design parameters. We carried out noise analyses and, owing to the modular and physics-based character of the model, were able to discriminate the noise contributions of different parts of the microphone. This enables us to identify design variants of this concept which exhibit an SNR of up to 73 dB(A). This is superior to conventional MEMS microphones and at least comparable to high-performance variants of the current state-of-the-art MEMS microphones [2].
The ultimate signal-to-noise ratio in realistic body models.
Guérin, Bastien; Villena, Jorge F; Polimeridis, Athanasios G; Adalsteinsson, Elfar; Daniel, Luca; White, Jacob K; Wald, Lawrence L
2016-12-04
We compute the ultimate signal-to-noise ratio (uSNR) and ultimate G-factor (uGF) in a realistic head model from 0.5 to 21 Tesla. We excite the head model and a uniform sphere with a large number of electric and magnetic dipoles placed at 3 cm from the object. The resulting electromagnetic fields are computed using an ultrafast volume integral solver and are used as basis functions for the uSNR and uGF computations. Our generalized uSNR calculation shows good convergence in the sphere and the head and is in close agreement with the dyadic Green's function approach in the uniform sphere. In both models, the uSNR versus B0 trend was linear at shallow depths and supralinear at deeper locations. At equivalent positions, the rate of increase of the uSNR with B0 was greater in the sphere than in the head model. The uGFs were lower in the realistic head than in the sphere for acceleration in the anterior-posterior direction, but similar for the left-right direction. The uSNR and uGFs are computable in nonuniform body models and provide fundamental performance limits for human imaging with close-fitting MRI array coils. Magn Reson Med, 2016. © 2016 International Society for Magnetic Resonance in Medicine.
Estimation of sufficient signal to noise ratio for texture analysis of magnetic resonance images
Savio, Sami; Harrison, Lara; Ryymin, Pertti; Dastidar, Prasun; Soimakallio, Seppo; Eskola, Hannu
2011-03-01
In this study, we examined the effect of background noise on the texture analysis of muscle, bone marrow and fat tissues in 1.5 T magnetic resonance (MR) images using different statistical methods. Variable levels of noise were first added to 3-mm thick T2-weighted image slices of volunteer subjects to simulate several signal-to-noise ratio (SNR) levels. For each original and simulated image, the values of 264 texture parameters were calculated using MaZda, a texture analysis toolkit. We also determined Fisher coefficients based on the texture parameter values in order to enable high discrimination between different tissues. Linear discriminant analysis (LDA) and two different nearest neighbour (NN) methods were then applied to the texture parameters with the highest Fisher coefficient values. Several training and test sets were used to approximate the variation in the classification results. All the above-mentioned methods had the same classification accuracy, which in turn depended on the image SNR. We conclude that these tissues can be detected by texture analysis methods with sufficient accuracy (90%), especially if the SNR is at least 30-40 dB, even though the separation of different muscles remains a very challenging task.
Fast identification of digital amplitude modulation level at low signal-to-noise ratio
WEI Xiao-wei; CAO Zhi-gang
2006-01-01
In order to rapidly and automatically identify the modulation level of digital amplitude modulated signals at low signal-to-noise ratio (SNR), a method of identifying the modulation levels of M-ary quadrature amplitude modulation (M-QAM) and M-ary amplitude shift keying (M-ASK) is proposed. In this method, a wavelet transform with the optimal scale is used to identify the modulation levels of M-QAM and M-ASK signals. The performance of this method was investigated through simulations. Simulation results show that when the SNR is not lower than -4 dB, the percentage of correct identification of M-QAM is higher than 93%, and when the SNR is not lower than -10 dB, the percentage of correct identification of M-ASK is higher than 90%, using only 100 observed symbols. This shows that the method can rapidly achieve good performance at low SNR.
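As a minimal sketch of the simulation setup (not of the wavelet classifier itself), white Gaussian noise can be added to a block of 100 observed 4-ASK symbols at a prescribed SNR in dB; the symbol levels and seed are illustrative:

```python
import numpy as np

def add_awgn(signal, snr_db, rng):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    p_signal = np.mean(np.abs(signal) ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

rng = np.random.default_rng(0)
# 100 observed 4-ASK symbols (levels -3, -1, +1, +3), matching the symbol
# count used in the paper's simulations; the levels themselves are illustrative
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=100)
noisy = add_awgn(symbols, snr_db=-10.0, rng=rng)
```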
Threshold value for acceptable video quality using signal-to-noise ratio
Vaahteranoksa, Mikko; Vuori, Tero
2007-01-01
Noise decreases video quality considerably, particularly in dark environments. In a video clip, noise can be seen as an unwanted spatial or temporal variation in pixel values. The objective of the study was to find a threshold value for the signal-to-noise ratio (SNR) at which the video quality is perceived to be good enough. Different illumination levels for video shooting were studied using both subjective and objective (SNR measurement) methodologies. Five camcorders were selected to cover different sensor technologies, recording formats and price categories. The test material for the subjective test was recorded in an environment simulator, where it was possible to adjust lighting levels. A double staircase test was used as the subjective test method. The test videos for objective measurements were recorded using an ISO 15739 based environment. A correlation was found between the objective and subjective measurements, i.e. between measured SNR and perceived quality. Good enough video quality was reached between SNR values of 15.3 dB and 17.2 dB. With 3CCD and Super HAD CCD technologies, video quality was brighter and less noisy, and the SNR was better in low light conditions compared to the quality with conventional CCDs.
Mauger, Stefan J; Dawson, Pam W; Hersbach, Adam A
2012-01-01
Noise reduction in cochlear implants has achieved significant speech perception improvements through spectral subtraction and signal-to-noise ratio based noise reduction techniques. Current methods use gain functions derived through mathematical optimization or motivated by psychoacoustic experiments with normal-hearing listeners. Although these gain functions have been able to improve speech perception, recent studies have indicated that they are not optimal for cochlear implant noise reduction. This study systematically investigates cochlear implant recipients' speech perception and listening preference of noise reduction with a range of gain functions. Results suggest an advantageous gain function and show that gain functions currently used for noise reduction are not optimal for cochlear implant recipients. Using the cochlear implant optimized gain function, a 27% improvement over the current advanced combination encoder (ACE) stimulation strategy in speech-weighted noise and a 7% improvement over current noise reduction strategies were observed in babble noise conditions. The optimized gain function was also most preferred by cochlear implant recipients. The CI-specific gain function derived from this study can be easily incorporated into existing noise reduction strategies to further improve listening performance for CI recipients in challenging environments.
X-ray spectral optimization for mammography applications using signal-to-noise ratio
Tucker, Jonathan Ernest
2000-07-01
The hypothesis that optimum exposure technique factors for mammography can be computed using uncorrected x-ray spectra measured with an inexpensive semiconductor detector is confirmed. A parametric model is developed, based upon the minimum signal-to-noise ratio required to perceive an object against background, to predict optimum exposure technique factors. Using published molybdenum- and rhodium-target x-ray spectra, the model predicts that aluminum-filtered molybdenum and rhodium spectra are optimum. The model is subsequently used to predict optimum exposure technique factors using uncorrected x-ray spectra from a GE Senographe DMR mammography unit measured with a cadmium zinc telluride detector and multichannel analyzer. The computed optimum exposure technique factors using uncorrected measured spectra and published spectra are comparable. The model is validated using the uncorrected measured spectra and a phantom containing objects mimicking microcalcifications and fibrous tissue structures. Entrance skin exposure and breast dose for aluminum-filtered spectra are well below those produced using currently popular k-edge filtered spectra. Aluminum-filtered spectra should be considered useful because (1) structures associated with breast cancer can be successfully imaged, and (2) the patient receives a greatly reduced dose.
Downhole microseismic monitoring for low signal-to-noise ratio events
Zhou, Hang; Zhang, Wei; Zhang, Jie
2016-10-01
Microseismic monitoring plays an important role in the process of hydraulic fracturing for shale gas/oil production. The accuracy of event location is an essential issue in microseismic monitoring. Data obtained from a downhole monitoring system usually show a higher signal-to-noise ratio (SNR) than data recorded at the surface. For small microseismic events, however, P waves recorded by a downhole array may be very weak, while S waves are generally dominant and strong. Numerical experiments suggest that inverting S-wave arrival times alone is not sufficient to constrain event locations. In this study, we perform extensive location tests with various noise effects using a grid search method that matches the travel time data of the S wave across a recording array. We conclude that fitting S-wave travel time data along with at least one P-wave travel time of the same event can significantly improve location accuracy. In practice, picking S-wave arrival time data and at least one P-wave arrival is possible for many small events. We demonstrate that fitting the combination of the travel time data is a robust approach, which can help increase the number of microseismic events that can be located accurately during hydraulic fracturing.
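The travel-time fitting described above can be sketched as a toy grid search; the geometry, velocities, and grid are invented for illustration, and a homogeneous velocity model stands in for whatever model the authors used. With S picks on every receiver plus a single P pick, the combined misfit has a unique zero at the true location:

```python
import numpy as np
from itertools import product

# Hypothetical 2D setup: a vertical downhole array at x = 0 m in a
# homogeneous medium (all values illustrative); Vp/Vs ~ 1.73
VP, VS = 3464.0, 2000.0
receivers = [(0.0, z) for z in range(1000, 2001, 200)]

def travel_times(src, vel):
    return np.array([np.hypot(src[0] - rx, src[1] - rz) / vel
                     for rx, rz in receivers])

true_src = (500.0, 1500.0)
obs_s = travel_times(true_src, VS)       # S picks on every receiver
obs_p = travel_times(true_src, VP)[:1]   # a single P pick (first receiver)

def misfit(cand):
    pred = np.concatenate([travel_times(cand, VS), travel_times(cand, VP)[:1]])
    obs = np.concatenate([obs_s, obs_p])
    t0 = np.mean(obs - pred)             # origin time from the mean residual
    return np.sum((obs - pred - t0) ** 2)

grid = list(product(np.arange(0.0, 1001.0, 100.0),
                    np.arange(1000.0, 2001.0, 100.0)))
best = min(grid, key=misfit)
```

Because the unknown origin time absorbs any constant residual, an S-only search is poorly constrained; the single P time breaks that trade-off, mirroring the paper's conclusion.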
Pedersen, Anders Tegtmeier; Foroughi Abari, Farzad; Mann, Jakob;
2014-01-01
A new direction sensing continuous-wave Doppler lidar based on an image-reject homodyne receiver has recently been demonstrated at DTU Wind Energy, Technical University of Denmark. In this contribution we analyse the signal-to-noise ratio resulting from two different data processing methods both leading to the direction sensing capability. It is found that using the auto spectrum of the complex signal to determine the wind speed leads to a signal-to-noise ratio equivalent to that of a standard self-heterodyne receiver. Using the imaginary part of the cross spectrum to estimate the Doppler shift has the benefit of a zero-mean background spectrum, but comes at the expense of a decrease in the signal-to-noise ratio by a factor of √2.
Tegtmeier Pedersen, A.; Abari, C. F.; Mann, J.; Mikkelsen, T.
2014-06-01
A new direction sensing continuous-wave Doppler lidar based on an image-reject homodyne receiver has recently been demonstrated at DTU Wind Energy, Technical University of Denmark. In this contribution we analyse the signal-to-noise ratio resulting from two different data processing methods both leading to the direction sensing capability. It is found that using the auto spectrum of the complex signal to determine the wind speed leads to a signal-to-noise ratio equivalent to that of a standard self-heterodyne receiver. Using the imaginary part of the cross spectrum to estimate the Doppler shift has the benefit of a zero-mean background spectrum, but comes at the expense of a decrease in the signal-to noise ratio by a factor of √2.
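Both processing routes can be mimicked on a synthetic I/Q record; the sample rate, Doppler shift, and noise level are invented, and the sketch only demonstrates that each estimator recovers a signed Doppler frequency, not the √2 SNR penalty itself:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 1000.0, 1000               # sample rate (Hz) and record length (illustrative)
f_dop = -50.0                      # sign of the shift encodes the flow direction
t = np.arange(n) / fs
i_ch = np.cos(2 * np.pi * f_dop * t) + 0.5 * rng.normal(size=n)   # in-phase
q_ch = np.sin(2 * np.pi * f_dop * t) + 0.5 * rng.normal(size=n)   # quadrature

freqs = np.fft.fftfreq(n, 1 / fs)

# Route 1: auto spectrum of the complex signal I + jQ
auto = np.abs(np.fft.fft(i_ch + 1j * q_ch)) ** 2
f_auto = freqs[np.argmax(auto)]

# Route 2: imaginary part of the cross spectrum of the I and Q channels
# (zero-mean background, at a sqrt(2) SNR cost per the abstract)
cross = np.fft.fft(i_ch) * np.conj(np.fft.fft(q_ch))
f_cross = freqs[np.argmax(np.imag(cross))]
```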
Swayze, G.A.; Clark, R.N.; Goetz, A.F.H.; Chrien, T.H.; Gorelick, N.S.
2003-01-01
Estimates of spectrometer band pass, sampling interval, and signal-to-noise ratio required for identification of pure minerals and plants were derived using reflectance spectra convolved to AVIRIS, HYDICE, MIVIS, VIMS, and other imaging spectrometers. For each spectral simulation, various levels of random noise were added to the reflectance spectra after convolution, and then each was analyzed with the Tetracorder spectral identification algorithm [Clark et al., 2003]. The outcome of each identification attempt was tabulated to provide an estimate of the signal-to-noise ratio at which a given percentage of the noisy spectra were identified correctly. Results show that spectral identification is most sensitive to the signal-to-noise ratio at narrow sampling interval values, but is more sensitive to the sampling interval itself at broad sampling interval values because of spectral aliasing, a condition in which absorption features of different materials can resemble one another. The band pass is less critical to spectral identification than the sampling interval or signal-to-noise ratio because broadening the band pass does not induce spectral aliasing. These conclusions are empirically corroborated by analysis of mineral maps of AVIRIS data collected at Cuprite, Nevada, between 1990 and 1995, a period during which the sensor signal-to-noise ratio increased up to sixfold. There are values of spectrometer sampling and band pass beyond which spectral identification of materials will require an abrupt increase in sensor signal-to-noise ratio due to the effects of spectral aliasing. Factors that control this threshold are the uniqueness of a material's diagnostic absorptions in terms of shape and wavelength isolation, and the spectral diversity of the materials found in nature and in the spectral library used for comparison. Array spectrometers provide the best data for identification when they critically sample spectra. The sampling interval should not be broadened to
Effects of the physiological parameters on the signal-to-noise ratio of single myoelectric channel
Zhang YT
2007-08-01
Background: An important measure of the performance of a myoelectric (ME) control system for powered artificial limbs is the signal-to-noise ratio (SNR) at the output of the ME channel. However, few studies have examined the neuromuscular interactive effects on the SNR at the ME control channel output. In order to obtain a comprehensive understanding of the relationship between the physiology of individual motor units and ME control performance, this study investigates the effects of physiological factors on the SNR of a single ME channel by an analytical and simulation approach, where the SNR is defined as the ratio of the mean squared value of the estimate at the channel output to the variance of the estimate. Methods: Mathematical models are formulated based on three fundamental elements: a motoneuron firing mechanism, a motor unit action potential (MUAP) module, and a signal processor. Myoelectric signals of a motor unit are synthesized with different physiological parameters, and the corresponding SNR of the single ME channel is numerically calculated. The effects of multiple physiological factors on the SNR are investigated, including properties of the motoneuron, MUAP waveform, recruitment order, and firing pattern. Results: The results of the mathematical model, supported by simulation, indicate that the SNR of a single ME channel is associated with the voluntary contraction level. We show that a model-based approach can provide insight into the key factors and bioprocesses in ME control. The results of this modelling work can potentially be used to improve ME control performance and to train amputees with powered prostheses. Conclusion: The SNR of a single ME channel depends on force level and on neuronal and muscular properties. The theoretical model provides guidance for enhancing the SNR of an ME channel by controlling physiological variables or the conscious contraction level.
Garonne River monitoring from Signal-to-Noise Ratio data collected by a single geodetic receiver
Roussel, Nicolas; Frappart, Frédéric; Darrozes, José; Ramillien, Guillaume; Bonneton, Philippe; Bonneton, Natalie; Detandt, Guillaume; Roques, Manon; Orseau, Thomas
2016-04-01
GNSS-Reflectometry (GNSS-R) altimetry has demonstrated a strong potential for water level monitoring over the last decades. The Interference Pattern Technique (IPT), based on the analysis of the Signal-to-Noise Ratio (SNR) estimated by a GNSS receiver, presents the main advantage of being applicable everywhere using a single geodetic antenna and a classical GNSS receiver. This technique has already been tested in various configurations of acquisition of surface-reflected GNSS signals with an accuracy of a few centimeters. Nevertheless, the classical SNR analysis method used to estimate the variations of the reflecting surface height h(t) has a limited domain of validity because the variation rate dh/dt(t) is assumed to be negligible. In [1], the authors solved this problem with a "dynamic SNR method" taking the dynamics of the surface into account to jointly estimate h(t) and dh/dt(t) over areas characterized by high tide amplitudes. While the performance of this dynamic SNR method is well established for ocean monitoring [1], it had not been validated over continental areas (i.e., river monitoring). We carried out a field study over 3 days in August and September 2015, using a GNSS antenna to measure the water level variations of the Garonne River (France) at Podensac, located 140 km upstream of the estuary mouth. At this site, the semi-diurnal tide amplitude reaches ~5 m. The antenna was located ~10 m above the water surface, and reflections of the GNSS electromagnetic waves on the Garonne River occur up to 140 m from the antenna. Both the classical and the dynamic SNR methods are tested and their results compared. [1] N. Roussel, G. Ramillien, F. Frappart, J. Darrozes, A. Gay, R. Biancale, N. Striebig, V. Hanquiez, X. Bertin, D. Allain: "Sea level monitoring and sea state estimate using a single geodetic receiver", Remote Sensing of Environment 171 (2015) 261-277.
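In the classical (static) analysis, the SNR oscillates as approximately cos(4πh/λ · sin ε) with satellite elevation ε, so the reflector height h follows from the dominant oscillation frequency. A noise-free sketch with an illustrative antenna height:

```python
import numpy as np

lam = 0.1903                  # GPS L1 wavelength in metres
h_true = 10.0                 # antenna height above the water (illustrative)

sin_e = np.linspace(0.1, 0.5, 2000)              # uniform grid in sin(elevation)
snr = np.cos(4 * np.pi * h_true / lam * sin_e)   # interference oscillation

# The oscillation frequency is 2h/lambda cycles per unit sin(elevation),
# so a spectral peak search inverts directly for the reflector height h.
spec = np.abs(np.fft.rfft(snr - snr.mean()))
freqs = np.fft.rfftfreq(sin_e.size, d=sin_e[1] - sin_e[0])
h_est = freqs[np.argmax(spec)] * lam / 2
```

The dynamic method adds dh/dt as a second unknown, which this static sketch deliberately omits.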
B. Heese
2010-08-01
The potential of a new generation of ceilometer instruments for aerosol monitoring has been studied in the Ceilometer-Lidar Inter-Comparison (CLIC) study. The ceilometer is of type CHM15k from Jenoptik, Germany, which uses a solid state laser at a wavelength of 1064 nm and an avalanche photodiode for photon counting detection. The German Meteorological Service is in the process of setting up a ceilometer network for aerosol monitoring in Germany. The intercomparison study was performed to determine whether the ceilometers are capable of delivering quality-assured particle backscatter coefficient profiles. For this, the derived ceilometer profiles were compared to simultaneously measured lidar profiles at the same wavelength. The lidar used for this intercomparison was IfT's multi-wavelength Raman lidar Polly^{XT}. During the EARLINET lidar intercomparison campaign EARLI09 in Leipzig, Germany, a new type of the Jenoptik ceilometer, the CHM15k-X, took part. This new ceilometer has a new optical setup resulting in a complete overlap at 150 m. The derived particle backscatter profiles were compared to profiles derived from Polly^{XT} measurements, too. The elastic daytime particle backscatter profiles as well as the less noisy night-time Raman particle backscatter profiles compare well with the ceilometer profiles in atmospheric structures like aerosol layers or the boundary layer top height. Calibration of the ceilometer profiles by an independent measurement of the aerosol optical depth (AOD) by a sun photometer is necessary to determine the correct magnitude of the particle backscatter coefficient profiles. A comprehensive signal-to-noise ratio study was carried out to characterize the ceilometer's signal performance with increasing altitude.
Real-time determination of the signal-to-noise ratio of partly coherent seismic time series
Kjeldsen, Peter Møller
1994-01-01
A suitable measure of the quality of signals used in exploration seismics is the signal-to-noise ratio (S/N) of the recorded signals (traces). However, the S/N of the single unstacked traces may vary considerably due to changing weather conditions during the exploration session. Since...
Fliess, Michel
2008-01-01
The signal-to-noise ratio, which plays such an important role in information theory, is shown to become pointless for digital communications where the demodulation is achieved via new fast estimation techniques. Operational calculus, differential algebra, noncommutative algebra and nonstandard analysis are the main mathematical tools.
Droogendijk, H.; de Boer, Meint J.; Brookhuis, Robert Anton; Sanders, Remco G.P.; Krijnen, Gijsbertus J.M.
2013-01-01
We demonstrate that, using stochastic resonance (SR) in a voltage-controlled MEMS-slider, the signal-to-noise ratio can be increased by adding white noise. Using a Silicon-on-Insulator (SOI) based process, we realised a slider structure with periodic capacitive structures to obtain tunable unstable
Alkemade, C.T.J.; Snelleman, W.; Boutilier, G.D.; Winefordner, J.D.
1980-01-01
In this review, signal-to-noise ratios are discussed in a tutorial fashion for the case of multiplicative noise. Multiplicative noise is introduced simultaneously with the analyte signal and is therefore much more difficult to reduce than additive noise. The sources of noise, the mathematical repres
Evers, W.-J.; Besselink, I.J.M.; Teerhuis, A.P.; Oomen, T.; Nijmeijer, H.
2010-01-01
There is a large body of literature on model validation, but there is no method available that can effectively use asynchronous repeated measurements with low signal-to-noise ratios. The aim of this paper is to present a novel frequency-domain model validation method, which is suitable for this type
B. Langford
2015-03-01
All eddy-covariance flux measurements are associated with random uncertainties which are a combination of sampling error due to natural variability in turbulence and sensor noise. The former is the principal error for systems where the signal-to-noise ratio of the analyser is high, as is usually the case when measuring fluxes of heat, CO2 or H2O. Where signal is limited, which is often the case for measurements of other trace gases and aerosols, instrument uncertainties dominate. Here we apply a consistent approach based on auto- and cross-covariance functions to quantify the total random flux error and the random error due to instrument noise separately. As with previous approaches, the random error quantification assumes that the time-lag between wind and concentration measurement is known. However, if combined with commonly used automated methods that identify the individual time-lag by looking for the maximum in the cross-covariance function of the two entities, analyser noise additionally leads to a systematic bias in the fluxes. Combining datasets from several analysers and using simulations, we show that the method of time-lag determination becomes increasingly important as the magnitude of the instrument error approaches that of the sampling error. The flux bias can be particularly significant for disjunct data, whereas using a prescribed time-lag eliminates these effects (provided the time-lag does not fluctuate unduly over time). We also demonstrate that when sampling at higher elevations, where low frequency turbulence dominates and covariance peaks are broader, both the probability and magnitude of bias are magnified. We show that the statistical significance of noisy flux data can be increased (and the limit of detection decreased) by appropriate averaging of individual fluxes, but only if systematic biases are avoided by using a prescribed time-lag. Finally, we make recommendations for the analysis and reporting of data with
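The lag-search bias can be illustrated with a toy synthetic flux series; the proxy series are white noise (real turbulence is autocorrelated) and all scales are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
w = rng.normal(size=n)                    # vertical wind proxy
c = 0.5 * w + 2.0 * rng.normal(size=n)    # noisy scalar: true flux (cov) = 0.5

def lagged_cov(w, c, lag):
    """Covariance of w with c circularly shifted by `lag` samples (demo only)."""
    cs = np.roll(c, lag)
    return np.mean((w - w.mean()) * (cs - cs.mean()))

lags = range(-20, 21)
covs = [lagged_cov(w, c, k) for k in lags]
flux_prescribed = lagged_cov(w, c, 0)     # evaluate at the known (prescribed) lag
flux_max_search = max(covs)               # automated maximum-covariance search

# The maximum search can only ever meet or exceed the prescribed-lag estimate,
# so with noisy data it is biased high -- the systematic effect discussed above.
```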
Techniques and software tools for estimating ultrasonic signal-to-noise ratios
Chiou, Chien-Ping; Margetan, Frank J.; McKillip, Matthew; Engle, Brady J.; Roberts, Ronald A.
2016-02-01
At Iowa State University's Center for Nondestructive Evaluation (ISU CNDE), the use of models to simulate ultrasonic inspections has played a key role in R&D efforts for over 30 years. To this end a series of wave propagation models, flaw response models, and microstructural backscatter models have been developed to address inspection problems of interest. One use of the combined models is the estimation of signal-to-noise ratios (S/N) in circumstances where backscatter from the microstructure (grain noise) acts to mask sonic echoes from internal defects. Such S/N models have been used in the past to address questions of inspection optimization and reliability. Under the sponsorship of the National Science Foundation's Industry/University Cooperative Research Center at ISU, an effort was recently initiated to improve existing research-grade software by adding a graphical user interface (GUI), turning it into a user-friendly tool for the rapid estimation of S/N for ultrasonic inspections of metals. The software combines: (1) a Python-based GUI for specifying an inspection scenario and displaying results; and (2) a Fortran-based engine for computing defect signal and backscattered grain noise characteristics. The latter makes use of several models including: the Multi-Gaussian Beam Model for computing sonic fields radiated by commercial transducers; the Thompson-Gray Model for the response from an internal defect; the Independent Scatterer Model for backscattered grain noise; and the Stanke-Kino Unified Model for attenuation. The initial emphasis was on reformulating the research-grade code into a suitable modular form, adding the graphical user interface and performing computations rapidly and robustly. Thus the initial inspection problem being addressed is relatively simple. A normal-incidence pulse/echo immersion inspection is simulated for a curved metal component having a non-uniform microstructure, specifically an equiaxed, untextured microstructure in which the average
Enhanced signal-to-noise ratio estimation for tropospheric lidar channels
Saeed, Umar; Barragan, Rubén; Rocadenbosch, Francesc
2016-04-01
This work combines the fields of tropospheric lidar remote sensing and signal processing to develop a robust signal-to-noise ratio (SNR) estimator suited to both elastic and Raman channels. The estimator uses a combined low-pass/high-pass filtering scheme along with high-order statistics (kurtosis) to estimate the range-dependent signal and noise components with minimum distortion: low-pass filtering estimates the range-dependent signal level, while high-pass filtering isolates the noise component. From this noise component estimate (a random realization) the noise level (e.g., variance) is computed as a function of range along with error bars. The minimum-distortion specification determines the optimal cut-off frequency of the de-noising filter and, in turn, the spatial resolution of the SNR estimation algorithm. The proposed SNR estimator has a much wider dynamic range of operation than well-known classic SNR estimation techniques, in which the SNR is computed directly from the mean and standard deviation of the measured noise-corrupted lidar signal over successive adjacent range intervals and where the spatial resolution is just a subjective input from the user's side. Aligned with the ACTRIS (http://www.actris.net) WP on "optimization of the processing chain and Single-Calculus Chain (SCC)", the proposed topic is applicable to assessing lidar reception channel performance and confidence in the detected atmospheric morphology (e.g., cloud base and top, and location of aerosol layers). The SNR algorithm is tested against the classic SNR estimation approach using test-bed synthetic lidar data modelling the UPC multi-spectral lidar. Towards this end, the Nd:YAG UPC elastic-Raman lidar provides aerosol channels in the near-infrared (1064 nm), visible (532 nm), and ultra-violet (355 nm) as well as aerosol Raman and water-vapour channels with fairly varying SNR levels. The SNR estimator is also used to compare SNR levels between
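For contrast, the classic estimator mentioned above can be sketched in a few lines; the synthetic 1/R² profile, noise level, and 50-bin window are invented, and the window length is precisely the subjective user input the abstract criticizes. Note also how the strong signal trend inside near-range windows inflates the standard deviation and distorts the estimate:

```python
import numpy as np

def classic_snr(profile, win=50):
    """Classic SNR: mean/std over successive adjacent range intervals."""
    n = len(profile) // win
    seg = profile[: n * win].reshape(n, win)
    return seg.mean(axis=1) / seg.std(axis=1, ddof=1)   # one value per interval

# Synthetic lidar-like profile: 1/R^2 decaying return plus additive noise
rng = np.random.default_rng(0)
r = np.linspace(100.0, 5000.0, 2000)
signal = 1e6 / r**2
profile = signal + rng.normal(0.0, 0.5, size=r.size)
snr = classic_snr(profile)
```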
Muon Signals at a Low Signal-to-Noise Ratio Environment
Zakareishvili, Tamar; The ATLAS collaboration
2017-01-01
Calorimeters provide high-resolution energy measurements for particle detection. Muon signals are important for evaluating electronics performance, since they produce a signal that is close to the electronic noise level. This work provides a noise RMS analysis for the Demonstrator drawer of the 2016 Tile Calorimeter (TileCal) test beam in order to help reconstruct events in a low signal-to-noise environment. Muon signals were then found for a beam penetrating all three layers of the drawer. The Demonstrator drawer is a candidate electronics upgrade for TileCal, part of the ATLAS experiment at the Large Hadron Collider at the European Organization for Nuclear Research (CERN).
Signal-to-Noise Ratio Measures Efficacy of Biological Computing Devices and Circuits
Jacob eBeal
2015-06-01
Engineering biological cells to perform computations has a broad range of important potential applications, including precision medical therapies, biosynthesis process control, and environmental sensing. Implementing predictable and effective computation, however, has been extremely difficult to date, due to a combination of poor composability of available parts and of insufficient characterization of parts and their interactions with the complex environment in which they operate. In this paper, I argue that this situation can be improved by quantitative signal-to-noise analysis of the relationship between computational abstractions and the variation and uncertainty endemic in biological organisms. This analysis takes the form of a ∆SNR function for each computational device, which can be computed from measurements of a device's input/output curve and expression noise. These functions can then be combined to predict how well a circuit will implement an intended computation, as well as evaluating the general suitability of biological devices for engineering computational circuits. Applying signal-to-noise analysis to current repressor libraries shows that no library is currently sufficient for general circuit engineering, but also indicates key targets to remedy this situation and vastly improve the range of computations that can be used effectively in the implementation of biological applications.
Hedjazi, Lyamine; Le Lann, Marie-Véronique; Kempowsky, Tatiana; Dalenc, Florence; Aguilar-Martin, Joseph; Favre, Gilles
2013-01-01
Microarray profiling has recently generated the hope to gain new insights into breast cancer biology and thereby improve the performance of current prognostic tools. However, it also poses several serious challenges to classical data analysis techniques related to the characteristics of resulting data, mainly high dimensionality and low signal-to-noise ratio. Despite the tremendous research work performed to handle the first challenge in the feature selection framework, very little attention ...
Wang, Qiang; Bi, Sheng
2017-01-01
To predict the peak signal-to-noise ratio (PSNR) quality of decoded images in fractal image coding more efficiently and accurately, an improved method is proposed. After some derivations and analyses, we find that the linear correlation coefficients between coded range blocks and their respective best-matched domain blocks can determine the dynamic range of their collage errors, which can also provide the minimum and the maximum of the accumulated collage error (ACE) of uncoded range blocks. Moreover, the dynamic range of the actual percentage of accumulated collage error (APACE), APACEmin to APACEmax, can be determined as well. When APACEmin reaches a large value, such as 90%, APACEmin to APACEmax will be limited in a small range and APACE can be computed approximately. Furthermore, with ACE and the approximate APACE, the ACE of all range blocks and the average collage error (ACER) can be obtained. Finally, with the logarithmic relationship between ACER and the PSNR quality of decoded images, the PSNR quality of decoded images can be predicted directly. Experiments show that compared with the previous similar method, the proposed method can predict the PSNR quality of decoded images more accurately and needs less computation time simultaneously.
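The quantity being predicted above is the standard PSNR of the decoded image; for reference, its direct (post-decoding) computation is sketched below. The flat-list image representation and the 8-bit peak value are illustrative simplifications, not part of the paper's prediction method.

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized images,
    given here as flat lists of pixel values: PSNR = 10*log10(peak^2/MSE)."""
    if len(original) != len(decoded):
        raise ValueError("images must have the same number of pixels")
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

The paper's contribution is estimating this value from collage errors alone, avoiding the decoding iterations that a direct computation requires.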
Measurement of low signal-to-noise-ratio solar p modes in spatially-resolved helioseismic data
Salabert, D; Appourchaux, T; Hill, F
2009-01-01
We present an adaptation of the rotation-corrected, m-averaged spectrum technique designed to observe low signal-to-noise-ratio, low-frequency solar p modes. The frequency shift of each of the 2l+1 m spectra of a given (n,l) multiplet is chosen that maximizes the likelihood of the m-averaged spectrum. A high signal-to-noise ratio can result from combining individual low signal-to-noise-ratio, individual-m spectra, none of which would yield a strong enough peak to measure. We apply the technique to GONG and MDI data and show that it allows us to measure modes with lower frequencies than those obtained with classic peak-fitting analysis of the individual-m spectra. We measure their central frequencies, splittings, asymmetries, lifetimes, and amplitudes. The low-frequency, low- and intermediate-angular degrees rendered accessible by this new method correspond to modes that are sensitive to the deep solar interior down to the core and to the radiative interior. Moreover, the low-frequency modes have deeper upper ...
Dao, L; Lucotte, B; Glancy, B; Chang, L-C; Hsu, L-Y; Balaban, R S
2014-11-01
In conventional multi-probe fluorescence microscopy, narrow bandwidth filters on detectors are used to avoid bleed-through artefacts between probes. The limited bandwidth reduces the signal-to-noise ratio of the detection, often severely compromising one or more channels. Herein, we describe a process of using independent component analysis to discriminate the position of different probes using only a dichroic mirror to differentiate the signals directed to the detectors. Independent component analysis was particularly effective in samples where the spatial overlap between the probes is minimal, a very common case in cellular microscopy. This imaging scheme collects nearly all of the emitted light, significantly improving the image signal-to-noise ratio. In this study, we focused on the detection of two fluorescence probes used in vivo, NAD(P)H and ANEPPS. The optimal dichroic mirror cutoff frequency was determined with simulations using the probes' spectral emissions. A quality factor, defined as the cross-channel contrast-to-noise ratio, was optimized to maximize signals while maintaining spatial discrimination between the probes after independent component analysis post-processing. Simulations indicate that a ∼3-fold increase in signal-to-noise ratio using the independent component analysis approach can be achieved over the conventional narrow-band filtering approach without loss of spatial discrimination. We confirmed this predicted performance from experimental imaging of NAD(P)H and ANEPPS in mouse skeletal muscle, in vivo. For many multi-probe studies, the increased sensitivity of this 'full bandwidth' approach will lead to improved image quality and/or reduced excitation power requirements.
Zhang Qi-Cheng; Ni Yi; Xu Duan-Yi; Hu Heng
2006-01-01
The recording density of multilevel photochromic memory is limited by the signal-to-noise ratio (SNR) of the readout signal. In this paper, shot noise and material noise are investigated through theoretical analysis of the SNR. When the system bandwidth is less than 1 MHz, material noise dominates; when the bandwidth exceeds 10 MHz, shot noise becomes dominant. The thickness of the recording layer can be optimized to maximize the SNR and reduce the influence of the system bandwidth on the SNR.
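The bandwidth crossover described above (material noise dominant at low bandwidth, shot noise at high bandwidth) can be illustrated with a toy model; the linear scaling of shot-noise power with bandwidth and all parameter names are illustrative assumptions, not the paper's actual noise analysis.

```python
def snr_vs_bandwidth(signal_power, shot_coeff, material_noise_power, bandwidth_hz):
    """Toy SNR model: shot-noise power is taken to grow linearly with the
    detection bandwidth, while material noise is taken as a fixed power.
    At small bandwidths the constant material term dominates the noise;
    at large bandwidths the shot term does."""
    noise_power = shot_coeff * bandwidth_hz + material_noise_power
    return signal_power / noise_power
```

In this sketch the crossover sits where the two noise terms are equal, i.e. at bandwidth `material_noise_power / shot_coeff`.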
Xie Shaofei [Center for Instrumental Analysis, China Pharmaceutical University, Key Laboratory of Drug Quality Control and Pharmacovigilance, Ministry of Education, Nanjing 210009 (China); Xiang Bingren [Center for Instrumental Analysis, China Pharmaceutical University, Key Laboratory of Drug Quality Control and Pharmacovigilance, Ministry of Education, Nanjing 210009 (China)]. E-mail: cpuxsf@hotmail.com; Deng Haishan [Center for Instrumental Analysis, China Pharmaceutical University, Key Laboratory of Drug Quality Control and Pharmacovigilance, Ministry of Education, Nanjing 210009 (China); Xiang Suyun [Center for Instrumental Analysis, China Pharmaceutical University, Key Laboratory of Drug Quality Control and Pharmacovigilance, Ministry of Education, Nanjing 210009 (China); Lu Jun [Center for Instrumental Analysis, China Pharmaceutical University, Key Laboratory of Drug Quality Control and Pharmacovigilance, Ministry of Education, Nanjing 210009 (China)
2007-02-28
Based on the theory of stochastic resonance, an improved stochastic resonance algorithm with a new criterion for optimizing system parameters to enhance signal-to-noise ratio (SNR) of HPLC/UV chromatographic signal for trace analysis was presented in this study. Compared with the conventional criterion in stochastic resonance, the proposed one can ensure satisfactory SNR as well as good peak shape of chromatographic peak in output signal. Application of the criterion to experimental weak signals of HPLC/UV was investigated and the results showed an excellent quantitative relationship between different concentrations and responses.
Boscolo, Sonia; Fatome, Julien; Finot, Christophe
2017-04-01
We numerically study the effects of amplitude fluctuations and signal-to-noise ratio degradation of the seed pulses on the spectral compression process arising from nonlinear propagation in an optical fibre. The process proves quite stable against these pulse degradation factors, which we assess in the context of optical regeneration of intensity-modulated signals by combining nonlinear spectral compression with centred bandpass optical filtering. The results show that the proposed nonlinear processing scheme indeed mitigates the signal's amplitude noise. However, in the presence of jitter in the temporal duration of the pulses, the performance of the device deteriorates.
Jensen, P.S.; Bak, J.
2002-01-01
The optimal choice of optical pathlength, source intensity, and detector for near-infrared transmission measurements of trace components in aqueous solutions depends on the strong absorption of water. In this study we examine under which experimental circumstances one may increase the pathlength to obtain a measurement with higher signal-to-noise ratio. The noise levels of measurements of pure water and of 1 g/dL aqueous glucose signals at eight different pathlengths from 0.2 to 2.0 mm were measured using a Fourier transform near-infrared spectrometer and a variable pathlength transmission cell...
Wang, Wei-Bo; Chen, De-Ying; Fan, Rong-Wei; Xia, Yuan-Qin
2010-02-01
The effects of dye-laser stability on the signal-to-noise ratio in degenerate four-wave mixing (DFWM) were first investigated in iodine vapor using forward geometries. Frequency-doubled output from a multi-mode Nd:YAG-pumped dye laser, with the laser dye PM580 dissolved in ethanol, was used. With the help of a forward compensated beam-split technique and an imaging detection system, the saturation intensity of the DFWM spectrum in iodine vapor at 554.013 nm was first measured to be 290 microJ at atmospheric pressure and room temperature. The wavelength range, beam quality and energy-conversion efficiency of the dye laser decreased gradually with increasing pump usage, pulse number and intensity. Additionally, comparison of stable and unstable dye-laser output showed that output instability greatly influenced the DFWM signal and decreased the signal-to-background-noise ratio. Shot-to-shot jitter and broadening of the output frequency lead to an effective broadening of the recorded spectrum and a loss of DFWM signal-to-noise ratio under the same pump intensity at different times. The study is of importance for the detection of trace atoms, molecules and radicals in combustion diagnostics.
A high signal-to-noise ratio passive near-field microscope equipped with a helium-free cryostat
Lin, Kuan-Ting; Komiyama, Susumu; Kim, Sunmi; Kawamura, Ken-ichi; Kajihara, Yusuke
2017-01-01
We have developed a passive long-wavelength infrared (LWIR) scattering-type scanning near-field optical microscope (s-SNOM) installed in a helium-free mechanically cooled cryostat, which facilitates cooling of an LWIR detector and optical elements to 4.5 K. To reduce mechanical vibration propagating from the compressor unit, we have introduced a metal bellows damper and a helium gas damper. These dampers keep the performance of the s-SNOM free from mechanical vibration. Furthermore, we have introduced a solid immersion lens to improve the confocal microscope performance. To demonstrate the passive s-SNOM capability, we measured thermally excited surface evanescent waves on Au/SiO2 gratings. The near-field signal-to-noise ratio is improved by a factor of 4.5 at an acquisition time of 1 s/pixel. These improvements have made the passive s-SNOM a more convenient, higher-performance experimental tool, with a higher signal-to-noise ratio even at a shorter acquisition time of 0.1 s.
H. Bormann
2005-01-01
Many model applications suffer from the fact that, although it is well known that model application implies different sources of uncertainty, there is no objective criterion to decide whether a model is suitable for a particular application or not. This paper introduces a comparative index between the uncertainty of a model and the change effects of scenario calculations, which enables the modeller to decide objectively about the suitability of a model for scenario analysis studies. The index is called the "signal-to-noise-ratio", and it is applied to an exemplary scenario study performed within the GLOWA-IMPETUS project in Benin. The conceptual UHP model was applied to the upper Ouémé basin. Although model calibration and validation were successful, uncertainties in model parameters and input data could be identified. Applying the "signal-to-noise-ratio" to regional-scale subcatchments of the upper Ouémé, comparing water-availability indicators between uncertainty studies and scenario analyses, the UHP model turned out to be suitable for predicting long-term water balances under the present poor data availability and changing environmental conditions in subhumid West Africa.
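The idea behind the "signal-to-noise-ratio" index can be sketched as a simple ratio of the scenario effect to the model's uncertainty band; this is an assumed, simplified reading of the index, and the study's exact formulation may differ.

```python
def signal_to_noise_index(scenario_effect, model_uncertainty):
    """Compare the magnitude of a simulated scenario effect ("signal",
    e.g. the change in a water-availability indicator) with the spread of
    model results caused by parameter and input-data uncertainty ("noise").
    A value well above 1 suggests the model can resolve the scenario effect;
    a value near or below 1 means the effect drowns in model uncertainty."""
    if model_uncertainty <= 0:
        raise ValueError("model_uncertainty must be positive")
    return abs(scenario_effect) / model_uncertainty
```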
Van Den Broeck, Chris; Sengupta, Anand S.
2006-01-01
We study the phenomenological consequences of amplitude-corrected post-Newtonian (PN) gravitational waveforms, as opposed to the more commonly used restricted PN waveforms, for the quasi-circular, adiabatic inspiral of compact binary objects. In the case of initial detectors it has been shown that the use of amplitude-corrected waveforms for detection templates would lead to significantly lower signal-to-noise ratios (SNRs) than those suggested by simulations based exclusively on restricted waveforms. We further elucidate the origin of the effect by an in-depth analytic treatment. The discussion is extended to advanced detectors, where new features emerge. Non-restricted waveforms are linear combinations of harmonics in the orbital phase, and in the frequency domain the $k$th harmonic is cut off at $k f_{LSO}$, with $f_{LSO}$ the orbital frequency at the last stable orbit. As a result, with non-restricted templates it is possible to achieve sizeable signal-to-noise ratios in cases where the dominant harmonic ...
Design of flying vehicle control system by signal to noise ratio
[Anonymous]
2001-01-01
Presents the new concept of "desired-to-be-small" characteristics, based on the basic function of a vehicle flight control system, for the optimal design of flying-vehicle control systems; defines the S/N ratio and a calculation formula for "desired-to-be-small" dynamic characteristics; and establishes an S/N-ratio method for the design of vehicle flight control systems, in which an orthogonal table is used to arrange test schemes, error factors are used to simulate various interferences, and the S/N ratio serves as a design criterion to synthesize the design of dynamic and static characteristics and define an optimal scheme. The application of the S/N-ratio method to the design of one type of vehicle control system achieved single-run success in control-system design, technical evaluation testing and design-finalization flight testing.
Analysis of Signal-to-Noise Ratio of the Laser Doppler Velocimeter
Lading, Lars
1973-01-01
The signal-to-shot-noise ratio of the photocurrent of a laser Doppler anemometer is calculated as a function of the parameters which describe the system. It is found that the S/N is generally a growing function of receiver area, that few large particles are better than many small ones, and that g...
Improving signal-to-noise ratio of structured light microscopy based on photon reassignment.
Singh, Vijay Raj; Choi, Heejin; Yew, Elijah Y S; Bhattacharya, Dipanjan; Yuan, Luo; Sheppard, Colin J R; Rajapakse, Jagath C; Barbastathis, George; So, Peter T C
2012-01-01
In this paper, we report a method for 3D visualization of a biological specimen utilizing a structured light wide-field microscopic imaging system. This method improves on existing structured light imaging modalities by reassigning fluorescence photons generated from off-focal plane excitation, improving in-focus signal strength. Utilizing a maximum likelihood approach, we identify the most likely fluorophore distribution in 3D that will produce the observed image stacks under structured and uniform illumination using an iterative maximization algorithm. Our results show the optical sectioning capability of tissue specimens while mostly preserving image stack photon count, which is usually not achievable with other existing structured light imaging methods.
Adrián-Martínez, S; Bou-Cabo, M; Felis, I; Llorens, C; Martínez-Mora, J A; Saldaña, M
2015-01-01
The study and application of signal-detection techniques based on the cross-correlation method for acoustic transient signals in noisy and reverberant environments are presented. These techniques are shown to provide a high signal-to-noise ratio, good discernment of the signal from very close echoes, and accurate detection of the signal arrival time. The proposed methodology has been tested on real data collected in environments and conditions where its benefits can be shown. This work focuses on acoustic detection applied to positioning and calibration tasks in underwater structures such as the ANTARES and KM3NeT deep-sea neutrino telescopes, as well as to particle detection through acoustic events for the COUPP/PICO detectors. Moreover, a method for obtaining the real amplitude of the signal in time (voltage) by using cross-correlation has been developed, tested and described in this work.
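The core arrival-time detection step can be sketched as follows; this is a brute-force, pure-Python illustration of matched-filter correlation, not the ANTARES/KM3NeT pipeline, which would operate on sampled hydrophone records with optimized FFT-based correlation.

```python
def arrival_time(received, template, sample_rate_hz):
    """Estimate the arrival time (in seconds) of a known transient
    `template` within a noisy `received` record by locating the lag that
    maximizes the cross-correlation. Brute-force O(n*m) for clarity."""
    n, m = len(received), len(template)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n - m + 1):
        val = sum(received[lag + i] * template[i] for i in range(m))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag / sample_rate_hz
```

Correlating against the known waveform averages the noise over the whole template length, which is what yields the high SNR and the robustness to close echoes described above.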
Sim, K S; Lim, M S; Yeap, Z X
2016-07-01
A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed, known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, an SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed, and the signal spectrum based on the autocorrelation function of the image is computed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model achieves higher accuracy in SNR estimation.
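The "nearest neighbourhood" baseline that ACLDR is compared against can be sketched in one dimension: for white noise, the noise power contributes only to the zero-lag autocorrelation, so the lag-1 value approximates the noise-free signal power. This is a simplified 1-D illustration of the idea, not the ACLDR recursion itself.

```python
import math

def snr_nearest_neighbour(samples):
    """Autocorrelation-based SNR estimate for a 1-D signal (the 2-D image
    case is analogous): the zero-lag autocorrelation holds signal-plus-noise
    power, while the nearest-neighbour (lag-1) value approximates the
    noise-free signal power of a smoothly varying signal."""
    n = len(samples)
    mean = sum(samples) / n
    d = [v - mean for v in samples]
    r0 = sum(v * v for v in d) / n                              # signal + noise power
    r1 = sum(d[i] * d[i + 1] for i in range(n - 1)) / (n - 1)   # ~ signal power
    noise_power = r0 - r1
    return r1 / noise_power if noise_power > 0 else float("inf")
```

The estimate degrades when the signal itself decorrelates quickly between neighbouring pixels, which is the weakness the interpolation-based and ACLDR estimators address.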
Gelfer, M P; Fendel, D M
1995-12-01
The purpose of this study was to compare jitter, shimmer, and signal-to-noise ratio (SNR) measures obtained from tape-recorded samples with the same measures made on directly digitized voice samples, with use of the CSpeech acoustic analysis program. Subjects included 30 young women who phonated the vowel /a/ at a comfortable pitch and loudness level. Voice samples were simultaneously recorded and digitized, and the resulting perturbation measures for the two conditions were compared. Results indicated that there were small but statistically significant differences between percent jitter, percent shimmer, and SNR calculated from taped samples compared with the same measures calculated from directly digitized samples. It was concluded that direct digitization for clinical measures of vocal perturbation was most desirable, but that taped samples could be used, if necessary, with some caution.
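For reference, the common "local" definitions of percent jitter and percent shimmer, computed from a sequence of measured pitch periods or cycle peak amplitudes, are sketched below; CSpeech's exact algorithms, windowing and SNR computation may differ.

```python
def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, expressed as a percentage of the mean period."""
    diffs = [abs(periods[i + 1] - periods[i]) for i in range(len(periods) - 1)]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_percent(amplitudes):
    """Local shimmer: the same measure applied to cycle peak amplitudes."""
    diffs = [abs(amplitudes[i + 1] - amplitudes[i]) for i in range(len(amplitudes) - 1)]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

Because both measures depend on cycle-to-cycle differences of a few microseconds or millivolts, small distortions introduced by tape recording can plausibly shift them, which is the effect the study quantifies.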
Nagai, Hiroyuki [Advanced Research Institute for Science and Engineering, Waseda University, 17 Kikuicho, Shinjuku-ku, Tokyo 162-0044 (Japan)], E-mail: physik-albert@suou.waseda.jp; Kawaguchi, Masaaki; Sakaue, Kazuyuki; Komiya, Keita; Nomoto, Tomoaki; Kamiya, Yoshio; Hama, Yoshimasa; Washio, Masakazu [Advanced Research Institute for Science and Engineering, Waseda University, 17 Kikuicho, Shinjuku-ku, Tokyo 162-0044 (Japan); Ushida, Kiminori [The Institute of Physical and Chemical Research, 2-1 Hirosawa, Wako, Saitama 351-0198 (Japan); Kashiwagi, Shigeru [The Institute of Scientific and Industrial Research, Osaka University, 8-1 Mihogaoka, Ibaraki, Osaka 567-0047 (Japan); Kuroda, Ryunosuke [National Institute of Advanced Industrial Science and Technology, AIST Tsukuba Central 2, Tsukuba, Ibaraki 305-8568 (Japan)
2007-12-15
A compact pico-second pulse radiolysis system has been developed at Waseda University for studying primary processes in radiation chemistry. The system is composed of a photo-injector system and a pico-second all-solid-state laser system. Infrared (IR) and ultraviolet (UV) laser pulses are obtained from a mode-locked Nd:YLF laser system and used for generation of the white-light continuum probe and for irradiation of the Cu cathode of a photo-cathode RF gun, respectively. To improve the signal-to-noise (S/N) ratio and time resolution of this pulse radiolysis system, we optimized both the probe light and the pump electron beam. As a result, our pico-second pulse radiolysis system is now adequate for studying the primary processes of radiation chemistry. The experimental results and the improvements of our system are described in this paper.
Cheng, Bingbing; Wei, Ming-Yuan; Pei, Yanbo; DSouza, Francis; Nguyen, Kytai T; Hong, Yi; Tang, Liping; Yuan, Baohong
2015-01-01
Fluorescence microscopic imaging in centimeter-deep tissue has been highly sought after for many years because much interesting in vivo micro-scale information, such as microcirculation, tumor angiogenesis, and metastasis, may be located deep in tissue. In this study, for the first time this goal has been achieved in 3-centimeter-deep tissue with high signal-to-noise ratio (SNR) and picomole sensitivity under radiation safety thresholds. These results are demonstrated not only in tissue-mimicking phantoms but also in actual tissues, such as porcine muscle, ex vivo mouse liver, ex vivo spleen, and in vivo mouse tissue. These results are achieved based on three unique technologies: excellent near-infrared ultrasound-switchable fluorescence (USF) contrast agents, a sensitive USF imaging system, and an effective correlation method. Multiplex USF fluorescence imaging is also achieved. It is useful to simultaneously image multiple targets and observe their interactions. This work opens the door for future studies of centimeter...
Jørgensen, Søren; Dau, Torsten
2012-01-01
The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope signal-to-noise ratio (SNRenv) after modulation-frequency selective processing, which accurately predicts the speech intelligibility for normal-hearing listeners in conditions with additive stationary noise, reverberation, and nonlinear processing with spectral subtraction. The latter condition represents a case in which the standardized speech intelligibility index and speech transmission index fail. However, the sEPSM is limited to conditions with stationary interferers due to the long-term estimation of the envelope power and cannot account for the well-known phenomenon of speech masking release. Here, a short-term version of the sEPSM is presented, estimating the envelope SNR in 10-ms time frames. Predictions obtained with the short-term s...
Kim Pansoo
2009-01-01
Recent standards for wireless transmission require reliable synchronization for channels with low signal-to-noise ratio (SNR) as well as with a large amount of frequency offset, which necessitates a robust correlator structure for the initial frame-synchronization process. In this paper, a new correlation strategy especially targeted at low-SNR regions is proposed and its performance is analyzed. By utilizing a modified energy-correction term, the proposed method effectively reduces the variance of the decision variable to enhance detection performance. Most importantly, the method is demonstrated to outperform all previously reported schemes by a significant margin for SNRs below 5 dB, regardless of the existence of frequency offsets. A variation of the proposed method is also presented for further enhancement over channels with small frequency errors. The particular application considered for performance verification is the second-generation digital video broadcasting system for satellites (DVB-S2).
Zhou, Jizhong; He, Zhili; Zhou, Jizhong
2008-03-06
Signal-to-noise-ratio (SNR) thresholds for microarray data analysis were experimentally determined with an oligonucleotide array that contained perfect match (PM) and mismatch (MM) probes based upon four genes from Shewanella oneidensis MR-1. A new SNR calculation, called the signal to both standard deviations ratio (SSDR), was developed and evaluated along with two other methods: the signal to standard deviation ratio (SSR) and the signal to background ratio (SBR). At low stringency, the thresholds of SSR, SBR, and SSDR were 2.5, 1.60 and 0.80 with oligonucleotides and PCR amplicons as target templates, and 2.0, 1.60 and 0.70 with genomic DNA as target templates. Slightly higher thresholds were obtained at the high-stringency condition. The thresholds of SSR and SSDR decreased with an increase in the complexity of targets (e.g. target types) and the presence of background DNA, and with a decrease in the composition of targets, while SBR remained unchanged under all situations. The lowest percentage of false positives (FP) and false negatives (FN) was observed with the SSDR calculation method, suggesting that it may be a better SNR calculation for more accurate determination of SNR thresholds. Positive spots identified by SNR thresholds were verified by the Student t-test, and consistent results were observed. This study provides general guidance for users to select appropriate SNR thresholds for different samples under different hybridization conditions.
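The three spot-quality measures can be sketched per spot from summary statistics of its signal and local-background pixels. SSR and SBR follow their usual microarray definitions; the SSDR formula shown (background-corrected signal over the sum of both standard deviations) is an assumed reading of "signal to both standard deviations ratio" and may differ from the paper's exact expression.

```python
def spot_ratios(sig_mean, sig_std, bg_mean, bg_std):
    """Three SNR-style quality measures for one microarray spot, computed
    from the mean and standard deviation of its signal pixels and of its
    local background pixels."""
    ssr = (sig_mean - bg_mean) / bg_std              # signal to standard deviation ratio
    sbr = sig_mean / bg_mean                         # signal to background ratio
    ssdr = (sig_mean - bg_mean) / (sig_std + bg_std) # assumed SSDR form
    return ssr, sbr, ssdr
```

Including the signal's own standard deviation in the denominator is what lets SSDR penalize noisy spots that SSR and SBR would accept.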
Peng, Hao, E-mail: penghao@mcmaster.ca [Department of Medical Physics, McMaster University, Canada L8S 4K1 (Canada); Department of Electrical and Computer Engineering, McMaster University, Canada L8S 4K1 (Canada)
2015-10-21
A fundamental challenge for PET block detector designs is to deploy finer crystal elements while limiting the number of readout channels. The standard Anger-logic scheme including light sharing (an 8 by 8 crystal array coupled to a 2×2 photodetector array with an optical diffuser, multiplexing ratio: 16:1) has been widely used to address such a challenge. Our work proposes a generalized model to study the impacts of two critical parameters on spatial resolution performance of a PET block detector: multiple interaction events and signal-to-noise ratio (SNR). The study consists of the following three parts: (1) studying light output profile and multiple interactions of 511 keV photons within crystal arrays of different crystal widths (from 4 mm down to 1 mm, constant height: 20 mm); (2) applying the Anger-logic positioning algorithm to investigate positioning/decoding uncertainties (i.e., “block effect”) in terms of peak-to-valley ratio (PVR), with light sharing, multiple interactions and photodetector SNR taken into account; and (3) studying the dependency of spatial resolution on SNR in the context of modulation transfer function (MTF). The proposed model can be used to guide the development and evaluation of a standard Anger-logic based PET block detector including: (1) selecting/optimizing the configuration of crystal elements for a given photodetector SNR; and (2) predicting to what extent additional electronic multiplexing may be implemented to further reduce the number of readout channels.
Dumont, Douglas M.; Walsh, Kristy M.; Byram, Brett C.
2017-01-01
Radiation force-based elasticity imaging is currently being investigated as a possible diagnostic modality for a number of clinical tasks, including liver fibrosis staging and the characterization of cardiovascular tissue. In this study, we evaluate the relationship between peak displacement magnitude and image quality and propose using a Bayesian estimator to overcome the challenge of obtaining viable data in low displacement signal environments. Displacement data quality were quantified for two common radiation force-based applications, acoustic radiation force impulse imaging, which measures the displacement within the region of excitation, and shear wave elasticity imaging, which measures displacements outside the region of excitation. Performance as a function of peak displacement magnitude for acoustic radiation force impulse imaging was assessed in simulations and lesion phantoms by quantifying signal-to-noise ratio (SNR) and contrast-to-noise ratio for varying peak displacement magnitudes. Overall performance for shear wave elasticity imaging was assessed in ex vivo chicken breast samples by measuring the displacement SNR as a function of distance from the excitation source. The results show that for any given displacement magnitude level, the Bayesian estimator can increase the SNR by approximately 9 dB over normalized cross-correlation and the contrast-to-noise ratio by a factor of two. We conclude from the results that a Bayesian estimator may be useful for increasing data quality in SNR-limited imaging environments. PMID:27157861
Peng, Hao
2015-10-01
A fundamental challenge for PET block detector designs is to deploy finer crystal elements while limiting the number of readout channels. The standard Anger-logic scheme including light sharing (an 8 by 8 crystal array coupled to a 2×2 photodetector array with an optical diffuser, multiplexing ratio: 16:1) has been widely used to address such a challenge. Our work proposes a generalized model to study the impacts of two critical parameters on spatial resolution performance of a PET block detector: multiple interaction events and signal-to-noise ratio (SNR). The study consists of the following three parts: (1) studying light output profile and multiple interactions of 511 keV photons within crystal arrays of different crystal widths (from 4 mm down to 1 mm, constant height: 20 mm); (2) applying the Anger-logic positioning algorithm to investigate positioning/decoding uncertainties (i.e., "block effect") in terms of peak-to-valley ratio (PVR), with light sharing, multiple interactions and photodetector SNR taken into account; and (3) studying the dependency of spatial resolution on SNR in the context of modulation transfer function (MTF). The proposed model can be used to guide the development and evaluation of a standard Anger-logic based PET block detector including: (1) selecting/optimizing the configuration of crystal elements for a given photodetector SNR; and (2) predicting to what extent additional electronic multiplexing may be implemented to further reduce the number of readout channels.
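The Anger-logic positioning this abstract relies on reduces to centroid arithmetic over the photodetector signals. A minimal sketch of that step (the function name and the 2×2 layout convention are illustrative, not taken from the paper):

```python
def anger_position(a, b, c, d):
    """Estimate the interaction position from a 2x2 photodetector array
    using classic Anger (centroid) logic.

    a, b, c, d are the light signals on the four photodetectors, laid out
        a b
        c d
    Returns (x, y) in [-1, 1], the normalized centroid coordinates.
    """
    total = a + b + c + d
    if total <= 0:
        raise ValueError("no light collected")
    x = ((b + d) - (a + c)) / total  # right column minus left column
    y = ((a + b) - (c + d)) / total  # top row minus bottom row
    return x, y
```

Photodetector noise perturbs the four inputs, which is exactly how the SNR parameter enters the decoding uncertainty the abstract studies.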
Parvin Nassiri
2016-01-01
Introduction: Noise is considered the most common cause of harmful physical effects in the workplace. A sound that is generated from within the inner ear is known as an otoacoustic emission (OAE). Distortion-product otoacoustic emissions (DPOAEs) assess evoked emission and hearing capacity. The aim of this study was to assess the signal-to-noise ratio at different frequencies and at different times of the work shift in workers exposed to various levels of noise. It also aimed to provide a statistical model for the signal-to-noise ratio (SNR) of OAEs at different frequencies based on the two variables of sound pressure level (SPL) and exposure time. Materials and Methods: This case–control study was conducted on 45 workers during autumn 2014. The workers were divided into three groups based on the level of noise exposure. The SNR was measured at frequencies of 1000, 2000, 3000, 4000, and 6000 Hz in both ears, and in three different time intervals during the work shift. According to the inclusion criterion, SNRs of 6 dB or greater were included in the study. The analysis was performed using repeated-measures analysis of variance, Spearman correlation coefficients, and paired-samples t-tests. Results: There was no statistically significant difference between the three exposed groups in terms of mean SNR values (P > 0.05). Only at a sound pressure level of 88 dBA, in the 10:30–11:00 AM interval, was there a statistically significant difference between the right and left ears in mean SNR values at 3000 Hz (P = 0.038). The SPL had a significant effect on the SNR in both the right and left ears (P = 0.023, P = 0.041). The effect of the duration of measurement on the SNR was statistically significant in both the right and left ears (P = 0.027, P < 0.001). Conclusion: The findings of this study demonstrated that after noise exposure during the shift, the SNR of OAEs reduced from the
Jørgensen, Søren; Dau, Torsten
2011-09-01
A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a structure similar to that of the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility.
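A single-band caricature of the envelope-SNR idea can make the metric concrete. The real model applies this per modulation filter and maps the result through an ideal observer; the names and the simple excess-power estimate below are illustrative assumptions, not the paper's implementation:

```python
import math

def envelope_power(env):
    """Normalized AC power of an envelope: variance divided by the
    squared mean (the DC component is removed)."""
    n = len(env)
    mean = sum(env) / n
    var = sum((e - mean) ** 2 for e in env) / n
    return var / (mean * mean)

def snr_env_db(env_mix, env_noise, floor=0.001):
    """Envelope-domain SNR in dB: the envelope power of the noisy-speech
    mixture in excess of the noise-alone envelope power, relative to the
    noise envelope power."""
    p_mix = envelope_power(env_mix)
    p_noise = envelope_power(env_noise)
    p_speech = max(p_mix - p_noise, floor * p_noise)  # floor avoids log of <= 0
    return 10 * math.log10(p_speech / p_noise)
```

When the estimated noise envelope power exceeds that of the mixture (as the abstract reports for spectral subtraction), the excess clamps to the floor and the predicted intelligibility collapses.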
Hedjazi, Lyamine; Le Lann, Marie-Veronique; Kempowsky, Tatiana; Dalenc, Florence; Aguilar-Martin, Joseph; Favre, Gilles
2013-08-01
Microarray profiling has recently generated hope of gaining new insights into breast cancer biology and thereby improving the performance of current prognostic tools. However, it also poses several serious challenges to classical data analysis techniques related to the characteristics of the resulting data, mainly high dimensionality and low signal-to-noise ratio. Despite the tremendous research work performed to handle the first challenge in the feature selection framework, very little attention has been directed to the second one. We propose in this article to address both issues simultaneously based on symbolic data analysis capabilities in order to derive more accurate genetic marker-based prognostic models. In particular, interval data representation is employed to model various uncertainties in microarray measurements. A recent feature selection algorithm that handles symbolic interval data is then used to derive a genetic signature. The predictive value of the derived signature is then assessed by following a rigorous experimental setup and compared with existing prognostic approaches in terms of predictive performance and estimated survival probability. It is shown that the derived signature (GenSym) performs significantly better than other prognostic models, including the 70-gene signature, St. Gallen, and National Institutes of Health criteria.
Measurement of Low Signal-To-Noise Ratio Solar p-Modes in Spatially Resolved Helioseismic Data
Salabert, D.; Leibacher, J.; Appourchaux, T.; Hill, F.
2009-05-01
We present an adaptation of the rotation-corrected, m-averaged spectrum technique designed to observe low signal-to-noise ratio (S/N), low-frequency solar p-modes. The frequency shift of each of the 2l + 1 m spectra of a given (n, l) multiplet is chosen to maximize the likelihood of the m-averaged spectrum. A high S/N can result from combining low-S/N individual-m spectra, none of which would yield a strong enough peak to measure. We apply the technique to Global Oscillation Network Group and Michelson Doppler Imager data and show that it allows us to measure modes with lower frequencies than those obtained with classic peak-fitting analysis of the individual-m spectra. We measure their central frequencies, splittings, asymmetries, lifetimes, and amplitudes. The low-frequency, low- and intermediate-angular degrees rendered accessible by this new method correspond to modes that are sensitive to the deep solar interior down to the core (l <= 3) and to the interior (4 <= l <= 35). Moreover, the low-frequency modes have deeper upper turning points, and are thus less sensitive to the turbulence and magnetic fields of the outer layers, as well as uncertainties in the nature of the external boundary condition. As a result of their longer lifetimes (narrower linewidths) at the same S/N, the determination of the frequencies of lower frequency modes is more accurate, and the resulting inversions should be more precise.
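The averaging step at the heart of the technique can be sketched in a few lines. This shows only the shift-and-average operation with shifts already quantized to whole frequency bins; the actual method chooses the shifts by maximizing the likelihood of the averaged spectrum, and all names here are illustrative:

```python
def m_averaged_spectrum(spectra, shifts):
    """Average individual-m power spectra after undoing their rotational
    frequency shifts (given here in whole bins), so that a peak too weak
    to fit in any single-m spectrum can stand out in the average."""
    n = len(spectra[0])
    count = len(spectra)
    avg = [0.0] * n
    for spec, shift in zip(spectra, shifts):
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                avg[i] += spec[j] / count
    return avg
```

Because the noise in the individual-m spectra is independent while the mode power aligns after the shift, the averaged peak's S/N grows with the number of combined spectra.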
Liang, Kun; Niu, Qunjie; Wu, Xiangkui; Xu, Jiaqi; Peng, Li; Zhou, Bo
2017-09-01
A lidar system with a Fabry-Pérot etalon and an intensified charge-coupled device can be used to obtain the scattering spectrum of the ocean and retrieve oceanic temperature profiles. However, the spectrum can be polluted by noise, resulting in measurement error. To analyze the effect of the signal-to-noise ratio (SNR) on the measurement accuracy of Brillouin lidar in water, the theoretical model and characteristics of the SNR are studied. Noise spectra with different SNRs are repeatedly measured in both simulation and experiment. The results show that accuracy is related to SNR and, balancing time consumption against quality, the average of five measurements is adopted for real remote sensing under pulse laser conditions of wavelength 532 nm, pulse energy 650 mJ, repetition rate 10 Hz, pulse width 8 ns and linewidth 0.003 cm-1 (90 MHz). Measurement based on the Brillouin linewidth has better accuracy at a lower temperature (15 °C), based on the classical retrieval model we adopt. The experimental results show that the temperature error is 0.71 °C and 0.06 °C based on shift and linewidth, respectively, when the image SNR is in the range of 3.2 dB-3.9 dB.
Method to improve the signal-to-noise ratio of photon-counting chirped amplitude modulation ladar.
Zhang, Zijing; Wu, Long; Zhang, Yong; Zhao, Yuan
2013-01-10
Photon-counting chirped amplitude modulation (PCCAM) ladar employs a Geiger-mode avalanche photodiode as its detector. After the echo signal reflected from an object or target is detected, the modulation depth (MD) of the detection outputs suffers a certain loss relative to that of the transmitted signal. The signal-to-noise ratio (SNR) of PCCAM ladar is mainly determined by the MD of the detection outputs of the echo signal. There is an optimal echo signal intensity that can decrease the MD loss and improve the SNR of the ladar receiver. In this paper, an improved PCCAM ladar system is presented, which employs an echo signal intensity optimization strategy with an iris diaphragm under different signal and noise intensities. The improved system is demonstrated with the background noise of a sunny day and echo signal intensities from 0.1 to 10 counts/ns. The experimental results show that it can effectively improve the SNR of the ladar receiver compared with the typical PCCAM ladar system. © 2013 Optical Society of America
Hygge, Staffan; Kjellberg, Anders; Nöstl, Anatole
2015-01-01
Free recall of spoken words in Swedish (native tongue) and English was assessed in two signal-to-noise ratio (SNR) conditions (+3 and +12 dB), with and without half of the heard words being repeated back orally directly after presentation [shadowing, speech intelligibility (SI)]. A total of 24 word lists with 12 words each were presented in English and in Swedish to Swedish-speaking college students. Pre-experimental measures of working memory capacity (operation span, OSPAN) were taken. A basic hypothesis was that the recall of the words would be impaired when the encoding of the words required more processing resources, thereby depleting working memory resources. This would be the case when the SNR was low or when the language was English. A low SNR was also expected to impair SI, but we wanted to compare the sizes of the SNR effects on SI and recall. A low score on working memory capacity was expected to further add to the negative effects of SNR and language on both SI and recall. The results indicated that SNR had strong effects on both SI and recall, but also that the effect size was larger for recall than for SI. Language had a main effect on recall, but not on SI. The shadowing procedure had different effects on recall of the early and late parts of the word lists. Working memory capacity was unimportant for the effect on SI and recall. Thus, recall appears to be a more sensitive indicator than SI for the acoustics of learning, which has implications for building codes and recommendations concerning classrooms and other workplaces, where both hearing and learning are important.
Staffan Hygge
2015-09-01
Free recall of spoken words in Swedish (native tongue) and English was assessed in two signal-to-noise ratio (SNR) conditions (+3 and +12 dB), with and without half of the heard words being repeated back orally directly after presentation (shadowing, speech intelligibility, SI). A total of 24 word lists with 12 words each were presented in English and in Swedish to Swedish-speaking college students. Pre-experimental measures of working memory capacity (OSPAN) were taken. A basic hypothesis was that the recall of the words would be impaired when the encoding of the words required more processing resources, thereby depleting working memory resources. This would be the case when the SNR was low or when the language was English. A low SNR was also expected to impair SI, but we wanted to compare the sizes of the SNR effects on SI and recall. A low score on working memory capacity was expected to further add to the negative effects of SNR and language on both SI and recall. The results indicated that SNR had strong effects on both SI and recall, but also that the effect size was larger for recall than for SI. Language had a main effect on recall, but not on SI. The shadowing procedure had different effects on recall of the early and late parts of the word lists. Working memory capacity was unimportant for the effect on SI and recall. Thus, recall appears to be a more sensitive indicator than SI for the acoustics of learning, which has implications for building codes and recommendations concerning classrooms and other workplaces where both hearing and learning are important.
Kousi, Evanthia; Borri, Marco; Dean, Jamie; Panek, Rafal; Scurr, Erica; Leach, Martin O.; Schmidt, Maria A.
2016-01-01
MRI has been extensively used in breast cancer staging, management and high risk screening. Detection sensitivity is paramount in breast screening, but variations of signal-to-noise ratio (SNR) as a function of position are often overlooked. We propose and demonstrate practical methods to assess spatial SNR variations in dynamic contrast-enhanced (DCE) breast examinations and apply those methods to different protocols and systems. Four different protocols in three different MRI systems (1.5 and 3.0 T) with receiver coils of different design were employed on oil-filled test objects with and without uniformity filters. Twenty 3D datasets were acquired with each protocol; each dataset was acquired in under 60 s, thus complying with current breast DCE guidelines. In addition to the standard SNR calculated on a pixel-by-pixel basis, we propose other regional indices considering the mean and standard deviation of the signal over a small sub-region centred on each pixel. These regional indices include effects of the spatial variation of coil sensitivity and other structured artefacts. The proposed regional SNR indices demonstrate spatial variations in SNR as well as the presence of artefacts and sensitivity variations, which are otherwise difficult to quantify and might be overlooked in a clinical setting. Spatial variations in SNR depend on protocol choice and hardware characteristics. The use of uniformity filters was shown to lead to a rise of SNR values, altering the noise distribution. Correlation between noise in adjacent pixels was associated with data truncation along the phase encoding direction. Methods to characterise spatial SNR variations using regional information were demonstrated, with implications for quality assurance in breast screening and multi-centre trials.
Fliess, Michel
2007-01-01
The signal-to-noise ratio, which plays such an important role in information theory, is shown to become pointless in digital communications where (i) symbols modulate carriers that are solutions of linear differential equations with polynomial coefficients, and (ii) demodulation is achieved thanks to new algebraic estimation techniques. Operational calculus, differential algebra and nonstandard analysis are the main mathematical tools.
Roussel, Nicolas; Frappart, Frédéric; Ramillien, Guillaume; Darrozes, José; Cornu, Gwendolyne; Koummarasy, Khanithalath
2016-04-01
GNSS-Reflectometry (GNSS-R) altimetry has demonstrated a strong potential for sea level monitoring. The Interference Pattern Technique (IPT), based on the analysis of the Signal-to-Noise Ratio (SNR) estimated by a GNSS receiver, presents the main advantage of being applicable everywhere using a single geodetic antenna and receiver, transforming them into real tide gauges. This technique has already been tested in various configurations of acquisition of surface-reflected GNSS signals, with an accuracy of a few centimeters. Nevertheless, the classical SNR analysis method for estimating the reflecting surface-antenna height is limited by an approximation: the vertical velocity of the reflecting surface must be negligible. The authors present a significant improvement of the SNR technique to solve this problem and broaden the scope of SNR-based tide monitoring. The performances achieved on the different GNSS frequency bands (L1, L2 and L5) are analyzed. The method is based on a least-mean-square resolution method (LSM) combining simultaneous measurements from different GNSS constellations (GPS, GLONASS), which makes it possible to take the dynamics of the surface into account. It was validated in situ [1], with an antenna placed 60 meters above the Atlantic Ocean surface, with variations reaching ±3 meters and semi-diurnal tide rates up to 0.5 mm/s. Over the three months of SNR records in the L1 frequency band for sea level determination, we found a linear correlation of 0.94 with a classical tide gauge record. Our SNR-based time series was also compared to a theoretical tide model, and the amplitudes and phases of the main astronomical periods (6-, 12- and 24-h) were well detected. Waves and swell are also likely to be detected. While the validity of our method is already well established for the L1 band [1], the aim of our current study is to analyze the results obtained with the other GNSS frequency bands: L2 and L5. The L1 band seems to provide the best sea
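For the classical static-surface case the abstract starts from, the antenna height can be read off as the dominant oscillation frequency of the detrended SNR in sin(elevation) space. A brute-force sketch of that retrieval (a grid search stands in for the Lomb-Scargle periodogram commonly used; all names are illustrative, and this deliberately omits the dynamic-surface correction the abstract introduces):

```python
import math

WAVELENGTH_L1 = 0.1903  # GPS L1 carrier wavelength in metres

def estimate_antenna_height(sin_elev, snr_detrended, h_min=1.0, h_max=10.0,
                            step=0.01, wavelength=WAVELENGTH_L1):
    """Estimate the reflecting-surface-to-antenna height from detrended
    SNR samples, assuming a static surface.

    The interference pattern oscillates as cos(4*pi*h*sin(e)/lambda + phase);
    a grid search over h picks the height whose oscillation best matches
    the data (i.e., the dominant spectral peak in sin(e) space)."""
    best_h, best_power = h_min, -1.0
    h = h_min
    while h <= h_max:
        # Projecting onto the quadrature pair absorbs the unknown phase.
        c = sum(s * math.cos(4 * math.pi * h * x / wavelength)
                for x, s in zip(sin_elev, snr_detrended))
        q = sum(s * math.sin(4 * math.pi * h * x / wavelength)
                for x, s in zip(sin_elev, snr_detrended))
        power = c * c + q * q
        if power > best_power:
            best_h, best_power = h, power
        h += step
    return best_h
```

When the surface itself moves during the satellite pass, the oscillation frequency drifts, which is precisely the approximation error the LSM method in the abstract is designed to remove.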
Sun, Xiaoli; Abshire, James B.
2011-01-01
Integrated path differential absorption (IPDA) lidar can be used to remotely measure the column density of gases in the path to a scattering target [1]. The total column gas molecular density can be derived from the ratio of the laser echo signal power with the laser wavelength on the gas absorption line (on-line) to that off the line (off-line). Both coherent detection and direct detection IPDA lidar have been used successfully in the past in horizontal path and airborne remote sensing measurements. However, for space based measurements, the signal propagation losses are often orders of magnitude higher and it is important to use the most efficient laser modulation and detection technique to minimize the average laser power and the electrical power from the spacecraft. This paper gives an analysis of the receiver signal-to-noise ratio (SNR) of several laser modulation and detection techniques versus the average received laser power under similar operation environments. Coherent detection [2] can give the best receiver performance when the local oscillator laser is relatively strong and the heterodyne mixing losses are negligible. Coherent detection has a high signal gain and a very narrow bandwidth for the background light and detector dark noise. However, coherent detection must maintain a high degree of coherence between the local oscillator laser and the received signal in both temporal and spatial modes. This often results in a high system complexity and low overall measurement efficiency. For measurements through the atmosphere the coherence diameter of the received signal also limits the useful size of the receiver telescope. Direct detection IPDA lidars are simpler to build and have fewer constraints on the transmitter and receiver components. They can use much larger size 'photon-bucket' type telescopes to reduce the demands on the laser transmitter. Here we consider the two most widely used direct detection IPDA lidar techniques. The first technique uses two CW
Sandborg, Michael; Carlsson, G.A. (Linkoeping Univ. (Sweden). Dept. of Radiation Physics)
1992-06-01
A lower limit to patient irradiation in diagnostic radiology is set by the fundamental stochastics of the energy imparted to the image receptor (quantum noise). Image quality is investigated here and expressed in terms of the signal-to-noise ratio due to quantum noise. The Monte Carlo method is used to calculate signal-to-noise ratios (SNR_ΔS) and detective quantum efficiencies (DQE_ΔS) in imaging thin contrasting details of air, fat, bone and iodine within a water phantom using x-ray spectra (40-140 kV) and detectors of CsI, BaFCl and Gd2O2S. The atomic composition of the contrasting detail influences considerably the values of SNR_ΔS due to the different modulations of the energy spectra of primary photons passing beside and through the contrasting detail. (author).
Michel, T. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany)]. E-mail: thilo.michel@physik.uni-erlangen.de; Anton, G. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Boehnel, M. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Durst, J. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Firsching, M. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Korn, A. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Kreisler, B. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Loehr, A. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Nachtrab, F. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Niederloehner, D. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Sukowski, F. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany); Takoukam Talla, P. [Physikalisches Institut, Universitaet Erlangen-Nuernberg, Erwin-Rommel-Strasse 1, 91058 Erlangen (Germany)
2006-12-01
We outline in this paper that the noise of a photon counting pixel detector depends on the detection efficiency and the average multiplicity of counts per interacting photon. We give a simple expression for the signal-to-noise ratio (SNR) and zero-frequency detective quantum efficiency (DQE). We describe a method to determine the DQE from measured data and to optimize the DQE as a function of energy threshold.
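The dependence described here has a compact zero-frequency form under compound-Poisson counting statistics. The sketch below uses the commonly quoted result DQE(0) = ε·⟨m⟩²/⟨m²⟩ for a counting detector with detection efficiency ε and count multiplicity m; treat the exact correspondence to the paper's expression as an assumption, and the names as illustrative:

```python
def dqe_counting(efficiency, multiplicities, probs):
    """Zero-frequency DQE of a photon-counting detector in which an
    interacting photon registers m counts with the given probability.

    Compound-Poisson statistics give DQE(0) = efficiency * <m>^2 / <m^2>,
    so any spread in the count multiplicity (<m^2> > <m>^2) lowers the
    DQE even at perfect detection efficiency."""
    mean_m = sum(m * p for m, p in zip(multiplicities, probs))
    mean_m2 = sum(m * m * p for m, p in zip(multiplicities, probs))
    return efficiency * mean_m ** 2 / mean_m2
```

With exactly one count per interacting photon the DQE reduces to the detection efficiency, which is why threshold settings that suppress double counting matter for the optimization the abstract describes.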
Ming Chang
2015-01-01
Reducing the radiation dose without sacrificing image quality is an important issue that has drawn the attention of CT manufacturers, and different automatic exposure control (AEC) strategies have been adopted in their products. In this paper, we focus on the strategy of tube current modulation. It is deduced based on the signal-to-noise ratio (SNR) of the sinogram. The main idea behind the proposed modulation strategy is to keep the SNR of the sinogram approximately invariant, using the few-view reconstruction as a reference, because it directly affects the noise level of the reconstructions. The numerical experiment results demonstrate that, compared with constant tube current, the noise distribution is more uniform and the SNR and CNR of the reconstruction are better when the proposed strategy is applied. Furthermore, it has the potential to distinguish low-contrast targets and to reduce the radiation dose.
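The arithmetic underlying SNR-preserving tube current modulation can be sketched under a simplified monoenergetic model. This is not the paper's few-view-reference scheme, just the standard first-order reasoning; the function and parameter names are illustrative:

```python
import math

def tube_current_factors(line_integrals, reference_p):
    """Relative tube-current factors that hold the sinogram SNR constant
    under a monoenergetic model.

    Detected counts for a ray with attenuation line integral p are
    N0 * exp(-p), and the noise of the log-transformed ray value scales
    as 1/sqrt(N).  Matching the noise of a reference attenuation
    reference_p therefore requires scaling the tube output by
    exp(p - reference_p) for each ray or view."""
    return [math.exp(p - reference_p) for p in line_integrals]
```

The exponential growth of the required current with attenuation is why highly attenuating views dominate dose budgets and why practical AEC schemes cap the modulation.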
Lapert, M; Assémat, E; Glaser, S J; Sugny, D
2015-01-28
We show to what extent the signal-to-noise ratio per unit time of a spin 1/2 particle can be maximized. We consider a cyclic repetition of experiments, each made of a measurement followed by a radio-frequency magnetic field excitation of the system, in the case of unbounded amplitude. In the periodic regime, the objective of the control problem is to design the initial state of the system and the pulse sequence which leads to the best signal-to-noise performance. We focus on two specific issues relevant in nuclear magnetic resonance, the crusher gradient and the radiation damping cases. Optimal control techniques are used to solve this non-standard control problem. We discuss the optimality of the Ernst angle solution, which is commonly applied in spectroscopic and medical imaging applications. In the radiation damping situation, we show that in some cases the optimal solution differs from the Ernst one.
Lapert, M.; Glaser, S. J. [Department of Chemistry, Technische Universität München, Lichtenbergstrasse 4, D-85747 Garching (Germany); Assémat, E. [Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), UMR 6303 CNRS-Université de Bourgogne, 9 Ave. A. Savary, BP 47 870, F-21078 Dijon Cedex (France); Department of Chemical Physics, Weizmann Institute of Science, 76100 Rehovot (Israel); Sugny, D., E-mail: dominique.sugny@u-bourgogne.fr [Laboratoire Interdisciplinaire Carnot de Bourgogne (ICB), UMR 6303 CNRS-Université de Bourgogne, 9 Ave. A. Savary, BP 47 870, F-21078 Dijon Cedex (France)
2015-01-28
We show to what extent the signal-to-noise ratio per unit time of a spin 1/2 particle can be maximized. We consider a cyclic repetition of experiments, each made of a measurement followed by a radio-frequency magnetic field excitation of the system, in the case of unbounded amplitude. In the periodic regime, the objective of the control problem is to design the initial state of the system and the pulse sequence which leads to the best signal-to-noise performance. We focus on two specific issues relevant in nuclear magnetic resonance, the crusher gradient and the radiation damping cases. Optimal control techniques are used to solve this non-standard control problem. We discuss the optimality of the Ernst angle solution, which is commonly applied in spectroscopic and medical imaging applications. In the radiation damping situation, we show that in some cases the optimal solution differs from the Ernst one.
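The Ernst angle solution discussed above is the textbook benchmark the optimal-control result is compared against: for a spoiled steady state with repetition time TR and relaxation time T1, the signal is maximized at cos(θ_E) = exp(-TR/T1). A minimal sketch (standard formulas; names are illustrative):

```python
import math

def ernst_angle_deg(tr, t1):
    """Ernst angle (degrees) maximizing the steady-state spoiled
    gradient-echo signal for repetition time tr and longitudinal
    relaxation time t1 (same units): cos(theta_E) = exp(-tr/t1)."""
    return math.degrees(math.acos(math.exp(-tr / t1)))

def spgr_signal(alpha_deg, tr, t1, m0=1.0):
    """Steady-state spoiled gradient-echo signal amplitude for flip
    angle alpha_deg (T2* decay and noise are ignored)."""
    e1 = math.exp(-tr / t1)
    a = math.radians(alpha_deg)
    return m0 * math.sin(a) * (1.0 - e1) / (1.0 - e1 * math.cos(a))
```

The abstract's point is that in the radiation damping regime this closed-form optimum is no longer always the best choice.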
Lee, Young Jae [Dept. of Nuclear Medicine, Seoul National University Hospital, Seoul (Korea, Republic of); Lee, Eul Kyu [Dept. of Radiology, Inje Paik University Hospital Jeo-dong, Seoul (Korea, Republic of); Kim, Ki Won [Dept. of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Jeong, Hoi Woun [Dept. of Radiological Technology, The Baekseok Culture University, Cheonan (Korea, Republic of); Lyu, Kwang Yeul; Park, Hoon Hee; Son, Jin Hyun; Min, Jung Whan [Dept. of Radiological Technology, The Shingu University, Sungnam (Korea, Republic of)
2017-03-15
The purpose of this study was to measure the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) in regions of interest (ROIs) for different reconstructions of breast positron emission tomography-computed tomography (PET-CT) images, and to analyze the CNR and SNR statistically. We examined breast PET-CT images of 100 patients in a University-affiliated hospital, Seoul, Korea. Each patient's breast PET-CT images were analyzed using ImageJ. Differences in CNR and SNR among four reconstruction algorithms were tested with the SPSS Statistics 21 ANOVA test for statistical significance (p<0.05). We analyzed socio-demographic variables, the CNR and SNR of the reconstructed images, 95% confidence intervals for the CNR and SNR of each reconstruction, and differences in mean CNR and SNR. For SNR, image quality ranked in the order PSF-TOF, Iterative and Iterative-TOF, then FBP-TOF; the CNR ranking was the same. The CNR and SNR of PET-CT reconstruction methods for the breast would be useful for evaluating breast diseases.
Li, Xin; Huang, Wei; Rooney, William D
2012-11-01
With advances in magnetic resonance imaging (MRI) technology, dynamic contrast-enhanced (DCE)-MRI is approaching the capability to simultaneously deliver both high spatial and high temporal resolutions for clinical applications. However, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) considerations and their impacts regarding pharmacokinetic modeling of the time-course data continue to represent challenges in the design of DCE-MRI acquisitions. Given that many acquisition parameters can affect the nature of DCE-MRI data, minimizing tissue-specific data acquisition discrepancy (among sites and scanner models) is as important as synchronizing pharmacokinetic modeling approaches. For cancer-related DCE-MRI studies where rapid contrast reagent (CR) extravasation is expected, current DCE-MRI protocols often adopt a three-dimensional fast low-angle shot (FLASH) sequence to achieve spatial-temporal resolution requirements. Based on breast and prostate DCE-MRI data acquired with different FLASH sequence parameters, this paper elucidates a number of SNR and CNR considerations for acquisition optimization and pharmacokinetic modeling implications therein. Simulations based on region of interest data further indicate that the effects of intercompartmental water exchange often play an important role in DCE time-course data modeling, especially for protocols optimized for post-CR SNR.
An, Yatong; Liu, Ziping; Zhang, Song
2016-12-01
This paper evaluates the robustness of our recently proposed geometric constraint-based phase-unwrapping method to unwrap a low signal-to-noise ratio (SNR) phase. Instead of capturing additional images for absolute phase unwrapping, the new phase-unwrapping algorithm uses geometric constraints of the digital fringe projection (DFP) system to create a virtual reference phase map to unwrap the phase pixel by pixel. Both simulation and experimental results demonstrate that this new phase-unwrapping method can successfully unwrap even low-SNR phase maps that pose difficulties for conventional multi-frequency phase-unwrapping methods.
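The pixel-wise unwrapping against a reference map boils down to picking an integer fringe order. A sketch of that step only (the construction of the geometry-derived reference map is specific to the DFP system and not reproduced here; names are illustrative):

```python
import math

TWO_PI = 2.0 * math.pi

def unwrap_with_reference(phi_wrapped, phi_ref):
    """Pixel-wise absolute phase recovery: choose the integer fringe
    order k that places the wrapped phase closest to an approximate
    reference phase,
        phi_abs = phi_wrapped + 2*pi * round((phi_ref - phi_wrapped) / (2*pi)).
    Works as long as the reference is within pi of the true phase."""
    k = round((phi_ref - phi_wrapped) / TWO_PI)
    return phi_wrapped + TWO_PI * k
```

Noise robustness enters through that "within pi" condition: phase noise only breaks the result when it pushes the wrapped value across a fringe-order boundary relative to the reference.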
Laor, Ari; Bahcall, John N.; Jannuzi, Buell T.; Schneider, Donald P.; Green, Richard F.; Hartig, George F.
1994-01-01
We analyze the ultraviolet (UV) emission line and continuum properties of five low-redshift active galactic nuclei (four luminous quasars: PKS 0405-123, H1821 + 643, PG 0953 + 414, and 3C 273, and one bright Seyfert 1 galaxy: Mrk 205). The HST spectra have higher signal-to-noise ratios (typically approximately 60 per resolution element) and spectral resolution (R = 1300) than all previously published UV spectra used to study the emission characteristics of active galactic nuclei. We include in the analysis ground-based optical spectra covering H beta and the narrow (O III) lambda lambda 4959, 5007 doublet. New results are obtained and presented.
Nobukawa, Teruyoshi; Nomura, Takanori
2014-06-10
A high-resolution and multilevel designed reference pattern (DRP) is presented for improvement of both light utilization efficiency and the signal-to-noise ratio (SNR) of reconstructed images in coaxial holographic data storage. With a DRP, the desired Fourier power spectrum of a reference beam is obtained. Numerical and experimental results show that the DRP increases the SNR compared with that of a random phase mask (RPM). Moreover, the light utilization efficiency of the DRP is higher than that of a high-resolution RPM. In addition, the effect of the phase level and the pixel pitch of DRPs on the SNR and the light utilization efficiency are investigated.
Beam Space Formulation of the Maximum Signal-to-Noise Ratio Array Processor.
1980-12-01
To investigate the dependence of the beam-space gains on the number of input beams used, the cross-power spectral matrix was simulated for a number of ... environments; in the first example (figure 9) the noise field exhibited only a weak azimuthal dependence, whereas in figure 10 the presence of a strong ... interference at 06-1 implied a strong azimuthal dependence of the noise field. Both results showed an improvement in the beam-space array gain estimates as the
Christopher M. Bentz
2014-03-01
We compare optical time domain reflectometry (OTDR) techniques based on conventional single impulses, coding, and linear frequency chirps with respect to their signal-to-noise ratio (SNR) enhancements, by measurements in a passive optical network (PON) with a maximum one-way attenuation of 36.6 dB. A total of six subscribers, each represented by a unique mirror pair with narrow reflection bandwidths, are installed within a distance of 14 m. The spatial resolution of the OTDR set-up is 3.0 m.
Triantafyllou, Christina; Polimeni, Jonathan R; Keil, Boris; Wald, Lawrence L
2016-12-01
Physiological nuisance fluctuations ("physiological noise") are a major contribution to the time-series signal-to-noise ratio (tSNR) of functional imaging. While thermal noise correlations between array coil elements have a well-characterized effect on the image signal-to-noise ratio (SNR0), the element-to-element covariance matrix of the time-series fluctuations has not yet been analyzed. We examine this effect with a goal of ultimately improving the combination of multichannel array data. We extend the theoretical relationship between tSNR and SNR0 to include a time-series noise covariance matrix Ψt, distinct from the thermal noise covariance matrix Ψ0, and compare its structure to Ψ0 and the signal coupling matrix SS(H) formed from the signal intensity vectors S. Inclusion of the measured time-series noise covariance matrix into the model relating tSNR and SNR0 improves the fit of experimental multichannel data and is shown to be distinct from Ψ0 or SS(H). Time-series noise covariances in array coils are found to differ from Ψ0 and, more surprisingly, from the signal coupling matrix SS(H). Correct characterization of the time-series noise has implications for the analysis of time-series data and for improving the coil element combination process. Magn Reson Med 76:1708-1719, 2016. © 2016 International Society for Magnetic Resonance in Medicine.
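The scalar version of the tSNR-SNR0 relationship that this work extends to covariance matrices can be sketched as follows (the Krüger-Glover-type model, with lambda the signal-dependent physiological-noise fraction; the numbers are illustrative only):

```python
import numpy as np

def tsnr_from_snr0(snr0, lam):
    """Scalar physiological-noise model: fluctuations scale with signal,
    so tSNR = SNR0 / sqrt(1 + lam**2 * SNR0**2) and saturates at 1/lam."""
    snr0 = np.asarray(snr0, dtype=float)
    return snr0 / np.sqrt(1.0 + lam**2 * snr0**2)

print(tsnr_from_snr0(50, 0.01))    # a bit below SNR0: thermal noise dominates
print(tsnr_from_snr0(500, 0.01))   # nears the 1/lam = 100 ceiling
```

The covariance extension in the abstract replaces the scalar lambda**2 term with the time-series noise covariance matrix Ψt.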
Servin, Manuel
2012-01-01
We analyze the nonlinear Carré 4-step algorithm, including its frequency response, signal-to-noise ratio, and harmonics rejection, using linear systems theory. At first sight the previous statement, as well as the title of this paper, seems paradoxical. How can we analyze the 4-step nonlinear Carré phase-shifting algorithm (PSA) using linear system theory? The short answer is that the nonlinear Carré algorithm may be decomposed into two building blocks. The first block is a tunable linear 4-step PSA, and the second one is a nonlinear phase-step estimator. Although this fact is well known from the derivation of the Carré algorithm, nobody has properly exploited it. In other words, to this day, we do not have explicit mathematical formulae for a) the spectrum, b) the harmonics rejection, and c) the signal-to-noise ratio of the nonlinear Carré algorithm. These are the properties of the Carré PSA that we present here with novel and explicit mathematical formulae.
Date, Arisa; Maeda, Tomoko; Watanabe, Mikio; Hidaka, Yoh; Iwatani, Yoshinori; Takano, Toru
2014-07-01
We established a method to analyze cells collected by fluorescence-activated cell sorting (FACS) named mRNA quantification after FACS (FACS-mQ), in which cells are labeled with a fluorescent dye in a manner that minimizes RNA degradation, and then cells sorted by FACS are examined by analyzing their gene expression profile. In this study, we established a modified protocol to analyze molecules with a low expression level, such as N-cadherin and thyroid transcription factor, by improving the signal to noise ratio in flow cytometry. Use of a fluorophore-conjugated second antibody and the appropriate choice of a fluorescence dye showed a marked increase in the signal to noise ratio. Use of the Can Get Signal Immunostain in diluting antibodies shortened the reaction time. In real-time reverse transcription-PCR, a significant decrease in the copy number of intracellular mRNAs was not observed after in-tube immunostaining. These results indicated that the present protocol is useful for separating and analyzing cells by FACS-mQ, targeting a molecule with a low expression level.
Servin, Manuel; Garnica, Guillermo
2016-01-01
Synthesis of single-wavelength temporal phase-shifting algorithms (PSA) for interferometry is well known and firmly based on the frequency transfer function (FTF) paradigm. Here we extend the single-wavelength FTF theory to dual- and multi-wavelength PSA synthesis when several simultaneous laser colors are present. The FTF-based synthesis for dual-wavelength PSA (DW-PSA) is optimized for high signal-to-noise ratio and a minimum number of temporal phase-shifted interferograms. The DW-PSA synthesis herein presented may be used for interferometric contouring of discontinuous industrial objects. Also, DW-PSA may be useful for DW shop-testing of deep free-form aspheres. As shown here, using the FTF-based synthesis one may easily find explicit DW-PSA formulae optimized for high signal-to-noise ratio and high detuning robustness. To date, no general synthesis and analysis for temporal DW-PSAs has been given; only ad hoc DW-PSA formulas have been reported. Consequently, no explicit formulae for their spectra, their sign...
Dabelsteen, T.; Larsen, O N; Pedersen, Simon Boel
1993-01-01
The habitat-induced degradation of the full song of the blackbird (Turdus merula) was quantified by measuring excess attenuation, reduction of the signal-to-noise ratio, and blur ratio, the latter measure representing the degree of blurring of amplitude and frequency patterns over time. All three...
Jørgensen, Søren; Dau, Torsten
2011-01-01
A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a similar structure as the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNRenv, at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed...
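A drastically simplified, single-band illustration of the envelope-power SNR idea follows (numpy only; the actual sEPSM uses a modulation filterbank and an ideal observer, neither of which is sketched here, and the signals are synthetic stand-ins):

```python
import numpy as np

def envelope(x):
    """Amplitude envelope via an FFT-based analytic signal (no SciPy needed)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def env_power(x):
    """Normalized AC envelope power: variance of the envelope over its squared mean."""
    e = envelope(x)
    return np.var(e) / np.mean(e) ** 2

def snr_env_db(mix, noise, floor=1e-12):
    """Crude single-band SNRenv: excess envelope power of the mix over noise alone."""
    p_mix, p_noise = env_power(mix), env_power(noise)
    return 10 * np.log10(max(p_mix - p_noise, floor) / p_noise)

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(fs) / fs
speech = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)  # 4 Hz-modulated carrier
noise = rng.standard_normal(fs)
low_noise = snr_env_db(speech + 0.1 * noise, 0.1 * noise)   # modulation survives
high_noise = snr_env_db(speech + 3.0 * noise, 3.0 * noise)  # heavy noise erodes SNRenv
```

The point the model formalizes is visible even in this toy: as the masker level rises, the envelope power attributable to the speech modulation shrinks relative to the noise's intrinsic envelope fluctuations.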
Shivaram, Niranjan; Champenois, Elio G.; Cryan, James P.; Wright, Travis; Wingard, Taylor; Belkacem, Ali
2016-12-01
We demonstrate a technique in velocity map imaging (VMI) that allows spatial gating of the laser focal overlap region in time resolved pump-probe experiments. This significantly enhances signal-to-noise ratio by eliminating background signal arising outside the region of spatial overlap of pump and probe beams. This enhancement is achieved by tilting the laser beams with respect to the surface of the VMI electrodes which creates a gradient in flight time for particles born at different points along the beam. By suitably pulsing our microchannel plate detector, we can select particles born only where the laser beams overlap. This spatial gating in velocity map imaging can benefit nearly all photo-ion pump-probe VMI experiments especially when extreme-ultraviolet light or X-rays are involved which produce large background signals on their own.
Jørgensen, Søren; Dau, Torsten
2011-01-01
... rarely been evaluated perceptually in terms of speech intelligibility. This study analyzed the effects of the spectral subtraction strategy proposed by Berouti et al. [ICASSP 4 (1979), 208-211] on the speech recognition threshold (SRT) obtained with sentences presented in stationary speech-shaped noise. The SRT was measured in five normal-hearing listeners in six conditions of spectral subtraction. The results showed an increase of the SRT after processing, i.e. a decreased speech intelligibility, in contrast to what is predicted by the Speech Transmission Index (STI). Here, another approach is proposed, denoted the speech-based envelope power spectrum model (sEPSM), which predicts the intelligibility based on the signal-to-noise ratio in the envelope domain. In contrast to the STI, the sEPSM is sensitive to the increased amount of noise envelope power as a consequence of the spectral subtraction...
Liang, Dandan; Hui, Hon Tat; Yeo, Tat Soon
2013-05-01
A multilayered surface coil array for magnetic resonance imaging with an improved signal-to-noise ratio (SNR) performance is introduced and investigated by a simulation study. By using an effective decoupling method, the strong mutual coupling effect between the coil layers can be accurately removed, leading to a coherent combination of the signals of the individual coils. This results in a much stronger received signal power which increases with the number of coil layers in the array. This, together with a smaller rate of increase of noise power with the number of coil layers, leads to a net increase in the SNR of array output with the number of coil layers in the array. Rigorous numerical simulation examples have been carried out to confirm and verify the performance of the new array.
He, Lian; Lin, Yu; Shang, Yu; Shelton, Brent J.; Yu, Guoqiang
2013-03-01
The dual-wavelength diffuse correlation spectroscopy (DCS) flow-oximeter is an emerging technique enabling simultaneous measurements of blood flow and blood oxygenation changes in deep tissues. High signal-to-noise ratio (SNR) is crucial when applying DCS technologies in the study of human tissues where the detected signals are usually very weak. In this study, single-mode, few-mode, and multimode fibers are compared to explore the possibility of improving the SNR of DCS flow-oximeter measurements. Experiments on liquid phantom solutions and in vivo muscle tissues show only slight improvements in flow measurements when using the few-mode fiber compared with using the single-mode fiber. However, light intensities detected by the few-mode and multimode fibers are increased, leading to significant SNR improvements in detections of phantom optical property and tissue blood oxygenation. The outcomes from this study provide useful guidance for the selection of optical fibers to improve DCS flow-oximeter measurements.
Kiani, M A; Sim, K S; Nia, M E; Tso, C P
2015-05-01
A new technique for scanning electron microscope (SEM) images, based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least squares error filter, is presented. A diversity of sample images is captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all the data required for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for an SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In test cases involving different images, the efficiency of the developed noise reduction filter proved significantly better than that of the other methods. Noise can be reduced efficiently, with an appropriate choice of scan rate, from real-time SEM images, without generating corruption or increasing scanning time.
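The Savitzky-Golay component of such a combined filter can be sketched with plain numpy (generic coefficients from a local least-squares fit; the window length and polynomial order below are illustrative, not the paper's settings):

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Smoothing coefficients: fit a degree-`polyorder` polynomial to each
    window by least squares and evaluate it at the window centre."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, polyorder + 1, increasing=True)  # columns: 1, x, x^2, ...
    return np.linalg.pinv(A)[0]                       # centre value of the fit

def savgol_smooth(y, window=11, polyorder=3):
    """Slide the window over edge-padded data and apply the coefficients."""
    c = savgol_coeffs(window, polyorder)
    half = window // 2
    ypad = np.pad(np.asarray(y, dtype=float), half, mode="edge")
    return np.convolve(ypad, c[::-1], mode="valid")

# Polynomials up to the chosen order pass through unchanged (away from the
# edges), while high-frequency noise is averaged down.
x = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * x)
rng = np.random.default_rng(4)
noisy = clean + 0.1 * rng.standard_normal(x.size)
smoothed = savgol_smooth(noisy)
```

This polynomial-preserving property is what lets the filter suppress noise without flattening genuine peaks the way a plain moving average does.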
Lee, Eul Kyu [Inje Paik University Hospital Jeo-dong, Seoul (Korea, Republic of); Choi, Kwan Woo [Asan Medical Center, Seoul (Korea, Republic of); Jeong, Hoi Woun [The Baekseok Culture University, Cheonan (Korea, Republic of); Jang, Seo Goo [The Soonchunhyang University, Asan (Korea, Republic of); Kim, Ki Won [Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Son, Soon Yong [The Wonkwang Health Science University, Iksan (Korea, Republic of); Min, Jung Whan; Son, Jin Hyun [The Shingu University, Sungnam (Korea, Republic of)
2016-09-15
The purpose of this study was to provide a basis for MRI CAD development by analyzing the signal-to-noise ratio (SNR) of pulse sequences, measured from regions of interest (ROI) in contrast-enhanced brain magnetic resonance imaging (MRI). We examined contrast-enhanced brain MRI images of 117 patients, from January 2005 to December 2015, in a university-affiliated hospital in Seoul, Korea, each diagnosed with one of two brain diseases: meningioma or cyst. The SNR for each patient's brain MRI images was calculated using ImageJ. Differences in SNR between the two diseases were tested with an ANOVA in SPSS Statistics 21, with p < 0.05 considered statistically significant. We analyzed socio-demographic variables, SNR by sequence and disease, 95% confidence intervals for the SNR of each sequence, and differences in mean SNR. For meningioma, image quality ranked in the order T1CE, then T2 and T1, then FLAIR. For cysts, quality ranked in the order T2 and T1, then T1CE, then FLAIR. The SNR of brain MRI sequences would thus be useful for classifying disease. Therefore, this study can contribute to the evaluation of brain diseases and serve as a foundation for enhancing the accuracy of CAD development.
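ROI-based SNR of the kind measured here with ImageJ is commonly estimated as the mean signal in a lesion ROI divided by the standard deviation in a background ROI; a minimal numpy sketch on synthetic data (ROI positions and intensities are invented):

```python
import numpy as np

def roi_snr(image, signal_mask, noise_mask):
    """Single-image SNR estimate: mean over a signal ROI divided by the
    sample standard deviation over a background ROI."""
    return float(image[signal_mask].mean() / image[noise_mask].std(ddof=1))

rng = np.random.default_rng(1)
img = rng.normal(10.0, 2.0, (64, 64))      # background: mean 10, sigma 2
img[20:30, 20:30] += 40.0                  # bright synthetic "lesion"
sig = np.zeros((64, 64), dtype=bool); sig[20:30, 20:30] = True
bg = np.zeros((64, 64), dtype=bool); bg[45:60, 45:60] = True
print(roi_snr(img, sig, bg))               # roughly (10 + 40) / 2 = 25
```

Comparing such per-sequence SNR values (T1, T2, T1CE, FLAIR) across ROIs is exactly the kind of feature a CAD classifier could consume.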
D'Odorico, V; Pomante, E; Carswell, R F; Viel, M; Barai, P; Becker, G D; Calura, F; Cupani, G; Fontanot, F; Haehnelt, M G; Kim, T-S; Miralda-Escude, J; Rorai, A; Tescari, E; Vanzella, E
2016-01-01
In this work, we investigate the abundance and distribution of metals in the intergalactic medium (IGM) at $\langle z \rangle \simeq 2.8$ through the analysis of an ultra-high signal-to-noise ratio UVES spectrum of the quasar HE0940-1050. In the CIV forest, our deep spectrum is sensitive at $3\,\sigma$ to lines with column density down to $\log N_{\rm CIV} \simeq 11.4$ and in 60 percent of the considered redshift range down to $\simeq 11.1$. In our sample, all HI lines with $\log N_{\rm HI} \ge 14.8$ show an associated CIV absorption. In the range $14.0 \le \log N_{\rm HI} < 14.8$, 43 percent of HI lines have an associated CIV absorption. At $\log N_{\rm HI} < 14.0$, the detection rates drop to $<10$ percent, possibly due to our sensitivity limits and not to an actual variation of the gas abundance properties. In the range $\log N_{\rm HI} \ge 14$, we observe a fraction of HI lines with detected CIV a factor of 2 larger than the fraction of HI lines lying in the circum-galactic medium (CGM) of relativel...
Reetzke, Rachel; Lam, Boji Pak-Wing; Xie, Zilong; Sheng, Li; Chandrasekaran, Bharath
2016-01-01
Recognizing speech in adverse listening conditions is a significant cognitive, perceptual, and linguistic challenge, especially for children. Prior studies have yielded mixed results on the impact of bilingualism on speech perception in noise. Methodological variations across studies make it difficult to converge on a conclusion regarding the effect of bilingualism on speech-in-noise performance. Moreover, there is a dearth of speech-in-noise evidence for bilingual children who learn two languages simultaneously. The aim of the present study was to examine the extent to which various adverse listening conditions modulate differences in speech-in-noise performance between monolingual and simultaneous bilingual children. To that end, sentence recognition was assessed in twenty-four school-aged children (12 monolinguals; 12 simultaneous bilinguals, age of English acquisition ≤ 3 yrs.). We implemented a comprehensive speech-in-noise battery to examine recognition of English sentences across different modalities (audio-only, audiovisual), masker types (steady-state pink noise, two-talker babble), and a range of signal-to-noise ratios (SNRs; 0 to -16 dB). Results revealed no difference in performance between monolingual and simultaneous bilingual children for any combination of modality, masker, and SNR. Our findings suggest that when English age of acquisition and socioeconomic status are similar between groups, monolingual and bilingual children exhibit comparable speech-in-noise performance across a range of conditions analogous to everyday listening environments. PMID:27936212
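Constructing stimuli at a prescribed SNR, as in the battery described, amounts to scaling the masker against the fixed-level target; a small sketch of the generic dB formula (not the study's exact calibration, which specified presentation levels, and with synthetic signals in place of recorded sentences):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db,
    then add it to the (unchanged) speech signal."""
    p_s = np.mean(speech**2)
    p_n = np.mean(noise**2)
    gain = np.sqrt(p_s / (p_n * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(0)
s = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)  # stand-in "target"
n = rng.standard_normal(8000)                          # stand-in masker
m = mix_at_snr(s, n, -16.0)   # the hardest condition in the study's 0 to -16 dB range
```

At -16 dB the masker power is roughly forty times the target power, which conveys why such conditions are challenging even for adult listeners.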
Banas, Krzysztof; Banas, Agnieszka M; Heussler, Sascha P; Breese, Mark B H
2018-01-05
In contemporary spectroscopy there is a trend to record spectra with the highest possible spectral resolution. This is clearly justified if the spectral features in the spectrum are very narrow (for example, infra-red spectra of gas samples). However, there is a plethora of samples (in liquid and especially in solid form) where there is natural spectral peak broadening, predominantly due to collisions and proximity. Additionally, there are a number of portable devices (spectrometers) with inherently restricted spectral resolution, spectral range, or both, which are extremely useful in some field applications (archaeology, agriculture, the food industry, cultural heritage, forensic science). In this paper, the influence of spectral resolution, spectral range and signal-to-noise ratio on the identification of high-explosive substances is investigated by applying multivariate statistical methods to Fourier transform infra-red spectral data sets. All mathematical procedures on spectral data for dimension reduction, clustering and validation were implemented within the R open source environment. Copyright © 2017 Elsevier B.V. All rights reserved.
Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P.; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik
2016-01-01
Speech perception runs smoothly and automatically when there is silence in the background, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants’ first- (L1) and second-language (L2). A total of 72 children (10 years old) participated in this study. The to-be-recalled wordlists were played back with two different reverberation times (0.3 and 1.2 s) crossed with two different SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 in comparison with recall of spoken words presented in L1. Words that were presented with a high SNR (+12 dBA) improved recall compared to a low SNR (+3 dBA). Reverberation time interacted with SNR to the effect that at +12 dB the shorter reverberation time improved recall, but at +3 dB it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language. PMID:26834665
Wang, Bo; Goodpaster, Aaron M; Kennedy, Michael A
2013-10-15
A primary goal of metabonomics research is biomarker discovery for human diseases based on differences in metabolic profiles between healthy and diseased patient populations. One of the most significant challenges in biomarker discovery is validation, which implicitly depends on the coefficient of variation (CV) associated with the measurement technique. This paper investigates how the CV of metabolite resonances measured by nuclear magnetic resonance (NMR) spectroscopy depends on signal-to-noise ratio (SNR) and normalization method. CVs were calculated for NMR resonance peaks in a series of NMR spectra of five synthetic urine samples collected over an eight-month period. An inverse correlation was detected between SNR and CV for all normalization methods. Small peaks with SNR below 150 tended to have larger CVs than peaks with SNR above 150, which typically had smaller CVs (5-10%). The inverse relationship between CV and SNR roughly obeyed a log10 dependence. Quotient normalization (QN) tended to produce smaller CVs for smaller peaks, but larger CVs for the strongest peaks in the data, compared to no normalization, normalization to total intensity (NTI) or normalization to an internal standard (NIS). Consequently, quotient normalization appears optimal for validating low concentration metabolites. NTI or NIS appear superior to QN for samples that have very small variation in total signal intensity. While the inverse relationship between CV and log10(SNR) did not strictly hold for all metabolites, weaker concentration metabolites will likely require more rigorous validation as potential biomarkers since they tend to have poorer reproducibility.
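The normalization comparison can be illustrated with a toy quotient-normalization (PQN-style) sketch; the peak values, dilution range, and noise level below are invented for illustration and are not the paper's data:

```python
import numpy as np

def quotient_normalize(spectra, reference=None):
    """Quotient normalization: divide each spectrum by its median ratio to
    a reference spectrum (the across-sample median spectrum by default)."""
    spectra = np.asarray(spectra, dtype=float)
    if reference is None:
        reference = np.median(spectra, axis=0)
    quotients = np.median(spectra / reference, axis=1)
    return spectra / quotients[:, None]

def peak_cv(values):
    """Coefficient of variation (%) of replicate peak intensities."""
    return 100.0 * np.std(values, ddof=1) / np.mean(values)

rng = np.random.default_rng(2)
base = np.array([100.0, 40.0, 8.0, 2.0])            # four "metabolite" peaks
dilutions = rng.uniform(0.7, 1.3, size=10)          # per-sample dilution factor
spectra = dilutions[:, None] * base + rng.normal(0.0, 0.3, (10, 4))
normed = quotient_normalize(spectra)
raw_cvs = [peak_cv(spectra[:, j]) for j in range(4)]
norm_cvs = [peak_cv(normed[:, j]) for j in range(4)]  # dilution variance removed
```

Even in this toy, the weakest peak retains the largest CV after normalization, mirroring the inverse CV-SNR relationship the paper reports.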
Olsson, Per-Ivar; Dahlin, Torleif; Fiandaca, Gianluca; Auken, Esben
2015-12-01
Combined resistivity and time-domain direct current induced polarization (DCIP) measurements are traditionally carried out with a 50% duty cycle current waveform, taking the resistivity measurements during the on-time and the IP measurements during the off-time. One drawback with this method is that only half of the acquisition time is available for resistivity and IP measurements, respectively. In this paper, this limitation is solved by using a current injection with 100% duty cycle and also taking the IP measurements in the on-time. With numerical modelling of current waveforms with 50% and 100% duty cycles we show that the waveforms have comparable sensitivity for the spectral Cole-Cole parameters and that signal level is increased up to a factor of 2 if the 100% duty cycle waveform is used. The inversion of field data acquired with both waveforms confirms the modelling results and shows that it is possible to retrieve similar inversion models with either of the waveforms when inverting for the spectral Cole-Cole parameters with the waveform of the injected current included in the forward computations. Consequently, our results show that on-time measurements of IP can reduce the acquisition time by up to 50% and increase the signal-to-noise ratio by up to 100% almost without information loss. Our findings can contribute and have a large impact for DCIP surveys in general and especially for surveys where time and reliable data quality are important factors. Specifically, the findings are of value for DCIP surveys conducted in urban areas where anthropogenic noise is an issue and the heterogeneous subsurface demands time-consuming 3D acquisitions.
Soares, Edward J.; Gifford, Howard C.; Glick, Stephen J.
2003-05-01
We investigated the estimation of the ensemble channelized Hotelling observer (CHO) signal-to-noise ratio (SNR) for ordered-subset (OS) image reconstruction using noisy projection data. Previously, we computed the ensemble CHO SNR using a method for approximating the channelized covariance of OS reconstruction, which requires knowledge of the noise-free projection data. Here, we use a "plug-in" approach, in which noisy data is used in place of the noise-free data in the aforementioned channelized covariance approximation. Additionally, we evaluated the use of smoothing of the noisy projections before their use in the covariance approximation. The task was detection of a 10% contrast Gaussian signal within a slice of the MCAT phantom. Simulated projections of the MCAT phantom were scaled and Poisson noise was added to create 100 noisy signal-absent data sets. Simulated projections of the scaled signal were then added to the noisy background projections to create 100 noisy signal-present data sets. These noisy data sets were then used to generate 100 estimates of the ensemble CHO SNR for reconstructions at various iterates. For comparison purposes, the same calculation was repeated with the noise-free data. The results, reported as plots of the average CHO SNR generated in this fashion, along with 95% confidence intervals, demonstrate that this approach works very well and would allow optimization of imaging systems and reconstruction methods using a more accurate object model (i.e., real patient data).
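The observer SNR in question is the Hotelling SNR computed on channel outputs; a generic sketch on synthetic Gaussian features (four hypothetical channels with unit covariance, not the paper's channel model or covariance approximation):

```python
import numpy as np

def cho_snr(present, absent):
    """Hotelling observer SNR on channelized data:
    SNR^2 = dv^T K^{-1} dv, where dv is the mean feature difference and
    K is the class-averaged feature covariance."""
    dv = present.mean(axis=0) - absent.mean(axis=0)
    K = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(K, dv)))

# Synthetic check: unit-covariance features separated by 2 in one channel
# have a true Hotelling SNR of exactly 2.
rng = np.random.default_rng(3)
absent = rng.standard_normal((20000, 4))
present = rng.standard_normal((20000, 4)) + np.array([2.0, 0.0, 0.0, 0.0])
print(cho_snr(present, absent))   # estimate close to 2
```

In the paper's setting, the channel means and covariance come from the reconstructed images (or the covariance approximation), not from i.i.d. Gaussian draws as here.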
Wang, Xin; Wang, Xiang Jiang; Song, Hui Sheng; Chen, Long Hua
2015-05-01
The aim of this study was to evaluate the diagnostic performance of the use of total choline signal-to-noise ratio (tCho SNR) criteria in MRS studies for benign/malignant discrimination of focal breast lesions. We conducted (1) a meta-analysis based on 10 studies including 480 malignant breast lesions and 312 benign breast lesions and (2) a subgroup meta-analysis of tCho SNR ≥ 2 as cutoff for malignancy based on 7 studies including 371 malignant breast lesions and 239 benign breast lesions. (1) The pooled sensitivity and specificity of proton MRS with tCho SNR were 0.74 (95 % CI 0.69-0.77) and 0.76 (95 % CI 0.71-0.81), respectively. The PLR and NLR were 3.67 (95 % CI 2.30-5.83) and 0.25 (95 % CI 0.14-0.42), respectively. From the fitted SROC, the AUC and Q* index were 0.89 and 0.82. Publication bias was present (t = 2.46, P = 0.039). (2) Meta-regression analysis suggested that neither threshold effect nor evaluated covariates including strength of field, pulse sequence, TR and TE were sources of heterogeneity (all P value >0.05). (3) Subgroup meta-analysis: The pooled sensitivity and specificity were 0.79 and 0.72, respectively. The PLR and NLR were 3.49 and 0.20, respectively. The AUC and Q* index were 0.92 and 0.85. The use of tCho SNR criteria in MRS studies was helpful for differentiation between malignant and benign breast lesions. However, pooled diagnostic measures might be overestimated due to publication bias. A tCho SNR ≥ 2 as cutoff for malignancy resulted in higher diagnostic accuracy.
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Molodtsov, Dmitriy Y.; Rodin, Vladislav G.; Shifrina, Anna V.
2016-04-01
The majority of existing methods of optical encryption use not only the light intensity distribution, easily registered with photosensors, but also its phase distribution, which provides the best encryption strength for a fixed number of elements and phase levels in a mask. The downsides are the holographic registration scheme required to record the phase distribution in addition to the intensity distribution, and the speckle noise that occurs due to coherent illumination. These factors lead to very poor decryption quality when moving from computer simulations to optical implementations. The method of optical encryption with spatially incoherent illumination does not have the drawbacks inherent to coherent systems; however, as only the light intensity distribution is considered, the mean value of the image to be encrypted is always above zero, which leads to an intensive zero-spatial-frequency peak in the image spectrum. Therefore, in the case of spatially incoherent illumination, the image spectrum, as well as the encryption key spectrum, cannot be white. If encryption is based on a convolution operation, no matter whether coherent light is used or not, the Fourier spectrum amplitude distribution of the encryption key should overlap the Fourier spectrum amplitude distribution of the image to be encrypted; otherwise, loss of information is unavoidable. Another factor affecting decrypted image quality is the original image spectrum. Usually, most of the image energy is concentrated in the area of low frequencies. Consequently, only this area in the encrypted image contains information about the original image, while other areas contain only noise. We propose to use additional encoding of the input scene to increase the size of the area containing useful information. This increases the signal-to-noise ratio in the encrypted image and consequently the quality of decrypted images. Results of computer simulations of optical encryption of test images with spatially incoherent illumination and additional input amplitude masks are presented.
Hertel, Dirk
2009-01-01
In the emerging field of automotive vision, video capture is the critical front-end of driver assistance and active safety systems. Previous photospace measurements have shown that light levels in natural traffic scenes may contain an extremely wide intra-scene intensity range. This requires the camera to have a wide dynamic range (WDR) for it to adapt quickly to changing lighting conditions and to reliably capture all scene detail. Multiple-slope CMOS technology offers a cost-effective way of adaptively extending dynamic range by partially resetting (recharging) the CMOS pixel once or more often within each frame time. This avoids saturation and leads to a response curve with piecewise linear slopes of progressively increasing compression. It was observed that the image quality from multiple-slope image capture is strongly dependent on the control (height and time) of each reset barrier. As compression and thus dynamic range increase there is a trade-off against contrast and detail loss. Incremental signal-to-noise ratio (iSNR) is proposed in ISO 15739 for determining dynamic range. Measurements and computer simulations revealed that the observed trade-off between WDR extension and the loss of local detail could be explained by a drop in iSNR at each reset point. If a reset barrier is not optimally placed then iSNR may drop below the detection limit so that an 'iSNR hole' appears in the dynamic range. Thus ISO 15739 iSNR has gained extended utility: it not only measures the dynamic range limits but also defines dynamic range as the intensity range where detail detection is reliable. It has become a critical criterion when designing adaptive barrier control algorithms that maximize dynamic range while maintaining the minimum necessary level of detection reliability.
Viumdal, Håkon; Mylvaganam, Saba
2014-03-01
Buffer rods (BRs) as waveguides in ultrasonic time domain reflectometry (TDR) can extend the range of industrial applications of ultrasonics. Level, temperature and flow measurements involving elevated temperatures, corrosive fluids and generally harsh environments are some of the applications in which conventional ultrasonic transducers cannot be used in direct contact with the media. In such cases, BRs with some design modifications can make ultrasonic TDR measurements possible, with limited success. This paper deals with TDR for distance measurements in extremely hot fluids, using conventional ultrasonic transducers in combination with BRs. When using BRs in ultrasonic measurement systems at extreme temperatures, problems associated with the size and material of the buffer have to be addressed. The resonant frequency of the transducer and the size of the transducer relative to the diameter of the BR are also important parameters influencing the signal-to-noise ratio (SNR) of the signal processing system used in ultrasonic TDR. This paper gives an overview of design aspects of BRs, with special emphasis on the tapers and cladding used on them. As protective cladding, a zirconium oxide-yttrium oxide composite was used, with its proven thermal stability in withstanding temperatures up to 1650 °C in rocket and jet engines. In general, a BR should guide the signals to the medium and back to the transducer without excessive attenuation, and at the same time should not exacerbate the noise in the measurement system. The SNR is the decisive performance indicator in the design of BR-based ultrasonic TDR, along with an appropriate transducer of suitable size and operating frequency. This work presents and analyses results from extensive experiments on fine-tuning both the geometry of and the signals in cladded/uncladded BRs used in high-temperature ultrasonic TDR with focus on overall performance based on
Giassi, Davide; Long, Marshall B.
2016-08-01
Two alternative image readout approaches are demonstrated to improve the signal-to-noise ratio (SNR) in temporally resolved laser-based imaging experiments of turbulent phenomena. The first method exploits the temporal decay characteristics of the phosphor screens of image intensifiers when coupled to an interline-transfer CCD camera operated in double-frame mode. Specifically, the light emitted by the phosphor screen, which has a finite decay constant, is equally distributed and recorded over the two sequential frames of the detector so that an averaged image can be reconstructed. The characterization of both detector and image intensifier showed that the technique preserves the correct quantitative information, and its applicability to reactive flows was verified using planar Rayleigh scattering and tested with the acquisition of images of both steady and turbulent partially premixed methane/air flames. The comparison between conventional Rayleigh results and the averaged ones showed that the SNR of the averaged image is higher than the conventional one; with the setup used in this work, the gain in SNR was seen to approach 30 %, for both the steady and turbulent cases. The second technique uses the two-frame readout of an interline-transfer CCD to increase the image SNR based on high dynamic range imaging, and it was tested in an unsteady non-reactive flow of Freon-12 injected in air. The result showed a 15 % increase in the SNR of the low-pixel-count regions of an image, when compared to the pixels of a conventionally averaged one.
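The SNR benefit of splitting the intensifier's phosphor decay over two detector frames and averaging can be sketched with a toy model. The equal split and independent Gaussian read noise here are idealizations (the ~30% gain reported above reflects the actual, unequal split and real noise sources); under these assumptions the gain approaches √2.

```python
import numpy as np

rng = np.random.default_rng(1)
true_signal = 100.0
n = 20_000
read_noise = 10.0

# Two sequential frames that share the phosphor-decay signal equally and
# carry independent read noise (an idealized equal split).
frame1 = true_signal + rng.normal(0.0, read_noise, n)
frame2 = true_signal + rng.normal(0.0, read_noise, n)

averaged = 0.5 * (frame1 + frame2)

snr_single = true_signal / frame1.std()
snr_avg = true_signal / averaged.std()
gain = snr_avg / snr_single  # ~sqrt(2) ~= 1.41 for equal, independent frames
```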
Farahi, Morteza; Rojas, Monica; Mañanas, Miguel Angel; Farina, Dario
2016-01-01
Knowledge of the location of muscle Innervation Zones (IZs) is important in many applications, e.g. for minimizing the quantity of injected botulinum toxin for the treatment of spasticity or for deciding on the type of episiotomy during child delivery. Surface EMG (sEMG) can be noninvasively recorded to assess physiological and morphological characteristics of contracting muscles. However, it is not often possible to record signals of high quality. Moreover, muscles could have multiple IZs, which should all be identified. We designed a fully-automatic algorithm based on the enhanced image Graph-Cut segmentation and morphological image processing methods to identify up to five IZs in 60-ms intervals of very-low to moderate quality sEMG signal detected with multi-channel electrodes (20 bipolar channels with Inter Electrode Distance (IED) of 5 mm). An anisotropic multilayered cylinder model was used to simulate 750 sEMG signals with signal-to-noise ratio ranging from -5 to 15 dB (using Gaussian noise) and in each 60-ms signal frame, 1 to 5 IZs were included. The micro- and macro- averaged performance indices were then reported for the proposed IZ detection algorithm. In the micro-averaging procedure, the number of True Positives, False Positives and False Negatives in each frame were summed up to generate cumulative measures. In the macro-averaging, on the other hand, precision and recall were calculated for each frame and their averages are used to determine F1-score. Overall, the micro (macro)-averaged sensitivity, precision and F1-score of the algorithm for IZ channel identification were 82.7% (87.5%), 92.9% (94.0%) and 87.5% (90.6%), respectively. For the correctly identified IZ locations, the average bias error was of 0.02±0.10 IED ratio. Also, the average absolute conduction velocity estimation error was 0.41±0.40 m/s for such frames. The sensitivity analysis including increasing IED and reducing interpolation coefficient for time samples was performed
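The micro- versus macro-averaging procedures described above can be sketched as follows; the per-frame (TP, FP, FN) counts are made up for illustration.

```python
def micro_macro_f1(frames):
    """Compute micro- and macro-averaged F1 from per-frame (TP, FP, FN) counts.

    Micro: pool counts across all frames, then compute precision/recall once.
    Macro: compute precision/recall per frame, average them, then form F1.
    """
    # micro-averaging: sum counts over frames before forming the ratios
    TP = sum(t for t, _, _ in frames)
    FP = sum(f for _, f, _ in frames)
    FN = sum(f for _, _, f in frames)
    micro_p = TP / (TP + FP)
    micro_r = TP / (TP + FN)
    micro = 2 * micro_p * micro_r / (micro_p + micro_r)

    # macro-averaging: per-frame precision/recall, averaged before F1
    ps = [t / (t + f) if t + f else 0.0 for t, f, _ in frames]
    rs = [t / (t + f) if t + f else 0.0 for t, _, f in frames]
    macro_p = sum(ps) / len(ps)
    macro_r = sum(rs) / len(rs)
    macro = 2 * macro_p * macro_r / (macro_p + macro_r)
    return micro, macro

frames = [(8, 2, 1), (3, 1, 2)]  # hypothetical (TP, FP, FN) per 60-ms frame
micro_f1, macro_f1 = micro_macro_f1(frames)
```

Micro-averaging weights every detection equally, so frames with many IZs dominate; macro-averaging weights every frame equally, which is why the two scores reported in the abstract differ.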
Babiloni, Fabio; Babiloni, Claudio; Carducci, Filippo; Romani, Gian Luca; Rossini, Paolo M; Angelone, Leonardo M; Cincotti, Febo
2004-05-01
Previous simulation studies have stressed the importance of the multimodal integration of electroencephalography (EEG) and magnetoencephalography (MEG) data in the estimation of cortical current density. In such studies, no systematic variations of the signal-to-noise ratio (SNR) or of the number of sensors were explicitly taken into account in the estimation process. We investigated the effects of variable SNR and number of sensors on the accuracy of current density estimates using multimodal EEG and MEG data. This was done by using as dependent variables both the correlation coefficient (CC) and the relative error (RE) between imposed and estimated waveforms at the level of cortical regions of interest (ROIs). A realistic head and cortical surface model was used. Factors used in the simulations were: (1) the SNR of the simulated scalp data (seven levels: infinite, 30, 20, 10, 5, 3, 1); (2) the inverse operator used to estimate the cortical source activity from the simulated scalp data (INVERSE, two levels: minimum norm and weighted minimum norm); and (3) the number of EEG or MEG sensors employed in the analysis (SENSORS, three levels: 128, 61 or 29 for EEG and 153, 61 or 38 for MEG). Analysis of variance demonstrated that all the considered factors significantly affect the CC and RE indexes. Combined EEG-MEG data produced statistically significantly lower RE and higher CC in source current density reconstructions compared to those estimated from the EEG and MEG data considered separately. These observations hold for the range of SNR values presented by the analyzed data. The superiority of current density estimation by multimodal integration of EEG and MEG was not due to differences in the number of sensors between the unimodal (EEG, MEG) and combined (EEG-MEG) inverse estimates. In fact, the current density estimate for the EEG-MEG multimodal integration involved 61 EEG plus 63 MEG sensors, whereas estimations carried out
Jiahong Zhang
2016-10-01
In order to meet the requirements of high sensitivity and signal-to-noise ratio (SNR), this study develops and optimizes a piezoresistive pressure sensor using double silicon nanowires (SiNWs) as the piezoresistive sensing element. First, the ANSYS finite element method and voltage noise models are adopted to optimize the sensor size and the sensor output (sensitivity, voltage noise and SNR). As a result, the released double-SiNW sensor has 1.2 times the sensitivity of the single-SiNW sensor, consistent with the experimental result. Our results also show that both the sensitivity and the SNR are closely related to the geometry of the SiNW and its doping concentration. To achieve high performance, a p-type implantation of 5 × 10^18 cm^-3, a 10 µm long SiNW piezoresistor with a 1400 nm × 100 nm cross-section and a 6 µm thick diaphragm of 200 µm × 200 µm are required. The proposed SiNW pressure sensor is then fabricated using a standard complementary metal-oxide-semiconductor (CMOS) lithography process and a wet-etch release process. This SiNW pressure sensor produces a change in the voltage output when external pressure is applied. The experimental results show that the pressure sensor has a high sensitivity of 495 mV/V·MPa in the range of 0-100 kPa. Nevertheless, the performance of the pressure sensor is influenced by temperature drift. Finally, to obtain accurate and complete information over wide temperature and pressure ranges, a data fusion technique is proposed based on a back-propagation (BP) neural network improved by the particle swarm optimization (PSO) algorithm. The PSO-BP model is implemented in hardware using a 32-bit STMicroelectronics (STM32) microcontroller. The results of calibration and test experiments clearly prove that the PSO-BP neural network can be effectively applied
Bharadwaj, P.
2013-01-10
The theory of supervirtual interferometry is modified so that free-surface-related multiple refractions can be used to enhance the signal-to-noise ratio (SNR) of primary refraction events by a factor proportional to √Ns, where Ns is the number of post-critical sources for a specified refraction multiple. We also show that refraction multiples can be transformed into primary refraction events recorded at virtual hydrophones located between the actual hydrophones. Thus, data recorded by a coarse sampling of ocean bottom seismic (OBS) stations can be transformed, in principle, into a virtual survey with P times more OBS stations, where P is the order of the visible free-surface-related multiple refractions. The key assumption is that the refraction arrivals are those of head waves, not pure diving waves. The effectiveness of this method is validated with both synthetic OBS data and an OBS data set recorded offshore of Taiwan. Results show the successful reconstruction of far-offset traces out to a source-receiver offset of 120 km. The primary supervirtual traces increase the number of pickable first arrivals from approximately 1600 to more than 3100 for a subset of the OBS data set where the source is only on one side of the recording stations. In addition, the head waves associated with the first-order free-surface refraction multiples allow the creation of six new common receiver gathers recorded at virtual OBS stations located about halfway between the actual OBS stations. This doubles the number of OBS stations compared to the original survey and increases the total number of pickable traces from approximately 1600 to more than 6200. In summary, our results with the OBS data demonstrate that refraction interferometry can sometimes more than quadruple the number of usable traces, increase the source-receiver offsets, fill in the receiver line with a denser distribution of OBS stations, and provide more reliable picking of first arrivals. A potential liability
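The √Ns SNR enhancement from stacking post-critical sources can be illustrated with a toy stacking experiment. The Gaussian arrival, the noise level, and the measurement windows are assumptions for illustration, not the supervirtual-interferometry processing itself.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)
arrival = 5.0 * np.exp(-((t - 0.5) ** 2) / 0.001)  # idealized head-wave arrival
i0 = int(np.argmax(arrival))                        # known arrival sample

def stacked_snr(ns, trials=200):
    """Mean SNR after stacking ns noisy copies of the arrival (noise std 1)."""
    snr = 0.0
    for _ in range(trials):
        stack = np.zeros_like(t)
        for _ in range(ns):
            stack += arrival + rng.normal(0.0, 1.0, t.size)
        stack /= ns
        noise = stack[t < 0.3].std()  # signal-free window before the arrival
        snr += stack[i0] / noise
    return snr / trials

snr1, snr4, snr16 = (stacked_snr(n) for n in (1, 4, 16))
# SNR grows roughly as sqrt(ns): 4 sources ~double it, 16 sources ~quadruple it.
```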
Min, Eungi [Department of IT Convergence, Korea University, Seoul (Korea, Republic of); School of Biomedical Engineering, Korea University, Seoul (Korea, Republic of); Ko, Mincheol [School of Biomedical Engineering, Korea University, Seoul (Korea, Republic of); Department of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Lee, Hakjae [School of Biomedical Engineering, Korea University, Seoul (Korea, Republic of); Research Institute of Health Science, Korea University, Seoul (Korea, Republic of); Kim, Yongkwon [NuCare Medical Systems, Incheon (Korea, Republic of); Research Institute of Health Science, Korea University, Seoul (Korea, Republic of); Joung, Jinhun [NuCare Medical Systems, Incheon (Korea, Republic of); Joo, Sung-Kwan [School of Electrical Engineering, Korea University, Seoul (Korea, Republic of); Lee, Kisung, E-mail: kisung@korea.ac.kr [School of Biomedical Engineering, Korea University, Seoul (Korea, Republic of); Department of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Department of Medical Devices, Korea University Guro Hospital, Seoul (Korea, Republic of)
2014-09-11
The spectroscopic radiation portal monitor (SPM) is widely used for homeland security. Many research groups are studying radionuclide identification methods, one of the most important factors in the performance of SPMs using large thallium-activated sodium iodide (NaI(Tl)) detectors. In this study, we developed a radionuclide identification method for a pedestrian-screening SPM using a single NaI(Tl) detector that is small in size (2 in.), much smaller than those in the existing studies, under low signal-to-noise ratio (SNR) conditions. From the anomalous radionuclide spectrum, the noise component was effectively reduced by wavelet decomposition and the proposed background subtraction method, and the signal component was enhanced by principal component analysis. Finally, peak locations, determined by a peak search algorithm with a valley check method, were compared with a pre-calibrated radionuclide database. To verify the radionuclide identification performance of the proposed method, experiments with various sources (137Cs, 133Ba, 22Na and 57Co) and different SNR values (distances of 10-150 cm and scan times of 1-5 s) were performed. Although the high-SNR condition was explored as well, most experiments were conducted under low-SNR conditions to verify the robustness and reproducibility of the proposed algorithm. The results showed that a single-radionuclide detection rate of over 98.3% was achieved, regardless of which radionuclides were used, up to 50 cm under the worst SNR condition (1 s scan time) and up to 90 cm under the best SNR condition (5 s scan time). Furthermore, we achieved accurate identification of multiple radionuclides at 40 cm with only 1 s of scan time. The results show that the proposed algorithm is competitive with the commercial method and our radionuclide identification method can be successfully applied
Buret, J.L.; Vuillaume, D.
1995-02-10
The process for increasing the signal-to-noise ratio in the inspection of metallic tubes which have been cold pilger rolled consists of subjecting the tube to one or more drawing passes to reduce its external diameter.
Tu, Tsang-Wei; Budde, Matthew D; Xie, Mingqiang; Chen, Ying-Jr; Wang, Qing; Quirk, James D; Song, Sheng-Kwei
2014-12-01
To improve the signal-to-noise ratio of in vivo mouse spinal cord diffusion tensor imaging using a phase-aligned multiple spin-echo technique. In vivo mouse spinal cord diffusion tensor imaging maps generated by multiple spin-echo and conventional spin-echo diffusion weighting were examined to demonstrate the efficacy of the multiple spin-echo diffusion sequence in improving image quality and throughput. The effects of signal averaging using complex, magnitude and phased images from multiple spin-echo diffusion weighting were also assessed. Bayesian probability theory was used to generate phased images by moving the coherent signals to the real channel, eliminating the effect of phase variation between echoes while preserving the Gaussian noise distribution. Signal averaging of phased multiple spin-echo images potentially solves both the phase incoherence problem and the bias of the elevated Rician noise distribution in magnitude images. The proposed signal averaging with Bayesian phase-aligned multiple spin-echo images was compared to conventional spin-echo data acquired with double the scan time. The diffusion tensor imaging parameters were compared in mouse contusion spinal cord injury. Significance level (p-value) and effect size (Cohen's d) between the control and contused spinal cord were reported to inspect the sensitivity of each approach in detecting white matter pathology. Compared to the spin-echo image, the signal-to-noise ratio in the spinal cord white matter increased 1.84-fold using phased image averaging and 1.30-fold using magnitude image averaging. Multiple spin-echo phased image averaging showed the best image quality of the mouse spinal cord among the tested methods. Diffusion tensor imaging metrics obtained from multiple spin-echo phased images using three echoes and two averages closely agreed with those derived from spin-echo magnitude data with four averages (twice the acquisition time). The phased image averaging correctly
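The Rician-bias argument behind phased-image averaging can be illustrated with a toy simulation (this sketch is not the authors' Bayesian phasing procedure): averaging magnitudes of pure complex noise leaves a nonzero floor of σ√(π/2), while averaging a phase-aligned real channel stays zero-mean and unbiased.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
sigma = 1.0

# Background (signal-free) complex measurements: zero-mean Gaussian noise
# in both the real and imaginary channels.
noise = rng.normal(0.0, sigma, n) + 1j * rng.normal(0.0, sigma, n)

# Magnitude averaging: |noise| follows a Rayleigh distribution, so the
# averaged background floor is biased up to sigma*sqrt(pi/2) ~= 1.25*sigma.
mag_mean = np.abs(noise).mean()

# Phase-aligned (real-channel) averaging: remains zero-mean Gaussian, unbiased.
phased_mean = noise.real.mean()
```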
Chitgarha, Mohammad Reza; Khaleghi, Salman; Daab, Wajih; Almaiman, Ahmed; Ziyadi, Morteza; Mohajerin-Ariaei, Amirhossein; Rogawski, Devora; Tur, Moshe; Touch, Joseph D; Vusirikala, Vijay; Zhao, Wendy; Willner, Alan E
2014-03-15
We demonstrated a delay-line interferometer (DLI)-based optical signal-to-noise ratio (OSNR) monitoring scheme for 100 Gbit/s polarization-multiplexed quadrature-phase-shift-keying (PM-QPSK) four-channel WDM on the 50-GHz International Telecommunication Union (ITU) grid, and demonstrated the data-format transparency and baud-rate tunability of the OSNR monitor by measuring the OSNR for a 200 Gbit/s PM-16-QAM (25-Gbaud) signal and a 200 Gbit/s PM-QPSK (50-Gbaud) signal. We also explored different monitor parameters, including the shape of the filter spectrum, the bandwidth of the filter, the DLI delay, and the DLI phase detuning, to determine design guidelines for a desired level of accuracy of the OSNR monitor in an optical network.
王一枫; 何秀凤; 季君
2014-01-01
A right-hand circular polarization (RHCP) antenna and a left-hand circular polarization (LHCP) antenna were used to receive direct and reflected GPS signals, respectively. The reflection points were tracked and the soil roughness was detected. The GPS signals were verified to be sensitive to variation of the soil moisture through denoising analysis of the signal-to-noise ratio of the reflected GPS signals with the wavelet method in dry and wet soil regions. The results show that the reflection points can be precisely tracked with the reflected GPS signals, and that the signal-to-noise ratio of the reflected GPS signals can reflect the variation of the soil moisture.
Paul, Jijo, E-mail: jijopaul1980@gmail.com [Department of Diagnostic Radiology, Goethe University Hospital, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Department of Biophysics, Goethe University, Max von Laue-Str.1, 60438 Frankfurt am Main (Germany); Bauer, Ralf W. [Department of Diagnostic Radiology, Goethe University Hospital, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany); Maentele, Werner [Department of Biophysics, Goethe University, Max von Laue-Str.1, 60438 Frankfurt am Main (Germany); Vogl, Thomas J. [Department of Diagnostic Radiology, Goethe University Hospital, Theodor-Stern-Kai 7, 60590 Frankfurt am Main (Germany)
2011-11-15
Objective: The purpose of this study was to evaluate image fusion in dual-energy computed tomography for detecting various anatomic structures, based on the effect on contrast enhancement, contrast-to-noise ratio (CNR), signal-to-noise ratio (SNR) and image quality. Material and methods: Forty patients underwent a CT of the neck in dual-energy mode (DECT) on a Somatom Definition Flash dual-source CT scanner (Siemens, Forchheim, Germany). Tube voltage: 80 kV and Sn140 kV; tube current: 110 and 290 mA s; collimation: 2 × 32 × 0.6 mm. Raw data were reconstructed using a soft convolution kernel (D30f). Fused images were calculated using a spectrum of weighting factors (0.0, 0.3, 0.6, 0.8 and 1.0) generating different ratios between the 80- and Sn140-kV images (e.g. factor 0.6 corresponds to 60% of the information from the 80-kV image and 40% from the Sn140-kV image). CT values and SNRs were measured in the ascending aorta, thyroid gland, fat, muscle, CSF, spinal cord, bone marrow and brain. In addition, CNR values were calculated for the aorta, thyroid, muscle and brain. Subjective image quality was evaluated using a 5-point grading scale. Results were compared using paired t-tests and the nonparametric paired Wilcoxon-Wilcox test. Results: Statistically significant increases in mean CT values were noted in anatomic structures as increasing weighting factors were used (all P ≤ 0.001). For example, mean CT values derived from the contrast-enhanced aorta were 149.2 ± 12.8 Hounsfield Units (HU), 204.8 ± 14.4 HU, 267.5 ± 18.6 HU, 311.9 ± 22.3 HU and 347.3 ± 24.7 HU when the weighting factors 0.0, 0.3, 0.6, 0.8 and 1.0 were used. The highest SNR and CNR values were found when the weighting factor 0.6 was used. The difference in CNR between the weighting factors 0.6 and 0.3 was statistically significant in the contrast-enhanced aorta and thyroid gland (P = 0.012 and P = 0.016, respectively). Visual assessment of image quality showed the highest score for the data reconstructed using the
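The weighted fusion can be sketched as a pixel-wise linear blend of the two tube-voltage images. The example below reuses only the two aorta endpoint values from the abstract (weighting factors 0.0 and 1.0); the intermediate blended values are a linear idealization and will not exactly match the measured ROI means.

```python
def fuse(im80, im140, w):
    """Linear dual-energy fusion: w weights the 80-kV image, (1-w) the Sn140-kV one."""
    return w * im80 + (1.0 - w) * im140

# Mean aorta CT values (HU) at the two endpoints, from the abstract:
# weighting factor 1.0 -> pure 80-kV image, 0.0 -> pure Sn140-kV image.
aorta_80, aorta_140 = 347.3, 149.2

fused = {w: fuse(aorta_80, aorta_140, w) for w in (0.0, 0.3, 0.6, 0.8, 1.0)}
# CT value rises monotonically with w, as the 80-kV (higher-iodine-contrast)
# image contributes more.
```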
Razifar, Pasha [Molecular Imaging and CT Research, GE Healthcare, WI 53188, Waukesha (United States); Engler, Henry [Department of Medical Science, Uppsala University, SE-751 85 Uppsala (Sweden); Blomquist, Gunnar [Department of Oncology, Radiology and Clinical Immunology, Uppsala University, SE-751 85 Uppsala (Sweden); Ringheim, Anna; Estrada, Sergio [Uppsala Imanet AB, GE Healthcare, Box 967, SE-751 09, Uppsala (Sweden); Laangstroem, Bengt [Department of Biochemistry and Organic Chemistry, Uppsala University, SE-751 24 Uppsala (Sweden); Bergstroem, Mats [Department of Pharmaceutical Biosciences, Uppsala University, SE-751 24 Uppsala (Sweden)
2009-06-07
This study introduces a new approach for the application of principal component analysis (PCA) with pre-normalization on dynamic positron emission tomography (PET) images. These images are generated using the amyloid imaging agent N-methyl [11C]2-(4'-methylaminophenyl)-6-hydroxy-benzothiazole ([11C]PIB) in patients with Alzheimer's disease (AD) and healthy volunteers (HVs). The aim was to introduce a method which, by using the whole dataset and without assuming a specific kinetic model, could generate images with improved signal-to-noise and detect, extract and illustrate changes in kinetic behavior between different regions in the brain. Eight AD patients and eight HVs from a previously published study with [11C]PIB were used. The approach includes enhancement of brain regions where the kinetics of the radiotracer are different from what is seen in the reference region, pre-normalization for differences in noise levels and removal of negative values. This is followed by slice-wise application of PCA (SW-PCA) on the dynamic PET images. Results obtained using the new approach were compared with results obtained using reference Patlak and summed images. The new approach generated images with good quality in which cortical brain regions in AD patients showed high uptake, compared to cerebellum and white matter. Cortical structures in HVs showed low uptake as expected and in good agreement with data generated using kinetic modeling. The introduced approach generated images with enhanced contrast and improved signal-to-noise ratio (SNR) and discrimination power (DP) compared to summed images and parametric images. This method is expected to be an important clinical tool in the diagnosis and differential diagnosis of dementia.
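The slice-wise PCA step can be sketched on a synthetic dynamic dataset (all sizes, kinetics and noise levels below are illustrative, not the study's data or its pre-normalization): pixels are treated as observations and time frames as variables, and the first-component score image then separates regions with different kinetics without assuming a kinetic model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic dynamic "PET slice": 16x16 pixels, 20 time frames, with two
# regions following different (made-up) kinetic time courses plus noise.
t = np.linspace(0.0, 1.0, 20)
frames = np.zeros((20, 16, 16))
frames[:, :8, :] = np.exp(-3 * t)[:, None, None]    # fast-washout region
frames[:, 8:, :] = (t * np.exp(-t))[:, None, None]  # slower-uptake region
frames += rng.normal(0.0, 0.05, frames.shape)

# Slice-wise PCA: each pixel is an observation, each time frame a variable.
X = frames.reshape(20, -1).T          # shape (256 pixels, 20 frames)
X = X - X.mean(axis=0)                # center each frame over pixels
U, S, Vt = np.linalg.svd(X, full_matrices=False)

# First-component score image: contrasts the two kinetic behaviors.
pc1_image = (X @ Vt[0]).reshape(16, 16)
```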
Tetsuya Haruyama
2012-01-01
Cell-based biosensing is a "smart" way to obtain efficacy information on the effect of an applied chemical on a cellular biological cascade. We have proposed an engineered post-synapse model cell-based biosensor to investigate the effects of chemicals on the ionotropic glutamate receptor (GluR), which is a focus of attention as a molecular target for clinical neural drug discovery. The engineered model cell has several advantages over native cells, including improved ease of handling and better reproducibility in cell-based biosensor applications. However, in general, cell-based biosensors often have low signal-to-noise (S/N) ratios due to the low level of cellular responses. In order to obtain a higher S/N ratio in model cells, we have attempted to design a tactic model cell with an elevated cellular response. We have revealed that increasing the GluR expression level is not directly connected to amplification of the cellular response, because surface expression of GluR saturates, leading to a limit on the total ion influx. Furthermore, coexpression of GluR with a voltage-gated potassium channel increased Ca2+ ion influx beyond the levels obtained with saturating amounts of GluR alone. The construction of model cells based on the strategy of amplifying the ion flux through individual receptors can be used to perform smart cell-based biosensing with an improved S/N ratio.
Mena-Werth, Jose
1998-01-01
The Vulcan Photometric Planet Search is the ground-based counterpart of the Kepler Mission proposal. The Kepler proposal calls for the launch of a telescope to look intently at a small patch of sky for four years. The mission is designed to look for extrasolar planets that transit Sun-like stars. The Kepler Mission should be able to detect Earth-size planets. This goal requires an instrument and software capable of detecting photometric changes of several parts per hundred thousand in the flux of a star. The goal also requires the continuous monitoring of about a hundred thousand stars. The Kepler Mission is a NASA Discovery-class proposal similar in cost to the Lunar Prospector. The Vulcan Search is also a NASA project, but based at Lick Observatory. A small wide-field telescope monitors various star fields successively during the year. Dozens of images, each containing tens of thousands of stars, are taken any night that weather permits. The images are then monitored for photometric changes of the order of one part in a thousand. These changes would reveal the transit of an inner-orbit Jupiter-size planet similar to those discovered recently in spectroscopic searches. In order to achieve one-part-in-a-thousand photometric precision, even the choice of the filter used in taking an exposure can be critical. The ultimate purpose of a filter is to increase the signal-to-noise ratio (S/N) of one's observation. Ideally, filters reduce the sky glow caused by street lights and thereby make the star images more distinct. The higher the S/N, the higher the chance of observing a transit signal that indicates the presence of a new planet. It is therefore important to select the filter that maximizes the S/N.
王静; 封洲燕; 郑晓静
2011-01-01
There are usually a large number of small-amplitude pulses included in extracellular action potential (i.e. spike) recordings. In order to accurately detect these small spikes in recordings with low signal-to-noise ratio (SNR), and thereby increase the number of neurons identified from a single experiment, the present work developed a new spike detection algorithm based on the features of tetrode recording signals. The method first extracts the first component of the four-channel signals using principal component analysis (PCA). Then, the nonlinear energy operator (NEO) is applied to the first component to obtain a signal with reduced noise and enhanced spikes, on which spikes are detected using a threshold method. The detection threshold is determined by a two-step method to decrease the influence of varying spike firing densities and of large-amplitude spikes. The results obtained from both synthetic datasets and experimental recordings demonstrate that the PCA-NEO threshold method can be used to process signals recorded by microelectrode arrays with tetrode-like high-density spacings. It is able to significantly increase the accurate detection ratio of small spikes with low SNR. In particular, the method can identify overlapped spikes effectively. Therefore, the new spike detection method can provide more information for neuronal signal decoding and neural network analysis.
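The NEO step at the core of the PCA-NEO pipeline has a simple closed form, ψ[n] = x[n]² − x[n−1]·x[n+1], and can be sketched as below. The toy trace (a sharp spike on a slow background) is an illustration, not the paper's tetrode data, and the PCA and two-step threshold stages are omitted.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]**2 - x[n-1]*x[n+1].

    Emphasizes samples that are simultaneously high-amplitude and
    high-frequency (spikes) while suppressing slow background activity.
    """
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

# Toy usage: a small, sharp spike riding on a slow oscillation.
n = np.arange(1000)
trace = 0.5 * np.sin(2 * np.pi * n / 200)  # slow background
trace[500] += 1.0                           # small spike at sample 500
enhanced = neo(trace)                       # spike dominates the NEO output
```

For a pure slow sinusoid A·sin(ωn), the operator yields the small constant A²·sin²(ω), so the sharp spike stands out far above the background in `enhanced`, which is what makes a simple threshold workable at low SNR.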
Coherent Dual Comb Spectroscopy at High Signal to Noise
Coddington, I; Newbury, N R
2010-01-01
Two frequency combs can be used to measure the full complex response of a sample in a configuration which can be alternatively viewed as the equivalent of a dispersive Fourier transform spectrometer, infrared time domain spectrometer, or a multiheterodyne laser spectrometer. This dual comb spectrometer retains the frequency accuracy and resolution inherent to the comb sources. We discuss, in detail, the specific design of our coherent dual-comb spectrometer and demonstrate the potential of this technique by measuring the first overtone vibration of hydrogen cyanide, centered at 194 THz (1545 nm). We measure the fully normalized, complex response of the gas over a 9 THz bandwidth at 220 MHz frequency resolution yielding 41,000 resolution elements. The average spectral signal-to-noise ratio (SNR) is 2,500 for both the fractional absorption and the phase, with a peak SNR of 4,000 corresponding to a fractional absorption sensitivity of 0.025% and phase sensitivity of 250 microradians. As the spectral coverage of ...
王倩; 杨忠东; 毕研盟
2014-01-01
detector such as spectral resolution, sampling ratio and signal-to-noise ratio (SNR) on CO2 detection are analyzed. Typical characteristics of the hyperspectral CO2 detector on TANSAT are a grating spectrometer and an array-based detector. To achieve the column-averaged atmospheric CO2 dry air mole fraction (XCO2) precision requirements of 1×10⁻⁶-4×10⁻⁶, the hyperspectral CO2 detector should first provide a resolution high enough to resolve CO2 absorption lines in the continuous spectrum of reflected sunlight. Compared with a variety of simulated spectral resolutions, the spectral resolution of the hyperspectral CO2 detector on TANSAT can resolve CO2 spectral features while maintaining moderate radiance sensitivity. Since instruments based on small-size array detectors may suffer from undersampling of the spectra, the influence of spectral undersampling on CO2 absorption spectra is studied, indicating that the sampling ratio should exceed 2 pixels/FWHM to ensure the accuracy of the CO2 spectrum. SNR is one of the most important parameters of a hyperspectral CO2 detector for ensuring reliability. SNR requirements of the CO2 detector for different detection precisions are explored based on radiance sensitivity factors. Results show that it is difficult to achieve the SNR needed to detect a 1×10⁻⁶-4×10⁻⁶ CO2 concentration change in the boundary layer by solar shortwave infrared passive remote sensing, limited by present instrument development conditions. However, the instrument SNR needed to detect a 1% change in the CO2 column concentration is attainable. These results are not only conducive to universal applications and guidance for developing grating spectrometers, but also helpful for better understanding the complexity of CO2 retrieval.
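The 2 pixels/FWHM sampling criterion mentioned above is easy to check for a given design; the numbers below are illustrative, not TANSAT's actual parameters:

```python
# Check of the >= 2 pixels/FWHM sampling criterion from the abstract;
# the FWHM and pixel spacing below are illustrative, not TANSAT values.
fwhm_nm = 0.04             # instrument line-shape FWHM
pixel_spacing_nm = 0.015   # spectral sampling interval per detector pixel
pixels_per_fwhm = fwhm_nm / pixel_spacing_nm
adequately_sampled = pixels_per_fwhm >= 2.0
```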
Explicit signal to noise ratio in reproducing kernel Hilbert spaces
Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo
2011-01-01
This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with nonlinear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted…
Signal-to-noise ratio of phase sensing telescope interferometers
Henault, Francois
2008-01-01
This paper is the third part of a trilogy dealing with the principles, performance and limitations of what I named "Telescope-Interferometers" (TIs). The basic idea consists in transforming a telescope into a Wavefront Error (WFE) sensing device. This can be achieved in two different ways, namely the off-axis and phase-shifting TIs. In both cases the Point-Spread Function (PSF) measured in the focal plane of the telescope carries information about the transmitted WFE, which is retrieved by fast and simple algorithms suited to an Adaptive Optics (AO) regime. Herein the uncertainties of both types of TIs are evaluated, in terms of noise and systematic errors. Numerical models are developed in order to establish how driving parameters such as the useful spectral range, the angular size of the observed star, or detector noise affect the total WFE measurement error. The latter is found to be particularly sensitive to photon noise, which rapidly governs the achieved accuracy for telescope diameters larger than 10 m...
Increasing the Signal to Noise Ratio in a Chemistry Laboratory ...
Signal-to-noise-optimal scaling of heterogenous population codes.
Leibold, Christian
2013-01-01
Similarity measures for neuronal population responses that are based on scalar products can be little informative if the neurons have different firing statistics. Based on signal-to-noise optimality, this paper derives positive weighting factors for the individual neurons' response rates in a heterogeneous neuronal population. The weights only depend on empirical statistics. If firing follows Poisson statistics, the weights can be interpreted as mutual information per spike. The scaling is shown to improve linear separability and clustering as compared to unscaled inputs.
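A minimal sketch of signal-to-noise-based weighting for a heterogeneous Poisson population; the weight formula used here (rate difference divided by variance) is a generic SNR heuristic standing in for the paper's exact expression, and the rates are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Heterogeneous "Poisson neurons": mean rates under two stimuli differ per
# neuron. Weighting each rate by (response difference / variance) is a
# generic signal-to-noise heuristic, not the paper's exact derivation.
rates_a = np.array([2.0, 50.0, 5.0, 30.0])
rates_b = np.array([4.0, 52.0, 9.0, 31.0])
trials = 500
resp_a = rng.poisson(rates_a, size=(trials, 4))
resp_b = rng.poisson(rates_b, size=(trials, 4))

var = 0.5 * (resp_a.var(axis=0) + resp_b.var(axis=0))
weights = np.abs(rates_a - rates_b) / var      # SNR-style weights

def separability(w):
    """d'-like separation of the weighted population sums."""
    sa, sb = resp_a @ w, resp_b @ w
    return abs(sa.mean() - sb.mean()) / np.sqrt(0.5 * (sa.var() + sb.var()))

d_weighted = separability(weights)
d_unweighted = separability(np.ones(4))
```

Down-weighting high-rate, high-variance neurons keeps them from drowning out the informative low-rate neurons, which is the qualitative effect the abstract describes.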
Signal-to-noise issues in measuring nitrous oxide fluxes by the eddy covariance method
Cowan, Nicholas; Levy, Peter; Langford, Ben; Skiba, Ute
2016-04-01
Recently developed fast-response gas analysers capable of measuring atmospheric N2O with high precision … agricultural sites across the UK were investigated for potential uncertainties. Our presentation highlights some of these uncertainties in analysing eddy covariance data and offers suggestions as to how these issues may be minimised. Langford, B., Acton, W., Ammann, C., Valach, A. and Nemitz, E.: Eddy-covariance data with low signal-to-noise ratio: time-lag determination, uncertainties and limit of detection, Atmos. Meas. Tech., 8(10), 4197-4213, doi:10.5194/amt-8-4197-2015, 2015.
Signal to Noise Studies on Thermographic Data with Fabricated Defects for Defense Structures
Zalameda, Joseph N.; Rajic, Nik; Genest, Marc
2006-01-01
There is a growing international interest in thermal inspection systems for asset life assessment and management of defense platforms. The efficacy of flash thermography is generally enhanced by applying image processing algorithms to the observations of raw temperature. Improving the defect signal to noise ratio (SNR) is of primary interest to reduce false calls and allow for easier interpretation of a thermal inspection image. Several factors affecting defect SNR were studied such as data compression and reconstruction using principal component analysis and time window processing.
DESI systems engineering: throughput and signal-to-noise
Besuner, Robert W.; Sholl, Michael J.
2016-08-01
The Dark Energy Spectroscopic Instrument (DESI) is a fiber-fed multi-object spectroscopic instrument under construction to measure the expansion history of the Universe using the Baryon Acoustic Oscillation technique. Management of light throughput and noise in all elements of the instrument is key to achieving the high-level DESI science requirements over the planned survey area and depth within the planned survey duration. The DESI high-level science requirements flow down to instrument performance requirements on system throughput and operational efficiency. Signal-to-noise requirements directly affect minimum required exposure time per field, which dictates the pace and duration of the entire survey. The need to maximize signal (light throughput) and to minimize noise contributions and time overhead due to reconfigurations between exposures drives the instrument subsystem requirements and technical implementation. Throughput losses, noise contributors, and interexposure reconfiguration time are budgeted, tracked, and managed as DESI Systems Engineering resources. Current best estimates of throughput losses and noise contributions from each individual element of the instrument are tracked together in a master budget to calculate overall margin on completing the survey within the allotted time. That budget is a spreadsheet accessible to the entire DESI project.
A genetically encoded, high-signal-to-noise maltose sensor
Marvin, Jonathan S.; Schreiter, Eric R.; Echevarría, Ileabett M.; Looger, Loren L. (Puerto Rico); (HHMI)
2012-10-23
We describe the generation of a family of high-signal-to-noise single-wavelength genetically encoded indicators for maltose. This was achieved by insertion of circularly permuted fluorescent proteins into a bacterial periplasmic binding protein (PBP), Escherichia coli maltodextrin-binding protein, resulting in a four-color family of maltose indicators. The sensors were iteratively optimized to have sufficient brightness and maltose-dependent fluorescence increases for imaging, under both one- and two-photon illumination. We demonstrate that maltose affinity of the sensors can be tuned in a fashion largely independent of the fluorescent readout mechanism. Using literature mutations, the binding specificity could be altered to moderate sucrose preference, but with a significant loss of affinity. We use the soluble sensors in individual E. coli bacteria to observe rapid maltose transport across the plasma membrane, and membrane fusion versions of the sensors on mammalian cells to visualize the addition of maltose to extracellular media. The PBP superfamily includes scaffolds specific for a number of analytes whose visualization would be critical to the reverse engineering of complex systems such as neural networks, biosynthetic pathways, and signal transduction cascades. We expect the methodology outlined here to be useful in the development of indicators for many such analytes.
Benkhelifa, Fatma
2013-04-01
In this letter, we study the ergodic capacity of a maximum ratio combining (MRC) Rician fading channel with full channel state information (CSI) at the transmitter and at the receiver. We focus on the low Signal-to-Noise Ratio (SNR) regime and we show that the capacity scales as (L Ω/(K+L)) SNR log(1/SNR), where Ω is the expected channel gain per branch, K is the Rician fading factor, and L is the number of diversity branches. We show that one-bit CSI feedback at the transmitter is enough to achieve this capacity using an on-off power control scheme. Our framework can be seen as a generalization of recently established results regarding the fading-channel capacity characterization in the low-SNR regime. © 2012 IEEE.
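Reading the quoted scaling as C ≈ (L Ω/(K+L)) · SNR · log(1/SNR), the behaviour of the asymptotic can be evaluated directly; this is a sketch of the quoted expression, not a capacity computation:

```python
import math

# Low-SNR ergodic-capacity approximation as read from the abstract:
#   C ≈ (L * Omega / (K + L)) * SNR * log(1 / SNR)   [nats per symbol]
def capacity_low_snr(snr, omega=1.0, k_rician=2.0, branches=2):
    return branches * omega / (k_rician + branches) * snr * math.log(1.0 / snr)

c_two_branches = capacity_low_snr(1e-3, branches=2)
c_four_branches = capacity_low_snr(1e-3, branches=4)
# More diversity branches raise the low-SNR capacity for fixed K and Omega.
```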
Improving signal-to-noise in the direct imaging of exoplanets and circumstellar disks with MLOCI
Wahhaj, Zahed; Cieza, Lucas A.; Mawet, Dimitri; Yang, Bin; Canovas, Hector; de Boer, Jozua; Casassus, Simon; Ménard, François; Schreiber, Matthias R.; Liu, Michael C.; Biller, Beth A.; Nielsen, Eric L.; Hayward, Thomas L.
2015-09-01
We present a new algorithm designed to improve the signal-to-noise ratio (S/N) of point and extended source detections around bright stars in direct imaging data. One of our innovations is that we insert simulated point sources into the science images, which we then try to recover with maximum S/N. This improves the S/N of real point sources elsewhere in the field. The algorithm, based on the locally optimized combination of images (LOCI) method, is called Matched LOCI or MLOCI. We show with Gemini Planet Imager (GPI) data on HD 135344 B and Near-Infrared Coronagraphic Imager (NICI) data on several stars that the new algorithm can improve the S/N of point source detections by 30-400% over past methods. We also find no increase in false-detection rates. No prior knowledge of candidate companion locations is required to use MLOCI. On the other hand, while non-blind applications may yield linear combinations of science images that seem to increase the S/N of true sources by a factor >2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marginal detections or to redetect point sources found in previous epochs. These findings are relevant to any method where the coefficients of the linear combination are considered tunable, e.g., LOCI and principal component analysis (PCA). Thus we recommend that false detection rates be analyzed when using these techniques. Based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (USA), the Science and Technology Facilities Council (UK), the National Research Council (Canada), CONICYT (Chile), the Australian Research Council (Australia), Ministério da Ciência e Tecnologia (Brazil) and Ministerio de Ciencia, Tecnología e Innovación Productiva (Argentina).
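The core LOCI-style step, finding the linear combination of reference images that best matches the science image in a least-squares sense and subtracting it, can be sketched as follows; the random "PSF" realizations and the injected source are illustrative assumptions, not real coronagraphic data:

```python
import numpy as np

rng = np.random.default_rng(3)

# LOCI-style least-squares step: fit the science image as a linear
# combination of reference images, then subtract. The random "PSF"
# realizations and the injected point source are invented for the sketch.
npix = 400
refs = rng.standard_normal((5, npix))             # reference images
speckles = np.array([0.9, 0.5, 0.0, 0.3, -0.2]) @ refs
science = speckles + 0.1 * rng.standard_normal(npix)
science[123] += 5.0                               # injected point source

coeffs, *_ = np.linalg.lstsq(refs.T, science, rcond=None)
residual = science - coeffs @ refs                # source survives subtraction
```

With few fit coefficients and many pixels, the correlated speckle pattern is removed while the point source is only marginally absorbed, which is the behaviour MLOCI optimizes.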
王凤飞; 罗阿理; 赵永恒
2014-01-01
The radial velocity of a star is very important for studying the dynamical structure and chemical evolution of the Milky Way, and is also a useful tool for finding variable or peculiar objects. In the present work we focus on measuring the radial velocity of low-resolution stellar spectra of different spectral types by adopting a template matching method, so as to provide an effective and reliable reference for different areas of scientific research. We choose high signal-to-noise ratio (SNR) spectra of stars of different spectral types from the Sloan Digital Sky Survey (SDSS), and add different levels of noise to simulate stellar spectra with different SNR. We then obtain the radial velocity measurement accuracy for different spectral types at different SNR by employing the template matching method. The radial velocity measurement accuracy of white dwarf stars is analyzed as well. We conclude that the accuracy of radial velocity measurements of early-type stars is much lower than that of late-type ones. For example, the 1-sigma standard error of radial velocity measurements of A-type stars is 5~8 times as large as that of K-type and M-type stars. We discuss the reason and suggest that the very narrow lines of late-type stars ensure the accuracy of radial velocity measurement, while early-type stars with very wide Balmer lines, such as A-type stars, are sensitive to noise and yield low radial-velocity accuracy. For the spectra of white dwarf stars, the standard error of radial velocity measurement can be over 50 km·s⁻¹ because of their extremely wide Balmer lines. These conclusions provide a good reference for stellar scientific studies.
Therapy imaging: a signal-to-noise analysis of metal plate/film detectors.
Munro, P; Rawlinson, J A; Fenster, A
1987-01-01
We have measured the modulation transfer functions [MTF (f)'s] and the noise power spectra [NPS (f)] of therapy x-ray detectors irradiated by 60Co, 6- and 18-MV radiotherapy beams. Using these quantities, we have calculated the noise-equivalent quanta [NEQ (f)] and the detective quantum efficiency [DQE (f)] to quantitate the limitations of therapy detectors. The detectors consisted of film or fluorescent screen-film combinations in contact with copper, lead, or tungsten metal plates. The resolution of the detectors was found to be comparable to fluorescent screen-film combinations used in diagnostic radiology, however, the signal-to-noise ratio [SNR (f)] of the detectors was limited due to film granularity. We conclude that improved images can be obtained by using alternative detector systems which have less noise or film granularity.
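The NEQ and DQE quantities used above follow the standard transfer-theory definitions, NEQ(f) = S² · MTF²(f) / NPS(f) and DQE(f) = NEQ(f)/q; the curves below are illustrative assumptions, not the paper's measured detector data:

```python
import numpy as np

# Standard transfer-theory definitions (illustrative curves, not the
# paper's measured 60Co / 6 MV / 18 MV data):
#   NEQ(f) = S^2 * MTF(f)^2 / NPS(f),   DQE(f) = NEQ(f) / q
f = np.linspace(0.0, 2.0, 5)          # spatial frequency, cycles/mm
mtf = np.exp(-1.5 * f)                # assumed MTF
nps = 4e-6 * (1.0 + 0.2 * f)          # assumed noise power spectrum
large_area_signal = 1.0               # normalized large-area signal S
q = 1e6                               # incident quanta per mm^2

neq = large_area_signal**2 * mtf**2 / nps
dqe = neq / q
```

Film granularity in the abstract's detectors shows up as an elevated NPS, which directly depresses NEQ and DQE at all frequencies.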
Huang, Xiaojing; Miao, Huijie; Steinbrener, Jan; Nelson, Johanna; Shapiro, David; Stewart, Andrew; Turner, Joshua; Jacobsen, Chris
2009-08-03
Using a signal-to-noise ratio estimation based on correlations between multiple simulated images, we compare the dose efficiency of two soft x-ray imaging systems: incoherent brightfield imaging using zone plate optics in a transmission x-ray microscope (TXM), and x-ray diffraction microscopy (XDM) where an image is reconstructed from the far-field coherent diffraction pattern. In XDM one must computationally phase weak diffraction signals; in TXM one suffers signal losses due to the finite numerical aperture and efficiency of the optics. In simulations with objects representing isolated cells such as yeast, we find that XDM has the potential for delivering equivalent resolution images using fewer photons. This can be an important advantage for studying radiation-sensitive biological and soft matter specimens.
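One common way to estimate SNR from correlations between two independent noisy realizations of the same object is the cross-correlation estimator SNR ≈ r/(1−r); whether this is exactly the estimator used in the paper is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

# Power SNR from the cross-correlation coefficient r of two independent
# noisy images of the same object: SNR ≈ r / (1 - r). Treating this
# classic estimator as the paper's is an assumption of the sketch.
signal = rng.standard_normal(50_000)            # flattened "object"
sigma = 0.5                                     # noise std -> true SNR = 4
img1 = signal + sigma * rng.standard_normal(signal.size)
img2 = signal + sigma * rng.standard_normal(signal.size)

r = np.corrcoef(img1, img2)[0, 1]
snr_estimate = r / (1.0 - r)                    # should be close to 4
```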
Yuan, T -T; Rich, J
2013-01-01
With the rapid progress in metallicity gradient studies at high redshift, it is imperative that we thoroughly understand the systematics in these measurements. This work investigates how [NII]/Halpha ratio based metallicity gradients change with angular resolution, signal-to-noise (S/N), and annular binning parameters. Two approaches are used: 1. we downgrade the high angular resolution integral-field data of a gravitationally lensed galaxy and re-derive the metallicity gradients at different angular resolutions; 2. we simulate high-redshift integral field spectroscopy (IFS) observations under different angular resolution and S/N conditions using a local galaxy with a known gradient. We find that the measured metallicity gradient changes systematically with angular resolution and annular binning. Seeing-limited observations produce significantly flatter gradients than higher angular resolution observations. There is a critical angular resolution limit beyond which the measured metallicity gradient is subst...
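An [NII]/Halpha gradient measurement can be sketched with the widely used Pettini & Pagel (2004) N2 calibration, 12 + log(O/H) = 8.90 + 0.57 log10([NII]/Hα); the abstract does not state which calibration it adopts, and the line ratios and radii below are invented:

```python
import numpy as np

# N2-index metallicity gradient sketch using the Pettini & Pagel (2004)
# linear calibration 12 + log(O/H) = 8.90 + 0.57 * log10([NII]/Halpha).
# The line ratios and radii are invented; the paper may use a different
# calibration.
radius_kpc = np.array([0.5, 1.5, 2.5, 3.5, 4.5])
n2_ratio = np.array([0.40, 0.32, 0.25, 0.20, 0.16])   # [NII]6584 / Halpha

metallicity = 8.90 + 0.57 * np.log10(n2_ratio)
gradient_dex_per_kpc, intercept = np.polyfit(radius_kpc, metallicity, 1)
```

Beam smearing mixes annuli and flattens the fitted slope, which is the systematic effect the abstract quantifies.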
Advanced study of video signal processing in low signal to noise environments
Carden, F.
1973-01-01
Conventional analytical techniques used to determine and optimize phase-lock loop (PLL) characteristics are most often based on a model which is valid only if the intermediate frequency (IF) filter bandwidth is large compared to the PLL bandwidth and the phase error is small. An improved model (called the quasi-linear model) is developed which takes into account small IF filter bandwidths and nonlinear effects associated with large phase errors. By comparison of theoretical and experimental results it is demonstrated that the quasi-linear model accurately predicts PLL characteristics. This is true even for small IF filter bandwidths and large phase errors where the conventional model is invalid. The theoretical and experimental results are used to draw conclusions concerning threshold, multiplier output variance, phase error variance, output signal-to-noise ratio, and signal distortion. The relationship between these characteristics and IF filter bandwidth, modulating signal spectrum, and rms deviation is also determined.
Sparse maximum harmonics-to-noise-ratio deconvolution for weak fault signature detection in bearings
Miao, Yonghao; Zhao, Ming; Lin, Jing; Xu, Xiaoqiang
2016-10-01
De-noising and enhancement of the weak fault signature from the noisy signal are crucial for fault diagnosis, as fault features are often very weak and masked by the background noise. Deconvolution methods have a significant advantage in counteracting the influence of the transmission path and enhancing the fault impulses. However, the performance of traditional deconvolution methods is greatly affected by some limitations, which restrict their application range. Therefore, this paper proposes a new deconvolution method, named sparse maximum harmonics-to-noise-ratio deconvolution (SMHD), that employs a novel index, the harmonics-to-noise ratio (HNR), as the objective function for iteratively choosing the optimum filter coefficients that maximize the HNR. SMHD is designed to enhance latent periodic impulse faults in heavily noisy signals by calculating the HNR to estimate the period. A sparse factor is utilized to further suppress the noise and improve the signal-to-noise ratio of the filtered signal in every iteration step. In addition, the updating process of the sparse threshold value and the period guarantees the robustness of SMHD. On this basis, the new method not only overcomes the limitations associated with traditional deconvolution methods such as minimum entropy deconvolution (MED) and maximum correlated kurtosis deconvolution (MCKD), but also yields better results on visual inspection, even if the fault period is not provided in advance. Moreover, the efficiency of the proposed method is verified using simulations and bearing data from different test rigs. The results show that the proposed method is effective in the detection of various bearing faults compared with the original MED and MCKD.
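A generic harmonics-to-noise-style period estimate, the lag of the strongest normalized autocorrelation peak, can be sketched as follows; this is a simplified stand-in for SMHD's actual HNR definition and iterative filter update, with invented signal parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

# Generic harmonics-to-noise-style period estimate: lag of the strongest
# normalized autocorrelation peak (a stand-in for SMHD's HNR index).
n, period = 4000, 160
x = 0.05 * rng.standard_normal(n)
x[::period] += 1.0                      # periodic fault impulses

xc = x - x.mean()
ac = np.correlate(xc, xc, mode="full")[n - 1:]
ac /= ac[0]
lags = np.arange(40, 1000)              # candidate periods to search
est_period = int(lags[np.argmax(ac[lags])])
hnr_like = ac[est_period] / (1.0 - ac[est_period])
```

A blind period estimate of this kind is what lets SMHD run without the fault period being provided in advance.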
Spatial resolution, signal-to-noise and information capacity of linear imaging systems
Gureyev, Timur
2015-01-01
A simple model for image formation in linear shift-invariant systems is considered, in which both the detected signal and the noise variance are almost constant over distances comparable with the width of the point-spread function of the system. It is shown that within the constraints of this model, the square of the signal-to-noise ratio is always proportional to the "volume" of the spatial resolution unit. The ratio of these two quantities divided by the incident density of the imaging particles (e.g. photons) represents a dimensionless invariant of the imaging system, which was previously termed the intrinsic imaging quality. This invariant is related to the notion of information capacity of imaging and communication systems as previously considered by Shannon, Gabor and others. It is demonstrated that the information capacity expressed in bits cannot exceed the total number of imaging particles utilised in the system. These results are then applied to a simple generic model of quantitative imaging and ana...
Experimental study on prediction model for maximum rebound ratio
LEI Wei-dong; TENG Jun; A.HEFNY; ZHAO Jian; GUAN Jiong
2007-01-01
The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie exactly below the estimated possible maximum values as expected, while the fourth available field-recorded PPV lies close to, and a bit higher than, the estimated maximum possible PPV. The comparison results show that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to the application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.
Improving Signal to Noise in the Direct Imaging of Exoplanets and Circumstellar Disks
Wahhaj, Zahed; Mawet, Dimitri; Yang, Bin; Canovas, Hector; De Boer, Jos; Casassus, Simon; Menard, Francois; Schreiber, Matthias R; Liu, Michael C; Biller, Beth A; Nielsen, Eric L; Hayward, Thomas L
2015-01-01
We present a new algorithm designed to improve the signal to noise ratio (SNR) of point and extended source detections in direct imaging data. The novel part of our method is that it finds the linear combination of the science images that best match counterpart images with signal removed from suspected source regions. The algorithm, based on the Locally Optimized Combination of Images (LOCI) method, is called Matched LOCI or MLOCI. We show using data obtained with the Gemini Planet Imager (GPI) and Near-Infrared Coronagraphic Imager (NICI) that the new algorithm can improve the SNR of point source detections by 30-400% over past methods. We also find no increase in false detections rates. No prior knowledge of candidate companion locations is required to use MLOCI. While non-blind applications may yield linear combinations of science images which seem to increase the SNR of true sources by a factor > 2, they can also yield false detections at high rates. This is a potential pitfall when trying to confirm marg...
Cluster signal-to-noise analysis for evaluation of the information content in an image.
Weerawanich, Warangkana; Shimizu, Mayumi; Takeshita, Yohei; Okamura, Kazutoshi; Yoshida, Shoko; Yoshiura, Kazunori
2017-07-27
Our aims were 1) to develop an observer-free method of analyzing image quality related to observer performance in a detection task and 2) to analyze observer behavior patterns in the detection of small mass changes in CBCT images. Thirteen observers detected holes in a Teflon phantom in CBCT images. Using the same images, we developed a new method, cluster signal-to-noise analysis, to detect the holes by applying various cut-off values using ImageJ and reconstructing cluster signal-to-noise curves. We then evaluated the correlation between cluster signal-to-noise analysis and the observer performance test. We measured the background noise in each image to evaluate its relationship with the false positive rates (FPRs) of the observers. Correlations between mean FPRs and intra- and inter-observer variations were also evaluated. Moreover, we calculated true positive rates (TPRs) and accuracies from background noise and evaluated their correlations with the TPRs of the observers. Cluster signal-to-noise curves were derived in cluster signal-to-noise analysis; they relate the detection of signal (true holes) to noise (false holes). This method correlated highly with the observer performance test (R² = 0.9296). In noisy images, increasing background noise resulted in higher FPRs and larger intra- and inter-observer variations. TPRs and accuracies calculated from background noise had high correlations with the actual TPRs of the observers (R² = 0.9244 and 0.9338, respectively). Cluster signal-to-noise analysis can simulate the detection performance of observers and thus replace the observer performance test in the evaluation of image quality. Erroneous decision-making increased with increasing background noise.
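The cut-off sweep behind a cluster signal-to-noise curve can be illustrated in 1-D with pure NumPy; the paper works on 2-D CBCT images in ImageJ, and the hole positions, contrast, and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D sketch of cluster signal-to-noise analysis: sweep cut-off values,
# count detected "true" holes (a cluster starting near a known hole) and
# "false" clusters elsewhere. Hole layout and noise level are invented.
n = 2000
profile = 0.3 * rng.standard_normal(n)
holes = [200, 700, 1400]
for h in holes:
    profile[h:h + 10] += 1.5          # simulated holes

def cluster_starts(mask):
    """Start indices of contiguous runs of True."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return []
    return [int(idx[0])] + [int(i) for i in idx[1:][np.diff(idx) > 1]]

curve = []
for cut in (0.5, 0.8, 1.2):
    starts = cluster_starts(profile > cut)
    true_hits = sum(any(h - 2 <= s < h + 10 for s in starts) for h in holes)
    false_hits = sum(not any(h - 2 <= s < h + 10 for h in holes)
                     for s in starts)
    curve.append((cut, true_hits, false_hits))
```

Raising the cut-off prunes false (noise) clusters faster than true ones, and plotting true against false detections over the sweep gives the cluster signal-to-noise curve.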
Signal-to-Noise Measurements on Irradiated CMS Tracker Detector Modules in an Electron Testbeam
Bleyl, Mark; Steinbruck, G; Stoye, M; Dragicevica, M; Hrubec, Josef; Krammer, M; Frey, M; Hartmann, F; Weiler, T; Hegner, B
2006-01-01
The CMS experiment at the Large Hadron Collider at CERN is in the last phase of its construction. The harsh radiation environment at the LHC places strong radiation-hardness demands on the innermost parts of the detector. To assess the performance of irradiated microstrip detector modules, a testbeam was conducted at the Testbeam 22 facility of the DESY research center. The primary objective was the signal-to-noise measurement of irradiated CMS Tracker modules to ensure their functionality over up to 10 years of LHC operation. The paper briefly summarises the basic setup at the facility and the hardware and software used to collect and analyse the data. Some interesting subsidiary results are shown, which confirm the expected behaviour of the detector with respect to the signal-to-noise performance over the active detector area and for different electron energies. The main focus of the paper is the results of the signal-to-noise measurements for CMS Tracker modules which were exposed to different radiation doses...
Testing for Near I(2) Trends When the Signal-to-Noise Ratio Is Small
Juselius, Katarina
2014-01-01
Researchers seldom find evidence of I(2) in exchange rates, prices, and other macroeconomics time series when they test the order of integration using univariate Dickey-Fuller tests. In contrast, when using the multivariate ML trace test we frequently find double unit roots in the data. Our paper...
Signal-to-Noise Ratio Gains and Synchronization Requirements of a Distributed Radar Network
2006-06-01
The Effect of Signal-to-Noise Ratio on Visual Acuity Through Night Vision Goggles
1991-02-01
Novel approach for improving signal to noise ratio of seismic images
陈凤; 李金宗; 李冬冬
2004-01-01
A novel digital image processing approach is applied to improve the SNR of seismic images. First, we analyze the characteristics of the line-like texture in seismic images, and design a preprocessing method named 2D horizon-tracing filtering. After that, optical flow analysis is adopted to calculate the displacement vectors of corresponding pixels between neighboring seismic images. Finally, novel image accumulation algorithms are proposed and applied to greatly improve the SNR and definition of seismic images. The experimental results show that the SNR of seismic section images with initial SNR of about 20 dB and 17 dB is increased by 8-9 dB while retaining 67%-80% of the signal energy when processing section images, and by 3-4 dB while retaining 80%-90% of the signal energy when processing horizontal slice images. The proposed approaches are therefore very helpful for correct seismic interpretation and are of great significance for seismic exploration.
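The accumulation idea, aligning neighboring images and averaging so that noise falls roughly as 1/sqrt(N) while the common signal is preserved, can be sketched as follows; known circular shifts stand in for the optical-flow displacement estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Accumulation sketch: align frames to a common reference, then average.
# Known circular shifts stand in for optical-flow displacement estimates.
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))
shifts = [0, 3, -2, 5, 1, -4, 2, 0]
frames = [np.roll(signal, s) + 0.5 * rng.standard_normal(512)
          for s in shifts]

aligned = [np.roll(f, -s) for f, s in zip(frames, shifts)]
stacked = np.mean(aligned, axis=0)

# Averaging N aligned frames reduces the noise level roughly by 1/sqrt(N).
noise_single = np.std(frames[0] - signal)
noise_stacked = np.std(stacked - signal)
```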
Using technical noise to increase the signal to noise ratio in weak measurements
Kedem, Yaron
2011-01-01
The advantages of weak measurements, and especially measurements of imaginary weak values, for precision enhancement, are discussed. A situation is considered in which the initial state of the measurement device varies randomly on each run, and is shown to be in fact beneficial when imaginary weak values are used. The result is supported by numerical calculation and also provides an explanation for the reduction of technical noise in some recent experimental results. A connection to quantum metrology formalism is made.
Signal-to-Noise Ratio Effects on Aperture Synthesis for Digital Holographic Ladar
2014-08-01
Signal-to-noise ratio of FT-IR CO gas spectra
Bak, J.; Clausen, Sønnik
1999-01-01
that the SNR was at a local minimum at a spectral resolution of 4 cm⁻¹. As a result of the investigations, we suggest that the specific spectral resolution which smears out the vibrational-rotational line structure of the smaller molecules should be considered low (4 cm⁻¹ for CO). … simulated signals, and (3) determine the SNR of CO from high to low spectral resolutions related to the molecular linewidth and vibrational-rotational line spacing. In addition, SNR values representing different spectral resolutions but scaled to equal measurement times were compared. It was found…
Phased array technique for low signal-to-noise ratio wind tunnels Project
National Aeronautics and Space Administration — Closed wind tunnel beamforming for aeroacoustics has become more and more prevalent in recent years. Still, there are major drawbacks as current microphone arrays...
High signal-to-noise ratio observations and the ultimate limits of precision pulsar timing
Oslowski, Stefan; Hobbs, George; Bailes, Matthew; Demorest, Paul
2011-01-01
We demonstrate that the sensitivity of high-precision pulsar timing experiments will be ultimately limited by the broadband intensity modulation that is intrinsic to the pulsar's stochastic radio signal. That is, as the peak flux of the pulsar approaches that of the system equivalent flux density, neither greater antenna gain nor increased instrumental bandwidth will improve timing precision. These conclusions proceed from an analysis of the covariance matrix used to characterise residual pulse profile fluctuations following the template matching procedure for arrival time estimation. We perform such an analysis on 25 hours of high-precision timing observations of the closest and brightest millisecond pulsar, PSR J0437-4715. In these data, the standard deviation of the post-fit arrival time residuals is approximately four times greater than that predicted by considering the system equivalent flux density, mean pulsar flux and the effective width of the pulsed emission. We develop a technique based on principa...
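Template-matching arrival-time estimation can be sketched as a circular cross-correlation peak search; this is a simplified stand-in for the frequency-domain template fitting actually used in pulsar timing, with an invented pulse profile and noise level:

```python
import numpy as np

rng = np.random.default_rng(8)

# Arrival-time estimation by template matching: circular cross-correlation
# of a noisy profile with a noise-free template; the peak lag is the shift.
nbins = 1024
phase = np.arange(nbins)
template = np.exp(-0.5 * ((phase - 512) / 20.0) ** 2)   # toy pulse profile

true_shift = 37
profile = np.roll(template, true_shift) + 0.02 * rng.standard_normal(nbins)

xcorr = np.fft.irfft(np.fft.rfft(profile) * np.conj(np.fft.rfft(template)))
est_shift = int(np.argmax(xcorr))
```

The paper's point is that even as this radiometer-style noise vanishes, intrinsic pulse-to-pulse shape variations leave a residual floor in the recovered shift.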
Silke Neumann
Cellular signaling systems show astonishing precision in their response to external stimuli despite strong fluctuations in the molecular components that determine pathway activity. To control the effects of noise on signaling most efficiently, living cells employ compensatory mechanisms that range from simple negative feedback loops to robustly designed signaling architectures. Here, we report on a novel control mechanism that allows living cells to keep precision in their signaling characteristics - stationary pathway output, response amplitude, and relaxation time - in the presence of strong intracellular perturbations. The concept relies on the surprising fact that for systems showing perfect adaptation, an exponential signal amplification at the receptor level suffices to eliminate slowly varying multiplicative noise. To show this mechanism at work in living systems, we quantified the response dynamics of the E. coli chemotaxis network after genetically perturbing the information flux between upstream and downstream signaling components. We give strong evidence that this signaling system achieves dynamic invariance of the activated response regulator against multiplicative intracellular noise. We further demonstrate that, for environmental conditions in which precision in chemosensing is crucial, the invariant response behavior results in the highest chemotactic efficiency. Our results resolve several puzzling features of the chemotaxis pathway that are widely conserved across prokaryotes but so far could not be attributed any functional role.
Signal to Noise Ratio Characterization of Coherent Doppler Lidar Backscattered Signals
Abdelazim, Sameh; Santoro, David; Arend, Mark; Moshary, Fred; Ahmed, Sam
2016-06-01
An eye-safe coherent Doppler Lidar (CDL) system for wind measurement was developed and tested at the Remote Sensing Laboratory of the City College of New York (CCNY). The system employs a 1542 nm fiber laser to leverage the availability and affordability of components from the telecommunications industry. A balanced detector with a bandwidth extending from dc to 125 MHz is used to eliminate the common-mode relative intensity noise (RIN). The system is shot-noise limited, i.e., the dominant component of the received signals' noise is the shot noise. Wind velocity can be measured under nominal aerosol loading and atmospheric turbulence conditions for ranges up to 3 km while pointing vertically, with 0.08 m/s precision.
Modeling speech intelligibility based on the signal-to-noise envelope power ratio
Jørgensen, Søren
The intelligibility of speech depends on factors related to the auditory processes involved in sound perception as well as on the acoustic properties of the sound entering the ear. However, a clear understanding of speech perception in complex acoustic conditions and, in particular, a quantitative description of the involved auditory processes remains a major challenge in speech and hearing research. This thesis presents a computational model that attempts to predict the speech intelligibility obtained by normal-hearing listeners in various adverse conditions. The model combines the concept… …through three commercially available mobile phones. The model successfully accounts for the performance across the phones in conditions with a stationary speech-shaped background noise, whereas deviations were observed in conditions with “Traffic” and “Pub” noise. Overall, the results of this thesis…
A method for improving the signal-to-noise ratio in IUE high-dispersion spectra
Welty, Daniel E.
1988-01-01
The flat-fielding technique was used to reduce fixed-pattern noise in high-dispersion IUE spectra, producing improvements in S/N of typically 40 percent compared with un-flat-fielded summed spectra, so that weak spectral features may be more reliably identified. Such improvements are noted both for specially obtained multiply-exposed images and for singly-exposed images taken from the IUE archives. However, it is unclear whether the technique is usable, or as effective, for all spectra.
Parks, Nathan A.; Gannon, Matthew A.; Long, Stephanie M.; Young, Madeleine E.
2016-01-01
Analysis of event-related potential (ERP) data includes several steps to ensure that ERPs meet an appropriate level of signal quality. One such step, subject exclusion, rejects subject data if ERP waveforms fail to meet an appropriate level of signal quality. Subject exclusion is an important quality control step in the ERP analysis pipeline as it ensures that statistical inference is based only upon those subjects exhibiting clear evoked brain responses. This critical quality control step is most often performed simply through visual inspection of subject-level ERPs by investigators. Such an approach is qualitative, subjective, and susceptible to investigator bias, as there are no standards as to what constitutes an ERP of sufficient signal quality. Here, we describe a standardized and objective method for quantifying waveform quality in individual subjects and establishing criteria for subject exclusion. The approach uses bootstrap resampling of ERP waveforms (from a pool of all available trials) to compute a signal-to-noise ratio confidence interval (SNR-CI) for individual subject waveforms. The lower bound of this SNR-CI (SNRLB) yields an effective and objective measure of signal quality as it ensures that ERP waveforms statistically exceed a desired signal-to-noise criterion. SNRLB provides a quantifiable metric of individual subject ERP quality and eliminates the need for subjective evaluation of waveform quality by the investigator. We detail the SNR-CI methodology, establish the efficacy of employing this approach with Monte Carlo simulations, and demonstrate its utility in practice when applied to ERP datasets. PMID:26903849
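The bootstrap SNR-CI procedure described above can be sketched as follows. This is an illustrative sketch only: the specific SNR definition (signal-window RMS over baseline standard deviation), the window choices, and the exclusion criterion are assumptions, not the paper's exact formulation.

```python
import numpy as np

def snr_lower_bound(trials, signal_win, noise_win, n_boot=2000, alpha=0.05, seed=0):
    """Bootstrap lower confidence bound (SNR_LB) on ERP waveform SNR.

    trials: (n_trials, n_samples) array of single-trial epochs
    signal_win, noise_win: index slices for the signal and baseline windows
    """
    rng = np.random.default_rng(seed)
    n = trials.shape[0]
    snrs = np.empty(n_boot)
    for b in range(n_boot):
        # resample trials with replacement, average into a bootstrap ERP
        erp = trials[rng.integers(0, n, n)].mean(axis=0)
        sig = np.sqrt(np.mean(erp[signal_win] ** 2))   # RMS in signal window
        noise = erp[noise_win].std()                   # baseline variability
        snrs[b] = sig / noise
    # lower bound of the (1 - 2*alpha) percentile interval
    return np.percentile(snrs, 100 * alpha)

# Exclusion rule: reject a subject if SNR_LB falls below a chosen criterion.
```

A subject whose bootstrap distribution straddles the criterion is excluded only if even the lower bound clears it, which is what makes the decision statistical rather than visual.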
Design considerations for a LORAN-C timing receiver in a hostile signal to noise environment
Porter, J. W.; Bowell, J. R.; Price, G. E.
1981-01-01
The environment in which a LORAN-C Timing Receiver may function effectively depends to a large extent on the techniques utilized to insure that interfering signals within the pass band of the unit are neutralized. The baseline performance of manually operated timing receivers is discussed, and the basic design considerations and necessary parameters for an automatic unit utilizing today's technology are established. Actual performance data are presented comparing the results obtained from a present-generation timing receiver against a new-generation microprocessor-controlled automatic-acquisition receiver. The achievements possible in a wide range of signal-to-noise situations are demonstrated.
Maximum likelihood estimation for semiparametric density ratio model.
Diao, Guoqing; Ning, Jing; Qin, Jing
2012-06-27
In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
Signal-to-noise performance analysis of streak tube imaging lidar systems. I. Cascaded model.
Yang, Hongru; Wu, Lei; Wang, Xiaopeng; Chen, Chao; Yu, Bing; Yang, Bin; Yuan, Liang; Wu, Lipeng; Xue, Zhanli; Li, Gaoping; Wu, Baoning
2012-12-20
Streak tube imaging lidar (STIL) is an active imaging system using a pulsed laser transmitter and a streak tube receiver to produce 3D range and intensity imagery. The STIL has recently attracted a great deal of interest and attention due to its advantages of wide azimuth field-of-view, high range and angle resolution, and high frame rate. This work investigates the signal-to-noise performance of STIL systems. A theoretical model for characterizing the signal-to-noise performance of the STIL system with an internal or external intensified streak tube receiver is presented, based on the linear cascaded systems theory of signal and noise propagation. The STIL system is decomposed into a series of cascaded imaging chains whose signal and noise transfer properties are described by the general (or the spatial-frequency dependent) noise factors (NFs). Expressions for the general NFs of the cascaded chains (or the main components) in the STIL system are derived. The work presented here is useful for the design and evaluation of STIL systems.
Shouno, Hayaru; Kido, Shoji; Okada, Masato
2004-09-01
Bidirectional associative memory (BAM) is a kind of artificial neural network used to memorize and retrieve heterogeneous pattern pairs. Many efforts have been made to improve BAM from the viewpoint of computer application, but few theoretical studies have been done. We investigated the theoretical characteristics of BAM using a framework of statistical-mechanical analysis. To investigate the equilibrium state of BAM, we applied self-consistent signal-to-noise analysis (SCSNA) and obtained macroscopic parameter equations and the relative capacity. Moreover, to investigate not only the equilibrium state but also the retrieval process of reaching the equilibrium state, we applied statistical neurodynamics to the update rule of BAM and obtained evolution equations for the macroscopic parameters. These evolution equations are consistent with the results of SCSNA in the equilibrium state.
Physical Layer Authentication Enhancement Using Maximum SNR Ratio Based Cooperative AF Relaying
Jiazi Liu
2017-01-01
Physical layer authentication techniques developed in conventional macrocell wireless networks face challenges when applied in future fifth-generation (5G) wireless communications, due to the deployment of dense small cells in a hierarchical network architecture. In this paper, we propose a novel physical layer authentication scheme by exploiting the advantages of amplify-and-forward (AF) cooperative relaying, which can increase the coverage and convergence of the heterogeneous networks. The essence of the proposed scheme is to select the best relay among multiple AF relays for cooperation between the legitimate transmitter and the intended receiver in the presence of a spoofer. To achieve this goal, two best-relay selection schemes are developed by maximizing the ratio of the signal-to-noise ratio (SNR) of the legitimate link to that of the spoofing link, at the destination and at the relays, respectively. In the sequel, we derive closed-form expressions for the outage probabilities of the effective SNR ratios at the destination. With the help of the best relay, a new test statistic is developed for making an authentication decision, based on the normalized channel difference between adjacent end-to-end channel estimates at the destination. The performance of the proposed authentication scheme is compared with that of a direct transmission in terms of outage and spoofing detection.
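Once per-relay SNRs are in hand, the best-relay criterion described above reduces to an argmax over the legitimate-to-spoofing SNR ratios. The sketch below uses made-up linear-scale SNR values and omits the paper's full end-to-end AF SNR expressions:

```python
import numpy as np

def select_best_relay(snr_legit, snr_spoof):
    """Pick the AF relay maximizing the legitimate-to-spoofing SNR ratio.

    snr_legit[k], snr_spoof[k]: linear-scale SNRs of the legitimate and
    spoofing links through relay k (illustrative inputs only).
    """
    ratio = np.asarray(snr_legit) / np.asarray(snr_spoof)
    return int(np.argmax(ratio)), float(ratio.max())

best, r = select_best_relay([12.0, 20.0, 8.0], [3.0, 10.0, 1.0])
# relay 2 wins: 8/1 = 8 beats 12/3 = 4 and 20/10 = 2
```

Note that the relay with the highest absolute legitimate-link SNR (relay 1 here) is not selected; the ratio, not the raw SNR, drives the authentication advantage.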
Laor, Ari; Bahcall, John N.; Jannuzi, Buell T.; Schneider, Donald P.; Green, Richard F.; Hartig, George F.
1994-01-01
We analyze the ultraviolet (UV) emission line and continuum properties of five low-redshift active galactic nuclei (four luminous quasars: PKS 0405-123, H1821+643, PG 0953+414, and 3C 273, and one bright Seyfert 1 galaxy: Mrk 205). The HST spectra have higher signal-to-noise ratios (typically ~60 per resolution element) and spectral resolution (R = 1300) than all previously published UV spectra used to study the emission characteristics of active galactic nuclei. We include in the analysis ground-based optical spectra covering Hβ and the narrow [O III] λλ4959,5007 doublet. The following new results are obtained: Lyβ/Lyα = 0.03-0.12 for the four quasars, which is the first accurate measurement of the long-predicted Lyβ intensity in QSOs. The cores of Lyα and C IV are symmetric to an accuracy of better than 2.5% within about 2000 km/s of the line peak. This high degree of symmetry of Lyα argues against models in which the broad line cloud velocity field has a significan...
Flanagan, E E; Flanagan, Eanna E.; Hughes, Scott A.
1998-01-01
We estimate the expected signal-to-noise ratios (SNRs) from the three phases (inspiral, merger, ringdown) of coalescing binary black holes (BBHs) for initial and advanced ground-based interferometers (LIGO/VIRGO) and for space-based interferometers (LISA). LIGO/VIRGO can do moderate-SNR (a few tens), moderate-accuracy studies of BBH coalescences in the mass range of a few to about 2000 solar masses; LISA can do high-SNR (of order 10^4), high-accuracy studies in the mass range of about 10^5 to 10^8 solar masses. BBHs might well be the first sources detected by LIGO/VIRGO: they are visible to much larger distances (up to 500 Mpc by initial interferometers) than coalescing neutron star binaries (heretofore regarded as the "bread and butter" workhorse source for LIGO/VIRGO, visible to about 30 Mpc by initial interferometers). Low-mass BBHs (up to 50 solar masses for initial LIGO interferometers; 100 for advanced; 10^6 for LISA) are best searched for via their well-understood inspiral waves; higher mass BBHs must be s...
Borkowski, Robert; Johannisson, Pontus; Wymeersch, Henk
2014-01-01
We perform an experimental investigation of a maximum likelihood-based (ML-based) algorithm for bulk chromatic dispersion estimation for digital coherent receivers operating in uncompensated optical networks. We demonstrate the robustness of the method at low optical signal-to-noise ratio (OSNR) ...
Signal-to-noise improvements with a new far-IR rapid scan Michelson interferometer
Hirschmugl, C.J.; Williams, G.P. (National Synchrotron Light Source, Brookhaven National Laboratory, Upton, New York 11973 (United States))
1995-02-01
In this paper signal-to-noise issues in the infrared spectral region are discussed, presenting an update on instrumentation developments that have focused on this topic. Reproducibilities in the 0.01% range were achieved for spectra measured in around 1 min on samples with an area of 1 mm² illuminated with an f/10 beam. It is shown how this result is consistent with the synchrotron source intensity and detector noise, and a comparison with a conventional globar source is also shown. For these new studies, a Nicolet™ Impact 400 rapid-scan Michelson interferometer was modified by Pike Technologies and installed in vacuum at the U4IR infrared beamline at the NSLS. The instrument is capable of scanning at an optical retardation rate of 3.2 cm/s, and of a data-collection frequency of 50 kHz triggered by the colinear reference beam of a HeNe laser. A proprietary Nicolet™ solid-state beam splitter was used to cover the range from 10 to 2500 cm⁻¹. Spectra were taken in reflection at grazing incidence off a single-crystal Cu surface in ultrahigh vacuum using liquid-helium-cooled detectors of the photoconductive type (Cu/Ge) or bolometric type (B/Si). The sample throughput for this system was 0.05 mm² sr.
Signal to noise improvements with a new Far-IR rapid-scan Michelson Interferometer
Hirschmugl, C.J.; Williams, G.P.
1994-11-01
In this paper we discuss signal-to-noise issues in the infrared spectral region, presenting an update on instrumentation developments that have focused on this topic. We have been able to achieve reproducibilities in the 0.01% range for spectra measured in around 1 minute on samples with an area of 1 mm² illuminated with an f/10 beam. We show how this result is consistent with the synchrotron source intensity and detector noise, and we also show a comparison with a conventional globar source. For these new studies, a Nicolet™ Impact 400 rapid-scan Michelson interferometer was modified by Pike Technologies and installed in vacuum at the U4IR infrared beamline at the NSLS. The instrument is capable of scanning at an optical retardation rate of 3.2 cm/sec, and of a data-collection frequency of 50 kHz triggered by the co-linear reference beam of a HeNe laser. A proprietary Nicolet™ solid-state beamsplitter was used to cover the range from 10-2500 cm⁻¹. Spectra were taken in reflection at grazing incidence off a single-crystal Cu surface in ultra-high vacuum using liquid-helium-cooled detectors of the photoconductive type (Cu/Ge) or bolometric type (B/Si). The sample throughput for this system was 0.01 mm² steradians.
The search for IR excess in low signal to noise sources
Zink, Jonathon K
2016-01-01
We present sources selected from their Wide-field Infrared Survey Explorer (WISE) colors that merit future observations to image for disks and possible exoplanet companions. Introducing a weighted detection method, we eliminated the enormous number of specious excesses seen in low signal-to-noise objects by requiring greater excess for fainter stars. This is achieved by sorting through the 747 million sources of the ALLWISE database. In examining these dim stars, it can be shown that a non-Gaussian distribution best describes the spread around the main-sequence polynomial fit function. Using a gamma probability density function (PDF), we can best mimic the main-sequence distribution and exclude natural fluctuations in IR excess. With this new methodology we re-discover 25 IR excesses and present 14 new candidates. One source (J053010.20-010140.9) suggests an 8.40 ± 0.73 AU disk, a likely candidate for possible direct imaging of planets that are likely fully formed. Although all of these sources are well w...
Gureyev, Timur E; Nesterets, Yakov I; Stevenson, Andrew W; Miller, Peter R; Pogany, Andrew; Wilkins, Stephen W
2008-03-03
Simple analytical expressions are derived for the spatial resolution, contrast and signal-to-noise in X-ray projection images of a generic phase edge. The obtained expressions take into account the maximum phase shift generated by the sample and the sharpness of the edge, as well as such parameters of the imaging set-up as the wavelength spectrum and the size of the incoherent source, the source-to-object and object-to-detector distances and the detector resolution. Different asymptotic behavior of the expressions in the cases of large and small Fresnel numbers is demonstrated. The analytical expressions are compared with the results of numerical simulations using Kirchhoff diffraction theory, as well as with experimental X-ray measurements.
30 CFR 7.87 - Test to determine the maximum fuel-air ratio.
2010-07-01
30 Mineral Resources 1 2010-07-01 false Test to determine the maximum fuel-air ratio. 7... Use in Underground Coal Mines § 7.87 Test to determine the maximum fuel-air ratio. (a) Test procedure... several speed/torque conditions to determine the concentrations of CO and NOX, dry basis, in the...
Chen, Y; Mu, C; Intes, X; Chance, B
2001-08-13
Previous studies have suggested that the phased-array detection can achieve high sensitivity in detecting and localizing inhomogeneities embedded in turbid media by illuminating with dual interfering sources. In this paper, we analyze the sensitivity of single-source and dual-interfering-source (phased array) systems with signal-to-noise ratio criteria. Analytical solutions are presented to investigate the sensitivity of detection using different degrees of absorption perturbation by varying the size and contrast of the object under similar configurations for single- and dual-source systems. The results suggest that dual-source configuration can provide higher detection sensitivity. The relation between the amplitude and phase signals for both systems is also analyzed using a vector model. The results can be helpful for optimizing the experimental design by combining the advantages of both single- and dual-source systems in object detection and localization.
Vaughan, Timothy E; Weaver, James C
2005-05-01
We describe an approach to aiding the design and interpretation of experiments involving biological effects of weakly interacting electromagnetic fields that range from steady (dc) to microwave frequencies. We propose that if known biophysical mechanisms cannot account for the inferred underlying molecular-change signal-to-noise ratio, (S/N)gen, of an observed result, then there are two interpretation choices: (1) there is an unknown biophysical mechanism with stronger coupling between the field exposure and the ongoing biochemical process, or (2) the experiment is responding to something other than the field exposure. Our approach is based on classical detection theory, the recognition that weakly interacting fields cannot break chemical bonds, and the consequence that such fields can only alter rates of ongoing, metabolically driven biochemical reactions and transport processes. The approach includes both fundamental chemical noise (molecular shot noise) and other sources of competing chemical change, to be compared quantitatively to the field-induced change for the basic case that the field alters a single step in a biochemical network. Consistent with pharmacology and toxicology, we estimate the molecular dose (mass associated with field-induced molecular change per mass of tissue) resulting from illustrative low-frequency field exposures for the biophysical mechanism of voltage-gated channels. For perspective, we then consider electric-field-mediated delivery of small molecules across human skin and into individual cells. Specifically, we consider the examples of iontophoretic and electroporative delivery of fentanyl through skin and electroporative delivery of bleomycin into individual cells. The total delivered amount corresponds to a molecular change signal and the delivery variability corresponds to generalized chemical noise. Viewed broadly, biological effects due to nonionizing fields may include animal navigation, medical applications, and environmental…
Effect of consolidation ratios on maximum dynamic shear modulus of sands
Yuan Xiaoming; Sun Jing; Sun Rui
2005-01-01
The dynamic shear modulus (DSM) is the most basic soil parameter in earthquake or other dynamic loading conditions and can be obtained through testing in the field or in the laboratory. The effect of consolidation ratios on the maximum DSM for two types of sand is investigated by using resonant column tests, and an increment formula to obtain the maximum DSM for cases of consolidation ratio kc > 1 is presented. The results indicate that the maximum DSM rises rapidly when kc is near 1 and then slows down, which means that a power function of the consolidation ratio increment kc - 1 can be used to describe the variation of the maximum DSM due to kc > 1. The results also indicate that the increase in the maximum DSM due to kc > 1 is significantly larger than that predicted by Hardin and Black's formula.
Tao Shang; Jianping Chen; Xinwan Li; Junhe Zhou
2006-01-01
A numerical design of a triangular photonic crystal fiber (PCF) based backward multi-pump Raman amplifier is presented. It is demonstrated that a high, flat Raman gain can be reached based on PCF. The influences of different geometric parameters and germanium doping concentrations on the Raman net gain, amplified spontaneous emission (ASE) noise, and double Rayleigh backscattering (DRBS) of the signal have been analyzed. For optimizing the photonic crystal fiber Raman amplifier (FRA), there is a tradeoff between the geometric parameters and the germanium doping concentration of the triangular PCF. The results show that PCF is an appropriate candidate for high-gain Raman amplifiers.
Rangelov, Dragan; Müller, Hermann J; Zehetleitner, Michael
2017-05-01
Pop-out search implies that the target is always the first item selected, no matter how many distractors are presented. However, increasing evidence indicates that search is not entirely independent of display density even for pop-out targets: search is slower with sparse displays (few distractors) than with dense displays (many distractors). Despite its significance, the cause of this anomaly remains unclear. We investigated several mechanisms that could slow down search for pop-out targets. Consistent with the assumption that pop-out targets frequently fail to pop out in sparse displays, we observed greater variability of search duration for sparse displays relative to dense. Computational modeling of the response time distributions also supported the view that pop-out targets fail to pop out in sparse displays. Our findings strongly question the classical assumption that early processing of pop-out targets is independent of the distractors. Rather, the density of distractors critically influences whether or not a stimulus pops out. These results call for new, more reliable measures of pop-out search and potentially a reinterpretation of studies that used relatively sparse displays.
Chabot-Leclerc, Alexandre; MacDonald, Ewen; Dau, Torsten
2016-01-01
…time difference of the target and masker. The Pearson correlation coefficient between the simulated speech reception thresholds and the data across all experiments was 0.91. A model version that considered only BE processing performed similarly (correlation coefficient of 0.86) to the complete model…
Increase in signal-to-noise ratio of > 10,000 times in liquid-state NMR
Ardenkjær-Larsen, Jan H.; Fridlund, Björn; Gram, Andreas; Hansson, Georg; Hansson, Lennart; Lerche, Mathilde H.; Servin, Rolf; Thaning, Mikkel; Golman, Klaes
2003-09-01
A method for obtaining strongly polarized nuclear spins in solution has been developed. The method uses low temperature, high magnetic field, and dynamic nuclear polarization (DNP) to strongly polarize nuclear spins in the solid state. The solid sample is subsequently dissolved rapidly in a suitable solvent to create a solution of molecules with hyperpolarized nuclear spins. The polarization is performed in a DNP polarizer, consisting of a superconducting magnet (3.35 T) and a liquid-helium-cooled sample space. The sample is irradiated with microwaves at 94 GHz. Subsequent to polarization, the sample is dissolved by an injection system inside the DNP magnet. The dissolution process effectively preserves the nuclear polarization. The resulting hyperpolarized liquid sample can be transferred to a high-resolution NMR spectrometer, where an enhanced NMR signal can be acquired, or it may be used as an agent for in vivo imaging or spectroscopy. In this article we describe the use of the method on aqueous solutions of [13C]urea. Polarizations of 37% for 13C and 7.8% for 15N, respectively, were obtained after the dissolution. These polarizations correspond to enhancements of 44,400 for 13C and 23,500 for 15N, respectively, compared with thermal equilibrium at 9.4 T and room temperature. The method can be used generally for signal enhancement and reduction of measurement time in liquid-state NMR and opens up a variety of in vitro and in vivo applications of DNP-enhanced NMR.
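The quoted enhancement factors can be sanity-checked against the spin-1/2 thermal polarization P = tanh(hνB/(2kT)) with ν = γB/2π. The sketch below assumes 298 K and γ(13C)/2π ≈ 10.71 MHz/T; the few-percent discrepancy from the reported 44,400 simply reflects the exact temperature and constants assumed here.

```python
import math

def thermal_polarization(gamma_hz_per_t, b_tesla, temp_k):
    """Spin-1/2 thermal polarization P = tanh(h * nu / (2 * k * T))."""
    h = 6.62607015e-34   # Planck constant, J*s
    k = 1.380649e-23     # Boltzmann constant, J/K
    return math.tanh(h * gamma_hz_per_t * b_tesla / (2 * k * temp_k))

# 13C at 9.4 T and room temperature (assumed gyromagnetic ratio in Hz/T)
p_thermal = thermal_polarization(10.7084e6, 9.4, 298.0)
enhancement = 0.37 / p_thermal   # hyperpolarized 37% vs thermal equilibrium
# order of magnitude agrees with the ~44,400-fold enhancement reported for 13C
```

The thermal polarization at 9.4 T is only a few parts per million, which is why a 37% hyperpolarized state yields a four-order-of-magnitude signal gain.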
Sven Kroener
BACKGROUND: The importance of dopamine (DA) for prefrontal cortical (PFC) cognitive functions is widely recognized, but its mechanisms of action remain controversial. DA is thought to increase signal gain in active networks according to an inverted-U dose-response curve, and these effects may depend on both tonic and phasic release of DA from midbrain ventral tegmental area (VTA) neurons. METHODOLOGY/PRINCIPAL FINDINGS: We used patch-clamp recordings in organotypic co-cultures of the PFC, hippocampus and VTA to study DA modulation of spontaneous network activity in the form of Up-states and signals in the form of synchronous EPSP trains. These cultures possessed a tonic DA level, and stimulation of the VTA evoked DA transients within the PFC. The addition of high (≥1 µM) concentrations of exogenous DA to the cultures reduced Up-states and diminished excitatory synaptic inputs (EPSPs) evoked during the Down-state. Increasing endogenous DA via bath application of cocaine also reduced Up-states. Lower concentrations of exogenous DA (0.1 µM) had no effect on the Up-state itself, but they selectively increased the efficiency of a train of EPSPs to evoke spikes during the Up-state. When the background DA was eliminated by depleting DA with reserpine and alpha-methyl-p-tyrosine, or by preparing corticolimbic co-cultures without the VTA slice, Up-states could be enhanced by low concentrations (0.1-1 µM) of DA that had no effect in the VTA-containing cultures. Finally, in spite of the concentration-dependent effects on Up-states, exogenous DA at all but the lowest concentrations increased intracellular current-pulse evoked firing in all cultures, underlining the complexity of DA's effects in an active network.
CONCLUSIONS/SIGNIFICANCE: Taken together, these data show concentration-dependent effects of DA on global PFC network activity and they demonstrate a mechanism through which optimal levels of DA can modulate signal gain to support cognitive functioning.
Agostini, Valentina; Knaflitz, Marco
2012-01-01
In many applications requiring the study of the surface myoelectric signal (SMES) acquired in dynamic conditions, it is essential to have a quantitative evaluation of the quality of the collected signals. When the activation pattern of a muscle has to be obtained by means of single- or double-threshold statistical detectors, the background noise level e_noise of the signal is a necessary input parameter. Moreover, the detection strategy of double-threshold detectors may be properly tuned when the SNR and the duty cycle (DC) of the signal are known. The aim of this paper is to present an algorithm for the estimation of e_noise, SNR, and DC of an SMES collected during cyclic movements. The algorithm is validated on synthetic signals with statistical properties similar to those of SMES, as well as on more than 100 real signals.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
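The idea of a shared fundamental frequency with per-channel amplitudes and phases can be sketched as a grid search. This is a simplified illustration assuming white noise of equal variance in every channel; the published estimator additionally accounts for per-channel noise statistics.

```python
import cmath

def ml_pitch(channels, f0_grid, num_harmonics=3):
    """Simplified multi-channel pitch search: for each candidate
    fundamental f0 (in cycles/sample), sum the energies of the harmonic
    projections over all channels and return the maximizing f0.
    Under equal-variance white Gaussian noise, maximizing this cost
    approximates the maximum-likelihood estimate."""
    best_f0, best_cost = None, -1.0
    for f0 in f0_grid:
        cost = 0.0
        for x in channels:
            n_samples = len(x)
            for l in range(1, num_harmonics + 1):
                # projection of the channel onto the l-th harmonic
                c = sum(x[n] * cmath.exp(-2j * cmath.pi * l * f0 * n)
                        for n in range(n_samples))
                cost += abs(c) ** 2 / n_samples
        if cost > best_cost:
            best_f0, best_cost = f0, cost
    return best_f0
```

Because the cost pools evidence across channels, a harmonic that is weak in one channel but strong in another still contributes to the shared pitch estimate.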
Maximum Deformation Ratio of Droplets of Water-Based Paint Impact on a Flat Surface
Weiwei Xu
2017-06-01
In this research, the maximum deformation ratio of water-based paint droplets impacting and spreading onto a flat solid surface was investigated numerically, based on the Navier–Stokes equation coupled with the level-set method. The effects of droplet size, impact velocity, and equilibrium contact angle are taken into account. The maximum deformation ratio increases as droplet size and impact velocity increase, and scales as We^(1/4), where We is the Weber number, for the case of the effect of droplet size. Finally, the effect of equilibrium contact angle is investigated; the result shows that the spreading radius decreases with increasing equilibrium contact angle, whereas the height increases. When the dimensionless time t* < 0.3, there is a linear relationship between the dimensionless spreading radius and the dimensionless time to the 1/2 power. For the case of 80° ≤ θe ≤ 120°, where θe is the equilibrium contact angle, the simulated maximum deformation ratio follows the fitting result. The research on the maximum deformation ratio of water-based paint is useful for water-based paint applications in the automobile industry, as well as in the biomedical industry and the real estate industry.
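The reported We^(1/4) scaling is easy to state numerically. A brief sketch follows; the prefactor k and the example fluid properties are placeholders for illustration, not values taken from the paper.

```python
import math

def weber_number(rho, v, d, sigma):
    """We = rho * v^2 * d / sigma for a droplet of density rho (kg/m^3),
    impact speed v (m/s), diameter d (m) and surface tension sigma (N/m)."""
    return rho * v * v * d / sigma

def max_deformation_ratio(we, k=1.0):
    """Sketch of the reported scaling beta_max ~ We^(1/4).
    The prefactor k is a placeholder, not fitted from the paper's data."""
    return k * we ** 0.25
```

For example, a 50 µm droplet of a water-like fluid (ρ = 1000 kg/m³, σ = 0.04 N/m) impacting at 2 m/s gives We = 5, so doubling the impact speed quadruples We but raises the predicted deformation ratio by only 4^(1/4) ≈ 1.41.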
Kissick, David J.; Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.; Simpson, Garth J.
2014-01-01
Despite the ubiquitous use of multi-photon and confocal microscopy measurements in biology, the core techniques typically suffer from fundamental compromises between signal to noise (S/N) and linear dynamic range (LDR). In this study, direct synchronous digitization of voltage transients coupled with statistical analysis is shown to allow S/N approaching the theoretical maximum throughout an LDR spanning more than 8 decades, limited only by the dark counts of the detector on the low end and by the intrinsic nonlinearities of the photomultiplier tube (PMT) detector on the high end. Synchronous digitization of each voltage transient represents a fundamental departure from established methods in confocal/multi-photon imaging, which are currently based on either photon counting or signal averaging. High information-density data acquisition (up to 3.2 GB/s of raw data) enables the smooth transition between the two modalities on a pixel-by-pixel basis and the ultimate writing of much smaller files (few kB/s). Modeling of the PMT response allows extraction of key sensor parameters from the histogram of voltage peak-heights. Applications in second harmonic generation (SHG) microscopy are described demonstrating S/N approaching the shot-noise limit of the detector over large dynamic ranges. PMID:24817799
GUAN Hsin; WANG Bo; LU Pingping; XU Liang
2014-01-01
The identification of the maximum road friction coefficient and optimal slip ratio is crucial to vehicle dynamics and control. However, it is not easy to identify the maximum road friction coefficient with high robustness and good adaptability to various vehicle operating conditions, and the existing investigations on robust identification of the maximum road friction coefficient are unsatisfactory. In this paper, an identification approach based on road type recognition is proposed for the robust identification of the maximum road friction coefficient and optimal slip ratio. The instantaneous road friction coefficient is estimated through the recursive least squares with forgetting factor method based on the single-wheel model, and the estimated road friction coefficient and slip ratio are grouped into a set of samples over a small time interval before the current time, which is updated as time progresses. The current road type is recognized by comparing the samples of the estimated road friction coefficient with the standard road friction coefficient of each typical road, and the minimum statistical error is used as the recognition principle to improve identification robustness. Once the road type is recognized, the maximum road friction coefficient and optimal slip ratio are determined. Numerical simulation tests are conducted on two typical road friction conditions (single-friction and joint-friction) by using CarSim software. The test results show that there is little identification error between the identified maximum road friction coefficient and the pre-set value in CarSim. The proposed identification method has good robustness to external disturbances and good adaptability to various vehicle operating conditions and road variations, and the identification results can be used for the adjustment of vehicle active safety control strategies.
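The recursive least squares with forgetting factor step can be sketched in its simplest scalar form, tracking a slowly varying friction level from noisy per-sample estimates. This is an illustration of the RLS mechanism only; the paper's estimator is built on the single-wheel model rather than a constant-level model.

```python
def rls_scalar(measurements, lam=0.98):
    """Scalar recursive least squares with forgetting factor lam:
    tracks a level theta from noisy observations y_k (here, e.g.,
    instantaneous friction estimates mu_k = F_x / F_z).  Smaller lam
    forgets old data faster and tracks road changes more quickly."""
    theta, P = 0.0, 1e6          # parameter estimate and covariance
    for y in measurements:
        K = P / (lam + P)        # gain (the regressor is 1 for a level)
        theta = theta + K * (y - theta)
        P = (1 - K) * P / lam
    return theta
```

With the large initial covariance, the first sample dominates; thereafter the forgetting factor keeps the gain bounded away from zero so the estimate can follow a changing road surface.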
Nickerson, N. R.; Risk, D. A.
2012-12-01
In order to fulfill a role in demonstrating containment, surface monitoring for Carbon Capture and Geologic Storage (CCS) sites must be able to clearly discriminate between natural and leakage-source CO2. The CCS community lacks a clear metric for quantifying the degree of discrimination, for successful inter-comparison of monitoring approaches. This study illustrates the utility of the Signal-to-Noise Ratio (SNR) to compare the relative performance of three commonly used soil gas monitoring approaches: bulk CO2, δ13CO2, and Δ14CO2. For inter-comparisons, we used a simulated northern temperate landscape similar to that of Weyburn, Saskatchewan (home of the IEAGHG Weyburn-Midale CO2 Monitoring and Storage Project), in which realistic spatial and temporal CO2 and isotopic variation is simulated for periods of one year or more. Results indicate that, for this particular ecosystem, Δ14C signatures have the best overall SNR at all simulated seepage rates and for all points across the synthetic landscape. We then apply this same SNR-based approach to data collected during a 6-month sampling campaign at three locations on the Weyburn oil field. This study emphasizes both the importance of developing clear metrics for monitoring performance and the benefit of modeling for decision support in CCS monitoring design.
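One plausible reading of an SNR metric for leak discrimination is sketched below: the "signal" is the mean shift a seepage introduces, and the "noise" is the natural background variability. The exact definition used in the study is not reproduced here; this is an assumption for illustration.

```python
import math

def snr_db(natural_background, with_leak):
    """Illustrative discrimination metric: signal = mean shift of the
    leak-affected observations relative to the natural background mean;
    noise = standard deviation of the natural background variability."""
    n = len(natural_background)
    mu = sum(natural_background) / n
    var = sum((v - mu) ** 2 for v in natural_background) / n
    signal = sum(with_leak) / len(with_leak) - mu
    return 10 * math.log10(signal ** 2 / var)
```

Under this reading, a tracer with small natural variability (such as an isotopic signature) can achieve a high SNR even for a modest leak-induced shift, which is consistent with the abstract's finding that Δ14C performs best.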
Zhang, H X
2008-01-01
An innovative approach for total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, the setting of trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set high (e.g. greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies did not provide an approach to explicitly address the determination of the trading ratio, and uncertainty analysis has rarely been linked to it. This paper presents a practical methodology for estimating an "equivalent trading ratio (ETR)" and links uncertainty analysis with trading ratio determination in the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of the tradeoffs between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of NPS load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. The rigorous quantification of the trading ratio will enhance the scientific basis, and thus public perception, for more informed decisions in an overall watershed-based pollutant trading program.
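The arithmetic behind a trading ratio is simple to state. The sketch below illustrates the general idea; the second function is one plausible reading of an "equivalent" ratio (relative per-unit effectiveness), not the paper's formulation, which is derived from uncertainty analysis of the TMDL allocation.

```python
def required_nonpoint_reduction(point_shortfall, trading_ratio):
    """Load a nonpoint source must remove so a point source may discharge
    `point_shortfall` units above its allocation.  A ratio > 1 buffers
    the risk that nonpoint reductions are less effective than expected."""
    return trading_ratio * point_shortfall

def equivalent_trading_ratio(ps_effectiveness, nps_effectiveness):
    """Hypothetical illustration: ratio of per-unit water-quality
    effectiveness of point versus nonpoint reductions.  If a nonpoint
    unit is half as effective, two units must be traded per point unit."""
    return ps_effectiveness / nps_effectiveness
```

For example, with a 2:1 trading ratio, a point source buying 10 units of headroom obligates the nonpoint seller to remove 20 units of load.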
Optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits.
Ozkan, Fahri; Tuna, M Cihat; Baylar, Ahmet; Ozturk, Mualla
2014-01-01
Oxygen is an important component of water quality and its ability to sustain life. Water aeration is the process of introducing air into a body of water to increase its oxygen saturation. Water aeration can be accomplished in a variety of ways, for instance, closed-conduit aeration. High-speed flow in a closed conduit involves air-water mixture flow. The air flow results from the subatmospheric pressure downstream of the gate. The air entrained by the high-speed flow is supplied by the air vent. The air entrained into the flow in the form of a large number of bubbles accelerates oxygen transfer and hence also increases aeration efficiency. In the present work, the optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits was studied experimentally. Results showed that aeration efficiency increased with the air-demand ratio to a certain point and then aeration efficiency did not change with a further increase of the air-demand ratio. Thus, there was an optimum value for the air-demand ratio, depending on the Froude number, which provides maximum aeration efficiency. Furthermore, a design formula for aeration efficiency was presented relating aeration efficiency to the air-demand ratio and Froude number.
A High Signal-to-Noise UV Spectrum of NGC 7469: New Support for Reprocessing of Continuum Radiation
Kriss, Gerard A.; Crenshaw, D. M.; Peterson, Bradley M.; Zheng, Wei
2000-01-01
From 1996 June 10 to 1996 July 29 the International AGN Watch monitored the Seyfert 1 galaxy NGC 7469 using IUE, RXTE, and a network of ground-based observatories. On 1996 June 18, in the midst of this intensive monitoring period, we obtained a high signal-to-noise snapshot of the UV spectrum from 1150-3300 A using the FOS on HST. This spectrum allows us to disentangle the UV continuum more accurately from the broad wings of the emission lines, to identify clean continuum windows free of contaminating emission and absorption, and to deblend line complexes such as Lya+NV, CIV+HeII+OIII], SiIII]+CIII], and MgII+FeII. Using the FOS spectrum as a template, we have fit and extracted line and continuum fluxes from the IUE monitoring data. The cleaner continuum extractions confirm the discovery of time delays between the different UV continuum bands by Wanders et al. Our new measurements show delays increasing with wavelength for continuum bands centered at 1485 A, 1740 A and 1825 A relative to 1315 A with delays of...
Ertas, Gokhan; Gulcur, H Ozcan; Tunaci, Mehtap
2008-05-01
The effectiveness of morphological descriptors based on normalized maximum intensity-time ratio (nMITR) maps, generated using a 3 × 3 pixel moving mask on dynamic contrast-enhanced magnetic resonance (MR) mammograms, is studied for assessment of malignancy. After a rough indication of the volume of interest on the nMITR maps, lesions are automatically segmented. Two-dimensional (2D) convexity, normalized complexity, extent, and eccentricity, as well as three-dimensional (3D) versions of these descriptors and the contact surface area ratio, are computed. On a data set consisting of dynamic contrast-enhanced (DCE) MR mammograms from 51 women that contain 26 benign and 32 malignant lesions, 3D convexity, complexity, and extent are found to reflect aggressiveness of malignancy better than the 2D descriptors. The contact surface area ratio, which is easily adaptable to different imaging resolutions, is found to be the most significant and accurate descriptor (75% sensitivity, 88% specificity, 89% positive predictive value, and 74% negative predictive value).
Describing adequacy of cure with maximum hardness ratios and non-linear regression.
Bouschlicher, Murray; Berning, Kristen; Qian, Fang
2008-01-01
Knoop hardness (KH) ratios (HR) ≥ 80% are commonly used as criteria for the adequate cure of a composite. These per-specimen HRs can be misleading, as both numerator and denominator may increase concurrently, prior to reaching an asymptotic, top-surface maximum hardness value (H(MAX)). Extended cure times were used to establish H(MAX), and descriptive statistics and non-linear regression analysis were used to describe the relationship between exposure duration and HR and to predict the time required for HR-H(MAX) = 80%. Composite samples (2.00 × 5.00 mm diameter, n = 5/group) were cured for 10, 20, 40, 60, 90, 120, 180, and 240 seconds in a 2-composite × 2-light-curing-unit design. A microhybrid (Point 4, P4) or microfill (Heliomolar, HM) resin composite was cured with a QTH or LED light curing unit and then stored in the dark for 24 hours prior to KH testing. The non-linear regression H = (H(MAX) − c)(1 − e^(−kt)) + c, with H(MAX) = maximum hardness (a theoretical asymptotic value), c = constant (t = 0), k = rate constant and t = exposure duration, describes the relationship between radiant exposure (irradiance × time) and HRs. Exposure durations for HR-H(MAX) = 80% were calculated. Two-sample t-tests for pairwise comparisons evaluated the relative performance of the light curing units for similar surface × composite × exposure (10-90 s). Goodness-of-fit of the non-linear regression, r², ranged from 0.68 to 0.95 (mean = 0.82). Microhybrid (P4) exposure to achieve HR-H(MAX) = 80% was 21 seconds for the QTH and 34 seconds for the LED light curing unit. Corresponding values for the microfill (HM) were 71 and 74 seconds, respectively. P4 HR-H(MAX) of LED vs QTH was statistically similar for 10 to 40 seconds, while HM HR-H(MAX) of LED was significantly lower than QTH for 10 to 40 seconds. It was concluded that redefined hardness ratios based on maximum hardness used in conjunction with non-linear regression
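The saturating-exponential model quoted in the abstract can be inverted in closed form to get the exposure time at which the hardness ratio reaches 80%. A sketch follows; the parameter values in the usage note are invented for illustration, not the paper's fitted values.

```python
import math

def hardness(t, h_max, c, k):
    """The abstract's model: H(t) = (H_MAX - c) * (1 - exp(-k t)) + c."""
    return (h_max - c) * (1 - math.exp(-k * t)) + c

def time_to_ratio(ratio, h_max, c, k):
    """Invert H(t) for the exposure time at which H / H_MAX = ratio
    (e.g. ratio = 0.80 for the common 80% hardness-ratio criterion).
    Requires c < ratio * h_max < h_max for a finite solution."""
    target = ratio * h_max
    return -math.log(1 - (target - c) / (h_max - c)) / k
```

For instance, with hypothetical parameters H_MAX = 60 KHN, c = 20 KHN and k = 0.05 s⁻¹, the 80% criterion (H = 48 KHN) is reached at roughly t = ln(1/0.3)/0.05 ≈ 24 s.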
Impact and Mitigation of Multiantenna Analog Front-End Mismatch in Transmit Maximum Ratio Combining
Liu, Jian; Khaled, Nadia; Petré, Frederik; Bourdoux, André; Barel, Alain
2006-12-01
Transmit maximum ratio combining (MRC) makes it possible to extend the range of wireless local area networks (WLANs) by exploiting spatial diversity and array gains. These gains, however, depend on the availability of channel state information (CSI). In this perspective, an open-loop approach in time-division-duplex (TDD) systems relies on channel reciprocity between up- and downlink to acquire the CSI. Although the propagation channel can be assumed to be reciprocal, the radio-frequency (RF) transceivers may exhibit amplitude and phase mismatches between the up- and downlink. In this contribution, we present a statistical analysis to assess the impact of these mismatches on the performance of transmit-MRC. Furthermore, we propose a novel mixed-signal calibration scheme to mitigate these mismatches, which reduces the implementation loss to as little as a few tenths of a dB. Finally, we also demonstrate the feasibility of the proposed calibration scheme on a real-time wireless MIMO-OFDM prototyping platform.
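The transmit-MRC weighting itself is a textbook operation: with perfect CSI the weights are the conjugate channel normalized to unit power, and the post-combining gain equals the channel norm. A minimal sketch (standard MRC, not the paper's calibration scheme):

```python
import math

def mrc_weights(h):
    """Textbook transmit-MRC beamforming: w = conj(h) / ||h||, i.e. the
    conjugate channel normalized to unit total transmit power."""
    norm = math.sqrt(sum(abs(g) ** 2 for g in h))
    return [g.conjugate() / norm for g in h]

def array_gain(h):
    """Post-combining channel amplitude |h^T w|, which equals ||h||
    when the weights are matched to the true channel."""
    w = mrc_weights(h)
    return abs(sum(g * v for g, v in zip(h, w)))
```

An RF amplitude/phase mismatch, as studied in the abstract, corresponds to computing the weights from a distorted channel estimate, so the achieved gain falls below ||h||; the calibration scheme aims to close that gap.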
Overlap maximum matching ratio (OMMR): a new measure to evaluate overlaps of essential modules
Xiao-xia ZHANG; Qiang-hua XIAO; Bin LI; Sai HU; Hui-jun XIONG; Bi-hai ZHAO
2015-01-01
Protein complexes are the basic units of macro-molecular organizations and help us to understand the cell's mechanisms. The development of the yeast two-hybrid, tandem affinity purification, and mass spectrometry high-throughput proteomic techniques supplies a large amount of protein-protein interaction data, which makes it possible to predict overlapping complexes through computational methods. Research shows that overlapping complexes can contribute to identifying essential proteins, which are necessary for the organism to survive and reproduce, and for life's activities. Scholars pay more attention to the evaluation of protein complexes; however, few of them focus on predicted overlaps. In this paper, an evaluation criterion called the overlap maximum matching ratio (OMMR) is proposed to analyze the similarity between the identified overlaps and the benchmark overlap modules. Comparison of essential proteins and gene ontology (GO) analysis are also used to assess the quality of overlaps. We perform a comprehensive comparison of several overlapping-complex prediction approaches, using three yeast protein-protein interaction (PPI) networks. We focus on the analysis of overlaps identified by these algorithms. Experimental results indicate the importance of overlaps and reveal the relationship between overlaps and the identification of essential proteins.
Arbutina Bojan
2011-01-01
AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche lobe filling component is a white dwarf transferring mass to another white dwarf, neutron star or black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (qmax ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for qmax and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
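The textbook origin of a stability limit near q = 2/3 can be sketched as follows, assuming conservative mass transfer, conserved orbital angular momentum, and Paczyński's Roche-lobe approximation (the paper itself derives a slightly different value from a more careful treatment):

```latex
% Donor = M_2 (the Roche-lobe-filling white dwarf), accretor = M_1,
% q \equiv M_2/M_1, total mass M = M_1 + M_2 constant.
\[
J = M_1 M_2 \sqrt{\frac{G a}{M}} = \mathrm{const}
\;\Rightarrow\; a \propto (M_1 M_2)^{-2}
\;\Rightarrow\; \frac{d\ln a}{d\ln M_2} = 2(q-1).
\]
With Paczy\'nski's approximation $R_L \approx 0.462\, a\, (M_2/M)^{1/3}$,
\[
\zeta_{RL} \equiv \frac{d\ln R_L}{d\ln M_2} = 2q - \tfrac{5}{3},
\]
while a low-mass degenerate donor obeys $R \propto M^{-1/3}$, i.e.\ $\zeta_d = -\tfrac{1}{3}$.
Mass transfer is stable when the donor shrinks no slower than its Roche lobe,
$\zeta_d \ge \zeta_{RL}$:
\[
-\tfrac{1}{3} \ge 2q - \tfrac{5}{3}
\;\Longrightarrow\;
q \le q_{\max} = \tfrac{2}{3}.
\]
```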
Body Fineness Ratio as a Predictor of Maximum Prolonged-Swimming Speed in Coral Reef Fishes
Walker, Jeffrey A.; Alfaro, Michael E.; Noble, Mae M.; Fulton, Christopher J.
2013-01-01
The ability to sustain high swimming speeds is believed to be an important factor affecting resource acquisition in fishes. While we have gained insights into how fin morphology and motion influences swimming performance in coral reef fishes, the role of other traits, such as body shape, remains poorly understood. We explore the ability of two mechanistic models of the causal relationship between body fineness ratio and endurance swimming-performance to predict maximum prolonged-swimming speed (Umax) among 84 fish species from the Great Barrier Reef, Australia. A drag model, based on semi-empirical data on the drag of rigid, submerged bodies of revolution, was applied to species that employ pectoral-fin propulsion with a rigid body at Umax. An alternative model, based on the results of computer simulations of optimal shape in self-propelled undulating bodies, was applied to the species that swim by body-caudal-fin propulsion at Umax. For pectoral-fin swimmers, Umax increased with fineness, and the rate of increase decreased with fineness, as predicted by the drag model. While the mechanistic and statistical models of the relationship between fineness and Umax were very similar, the mechanistic (and statistical) model explained only a small fraction of the variance in Umax. For body-caudal-fin swimmers, we found a non-linear relationship between fineness and Umax, which was largely negative over most of the range of fineness. This pattern fails to support either predictions from the computational models or standard functional interpretations of body shape variation in fishes. Our results suggest that the widespread hypothesis that a more optimal fineness increases endurance-swimming performance via reduced drag should be limited to fishes that swim with rigid bodies. PMID:24204575
A Family of Maximum SNR Filters for Noise Reduction
Huang, Gongping; Benesty, Jacob; Long, Tao
2014-01-01
This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR... This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.
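A maximum-SNR filter maximizes the Rayleigh quotient hᵀRₓh / hᵀRᵥh of the signal and noise correlation matrices, i.e. it is the principal generalized eigenvector of (Rₓ, Rᵥ). The 2×2 sketch below illustrates this with power iteration on Rᵥ⁻¹Rₓ; it is a didactic toy, not the filters designed in the paper.

```python
def max_snr_filter_2x2(Rx, Rv, iters=200):
    """Toy maximum-SNR filter: principal generalized eigenvector of
    (Rx, Rv), found by power iteration on B = Rv^{-1} Rx (2x2 case)."""
    (a, b), (c, d) = Rv
    det = a * d - b * c
    Rv_inv = [[d / det, -b / det], [-c / det, a / det]]
    B = [[sum(Rv_inv[i][k] * Rx[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    h = [1.0, 1.0]
    for _ in range(iters):
        h = [B[0][0] * h[0] + B[0][1] * h[1],
             B[1][0] * h[0] + B[1][1] * h[1]]
        n = max(abs(v) for v in h)
        h = [v / n for v in h]          # renormalize each iteration
    return h

def snr(h, Rx, Rv):
    """Output SNR of filter h: (h^T Rx h) / (h^T Rv h)."""
    quad = lambda M: sum(h[i] * M[i][j] * h[j]
                         for i in range(2) for j in range(2))
    return quad(Rx) / quad(Rv)
```

The distortion trade-off noted in the abstract is visible here: the filter is chosen purely to maximize the quotient, with no constraint tying the output back to the desired speech signal.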
Ramachandra, Ranjan; Bouwer, James C; Mackey, Mason R; Bushong, Eric; Peltier, Steven T; Xuong, Nguyen-Huu; Ellisman, Mark H
2014-06-01
Energy-filtered transmission electron microscopy techniques are regularly used to build elemental maps of spatially distributed nanoparticles in materials and biological specimens. When working with thick biological sections, electron energy loss spectroscopy techniques involving core-loss electrons often require exposures exceeding several minutes to provide sufficient signal to noise. Image quality with these long exposures is often compromised by specimen drift, which results in blurring and reduced resolution. To mitigate drift artifacts, a series of short-exposure images can be acquired, aligned, and merged to form a single image. For samples where the target elements have extremely low signal yields, the use of charge-coupled device (CCD)-based detectors for this purpose can be problematic. At short acquisition times, the images produced by CCDs can be noisy and may contain fixed-pattern artifacts that impact subsequent correlative alignment. Here we report on the use of direct electron detection devices (DDDs) to increase the signal to noise as compared with CCDs. A 3× improvement in signal is reported with a DDD versus a comparably formatted CCD, with equivalent dose on each detector. With the fast rolling-readout design of the DDD, the duty cycle provides a major benefit, as there is no dead time between successive frames.
Rannama, Indrek; Port, Kristjan; Bazanov, Boriss
2012-01-01
Maximum gears for youth category riders are limited. As a result, youth category riders are regularly compelled to ride in a high cadence regime. The aim of this study was to investigate if regular work at high cadence regime due to limited transmission in youth category riders reflects in effectual cadence at the point of maximal power generation during the 10 second sprint effort. 24 junior and youth national team cyclist’s average maximal peak power at various cadence regimes was registere...
Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl
2010-01-01
...log-likelihood ratios (LLRs) in order to combine information sent across different transmissions due to retransmission requests. To mitigate the effects of ever-increasing data rates that call for larger HARQ memory, vector quantization (VQ) is investigated as a technique for temporary compression of LLRs on the terminal. A capacity...
D. L. Bricker
1997-01-01
The problem of assigning cell probabilities to maximize a multinomial likelihood with order restrictions on the probabilities and/or restrictions on the local odds ratios is modeled as a posynomial geometric program (GP), a class of nonlinear optimization problems with a well-developed duality theory and collection of algorithms. (Local odds ratios provide a measure of association between categorical random variables.) A constrained multinomial MLE example from the literature is solved, and the quality of the solution is compared with that obtained by the iterative method of El Barmi and Dykstra, which is based upon Fenchel duality. Exploiting the proximity of the GP model of MLE problems to linear programming (LP) problems, we also describe as an alternative, in the absence of special-purpose GP software, an easily implemented successive LP approximation method for solving this class of MLE problems using one of the readily available LP solvers.
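The GP formulation itself is not reproduced here, but the special case of a pure ordering restriction p₁ ≤ p₂ ≤ … ≤ pₖ has a classic closed-form solution: the order-restricted multinomial MLE is the isotonic regression of the raw proportions, computed by the pool-adjacent-violators algorithm. A sketch:

```python
def isotonic_mle(counts):
    """Order-restricted multinomial MLE (p1 <= p2 <= ... <= pk) via
    pool-adjacent-violators: merge adjacent blocks whose averages
    violate the ordering, then spread each block's pooled proportion
    evenly over its cells."""
    n = sum(counts)
    blocks = [[c, 1] for c in counts]   # [pooled count, number of cells]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            blocks[i][0] += blocks[i + 1][0]
            blocks[i][1] += blocks[i + 1][1]
            del blocks[i + 1]
            i = max(i - 1, 0)           # a merge may create a new violation
        else:
            i += 1
    probs = []
    for total, size in blocks:
        probs.extend([total / (size * n)] * size)
    return probs
```

For example, observed counts (5, 1, 6) violate the ordering in the first two cells; pooling them yields the restricted MLE (0.25, 0.25, 0.5). Restrictions on local odds ratios, as in the paper, require the full GP (or successive-LP) machinery.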
Lupiani Castellanos, J.; Quinones Rodriguez, L. A.; Richarte Reina, J. M.; Ramos Caballero, L. J.; Angulo Pain, E.; Castro Ramierez, I. J.; Iborra Oquendo, M. A.; Urena Llinares, A.
2011-07-01
ESTRO Booklet 6 provides numerical data collected for four different field sizes and for different accelerators and beam qualities. Although the aim of that guide is the calculation and verification of monitor units, we have used its data for the 6 MV photon beams of a Siemens Primus Mevatron accelerator to perform quality control of the experimental measurements of the tissue-maximum ratio (TMR) and the output factor (OF) in air and in phantom.
Horner, Piers; Aguirre, James; Bock, Jamie; Egami, Eiichi; Glenn, Jason; Golwala, Sunil; Laurent, Glenn; Nguyen, Hien; Sayers, Jack
2010-01-01
We present an analysis of an 8 arcminute diameter map of the area around the galaxy cluster Abell 1835 from jiggle map observations at a wavelength of 1.1 mm using the Bolometric Camera (Bolocam) mounted on the Caltech Submillimeter Observatory (CSO). The data is well described by a model including an extended Sunyaev-Zel'dovich (SZ) signal from the cluster gas plus emission from two bright background submm galaxies magnified by the gravitational lensing of the cluster. The best-fit values for the central Compton value for the cluster and the fluxes of the two main point sources in the field: SMM J140104+0252, and SMM J14009+0252 are found to be $y_{0}=(4.34\\pm0.52\\pm0.69)\\times10^{-4}$, 6.5$\\pm{2.0}\\pm0.7$ mJy and 11.3$\\pm{1.9}\\pm1.1$ mJy, where the first error represents the statistical measurement error and the second error represents the estimated systematic error in the result. This measurement assumes the presence of dust emission from the cluster's central cD galaxy of $1.8\\pm0.5$ mJy, based on higher ...
Vyshnevyy, Andrey A.; Fedyanin, Dmitry Yu.
2016-12-01
Incorporation of gain media in plasmonic nanostructures can give the possibility to compensate for high Ohmic losses in the metal and design truly nanoscale optical components for diverse applications ranging from biosensing to on-chip data communication. However, the process of stimulated emission in the gain medium is inevitably accompanied by spontaneous emission. This spontaneous emission greatly impacts the performance characteristics of deep-subwavelength active plasmonic devices and casts doubt on their practical use. Here we develop a theoretical framework to evaluate the influence of spontaneous emission, which can be applied to waveguide structures of any shape and level of mode confinement. In contrast to the previously developed theories, we take into account that the spectrum of spontaneous emission can be very broad and nonuniform, which is typical for deep-subwavelength structures, where a high optical gain (approximately 1000 cm-1 ) in the active medium is required to compensate for strong absorption in the metal. We also present a detailed study of the spontaneous emission noise in metal-semiconductor active plasmonic nanowaveguides and demonstrate that by using both optical and electrical filtering techniques, it is possible to decrease the noise to a level sufficient for practical applications at telecom and midinfrared wavelengths.
De Pauw, B.; Lamberti, A.; Vanlanduit, S.; Van Tichelen, K.; Geernaert, T.; Berghmans, F.
2014-05-01
Measuring strain at the surface of a structure can help to estimate the dynamical properties of the structure under test. Such a structure can be a fuel assembly of a nuclear reactor consisting of fuel pins. In this paper we demonstrate a method to integrate draw tower gratings (DTGs) in a fuel pin and we subject this pin to conditions close to those encountered in a heavy liquid metal (HLM) reactor. More specifically, we report on the performance of DTGs used as a strain sensor when immersed in HLM during thermal cycles (up to 300 °C) for up to 700 hours.
2007-06-01
...that occurs because of the optical system due to diffraction can ultimately be characterized through the application of linear systems theory to yield a... tool called the modulation transfer function. Since linear systems theory is the basis for the MTF, the MTF's construction will be centered around the... PSF is simply the optical system's response to a point source, which is analogous to the impulse response as defined by linear systems theory. Once the...
Hurtig, Anders; Keus van de Poll, Marijke; Pekkola, Elina P; Hygge, Staffan; Ljung, Robert; Sörqvist, Patrik
2015-01-01
...) and reverberation time in isolation, whilst few have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first- (L1...
ZHANG Xu-ping; YU Yue-qing
2005-01-01
Optimization of structural parameters to improve the load-carrying capacity of spatial flexible redundant manipulators is presented in this paper. To increase the load-to-mass ratio of the robots, the cross-sectional and configurational parameters are first optimized separately and then optimized simultaneously. A numerical simulation of a 4R spatial manipulator is performed. The results show that the load capacity of the robot is greatly improved by the optimization strategies proposed in this paper.
Analysis of the Noise and Signal-to-Noise Ratio of an AOTF Imaging Spectrometer Based on EMCCD
2012-01-01
An imaging spectrometer based on an acousto-optic tunable filter (AOTF) is a novel hyperspectral imaging system. To correct the non-uniformity of radiation sensitivity across wavebands, and especially the low signal-to-noise ratio (SNR) in low-light conditions, an electron-multiplying CCD (EMCCD) sensor was proposed. The noise of the AOTF imaging spectrometer was analyzed in both the normal and EM modes of the CCD sensor using a derived SNR model that has been experimentally validated. On that basis, a new method for evaluating the dynamic range in EM mode and a novel method for calculating the spectral radiance at the entrance aperture were adopted. The experimental results show that the theoretical SNR models fit the data, and that proper selection of the EM mode effectively improves the SNR and the non-uniformity of radiation sensitivity under low-light conditions.
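The SNR benefit of electron multiplication in low light can be illustrated with the textbook EMCCD noise model (a generic sketch, not the paper's derived model; all electron counts below are invented): multiplication gain G multiplies shot noise by the excess noise factor F ≈ √2 but divides the effective read noise by G.

```python
import numpy as np

# Textbook EMCCD SNR model (illustrative, not the paper's exact derivation).
def emccd_snr(signal_e, dark_e, read_noise_e, gain=1.0, excess=np.sqrt(2)):
    if gain == 1.0:
        excess = 1.0  # no multiplication noise in normal CCD mode
    # Shot noise on (signal + dark) scaled by the excess noise factor,
    # plus read noise referred to the input by the EM gain.
    noise = np.sqrt(excess**2 * (signal_e + dark_e) + (read_noise_e / gain) ** 2)
    return signal_e / noise

# With a 10 e- signal, EM mode wins despite the excess noise factor,
# because the read noise is effectively suppressed by the gain:
print(emccd_snr(10, 1, 10))            # normal mode: read-noise limited
print(emccd_snr(10, 1, 10, gain=300))  # EM mode: shot-noise limited
```

At higher signal levels the √2 excess noise penalty dominates and normal mode becomes preferable, which is why mode selection matters.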
Norris, J E; Beers, T C; Norris, John E.; Ryan, Sean G.; Beers, Timothy C.
2001-01-01
High-resolution, high-signal-to-noise (S/N = 85) spectra have been obtained for five stars -- CD-24:17504, CD-38:245, CS 22172-002, CS 22885-096, and CS 22949-037 -- having [Fe/H] < -3.5 according to previous lower-S/N material. LTE model-atmosphere techniques are used to determine [Fe/H] and relative abundances, or their limits, for some 18 elements, and to constrain more tightly the early enrichment history of the Galaxy than is possible from previous analyses. We compare our results with high-quality higher-abundance literature data for other metal-poor stars and with the canonical Galactic chemical enrichment results of Timmes et al. (1995) and obtain the following basic results: (1) Large supersolar values of [C/Fe] and [N/Fe], not predicted by the canonical models, exist at lowest abundance. For C at least, the result is difficult to attribute to internal mixing effects; (2) We confirm that there is no upward trend in [α/Fe] as a function of [Fe/H], in contradistinction to some reports ...
Rorai, A; Haehnelt, M G; Carswell, R F; Bolton, J S; Cristiani, S; D'Odorico, V; Cupani, G; Barai, P; Calura, F; Kim, T -S; Pomante, E; Tescari, E; Viel, M
2016-01-01
At low densities the standard ionisation history of the intergalactic medium (IGM) predicts a decreasing temperature of the IGM with decreasing density once hydrogen (and helium) reionisation is complete. Heating the high-redshift, low-density IGM above the temperature expected from photo-heating is difficult, and previous claims of high/rising temperatures in low density regions of the Universe based on the probability density function (PDF) of the opacity in Lyman-α forest data at $2
S. Gannouni
2016-01-01
In a tunnel fire, smoke and toxic gases remain the principal hazards to users. Heat is not considered a major direct danger to users, since temperatures at head height do not reach untenable levels until after a relatively long time, except near the fire source. However, temperatures under the ceiling can exceed threshold conditions and can thus cause structural collapse of the infrastructure. This paper presents a numerical analysis of smoke hazard in tunnel fires of different aspect ratios by large eddy simulation. Results show that the CO concentration increases as the aspect ratio decreases, and decreases with the longitudinal ventilation velocity. CFD-predicted maximum smoke temperatures are compared to values calculated with the model of Li et al. and with those given by the empirical equation proposed by Kurioka et al.; reasonably good agreement has been obtained. The backlayering length decreases as the ventilation velocity increases, and this decrease follows a good exponential decay. The dimensionless interface height and the region of poor visibility increase with the aspect ratio of the tunnel cross-section.
Francescon, Paolo; Beddar, Sam; Satariano, Ninfa; Das, Indra J.
2014-01-01
Purpose: Evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of the correction factor k_{Qclin,Qmsr}^{fclin,fmsr} for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of k_{Qclin,Qmsr}^{fclin,fmsr} enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator measured with two different types of dosimeters (the PTW 60012 diode and PTW PinPoint 31014 microchamber) and the PDDs and the OARs measured with the Exradin W1 plastic scintillator detector (PSD) and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD measurements for fields greater than those produced using a 10-mm collimator. However, with the detector stem parallel to the beam axis, the microchambers could be used for TMR measurements for all
Using the Maximum X-ray Flux Ratio and X-ray Background to Predict Solar Flare Class
Winter, Lisa M
2015-01-01
We present the discovery of a relationship between the maximum ratio of the flare flux (namely, the 0.5-4 Å flux divided by the 1-8 Å flux) and the non-flare background (namely, the 1-8 Å background flux), which clearly separates flares into classes by peak flux level. We established this relationship through an analysis of the Geostationary Operational Environmental Satellites (GOES) X-ray observations of ~50,000 X, M, C, and B flares drawn from the NOAA/SWPC flare catalog. Employing a combination of machine learning techniques (K-nearest neighbors and nearest-centroid algorithms), we show a separation of the observed parameters for the different peak flaring energies. This analysis is validated by successfully predicting the flare classes for 100% of the X-class flares, 76% of the M-class flares, 80% of the C-class flares, and 81% of the B-class flares in solar cycle 24, based on training with the parametric extracts for solar flares in cycles 22-23.
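The nearest-centroid step described above can be sketched as follows. This is a toy illustration with invented feature values, not the NOAA/SWPC data or the paper's trained model: each class is summarized by the mean of its two features, and a new flare is assigned to the class with the closest centroid.

```python
import numpy as np

# Hypothetical training rows of [max flux ratio feature, background flux feature]
# with GOES class labels; the numbers are illustrative only.
X_train = np.array([[0.9, -6.2], [0.8, -6.0],   # X-class
                    [0.6, -6.8], [0.5, -6.6],   # M-class
                    [0.3, -7.4], [0.2, -7.2]])  # C-class
y_train = np.array(["X", "X", "M", "M", "C", "C"])

def nearest_centroid_predict(X_train, y_train, x):
    """Assign x to the class whose feature centroid is closest (Euclidean)."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    return classes[np.argmin(np.linalg.norm(centroids - x, axis=1))]

print(nearest_centroid_predict(X_train, y_train, np.array([0.85, -6.1])))  # X
```

K-nearest neighbors replaces the centroid distance with a vote among the K closest training flares; both operate on the same two-feature space.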
Combined simplified maximum likelihood and sphere decoding algorithm for MIMO system
ZHANG Lei; YUAN Ting-ting; ZHANG Xin; YANG Da-cheng
2008-01-01
In this article, a new system model for the sphere decoding (SD) algorithm is introduced. For the multiple-input multiple-output (MIMO) system, a simplified maximum likelihood (SML) decoding algorithm is proposed based on the new model. The SML algorithm achieves optimal maximum likelihood (ML) performance and drastically reduces the complexity compared to the conventional SD algorithm. An improved algorithm is then presented by combining the sphere decoding algorithm based on the Schnorr-Euchner strategy (SE-SD) with the SML algorithm when the number of transmit antennas exceeds 2. Compared to conventional SD, the proposed algorithm has low complexity, especially at low signal-to-noise ratio (SNR). Simulation shows that the proposed algorithm has performance very close to that of conventional SD.
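For context, the optimal ML criterion that sphere decoding accelerates can be written as a brute-force search over the symbol alphabet. This toy 2x2 BPSK example (illustrative channel and noise values, not the paper's system model) shows the objective both the SML and SD algorithms target:

```python
import numpy as np
from itertools import product

def ml_detect(H, y, constellation=(-1.0, 1.0)):
    """Exhaustive maximum-likelihood detection: argmin over all candidate
    symbol vectors s of ||y - H s||^2.  Sphere decoding reaches the same
    solution while pruning most of this exponential search."""
    best, best_metric = None, np.inf
    for s in product(constellation, repeat=H.shape[1]):
        s = np.array(s)
        metric = np.linalg.norm(y - H @ s) ** 2
        if metric < best_metric:
            best, best_metric = s, metric
    return best

H = np.array([[1.0, 0.2],
              [0.3, 1.0]])                 # illustrative 2x2 channel matrix
s_true = np.array([1.0, -1.0])             # transmitted BPSK vector
y = H @ s_true + np.array([0.01, -0.02])   # received vector at high SNR
print(ml_detect(H, y))                     # [ 1. -1.]
```

The search visits |constellation|^Nt candidates, which is why pruning strategies such as Schnorr-Euchner enumeration matter as the antenna count grows.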
Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.
2002-01-01
Documented variations in the isotopic compositions of some chemical elements are responsible for expanded uncertainties in the standard atomic weights published by the Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry. This report summarizes reported variations in the isotopic compositions of 20 elements that are due to physical and chemical fractionation processes (not due to radioactive decay) and their effects on the standard atomic weight uncertainties. For 11 of those elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, copper, and selenium), standard atomic weight uncertainties have been assigned values that are substantially larger than analytical uncertainties because of common isotope abundance variations in materials of natural terrestrial origin. For 2 elements (chromium and thallium), recently reported isotope abundance variations potentially are large enough to result in future expansion of their atomic weight uncertainties. For 7 elements (magnesium, calcium, iron, zinc, molybdenum, palladium, and tellurium), documented isotope-abundance variations in materials of natural terrestrial origin are too small to have a significant effect on their standard atomic weight uncertainties. This compilation indicates the extent to which the atomic weight of an element in a given material may differ from the standard atomic weight of the element. For most elements given above, data are graphically illustrated by a diagram in which the materials are specified in the ordinate and the compositional ranges are plotted along the abscissa in scales of (1) atomic weight, (2) mole fraction of a selected isotope, and (3) delta value of a selected isotope ratio. There are no internationally distributed isotopic reference materials for the elements zinc, selenium, molybdenum, palladium, and tellurium. Preparation of such materials will help to make isotope ratio measurements among
Maximum precision closed-form solution for localizing diffraction-limited spots in noisy images.
Larkin, Joshua D; Cook, Peter R
2012-07-30
Super-resolution techniques like PALM and STORM require accurate localization of single fluorophores detected using a CCD. Popular localization algorithms inefficiently assume each photon registered by a pixel can only come from an area in the specimen corresponding to that pixel (not from neighboring areas), before iteratively (slowly) fitting a Gaussian to pixel intensity; they fail with noisy images. We present an alternative; a probability distribution extending over many pixels is assigned to each photon, and independent distributions are joined to describe emitter location. We compare algorithms, and recommend which serves best under different conditions. At low signal-to-noise ratios, ours is 2-fold more precise than others, and 2 orders of magnitude faster; at high ratios, it closely approximates the maximum likelihood estimate.
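The closed-form flavor of the approach can be sketched under a strong simplification (our reading for illustration, not the authors' exact estimator): if each detected photon contributes an independent, equal-width Gaussian distribution over emitter position, the joint distribution is again Gaussian, with mean equal to the average photon position and width reduced by √N.

```python
import numpy as np

# Each photon's position (in pixel coordinates; values invented) carries a
# Gaussian uncertainty of width sigma_psf.  The product of N equal-width
# Gaussians is Gaussian with mean = average position, width = sigma/sqrt(N).
photon_xy = np.array([[4.8, 5.1], [5.2, 4.9], [5.1, 5.2], [4.9, 4.8]])
sigma_psf = 1.3  # assumed PSF width in pixels

estimate = photon_xy.mean(axis=0)                # closed-form joint mean
precision = sigma_psf / np.sqrt(len(photon_xy))  # localization uncertainty
print(estimate, precision)  # [5. 5.] 0.65
```

The closed form is what removes the iterative Gaussian fit, and treating photons individually (rather than binning them into the pixel they landed in) is what preserves precision at low signal-to-noise ratios.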
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation
Shu Cai
2016-12-01
Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the single-source scenario and then extend it, under the framework of alternating projection, to multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they offer higher spatial resolution than existing methods based on the ML criterion.
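A minimal numpy sketch of the single-source ML criterion the paper starts from: for one source, deterministic ML reduces to maximizing the beamformer power a(θ)ᴴ R a(θ) over the steering angle. The dense grid below merely stands in for the paper's SOS/SDP machinery, and the array geometry, SNR, and snapshot count are illustrative.

```python
import numpy as np

def ula_steering(theta_deg, n=8, d=0.5):
    """Steering vector of an n-element ULA with spacing d (in wavelengths)."""
    k = np.arange(n)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(1)
n, snapshots, theta_true = 8, 20, 12.0
A = ula_steering(theta_true, n)
S = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(A, S) + 0.1 * (rng.standard_normal((n, snapshots))
                            + 1j * rng.standard_normal((n, snapshots)))
R = X @ X.conj().T / snapshots          # sample covariance matrix

# Single-source ML criterion: maximize a(theta)^H R a(theta) over theta.
grid = np.arange(-90, 90, 0.1)
powers = [np.real(ula_steering(t, n).conj() @ R @ ula_steering(t, n)) for t in grid]
print(grid[np.argmax(powers)])          # close to 12.0
```

For multiple sources the criterion no longer decouples per angle, which is where alternating projection (and the SOS/SDP reformulation) comes in.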
Maximum Likelihood Timing and Carrier Synchronization in Burst-Mode Satellite Transmissions
Morelli, Michele
2007-01-01
This paper investigates the joint maximum likelihood (ML) estimation of the carrier frequency offset, timing error, and carrier phase in burst-mode satellite transmissions over an AWGN channel. The synchronization process is assisted by a training sequence appended in front of each burst and composed of alternating binary symbols. The use of this particular pilot pattern results in an estimation algorithm of affordable complexity that operates in a decoupled fashion. In particular, the frequency offset is measured first and independently of the other parameters. Timing and phase estimates are subsequently computed through simple closed-form expressions. The performance of the proposed scheme is investigated by computer simulation and compared with Cramer-Rao bounds. It turns out that the estimation accuracy is very close to the theoretical limits down to relatively low signal-to-noise ratios. This makes the algorithm well suited for turbo-coded transmissions operating near the Shannon limit.
Li, M.; Jiang, Y. S.
2014-11-01
The micro-Doppler effect is induced by the micro-motion dynamics of a radar target itself or of any structure on the target. In this paper, a simplified cone-shaped model of a ballistic missile warhead with micro-nutation is established, and the theoretical formula of the micro-nutation is derived. It is confirmed that the theoretical results are identical to simulation results obtained with the short-time Fourier transform. We then propose a new method for nutation-period extraction via signature maximum-energy fitting, based on empirical mode decomposition and the short-time Fourier transform. The maximum wobble angle is also extracted by a distance-approximation approach over a small range of wobble angles, combined with maximum likelihood estimation. Simulation studies show that these two feature-extraction methods are both valid even at low signal-to-noise ratio.
Tropical Atlantic SSTs at the Last Glacial Maximum derived from Sr/Ca ratios of fossil coral
Cohen, A. L.; Saenger, C. P.
2006-12-01
The sensitivity of the tropics to climate change is a particularly controversial issue in paleoclimatology. At the heart of this controversy are disagreements amongst different proxy datasets regarding the amplitude of glacial-interglacial changes in temperature, particularly at the sea surface. Data obtained from the aragonitic skeletons of massive reef corals have contributed in no small measure to the debate, yielding LGM and deglacial SSTs 5-6°C cooler than today (Guilderson et al., 1994; McCulloch et al., 1999; Correge et al., 2004), that imply a high sensitivity of Earth's climate to changes in boundary conditions (Crowley, 2000). We used SIMS ion microprobe to analyze Sr/Ca ratios of small pieces of Montastrea coral retrieved from a Barbados drillcore (Guilderson et al., 2001). U/Th dates place the samples between 22 and 24 kyr BP. Localized areas of dissolution and re-growth of secondary (diagenetic) aragonite crystals were identified at centers of septa. Sr/Ca ratios of these crystals were higher than Sr/Ca ratios of original coral crystals preserved in adjacent fasciculi and yielded relatively cooler derived SSTs. The original coral crystals, recognized by their size and orientation, were selectively targeted for analysis using a 20 micron-diameter sample spot. Our calibration study using modern corals from Bermuda, St Croix (USVI) and Barbados indicates that Montastrea Sr/Ca is strongly correlated with SST and with annual extension (growth) rate (Saenger et al., 2006). Growth rate of the fossil corals was determined from measurement of daily growth bands identified in petrographic thin-sections. Application of a growth-dependent Sr/Ca-T calibration yielded Barbados SSTs that were, on average, 2.5°C cooler than today during the LGM and ~1°C cooler than today during Heinrich Event 2. Our LGM SSTs are consistent with the original CLIMAP estimates (CLIMAP, 1976) and with more recent Mg/Ca-based SSTs derived from calcitic foraminifera in the Caribbean
Arbutina B.
2012-01-01
We recalculated the maximum white dwarf mass in ultra-compact X-ray binaries obtained in an earlier paper (Arbutina 2011), taking into account the effects of a super-Eddington accretion rate on the stability of mass transfer. It is found that, although the value formally remains the same (under the assumed approximations), for white dwarf masses M2 ≳ 0.1 MCh the mass ratios are extremely low, implying that the result for Mmax is likely to have little if any practical relevance.
Bruce T. Milne
2017-05-01
Stream networks are branched structures in which water and energy move between land and atmosphere, modulated by evapotranspiration and its interaction with the gravitational dissipation of potential energy as runoff. These actions vary among climates characterized by Budyko theory, yet have not been integrated with Horton scaling, the ubiquitous pattern of eco-hydrological variation among Strahler streams that populate river basins. From Budyko theory, we reveal an optimum entropy coincident with high biodiversity. Basins on either side of the optimum respond in opposite ways to precipitation, which we evaluated for the classic Hubbard Brook experiment in New Hampshire and for the Whitewater River basin in Kansas. We demonstrate that Horton ratios are equivalent to the Lagrange multipliers used in the extremum function that leads to Shannon information entropy being maximal, subject to constraints. Properties of stream networks vary with constraints and with inter-annual variation in water balance that challenge vegetation to match the expected resource supply throughout the network. The entropy-Horton framework informs questions of biodiversity, resilience to perturbations in water supply, changes in potential evapotranspiration, and land-use changes that move ecosystems away from optimal entropy, with concomitant loss of productivity and biodiversity.
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the maximum achievable detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the ranging performance of Geiger-mode laser radar. Based on the laser radar equation and the requirement of a minimum acceptable detection probability, and assuming that the primary electrons triggered by the echo photons obey Poisson statistics, the theoretical maximum-range model is established. Using the system design parameters, the influence of five main factors, namely emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity, on the maximum detection range is investigated. The results show that stronger emitted pulse energy, a lower noise level, an earlier echo position in the range gate, a lower atmospheric attenuation coefficient, and higher target reflectivity all yield a greater maximum detection range. It is also shown that the choice of the minimum acceptable detection probability, which is equivalent to a system signal-to-noise requirement, is important for producing a greater maximum detection range together with a lower false-alarm probability.
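The role of the minimum acceptable detection probability can be sketched with a heavily simplified Poisson model (our simplification, not the paper's full model: noise triggering is ignored and all parameter values are invented). With a mean of n_s signal photoelectrons in the gate, the probability that at least one triggers the Geiger-mode APD is P_d = 1 - exp(-n_s), and n_s falls off with range through geometry and two-way atmospheric attenuation.

```python
import numpy as np

def detection_prob(R, n_ref=50.0, R_ref=100.0, alpha=1e-4):
    """Single-pulse detection probability vs. range R (meters).
    n_ref photoelectrons at reference range R_ref, 1/R^2 geometric falloff,
    two-way attenuation exp(-2*alpha*R); all values illustrative."""
    n_s = n_ref * (R_ref / R) ** 2 * np.exp(-2 * alpha * (R - R_ref))
    return 1.0 - np.exp(-n_s)  # Poisson: P(at least one primary electron)

# Maximum range = farthest range still meeting the minimum acceptable P_d.
ranges = np.arange(100.0, 5000.0, 1.0)
ok = detection_prob(ranges) >= 0.9
print(ranges[ok].max())
```

Raising the acceptable P_d threshold shortens the reported maximum range but suppresses false alarms, which is the trade-off the abstract highlights.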
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio; these limits are ultimately set by the physical effects involved in the emission and detection of the radiation and thus cannot be overcome by any particular reconstruction method. These effects prevent the high spatial frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is identified as a limiting factor for yielding high-resolution images in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels in the reconstructed image and for designing better PET scanners.
Uchiyama, Takanori; Minamitani, Haruyuki; Sakata, Makoto
1990-01-01
The complex maximum entropy method (MEM) and complex autoregressive model fitting with the singular value decomposition method (SVD) were applied to free induction decay signal data obtained with a Fourier transform nuclear magnetic resonance spectrometer to estimate superresolved NMR spectra. Practical estimation of superresolved NMR spectra is demonstrated on phosphorus-31 nuclear magnetic resonance data. These methods provide sharp peaks and a high signal-to-noise ratio compared with the conventional fast Fourier transform. The SVD method was more suitable than the MEM for estimating superresolved NMR spectra because it allowed high-order estimation without spurious peaks, and the order and rank were easy to determine.
Saveljev, Vladimir; Kim, Sung-Kyu; Lee, Hyoung; Kim, Hyun-Woo; Lee, Byoungho
2016-02-08
The amplitude of the moiré patterns is estimated in relation to the opening ratio in line gratings and square grids. The theory is developed; the experimental measurements are performed. The minimum and the maximum of the amplitude are found. There is a good agreement between the theoretical and experimental data. This is additionally confirmed by the visual observation. The results can be applied to the image quality improvement in autostereoscopic 3D displays, to the measurements, and to the moiré displays.
Solomon eTesfamariam
2015-10-01
This paper presents a seismic performance evaluation framework using two engineering demand parameters, i.e. maximum and residual inter-story drift ratios, and with consideration of mainshock-aftershock (MSAS) earthquake sequences. The evaluation is undertaken within a performance-based earthquake engineering framework in which seismic demand limits are defined with respect to the earthquake return period. A set of 2-, 4-, 8-, and 12-story non-ductile reinforced concrete buildings, located in Victoria, British Columbia, Canada, is considered as a case study. Using 50 mainshock and MSAS earthquake records (two horizontal components per record), incremental dynamic analysis is performed, and the joint probability distribution of maximum and residual inter-story drift ratios is modeled using a novel copula technique. The results are assessed for both collapse and non-collapse limit states. The results show that the collapse assessment of the 4- to 12-story buildings is not sensitive to the consideration of MSAS seismic input, whereas for the 2-story building a 13% difference in the median collapse capacity is caused by the MSAS input. For the unconditional probability of unsatisfactory seismic performance, which accounts for both collapse and non-collapse limit states, the life-safety performance objective is achieved, but the collapse-prevention performance objective is not satisfied. The results highlight the need to consider seismic retrofitting for non-ductile reinforced concrete structures.
Han, Xianglu; Price, Paul S
2011-12-01
The maximum cumulative ratio (MCR) developed in previous work is a tool for evaluating the need to perform cumulative risk assessments. MCR is the ratio of the cumulative exposure to multiple chemicals to the maximum exposure from any one of the chemicals, when exposures are described using a common metric. This tool is used to evaluate mixtures of chemicals measured in samples of untreated ground water used as a source for drinking water systems in the United States. The mixtures of chemicals in this dataset differ from those examined in our previous work both in predicted toxicity and in the compounds measured. Despite these differences, MCR values in this study follow patterns similar to those seen earlier. MCR values for the mixtures have a mean (range) of 2.2 (1.03-5.4), much smaller than the mean (range) of 16 (5-34) for the mixtures in the previous study. The MCR values of the mixtures decline as Hazard Index (HI) values increase. MCR values for mixtures with larger HI values are not affected by possible contributions from chemicals that may occur at levels below the detection limits. This work provides a second example of the use of the MCR tool in the evaluation of mixtures that occur in the environment.
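The MCR itself is a one-line computation once each chemical's exposure is expressed on a common metric (e.g. a hazard quotient): it is the Hazard Index divided by the largest single-chemical contribution. A sketch with invented hazard quotients:

```python
# MCR for one mixture: cumulative exposure (Hazard Index, the sum of hazard
# quotients) divided by the largest single-chemical hazard quotient.
def maximum_cumulative_ratio(hazard_quotients):
    hi = sum(hazard_quotients)
    return hi / max(hazard_quotients)

# Illustrative hazard quotients for one water sample (made-up values):
print(maximum_cumulative_ratio([0.50, 0.25, 0.15, 0.10]))  # 2.0
```

An MCR near 1 means one chemical dominates and a single-chemical assessment suffices; larger MCR values flag mixtures where a cumulative assessment adds real information.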
Relative azimuth inversion by way of damped maximum correlation estimates
Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.
2012-01-01
Horizontal seismic data are used in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or a magnetic compass. Current methods for finding relative station azimuths cannot achieve arbitrary precision quickly because of limitations in the algorithms (e.g. grid-search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software run easily on a large number of different computer platforms and that results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to establish the confidence of our azimuth estimate even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform-independent Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
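The maximum-correlation inversion can be sketched with synthetic horizontals (illustrative signals; the paper's damping, windowing, and Java implementation are omitted): rotate the test instrument's horizontal pair by a trial angle, correlate the rotated north component with the reference north component, and let a nonlinear scalar optimizer find the angle of maximum correlation instead of a fixed grid.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 2000)
ref_n = np.sin(0.5 * t) + 0.3 * np.sin(1.7 * t)   # reference north component
ref_e = np.cos(0.9 * t)                            # reference east component

theta_true = np.radians(23.0)                      # unknown misorientation
c, s = np.cos(theta_true), np.sin(theta_true)
test_n = c * ref_n - s * ref_e + 0.05 * rng.standard_normal(t.size)
test_e = s * ref_n + c * ref_e + 0.05 * rng.standard_normal(t.size)

def neg_corr(theta):
    """Rotate the test horizontals back by theta, correlate with reference north."""
    rot_n = np.cos(theta) * test_n + np.sin(theta) * test_e
    return -np.corrcoef(rot_n, ref_n)[0, 1]

res = minimize_scalar(neg_corr, bounds=(0.0, 2 * np.pi), method="bounded")
print(np.degrees(res.x))   # close to 23
```

Repeating this over overlapping data windows, as the abstract describes, yields a population of angle estimates whose spread indicates confidence.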
Leggio, Luca; de Varona, Omar E.; Escudero, Pedro; Carpintero del Barrio, Guillermo; Osiński, Marek; Lamela Rivera, Horacio
2015-07-01
Optoacoustic (OA) imaging is an emerging biomedical technique that has attracted much interest over the last 15 years. It makes it possible to visualize internal soft tissues in depth by using short laser pulses that generate ultrasonic signals over a large frequency range, combining the high contrast of optical imaging with the high resolution of ultrasound systems. The OA signals detected over the whole surface of the body serve to reconstruct a detailed image of the internal tissues, in which the absorbed optical energy distribution outlines the regions of interest. The use of contrast agents can further improve the detection of growing anomalies in soft tissues, such as carcinomas. This work proposes double-walled carbon nanotubes (DWCNTs) as a potential nontoxic, biodegradable contrast agent for OA imaging to reveal malignant tissues at depth in the near-infrared (NIR) wavelength range (0.75-1.4 μm), where biological tissues are fairly transparent to optical radiation. A dual-wavelength (870 and 905 nm) OA system is presented, based on arrays of high-power diode lasers (HPDLs), that generates ultrasound signals from a DWCNT solution embedded within a biological phantom. The OA signals generated by DWCNTs are compared with those obtained using black ink, considered a very good absorber at these wavelengths. The experiments show that DWCNTs are a potential contrast agent for optoacoustic spectroscopy (OAS).
祁玉生; 张银华
2003-01-01
In Turbo decoding, accurate signal-to-noise ratio information is required to compute the extrinsic information generated by the component decoders. Among wireless fading channels, the Nakagami fading channel is a general model. This paper improves the Summer SNR estimation algorithm so that it can be applied to Nakagami fading channels. Finally, the performance of Turbo decoding using the improved Summer SNR estimate is simulated. The results show that the improved Summer SNR estimation algorithm can be successfully applied to Turbo decoding, allowing the decoder to achieve good performance.
Performance of MIMO-OFDM system using Linear Maximum Likelihood Alamouti Decoder
Monika Aggarwal
2012-06-01
A MIMO-OFDM wireless communication system combines MIMO and OFDM technology, producing a powerful technique for providing high data rates over frequency-selective fading channels. MIMO-OFDM is currently recognized as one of the most competitive technologies for 4G mobile wireless systems: it can compensate for the shortcomings of MIMO systems while exploiting the advantages of OFDM. In this paper, the bit error rate (BER) performance of a linear maximum likelihood Alamouti combiner (LMLAC) decoding technique for space-time-frequency block-coded (STFBC) MIMO-OFDM systems with frequency offset (FO) is evaluated, with the aim of providing the system with low complexity and maximum diversity. The simulation results show that the scheme can reduce ICI effectively with low decoding complexity and maximum diversity, both in terms of bandwidth efficiency and in BER performance, especially at high signal-to-noise ratio.
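The Alamouti combining at the heart of such a decoder is linear and can be sketched directly. This is the textbook 2x1 scheme under flat fading with perfect channel knowledge; noise is omitted for clarity and all values are hypothetical:

```python
import numpy as np

# Alamouti 2x1 space-time block decoding sketch.
# Slot 1 transmits (s1, s2) from antennas (1, 2); slot 2 transmits (-s2*, s1*).
rng = np.random.default_rng(1)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = qpsk[0], qpsk[3]                       # two QPSK symbols

h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # channel gains
r1 = h1 * s1 + h2 * s2                          # received in slot 1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)       # received in slot 2

# Linear ML (Alamouti) combining: the cross terms cancel exactly.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # True in the noiseless case
```

The orthogonality of the code is what makes the ML decision decouple into two independent symbol decisions, which is why the decoder stays low-complexity.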
Nakata, Manabu; Okada, Takashi; Komai, Yoshinori; Nohara, Hiroki [Kyoto Univ. (Japan). Hospital]
1996-08-01
Modern linear accelerators have four independent jaws and multileaf collimators (MLC) of 1 cm leaf width at the isocenter. Asymmetric fields defined by such independent jaws and irregular multileaf-collimated fields can be used to match adjacent fields or to spare the spinal cord in external photon beam radiotherapy. We have developed a new approximate algorithm for depth dose calculations at the collimator rotation axis. The program is based on Clarkson's principle and uses a more accurate modification of Day's method for asymmetric fields. Using this method, tissue-maximum ratios (TMR) and field factors of ten kinds of asymmetric fields and ten different irregular multileaf-collimated fields were calculated and compared with measured data for 6 MV and 15 MV photon beams. The dose accuracy with the general A/Pe method was about 3%; with the new modified Day's method, accuracy was within 1.7% for TMR and 1.2% for field factors. The calculated TMR and field factors were found to be in good agreement with measurements for both the 6 MV and 15 MV photon beams. (author)
Vallotton, Nathalie; Price, Paul S
2016-05-17
This paper uses the maximum cumulative ratio (MCR) as part of a tiered approach to evaluate and prioritize the risk of acute ecological effects from combined exposures to the plant protection products (PPPs) measured in 3,099 surface water samples taken from across the United States. Assessments of the reported mixtures performed on a substance-by-substance basis and using a Tier One cumulative assessment based on the lowest acute ecotoxicity benchmark gave the same findings for 92.3% of the mixtures. These mixtures either did not indicate a potential risk for acute effects or included one or more individual PPPs with concentrations in excess of their benchmarks. A Tier Two assessment using a trophic-level approach was applied to evaluate the remaining 7.7% of the mixtures. This assessment reduced the number of mixtures of concern by eliminating combinations of endpoints from multiple trophic levels, identified invertebrates and nonvascular plants as the most susceptible nontarget organisms, and indicated that only a very limited number of PPPs drove the potential concerns. The combination of the measures of cumulative risk and the MCR enabled the identification of a small subset of mixtures where a potential risk would be missed in substance-by-substance assessments.
Rayleigh-maximum-likelihood bilateral filter for ultrasound image enhancement.
Li, Haiyan; Wu, Jun; Miao, Aimin; Yu, Pengfei; Chen, Jianhua; Zhang, Yufeng
2017-04-17
Ultrasound imaging plays an important role in computer-aided diagnosis since it is non-invasive and cost-effective. However, ultrasound images are inevitably contaminated by noise and speckle during acquisition, which hinders the physician's interpretation of the images and decreases the accuracy of clinical diagnosis. Denoising is therefore an important step in enhancing the quality of ultrasound images, but current methods are limited: they can remove noise while ignoring the statistical characteristics of speckle, undermining the effectiveness of despeckling, or vice versa. In addition, most existing algorithms do not identify noise, speckle or edges before removing noise or speckle, and thus blur edge details while suppressing noise and speckle. It is therefore a challenging issue for traditional methods to effectively remove noise and speckle in ultrasound images while preserving edge details. To overcome these limitations, a novel method, called the Rayleigh-maximum-likelihood switching bilateral filter (RSBF), is proposed to enhance ultrasound images in two steps: noise, speckle and edge detection followed by filtering. First, a sorted quadrant median vector scheme is used to calculate a reference median in the filtering window, which is compared with the central pixel to classify the target pixel as noise, speckle or noise-free. Subsequently, noise is removed by a bilateral filter and speckle is suppressed by a Rayleigh-maximum-likelihood filter, while noise-free pixels are kept unchanged. To quantitatively evaluate the performance of the proposed method, synthetic ultrasound images contaminated by speckle are simulated using a speckle model that follows a Rayleigh distribution. The corrupted synthetic images are generated by multiplying the original image with Rayleigh-distributed speckle at various signal-to-noise ratio (SNR) levels and
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz
2016-12-01
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
Yamanaka, Kota; Hirata, Shinnosuke; Hachiya, Hiroyuki
2016-07-01
Ultrasonic distance measurement for obstacles has been recently applied in automobiles. The pulse-echo method based on the transmission of an ultrasonic pulse and time-of-flight (TOF) determination of the reflected echo is one of the typical methods of ultrasonic distance measurement. Improvement of the signal-to-noise ratio (SNR) of the echo and the avoidance of crosstalk between ultrasonic sensors in the pulse-echo method are required in automotive measurement. The SNR of the reflected echo and the resolution of the TOF are improved by the employment of pulse compression using a maximum-length sequence (M-sequence), which is one of the binary pseudorandom sequences generated from a linear feedback shift register (LFSR). Crosstalk is avoided by using transmitted signals coded by different M-sequences generated from different LFSRs. In the case of lower-order M-sequences, however, the number of measurement channels corresponding to the pattern of the LFSR is not enough. In this paper, pulse compression using linear-frequency-modulated (LFM) signals coded by M-sequences has been proposed. The coding of LFM signals by the same M-sequence can produce different transmitted signals and increase the number of measurement channels. In the proposed method, however, the truncation noise in autocorrelation functions and the interference noise in cross-correlation functions degrade the SNRs of received echoes. Therefore, autocorrelation properties and cross-correlation properties in all patterns of combinations of coded LFM signals are evaluated.
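The M-sequence generation and the autocorrelation property that makes it useful for pulse compression can be sketched as follows. The sketch uses a 7-stage Fibonacci LFSR with feedback polynomial x^7 + x + 1 (one of the primitive choices for this length); the LFM coding proposed in the paper is not shown:

```python
import numpy as np

def m_sequence(nbits=7, seed=1):
    """Fibonacci LFSR implementing the recurrence a[t] = a[t-7] XOR a[t-6]
    (feedback polynomial x^7 + x + 1, which is primitive), producing a
    maximum-length sequence of period 2^7 - 1 = 127 for any nonzero seed."""
    b = [(seed >> i) & 1 for i in range(nbits)]   # shift register
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(b[-1])                 # output stage
        fb = b[-1] ^ b[-2]                # feedback taps
        b = [fb] + b[:-1]                 # shift
    return np.array(out)

seq = 2 * m_sequence() - 1                # map {0,1} -> {-1,+1}
# Circular autocorrelation: peak 127 at lag 0 and exactly -1 at every
# other lag -- the property that makes M-sequences good compression codes.
acf = np.array([int(np.dot(seq, np.roll(seq, k))) for k in range(len(seq))])
print(acf[0], set(acf[1:].tolist()))  # 127 {-1}
```

Different LFSR polynomials give nearly orthogonal codes, which is the basis for the crosstalk avoidance between sensors described above; the paper's contribution is to multiply the channel count further by coding LFM chirps with the same M-sequence.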
K. Yao
2007-12-01
We investigate the maximum likelihood (ML) direction-of-arrival (DOA) estimation of multiple wideband sources in the presence of unknown nonuniform sensor noise. A new closed-form expression for the direction estimation Cramér-Rao bound (CRB) has been derived. The performance of the conventional wideband uniform ML estimator under nonuniform noise has been studied. In order to mitigate the performance degradation caused by the nonuniformity of the noise, a new deterministic wideband nonuniform ML DOA estimator is derived and two associated processing algorithms are proposed. The first algorithm is based on an iterative procedure which stepwise concentrates the log-likelihood function with respect to the DOAs and the noise nuisance parameters, while the second is a noniterative algorithm that maximizes the derived approximately concentrated log-likelihood function. The performance of the proposed algorithms is tested through extensive computer simulations. Simulation results show that the stepwise-concentrated ML algorithm (SC-ML) requires only a few iterations to converge, and both the SC-ML and the approximately concentrated ML algorithm (AC-ML) attain a solution close to the derived CRB at high signal-to-noise ratio.
The optical synthetic aperture image restoration based on the improved maximum-likelihood algorithm
Geng, Zexun; Xu, Qing; Zhang, Baoming; Gong, Zhihui
2012-09-01
Optical synthetic aperture imaging (OSAI) is envisaged as a future means of improving image resolution from high-altitude orbits, and several future projects for science or Earth observation are based on optical synthetic apertures. Compared with an equivalent monolithic telescope, however, the partly filled aperture of OSAI attenuates the modulation transfer function of the system. Consequently, images acquired by an OSAI instrument must be post-processed to restore images equivalent in resolution to those of a single filled aperture. The maximum-likelihood (ML) algorithm proposed by Benvenuto performs better than the traditional Wiener filter, but it does not work stably, and the point spread function (PSF) is assumed to be known and unchanged during iterative restoration. In fact, the PSF is unknown in most cases, and its estimate should be updated alternately during optimization. To address these limitations, an improved ML (IML) reconstruction algorithm is proposed in this paper, which incorporates PSF estimation by means of parameter identification into ML and updates the PSF successively during iteration. Accordingly, the IML algorithm converges stably and reaches better results. Experimental results show that the proposed algorithm performs much better than ML in terms of peak signal-to-noise ratio, mean square error and average contrast.
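For Poisson-distributed image data, the classic ML deconvolution iteration of the kind referred to above is the Richardson-Lucy update. Below is a 1-D sketch with a fixed, known PSF (the IML variant would additionally re-estimate the PSF each pass); the scene and PSF are hypothetical:

```python
import numpy as np

def richardson_lucy(image, psf, n_iter=50):
    """Classic ML (Richardson-Lucy) iteration for Poisson noise, 1-D sketch.
    The PSF is held fixed here for brevity; the paper's IML variant
    re-estimates the PSF during the iteration."""
    psf_mirror = psf[::-1]
    est = np.full_like(image, image.mean())      # flat initial estimate
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)  # guard against divide-by-zero
        est *= np.convolve(ratio, psf_mirror, mode="same")
    return est

# Hypothetical 1-D scene: two point sources blurred by a Gaussian PSF.
x = np.zeros(64)
x[20], x[40] = 5.0, 3.0
psf = np.exp(-0.5 * (np.arange(-3, 4) / 1.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf, n_iter=200)
print(int(np.argmax(restored)))  # brightest recovered source near index 20
```

Each pass multiplies the estimate by the back-projected ratio of data to re-blurred estimate, so the update preserves non-negativity and (approximately) total flux, both desirable for photon-limited imagery.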
Application of Artificial Bee Colony Algorithm to Maximum Likelihood DOA Estimation
Zhicheng Zhang; Jun Lin; Yaowu Shi
2013-01-01
The maximum likelihood (ML) method has excellent performance for direction-of-arrival (DOA) estimation, but it requires a multidimensional nonlinear search, which complicates the computation and prevents the method from practical use. To reduce the high computational burden of the ML method and make it more suitable for engineering applications, we apply the Artificial Bee Colony (ABC) algorithm to maximize the likelihood function for DOA estimation. As a recently proposed bio-inspired computing algorithm, the ABC algorithm was originally used to optimize multivariable functions by imitating the behavior of a bee colony finding excellent nectar sources in the natural environment; it offers an excellent alternative to conventional methods in ML-DOA estimation. The performance of ABC-based ML and other popular metaheuristic-based ML methods for DOA estimation is compared for various scenarios of convergence, signal-to-noise ratio (SNR), and number of iterations. The computational loads of ABC-based ML and the conventional ML method for DOA estimation are also investigated. Simulation results demonstrate that the proposed ABC-based method is more efficient in computation and statistical performance than other ML-based DOA estimation methods.
Application of a maximum likelihood algorithm to ultrasound modulated optical tomography.
Huynh, Nam T; He, Diwei; Hayes-Gill, Barrie R; Crowe, John A; Walker, John G; Mather, Melissa L; Rose, Felicity R A J; Parker, Nicholas G; Povey, Malcolm J W; Morgan, Stephen P
2012-02-01
In pulsed ultrasound modulated optical tomography (USMOT), an ultrasound (US) pulse acts as a scanning probe within the sample as it propagates, modulating the scattered light spatially distributed along its propagation axis. Detecting and processing the modulated signal can provide a one-dimensional image along the US axis. A simple model is developed in which the detected signal is modelled as a convolution of the US pulse and the (ultrasonic/optical) properties of the medium along the US axis. Based upon this model, a maximum likelihood (ML) method for image reconstruction is established. For the first time to our knowledge, the ML technique for a USMOT signal is investigated both theoretically and experimentally. The ML method inverts the data to retrieve the spatially varying properties of the sample along the US axis, and a signal proportional to the optical properties can be acquired. Simulation results show that the ML method can serve as a useful reconstruction tool for a pulsed USMOT signal even when the signal-to-noise ratio (SNR) is close to unity. Experimental data using 5 cm thick tissue phantoms (scattering coefficient μs = 6.5 cm⁻¹, anisotropy factor g = 0.93) demonstrate an axial resolution of 160 μm and a lateral resolution of 600 μm using a 10 MHz transducer.
Cianfrini, C.; Corcione, M.; Habib, E.; Quintino, A.
2017-06-01
Natural convection in air-filled rectangular cavities inclined with respect to gravity, so that the heated wall faces upwards, is studied numerically under the assumption of two-dimensional laminar flow. A computational code based on the SIMPLE-C algorithm is used to solve the system of mass, momentum and energy transfer governing equations. Simulations are performed for height-to-width aspect ratios of the enclosure from 0.25 to 8, Rayleigh numbers based on the length of the heated and cooled walls from 10² to 10⁷, and tilting angles of the enclosure from 0° to 75°. The existence of an optimal tilting angle is confirmed for every investigated configuration, at a location that increases as the Rayleigh number is decreased and the height-to-width aspect ratio of the cavity is increased, unless the Rayleigh number is at or just above the value corresponding to the onset of convection. Dimensionless correlating equations are developed to predict the optimal tilting angle and the heat transfer performance of the enclosure.
Image segmentation with the PCNN model and the maximum variance ratio
辛国江; 邹北骥; 李建锋; 陈再良; 蔡美玲
2011-01-01
The Pulse Coupled Neural Network (PCNN) model is well suited to image segmentation. With given parameters, the quality of the segmentation is determined solely by the number of iterations, yet the PCNN model itself cannot automatically determine the optimal iteration count. An algorithm combining PCNN with the maximum-variance-ratio criterion is therefore proposed to achieve automatic image segmentation: the criterion is used to find the optimal segmentation threshold, fix the number of PCNN iterations, and obtain the best segmentation result, which is then verified with the maximum Shannon entropy rule. Experiments show that the proposed algorithm automatically determines the number of PCNN iterations, speeds up the iteration, runs more efficiently than automatic segmentation algorithms based on 2D-OTSU and cross-entropy, and yields good segmentation results.
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-06-16
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
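With a known template and white Gaussian noise, ML delay estimation reduces to maximizing the cross-correlation with that template. The sketch below is a simplified stand-in for the paper's joint ML scheme (which estimates the reference jointly rather than assuming it); the "ERP" waveform and delays are synthetic:

```python
import numpy as np

def estimate_delay(x, ref):
    """Delay (in samples) maximizing the cross-correlation of trial x with
    reference ref -- the ML estimate for a known template in white Gaussian
    noise (a simplified stand-in for the paper's joint ML scheme)."""
    xc = np.correlate(x, ref, mode="full")
    return int(np.argmax(xc)) - (len(ref) - 1)

# Synthetic "ERP": a smooth bump, shifted and noise-corrupted per trial.
rng = np.random.default_rng(0)
t = np.arange(256)
template = np.exp(-0.5 * ((t - 128) / 10.0) ** 2)
delays = [5, -3, 8]
trials = [np.roll(template, d) + 0.2 * rng.standard_normal(t.size)
          for d in delays]
ests = [estimate_delay(tr, template) for tr in trials]
print(ests)  # close to [5, -3, 8]
```

Once the per-trial delays are estimated, the trials can be re-aligned before averaging, which is exactly the failure mode of plain ERP averaging that the paper addresses.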
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
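A minimal sketch of the inverse-variance weighted straight-line fit (the MLE under independent Gaussian noise) applied to the slope method: the log range-corrected signal S(r) = b0 - 2σr is fit for intercept and extinction. The data and noise model below are synthetic and hypothetical:

```python
import numpy as np

def slope_method_wls(r, logsig, var):
    """Inverse-variance weighted line fit of S(r) = b0 - 2*sigma*r,
    equivalent to the MLE for independent Gaussian noise.
    Returns (intercept, extinction_coefficient)."""
    w = 1.0 / var                                 # weights = 1 / noise variance
    W, Wx, Wy = np.sum(w), np.sum(w * r), np.sum(w * logsig)
    Wxx, Wxy = np.sum(w * r * r), np.sum(w * r * logsig)
    slope = (W * Wxy - Wx * Wy) / (W * Wxx - Wx ** 2)
    intercept = (Wy - slope * Wx) / W
    return intercept, -slope / 2.0

# Synthetic log range-corrected signal: sigma = 0.1 km^-1, intercept 2.0.
rng = np.random.default_rng(2)
r = np.linspace(0.5, 5.0, 40)                 # range, km
true = 2.0 - 2 * 0.1 * r
var = 0.01 * (1 + r)                          # noise variance grows with range
y = true + rng.standard_normal(r.size) * np.sqrt(var)
b0, sigma = slope_method_wls(r, y, var)
print(round(b0, 2), round(sigma, 3))          # near 2.0 and 0.1
```

Replacing the weights with 1 (unweighted) or with 1/signal reproduces the two traditional schemes the paper shows to be less accurate.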
Wang, Kezhi
2014-10-01
Bit error rate (BER) and outage probability are derived for amplify-and-forward (AF) relaying systems with two different channel estimation methods, disintegrated channel estimation and cascaded channel estimation, using a pilot-aided maximum likelihood method in slowly fading Rayleigh channels. Based on the BERs, the optimal pilot power under the total transmitting power constraint at the source and the optimal pilot power under the total transmitting power constraint at the relay are obtained separately. Moreover, the optimal power allocation among the pilot power at the source, the pilot power at the relay, the data power at the source and the data power at the relay is obtained when their total transmitting power is fixed. Numerical results show that the derived BER expressions match the simulation results. They also show that the proposed systems with optimal power allocation outperform conventional systems without power allocation under otherwise identical conditions; in some cases, the gain can be as large as several dB in effective signal-to-noise ratio.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Rui A. P. Perdigão
2012-06-01
The application of the Maximum Entropy (ME) principle leads to a minimum of the Mutual Information (MI), I(X,Y), between random variables X and Y, which is compatible with prescribed joint expectations and given ME marginal distributions. A sequence of sets of joint constraints leads to a hierarchy of lower MI bounds increasingly approaching the true MI. In particular, using standard bivariate Gaussian marginal distributions, it allows for the MI decomposition into two positive terms: the Gaussian MI (I_g), depending upon the Gaussian correlation or the correlation between 'Gaussianized variables', and a non-Gaussian MI (I_ng), coinciding with joint negentropy and depending upon nonlinear correlations. Joint moments of a prescribed total order p are bounded within a compact set defined by Schwarz-like inequalities, where I_ng grows from zero at the 'Gaussian manifold', where moments are those of Gaussian distributions, towards infinity at the set's boundary, where a deterministic relationship holds. Sources of joint non-Gaussianity have been systematized by estimating I_ng between the input and output of a nonlinear synthetic channel contaminated by multiplicative and non-Gaussian additive noises for a full range of signal-to-noise ratio (snr) variances. We have studied the effect of varying snr on I_g and I_ng under several signal/noise scenarios.
Yuan-Hong Jiang
OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and the voiding-to-storage subscore ratio (IPSS-V/S), in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax), in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). The IPSS-V/S ratio was significantly higher among patients with bladder outlet-related LUTD than among patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88). When IPSS-V/S>1 or >2 was factored into the equation instead of IPSS-T, PPVs were 91.4% and 97.3%, respectively, and NPVs were 54.8% and 49.8%, respectively. CONCLUSIONS: Combining IPSS-T with TPV and Qmax increases the PPV for bladder outlet-related LUTD. Furthermore, including IPSS-V/S>1 or >2 in the equation results in a higher PPV than IPSS-T. IPSS-V/S>1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
A signal-to-noise approach to score normalization
Arampatzis, A.; Kamps, J.; Cheung, D.; Song, I.-Y.; Chu, W.; Hu, X.; Lin, J.; Li, J.; Peng, Z.
2009-01-01
Score normalization is indispensable in distributed retrieval and fusion or meta-search where merging of result-lists is required. Distributional approaches to score normalization with reference to relevance, such as binary mixture models like the normal-exponential, suffer from lack of universality
Interferometric Imaging of Geostationary Satellites: Signal-to-Noise Considerations
2011-09-01
Shieh, Shin-Lin; Han, Yunghsiang S
2007-01-01
A common problem with sequential-type decoding is that at signal-to-noise ratios (SNR) below the one corresponding to the cutoff rate, the average decoding complexity per information bit and the required stack size grow rapidly with the information length. In order to alleviate the problem in the maximum-likelihood sequential decoding algorithm (MLSDA), we propose to directly eliminate the top path whose end node is $\Delta$ trellis levels prior to the farthest node among all nodes expanded thus far by the sequential search. Following a random coding argument, we analyze the early-elimination window $\Delta$ that results in negligible performance degradation for the MLSDA. Our analytical results indicate that the required early-elimination window for negligible performance degradation is just twice the constraint length for rate one-half convolutional codes. For rate one-third convolutional codes, the required early-elimination window even reduces to the constraint length. The suggestive theore...
Logging Signal Filtering Based on Wavelet Modulus Maxima
董璐璐; 房文静; 徐静
2012-01-01
Filtering of the thermal neutron count-rate curve in pulsed neutron-neutron (PNN) logging is the basis for obtaining an effective formation macroscopic capture cross section. Because PNN logging signals are disturbed by statistical fluctuations, the propagation characteristics of the wavelet-transform modulus maxima of signal and noise across scales are analyzed, and an effective PNN logging signal preprocessing method, wavelet-transform modulus-maxima filtering, is proposed. As a case study, PNN short-spacing count-rate (SSN) curves from a well are filtered with the db4 wavelet. The practical application shows that wavelet modulus-maxima filtering effectively removes noise and improves the signal-to-noise ratio of PNN logging signals.
李娟; 张克兆; 李生权; 刘超
2015-01-01
Considering the uncertainties, multiple disturbances, and low efficiency of the permanent-magnet synchronous wind generator system, a maximum power point tracking strategy with active disturbance rejection control, based on the optimal tip-speed ratio, is proposed to track the motor speed in real time and capture the maximum power. The active disturbance rejection controller does not depend on a mathematical model of the system. The uncertainties, including nonlinearity, strong coupling, parameter variations, and external disturbances, which hinder real-time speed tracking, are lumped into the total disturbance of the system. An extended state observer estimates the total disturbance, which is then compensated through the feedback controller, improving the speed-tracking ability. Simulation results show that, compared with the traditional PI control method, the proposed control strategy not only guarantees maximum power output, but also provides strong robustness against uncertain dynamics and external disturbances.
YOGENDRA TYAGI
2012-08-01
In this paper, the drilling of mild steel on a CNC drilling machine with a high-speed steel tool, optimized by applying the Taguchi methodology, is reported. The signal-to-noise ratio is applied to find the optimum process parameters for CNC drilling. An L9 orthogonal array and analysis of variance (ANOVA) are applied to study the performance characteristics of the machining parameters (spindle speed, feed, depth of cut) with consideration of good surface finish as well as high material removal rate (MRR). Surface finish is one of the prime requirements of customers of machined parts. Results obtained by the Taguchi method and the signal-to-noise ratio match closely with ANOVA; feed is the most effective factor for MRR, and spindle speed is the most effective factor for surface roughness. Multiple regression equations are formulated for estimating the predicted values of surface roughness and material removal rate.
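The two Taguchi signal-to-noise ratios implied here are the standard larger-the-better form (for MRR) and smaller-the-better form (for surface roughness); the replicate values below are invented for illustration.

```python
import math

def sn_larger_is_better(values):
    """Taguchi S/N ratio for a response to maximize (e.g. MRR):
    SN = -10 * log10(mean(1 / y_i^2))."""
    return -10 * math.log10(sum(1 / y**2 for y in values) / len(values))

def sn_smaller_is_better(values):
    """Taguchi S/N ratio for a response to minimize (e.g. surface
    roughness Ra): SN = -10 * log10(mean(y_i^2))."""
    return -10 * math.log10(sum(y**2 for y in values) / len(values))

# Hypothetical replicate measurements for one L9 trial
mrr = [12.1, 11.8, 12.4]   # material removal rate, mm^3/min
ra = [1.6, 1.7, 1.5]       # surface roughness, micrometres
print(sn_larger_is_better(mrr))   # higher SN -> better trial setting
print(sn_smaller_is_better(ra))
```

The trial with the highest S/N ratio for each response identifies the preferred factor levels; ANOVA then apportions the contribution of each factor.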
Bellili, Faouzi; Meftehi, Rabii; Affes, Sofiene; Stephenne, Alex
2015-01-01
In this paper, we tackle for the first time the problem of maximum likelihood (ML) estimation of the signal-to-noise ratio (SNR) parameter over time-varying single-input multiple-output (SIMO) channels. Both the data-aided (DA) and the non-data-aided (NDA) schemes are investigated. Unlike classical techniques where the channel is assumed to be slowly time-varying and, therefore, considered as constant over the entire observation period, we address the more challenging problem of instantaneous (i.e., short-term or local) SNR estimation over fast time-varying channels. The channel variations are tracked locally using a polynomial-in-time expansion. First, we derive in closed-form expressions the DA ML estimator and its bias. The latter is subsequently subtracted in order to obtain a new unbiased DA estimator whose variance and the corresponding Cram\\'er-Rao lower bound (CRLB) are also derived in closed form. Due to the extreme nonlinearity of the log-likelihood function (LLF) in the NDA case, we resort to the expectation-maximization (EM) technique to iteratively obtain the exact NDA ML SNR estimates within very few iterations. Most remarkably, the new EM-based NDA estimator is applicable to any linearly-modulated signal and provides sufficiently accurate soft estimates (i.e., soft detection) for each of the unknown transmitted symbols. Therefore, hard detection can be easily embedded in the iteration loop in order to improve its performance at low to moderate SNR levels. We show by extensive computer simulations that the new estimators are able to accurately estimate the instantaneous per-antenna SNRs as they coincide with the DA CRLB over a wide range of practical SNRs.
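For intuition, the data-aided ML estimator can be sketched in the special case of a channel that is constant over the observation window (the zeroth-order case of the paper's polynomial-in-time model): least-squares channel gain from known pilots, residual noise power, then their ratio. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 10_000                         # known pilot symbols (data-aided)
true_snr = 5.0                     # target SNR, linear scale
h = 0.8 + 0.6j                     # channel gain, constant over the block
a = rng.choice([-1.0, 1.0], K)     # BPSK pilots, E|a|^2 = 1
sigma2 = np.abs(h)**2 / true_snr   # noise power giving the target SNR
w = np.sqrt(sigma2 / 2) * (rng.normal(size=K) + 1j * rng.normal(size=K))
r = h * a + w                      # received samples

# DA ML estimates: least-squares channel gain, then residual noise power
h_hat = (np.conj(a) @ r) / (a @ a)
sigma2_hat = np.mean(np.abs(r - h_hat * a)**2)
snr_hat = np.abs(h_hat)**2 / sigma2_hat
```

The paper's estimator generalizes this by fitting a polynomial in time to h and subtracting the closed-form bias; the NDA case replaces the known pilots with EM-based soft symbol estimates.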
Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert
2014-02-01
The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method. 2013 John Wiley & Sons, Ltd.
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K
2015-01-01
We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al.\\ (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting \\WMAP\\ as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked \\WMAP\\ sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8\\% at $\\ell\\le32$, and a...
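The signal-to-noise eigenvector basis amounts to solving the generalized eigenproblem S v = lambda N v and keeping the high-S/N modes. A toy numpy sketch with invented covariances (not the authors' WMAP pipeline) illustrates the compression step:

```python
import numpy as np

rng = np.random.default_rng(2)
npix = 50
# Toy covariances: smooth "signal" + white "noise" on npix pixels
x = np.arange(npix)
S = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)   # signal covariance
N = 0.1 * np.eye(npix)                               # noise covariance

# Signal-to-noise eigenbasis: solve S v = lam N v by noise whitening
L = np.linalg.cholesky(N)
Linv = np.linalg.inv(L)
M = Linv @ S @ Linv.T
lam, V = np.linalg.eigh(M)            # eigenvalues in ascending order
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]

# Keep only modes with S/N eigenvalue above a cut -> compression
keep = lam > 1.0
B = (Linv.T @ V)[:, keep]             # compressed data: d_small = B.T @ d
print(keep.sum(), "of", npix, "modes kept")
```

The likelihood is then evaluated in the retained modes only, which is what reduces the cost of the exact low-ell evaluation.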
Bergen, Harold A.
Speeches and compositions often become mere word lists, obscuring the message's true meaning with too many words. This paper shows that the words "use,""of," and "it" can be eliminated from writing and speech, making communications shorter, more understandable, and more efficient. Examples are provided. (RL)
Rivet, D.; Campillo, M.; Sanchez-Sesma, F.; Singh, S. K.
2012-04-01
We reconstruct Rayleigh and Love waves from cross-correlations of ambient seismic noise recorded at 19 broad-band stations of the MesoAmerica Seismic Experiment (MASE) and Valley of Mexico Experiment (VMEX). The cross-correlations are computed over 2 years of noise records for the 8 MASE stations and over 1 year for the 11 VMEX stations. We use surface waves with sufficient signal-to-noise ratio to measure group velocity dispersion curves at periods of 0.5 to 3 seconds. For paths within the soft Quaternary sediments of the basin, the maximum energy is observed at velocities higher than expected for the fundamental mode. This observation suggests the importance of higher modes as the main vectors of energy in such complex structures. To perform a reliable inversion of the velocity structure beneath the valley, an identification of these dominant modes is required. To identify the modes of surface waves we use the spectral ratio of the horizontal components over the vertical component (H/V) measured on seismic coda. We compare the observed values with the theoretical H/V for the velocity model deduced from surface wave dispersion when assuming a particular mode. The H/V ratio in the coda is computed under the hypothesis of equipartition of a diffuse field in a layered medium, following Margerin et al. [2009] and Sánchez-Sesma et al. [2011]. We processed several events to ensure that the observed H/V is stable. The comparison of the modelled dispersion and H/V ratio allows mode identification and, consequently, recovery of the velocity model of the structure. We conclude on the predominance of higher modes in our observations. The excitation of higher modes is a key element in explaining the long duration and amplification of the seismic signals observed in the Valley of Mexico.
CALIPSO lidar ratio retrieval over the ocean.
Josset, Damien; Rogers, Raymond; Pelon, Jacques; Hu, Yongxiang; Liu, Zhaoyan; Omar, Ali; Zhai, Peng-Wang
2011-09-12
We demonstrate, for a few cases, the capability of CALIPSO to retrieve the 532 nm lidar ratio over the ocean when the CloudSat surface scattering cross section is used as a constraint. We present the algorithm used and comparisons with the column lidar ratio retrieved by the NASA airborne high spectral resolution lidar (HSRL). For the three cases presented here, the agreement is fairly good. The average CALIPSO 532 nm column lidar ratio bias is 13.7% relative to HSRL, and the relative standard deviation is 13.6%. Considering the natural variability of aerosol microphysical properties, this level of accuracy is significant since the lidar ratio is a good indicator of aerosol type. We discuss how the accuracy of the retrieved aerosol lidar ratio depends on atmospheric aerosol homogeneity, lidar signal-to-noise ratio, and errors in the optical depth retrievals. We obtain the best result (bias 7% and standard deviation around 6%) for a nighttime case with a lidar ratio that is relatively constant in the vertical, indicative of a homogeneous aerosol type.
Philipp, J.
2011-12-01
A detailed analysis of the measurement procedures recommended by the International Telecommunication Union (ITU) shows that - with proper definition of audio quality - the FM broadcasting system can provide an audio signal-to-noise ratio of no better than 40 dB, when the interference in the neighboring channels exhausts the limits established by the internationally agreed protection ratios. Thus any attempt to relax the protection, be it motivated by the desire to implement additional FM or new digital services in the FM band, would inevitably degrade reception quality of existing services to levels hardly acceptable by broadcast listeners.
Honguero Martínez, A F; García Jiménez, M D; García Vicente, A; López-Torres Hidalgo, J; Colon, M J; van Gómez López, O; Soriano Castrejón, Á M; León Atance, P
2016-01-01
F-18 fluorodeoxyglucose integrated PET-CT scanning is commonly used in the work-up of lung cancer to improve preoperative disease staging. The aim of the study was to analyze the ratio between the SUVmax of N1 lymph nodes and of the primary lung cancer to predict mediastinal disease (N2) in patients operated on for non-small cell lung cancer. This is a retrospective study of a prospective database. Patients operated on for non-small cell lung cancer (NSCLC) with N1 disease by PET-CT scan were included. None of them had previous induction treatment, and all underwent standard surgical resection plus systematic lymphadenectomy. There were 51 patients with FDG-PET-CT N1 disease; 44 (86.3%) patients were male, with a mean age of 64.1±10.8 years. Type of resection: pneumonectomy=4 (7.9%), lobectomy/bilobectomy=44 (86.2%), segmentectomy=3 (5.9%). Histology: adenocarcinoma=26 (51.0%), squamous=23 (45.1%), adenosquamous=2 (3.9%). Lymph node status after surgical resection: N0=21 (41.2%), N1=12 (23.5%), N2=18 (35.3%). The mean ratio of the SUVmax of the N1 lymph node to the SUVmax of the primary lung tumor (SUVmax N1/T ratio) was 0.60 (range 0.08-2.80). ROC curve analysis was performed to obtain the optimal cut-off value of the SUVmax N1/T ratio for predicting N2 disease. On multivariate analysis, we found that a ratio of 0.46 or greater was an independent predictor of N2 mediastinal lymph node metastases, with a sensitivity and specificity of 77.8% and 69.7%, respectively. The SUVmax N1/T ratio in NSCLC patients correlates with mediastinal lymph node metastasis (N2 disease) after surgical resection. When the SUVmax N1/T ratio on integrated PET-CT is equal to or greater than 0.46, special attention should be paid to the higher probability of N2 disease. Copyright © 2015 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
Kozdon, R.; Kelly, D.; Fournelle, J.; Valley, J. W.
2012-12-01
Earth surface temperatures warmed by ~5°C during an ancient (~55.5 Ma) global warming event termed the Paleocene-Eocene thermal maximum (PETM). This transient (~200 ka) "hyperthermal" climate state had profound consequences for the planet's surficial processes and biosphere, and is widely touted as an ancient analog for climate change driven by human activities. Hallmarks of the PETM are pervasive carbonate dissolution in the ocean basins and a negative carbon isotope excursion (CIE) recorded in a variety of substrates, including soil and marine carbonates. Together, these lines of evidence signal the rapid (≤30 ka) release of massive quantities (≥2000 Gt) of 13C-depleted carbon into the exogenic carbon cycle. Paleoenvironmental reconstructions based on pedogenic features in paleosols, clay mineralogy and sedimentology of coastal and continental deposits, and land-plant communities indicate that PETM warmth was accompanied by a major perturbation to the hydrologic cycle. Micropaleontological evidence and n-alkane hydrogen isotope records indicate that increased poleward moisture transport reduced sea-surface salinities (SSSs) in the central Arctic Ocean during the PETM. Such findings are broadly consistent with predictions of climate model simulations. Here we reassess a well-studied PETM record from the Southern Ocean (ODP Site 690) in light of new δ18O and Mg/Ca data obtained from planktic foraminiferal shells by secondary ion mass spectrometry (SIMS) and electron microprobe analysis (EMPA), respectively. The unparalleled spatial resolution of these in situ techniques permits extraction of more reliable δ18O and Mg/Ca data by targeting minute (≤10 μm) spots in biogenic domains within individual planktic foraminifera that retain the original shell chemistry (Kozdon et al. 2011, Paleocean.). In general, the stratigraphic profile and magnitude of the δ18O decrease (~2.2‰) delimiting PETM warming in our SIMS-generated record are similar to those of
Feng Hui(冯晖); Lin Zhenghui
2004-01-01
Cascaded sigma-delta (MASH) modulators for higher order oversampled analog-to-digital conversion rely on precise matching of contributions from different quantizers to cancel lower order quantization noise from intermediate delta-sigma stages. This paper studies the effect of analog imperfections in the implementation, such as finite gain of the amplifiers and capacitor ratio mismatch, and presents an adaptive algorithm and implementation architectures for digital correction of such analog imperfections. Behavioral simulations on 1-1-1 oversampled converters demonstrate over 10dB improvements in signal-to-noise and over 20 dB improvements in dynamic range performance.
史海芳; 李树有; 姬永刚
2008-01-01
For two normal populations with unknown means μi and variances σi² > 0, i=1,2, assume that there is a semi-order restriction between the ratios of means to standard deviations, and that the sample sizes of the two normal populations are different. A procedure for obtaining the maximum likelihood estimators of the μi's and σi's under the semi-order restriction is proposed. For the i=3 case, some related results and simulations are given.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Albarracin, R; Robles, G; Martinez-Tarifa, J M; Ardila-Rey, J
2015-09-01
Partial discharge measurement is one of the most useful tools for condition monitoring of high-voltage (HV) equipment. These phenomena can be measured on-line in radiofrequency (RF) with sensors such as the Vivaldi antenna, used in this paper, which improves the signal-to-noise ratio by rejecting FM and low-frequency TV bands. Additionally, the power ratios (PR), a signal-processing technique based on the power distribution of the incoming signals in frequency bands, are used to characterize different sources of PD and electromagnetic noise (EMN). The calculation of the time length of the pulses is introduced to separate signals where the PR alone do not give a conclusive solution. Thus, if several EM sources can be calibrated beforehand, it is possible to detect pulses corresponding to PD activity. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Physical layer security of underlay cognitive radio using maximal ratio combining
Hui ZHAO; Dan-yang WANG; Chao-qing TANG; Ya-ping LIU; Gao-feng PAN; Ting-ting LI; Yun-fei CHEN
2016-01-01
We investigate the secrecy outage performance of maximal ratio combining (MRC) in cognitive radio networks over Rayleigh fading channels. In a single-input multiple-output wiretap system, we consider a secondary user (SU-TX) that transmits confidential messages to another secondary user (SU-RX) equipped with M (M ≥ 1) antennas, where the MRC technique is adopted to improve its received signal-to-noise ratio. Meanwhile, an eavesdropper equipped with N (N ≥ 1) antennas adopts the MRC scheme to overhear the information between SU-TX and SU-RX. SU-TX adopts the underlay strategy to guarantee the service quality of the primary user without spectrum sensing. We derive closed-form expressions for the exact and asymptotic secrecy outage probability.
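As a rough numerical companion to the closed-form analysis (not the paper's derivation, and ignoring the underlay transmit-power constraint), the secrecy outage probability under MRC can be estimated by Monte Carlo: per-branch Rayleigh SNRs are exponential, MRC sums them, and an outage occurs when the secrecy capacity falls below the target rate. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
trials = 200_000
M, N = 4, 2                    # antennas at SU-RX / eavesdropper
gamma_m, gamma_e = 10.0, 5.0   # average per-branch SNRs (linear scale)
Rs = 1.0                       # target secrecy rate, bits/s/Hz

# Rayleigh fading: each branch SNR is exponential; MRC adds branch SNRs
g_main = rng.exponential(gamma_m, (trials, M)).sum(axis=1)
g_eave = rng.exponential(gamma_e, (trials, N)).sum(axis=1)

# Secrecy capacity and Monte-Carlo secrecy outage probability
cs = np.maximum(0.0, np.log2(1 + g_main) - np.log2(1 + g_eave))
outage = np.mean(cs < Rs)
print(outage)
```

In the full underlay model, SU-TX power (and hence gamma_m) is additionally capped by the interference limit at the primary receiver.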
Noise in Class AB translinear filter
Martini, G.; Svelto, V. [Pavia Univ. (Italy). Dip di Elettronica
1998-07-01
A specific statistical approach to describing the noise properties of nonlinear circuits is used. The noise properties of translinear filters operated in class AB are considered. This kind of filter has a dynamic range larger than the maximum signal-to-noise ratio, and exhibits signal-to-noise ratio saturation at high signal levels. The paper shows how the noise properties depend on the circuit design parameters.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Francescon, Paolo, E-mail: paolo.francescon@ulssvicenza.it; Satariano, Ninfa [Department of Radiation Oncology, Ospedale Di Vicenza, Viale Rodolfi, Vicenza 36100 (Italy); Beddar, Sam [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77005 (United States); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, Indiana 46202 (United States)
2014-10-15
Purpose: Evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of $k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}}$ enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator measured with two different types of dosimeters (the PTW 60012 diode and PTW PinPoint 31014 microchamber) and the PDDs and the OARs measured with the Exradin W1 plastic scintillator detector (PSD) and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
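For the Mean Energy Model mentioned above, the maximum-entropy solution can be written out explicitly via Lagrange multipliers (standard textbook form, notation ours rather than the paper's):

```latex
\max_{p}\; H(p) = -\sum_i p_i \log p_i
\quad\text{s.t.}\quad \sum_i p_i E_i = \bar{E},\qquad \sum_i p_i = 1,
\qquad\Longrightarrow\qquad
p_i = \frac{e^{-\beta E_i}}{Z(\beta)},\qquad
Z(\beta) = \sum_i e^{-\beta E_i},
```

where the multiplier β is fixed by the mean-energy constraint. In the Code Length Game view, the code lengths −log p_i of this Gibbs distribution are the observer's equilibrium strategy.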
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of a regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
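A minimal regression-style sketch of the regularized MCC idea (the paper's setting is classification and its alternating optimization differs; data, kernel width, and learning rate here are invented): errors enter through a Gaussian kernel, so gross outliers receive vanishing weight, unlike a squared loss.

```python
import numpy as np

rng = np.random.default_rng(4)

def mcc_objective(w, X, y, sigma=2.0, lam=0.001):
    """Regularized Maximum Correntropy Criterion for a linear predictor:
    Gaussian-kernel similarity between X @ w and y, minus an L2 penalty."""
    e = X @ w - y
    return np.mean(np.exp(-e**2 / (2 * sigma**2))) - lam * np.dot(w, w)

def mcc_grad(w, X, y, sigma=2.0, lam=0.001):
    """Ascent gradient of mcc_objective; outliers get weight ~exp(-e^2)."""
    e = X @ w - y
    k = np.exp(-e**2 / (2 * sigma**2))
    return -(X.T @ (k * e)) / (len(y) * sigma**2) - 2 * lam * w

# Toy data with a few grossly mislabeled (outlying) samples
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
y[:10] += 20.0                     # gross label noise / outliers

w = np.zeros(3)
for _ in range(500):               # plain gradient ascent on the MCC
    w += 0.5 * mcc_grad(w, X, y)
```

A least-squares fit on the same data would be dragged toward the outliers; the correntropy objective effectively ignores them once their residuals exceed a few kernel widths.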
Minimum Length - Maximum Velocity
Panes, Boris
2011-01-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.
Controllable single accumulated state-sequential acquisition with low signal-to-noise ratio
JI Jiang; HUANG KaiZhi; JIN Liang; ZHANG LiZhi; ZHANG Meng
2009-01-01
The sequential estimation (SE) algorithm performs poorly in environments with a low signal-to-noise ratio (SNR) and a high bit error rate (BER), especially when the initial acquisition sequence is unknown. This paper summarizes the conventional sequence acquisition model and identifies several of its problems. To solve these problems, the paper presents a new algorithm, CSAS-SA (controllable single accumulated state-sequential acquisition). This algorithm accumulates the sequence innovation into a single appointed sequence state, so that the useful information is accumulated effectively. Simulations show that CSAS-SA has a higher probability of successful acquisition: when the SNR equals -3 dB, the performance can be improved by 70%.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate the intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer alone, but worse than that of the near maximum likelihood detector.
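The benchmark the detector approximates is full maximum likelihood sequence detection over the ISI channel, which is exponential in block length and hence motivates near-ML and equalized variants. A brute-force sketch on a tiny invented 2-tap channel makes the ML criterion concrete:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
h = np.array([1.0, 0.6])          # 2-tap ISI channel (illustrative)
K = 8                             # short block so brute force is feasible
bits = rng.choice([-1.0, 1.0], K)
r = np.convolve(bits, h)[:K] + rng.normal(0, 0.3, K)

# ML sequence detection: minimize ||r - h * a|| over all 2^K candidate
# BPSK sequences a -- exponential cost, hence near-ML approximations.
best, best_metric = None, np.inf
for cand in product([-1.0, 1.0], repeat=K):
    metric = np.sum((r - np.convolve(cand, h)[:K])**2)
    if metric < best_metric:
        best, best_metric = np.array(cand), metric

print(np.mean(best != bits))      # symbol error rate on this block
```

Near-ML detectors keep only a small set of promising candidate sequences at each step, and an equalizer front end shortens the effective channel memory they must search over.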
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
An 8X Oversampling Ratio, 14-bit, 5-MSample/s Cascaded 3-1 Sigma-Delta Modulator
Yin, Y.; Klar, H.; Wennekers, P.
2005-05-01
A 14-b, 5-MHz output-rate cascaded 3-1 sigma-delta analog-to-digital converter (ADC) has been developed for broadband communication applications, and novel 4th-order noise shaping is obtained with the proposed architecture. At a low oversampling ratio (OSR) of 8, the ADC achieves a 91.5 dB signal-to-quantization-noise ratio (SQNR), in contrast to 71.8 dB for a traditional 2-1-1 cascaded sigma-delta ADC in a 2.5-MHz bandwidth, and over 80 dB signal-to-noise and distortion (SINAD) even under assumptions of severe circuit non-idealities and opamp non-linearity. The proposed architecture can potentially operate at much higher frequencies with scaled IC technology, extending the analog-to-digital conversion rate for high-resolution applications.
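The value of 4th-order noise shaping at a low OSR can be illustrated with the textbook formula for the peak SQNR of an ideal Lth-order noise-shaping modulator. This is only the ideal single-loop expression; it does not model the paper's cascaded 3-1 architecture or its quantizer resolution, so the numbers are not expected to match the reported 91.5 dB.

```python
import math

def ideal_sqnr_db(n_bits, order, osr):
    """Peak SQNR (dB) of an ideal Lth-order noise-shaping modulator with an
    n_bits quantizer at oversampling ratio osr (standard textbook formula):
    6.02*N + 1.76 + 10*log10((2L+1)/pi^(2L)) + (2L+1)*10*log10(OSR)."""
    L = order
    return (6.02 * n_bits + 1.76
            + 10 * math.log10((2 * L + 1) / math.pi ** (2 * L))
            + (2 * L + 1) * 10 * math.log10(osr))
```

Even at OSR = 8, raising the shaping order from 2 to 4 adds tens of dB of SQNR, which is why a 4th-order noise transfer function is attractive for broadband (low-OSR) conversion.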
Daprà, M; Salumbides, E J; Murphy, M T; Ubachs, W
2016-01-01
Carbon monoxide (CO) absorption in the sub-damped Lyman-$\alpha$ absorber at redshift $z_{abs} \simeq 2.69$, toward the background quasar SDSS J123714.60+064759.5 (J1237+0647), was investigated for the first time in order to search for a possible variation of the proton-to-electron mass ratio, $\mu$, over a cosmological time-scale. The observations were performed with the Very Large Telescope/Ultraviolet and Visual Echelle Spectrograph with a signal-to-noise ratio of 40 per 2.5 km s$^{-1}$ pixel at $\sim 5000$ Å. Thirteen CO vibrational bands in this absorber are detected: the A$^{1}\Pi$ - X$^{1}\Sigma^{+}$ (...
Signal to noise : listening for democracy and environment in climate change discourse
Glover, L. [Delaware Univ., Newark, DE (United States)]
2000-06-01
This paper discussed the importance of active involvement by civic society in achieving long term greenhouse gas (GHG) emission reduction targets to stabilize atmospheric GHG gas concentrations. On the basis of the attempted GHG reductions by Annex I nations in the first reporting period under the UN Framework Convention on Climate Change (FCCC), climate change policy was generally a failure. Few developed nations managed to return annual emissions to anywhere near 1990 levels. This paper focused on the failures in national climate change policy in the United States and Australia in reducing GHG emissions. The author stated that the cause of these failures was not due to communication inadequacies between governments and the general public. National policy formulation processes have been characterized by minimal community input and low discourse over the ethical and practical implications of ecological justice. It was emphasized that civic society should be engaged in longer-term policy formulations to effectively overcome the limitations currently imposed by liberal-democratic nation states and ecological modernisation policy approaches. It was cautioned that until civic society is involved, progress will be bound by the contradictions of seeking to create ecologically-minded communities through governance that fails to explain the relationships between social behaviour and global ecology. 45 refs.
Resolution and signal-to-noise measurement of U.S. Army night-vision goggles
Rivamonte, Lorenzo A.
1990-10-01
The ability to quantitatively characterize the performance of night vision goggles (NVG) is being investigated because the present method of resolution evaluation relies on an imprecise, subjective pass/fail judgement by a trained observer viewing a test pattern. Variation in an observer's training, experience, psychological state, decision bias and visual acuity strongly affect his or her decision when required to decide if a marginal pair of goggles passes or fails. The controversy concerning the increase in commercial and military helicopter accidents involving NVG indicates a need to determine if 1) the use of defective or marginal NVG is a contributing factor to the increase in accidents or 2) the apparent correlation between NVG and accidents is simply due to the increased use of NVG in an expanded and inherently more dangerous flight envelope. The U.S. Army TMDE Support Group (USATSG) has developed instrumentation to augment the AN/3895 TS test set that presents high and low light level resolution targets to AN/PVS-5, AN/AVS-6 and AN/PVS-7 NVG. The NVG Resolution Augmentation to the AN/3895 TS presented here can also quantitatively measure image quality of other image producing systems which are normally viewed, adjusted or inspected by a human observer. The NVG Resolution Augmentation features a custom electronic circuit which provides a user-friendly interface between a commercially available CCD camera, monitor and oscilloscope. USATSG's Army Primary Standards Laboratory at the Redstone Arsenal is presently studying the possibility of a new measurement service by investigating various CCD camera/lens combinations in order to characterize a machine vision standard observer. A characterized image analysis system would enable absolute as well as relative measurements of image quality.
Lineshape spectroscopy with a very high resolution, very high signal-to-noise crystal spectrometer
Beiersdorfer, P.; Magee, E. W.; Brown, G. V.; Chen, H.; Emig, J. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Hell, N. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Dr. Remeis-Sternwarte & ECAP, Universität Erlangen-Nürnberg, 96049 Bamberg (Germany); Bitter, M.; Hill, K. W. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States); Allan, P.; Brown, C. R. D.; Hill, M. P.; Hoarty, D. J.; Hobbs, L. M. R.; James, S. F. [Directorate of Research and Applied Science, AWE plc, Reading RG7 4PR (United Kingdom)
2016-06-15
We have developed a high-resolution x-ray spectrometer for measuring the shapes of spectral lines produced from laser-irradiated targets on the Orion laser facility. The instrument utilizes a spherically bent crystal geometry to spatially focus and spectrally analyze photons from foil or microdot targets. The high photon collection efficiency resulting from its imaging properties allows the instrument to be mounted outside the Orion chamber, where it is far less sensitive to particles, hard x-rays, or electromagnetic pulses than instruments housed close to the target chamber center in ten-inch manipulators. Moreover, Bragg angles above 50° are possible, which provide greatly improved spectral resolution compared to radially viewing, near grazing-incidence crystal spectrometers. These properties make the new instrument an ideal lineshape diagnostic for determining plasma temperature and density. We describe its calibration on the Livermore electron beam ion trap facility and present spectral data of the K-shell emission from highly charged sulfur produced by long-pulse as well as short-pulse beams on the Orion laser in the United Kingdom.
Program Package for the Analysis of High Resolution High Signal-To-Noise Stellar Spectra
Piskunov, N.; Ryabchikova, T.; Pakhomov, Yu.; Sitnova, T.; Alekseeva, S.; Mashonkina, L.; Nordlander, T.
2017-06-01
The program package SME (Spectroscopy Made Easy), designed to analyze stellar spectra using spectral-fitting techniques, has been updated by adding new functions (isotopic and hyperfine splitting) in VALD and by including grids of NLTE calculations for the energy levels of a few chemical elements. SME automatically derives stellar atmospheric parameters: effective temperature, surface gravity, chemical abundances, radial and rotational velocities, and turbulent velocities, taking into account all the effects defining spectral line formation. The SME package uses the best available grids of stellar atmospheres, which allows spectral analysis with similar accuracy over a wide range of stellar parameters and metallicities, from dwarfs to giants of BAFGK spectral classes.
Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Uliasz, M.; Zupanski, D.; Parazoo, N. C.
2007-12-01
Estimation of regional carbon fluxes from sparse atmospheric data by transport inversion is complicated by high-frequency variations in surface fluxes in both space and time. We assume that a forward coupled model of the vegetated land surface and atmosphere (SiB-RAMS) adequately captures most of the high-frequency variations, acting as a 'preprocessor' of input data from remote sensing and large-scale weather. We then use continuous CO2 observations and backward-in-time Lagrangian particle modeling to estimate persistent multiplicative biases in photosynthesis and ecosystem respiration, constraining the temporal pattern of these fluxes with the forward model. With a sparse network of continuous observing sites in North America, the inverse problem is still badly underconstrained for flux biases on the model grid scale. Previous studies have reduced the dimensionality of this problem by using large 'regions' such as biomes or ecoregions, or by seeking a smooth solution in space. This could introduce substantial bias in the solution because the actual flux biases are likely to be quite heterogeneous. We have evaluated the degree to which carbon flux over large regions (500 to 1500 km) can be recovered when the true spatial pattern is not smooth. We performed ensembles of inversions for a four-month case study (May-August 2004) over North America with synthetic mid-day CO2 observations from a network of 8 towers. A smooth regional field of model biases was superposed with ensembles of various degrees of grid-scale 'noise,' and these were then used to create synthetic concentration data. The pseudodata were then inverted to estimate gridded values of the biases, which were then combined with time-varying model fluxes to create regional maps of sources and sinks.
We found that the degree to which corrections in regional fluxes are possible will depend on the relative amount of variance in the regional vs grid scales, but that the system is quite successful in estimating regional monthly fluxes even when the regional scale constitutes a smaller percentage of the overall variance.
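The estimation of grid-scale multiplicative biases from concentration data can be sketched as a regularized linear inversion. Everything below is a toy stand-in for the study's machinery: the random "footprint" matrix, grid size, tower-hour count, noise level, and regularization weight are all assumptions of this sketch, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n_obs = 100, 480           # grid cells; synthetic tower-hours of CO2

H = rng.random((n_obs, n_grid))    # stand-in transport footprints (influence functions)
flux = rng.random(n_grid)          # prior model fluxes per cell
truth = (1.0 + 0.3 * np.sin(np.linspace(0, 3, n_grid))
             + 0.2 * rng.standard_normal(n_grid))   # smooth regional bias + grid noise
y = H @ (truth * flux) + 0.05 * rng.standard_normal(n_obs)  # pseudodata

# Estimate multiplicative biases by regularized least squares toward prior = 1:
# minimize ||y - A beta||^2 + lam * ||beta - 1||^2, with A = H * flux.
A = H * flux
lam = 1.0
beta = np.linalg.solve(A.T @ A + lam * np.eye(n_grid),
                       A.T @ y + lam * np.ones(n_grid))

# Aggregate to "regions" of 10 cells: regional totals are recovered better
# than individual grid cells, mirroring the paper's finding.
region_true = (truth * flux).reshape(10, 10).sum(axis=1)
region_est = (beta * flux).reshape(10, 10).sum(axis=1)
```

The point of the aggregation step is that errors in weakly constrained cells partially cancel when summed over a region, so regional monthly fluxes can be recovered even when the grid-scale pattern is noisy.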
Reducing low signal-to-noise FUSE spectra: confirmation of Lyman continuum escape from Haro 11
Leitet, E; Piskunov, N; Andersson, B-G
2011-01-01
Galaxies are believed to be the main providers of Lyman continuum (LyC) photons during the early phases of cosmic reionization. Little is known, however, when it comes to escape fractions and the mechanisms behind the leakage. To learn more one may look at local objects, but so far only one low-z galaxy has shown any sign of emitting LyC radiation. With data from the Far Ultraviolet Spectroscopic Explorer (FUSE), Bergvall et al. (2006) found an absolute escape fraction of ionizing photons (f_esc) of 4-10% for the blue compact galaxy Haro 11. However, using a newer version of the reduction pipeline on the same data set, Grimes et al. (2007) could not confirm this and derived an upper limit of f_esc \leq 2%. Here, using the latest version of the pipeline, CalFUSE v3.2, we aim to settle the question of whether Haro 11 emits ionizing radiation at a significant level. We also investigate the performance of the reduction pipeline for faint targets such as Haro 11. At these faint flux levels both FUSE and Cal...
2015-01-01
In the thermal infrared (TIR) waveband, solving for the target emissivity spectrum and temperature is an ill-posed problem in which the number of unknown parameters is larger than the number of available measurements. The approaches developed for solving this kind of problem are known collectively as TES (temperature and emissivity separation) algorithms. As the name indicates, a TES algorithm separates the target temperature and emissivity in the calculation procedure. In this paper, a novel method called the new MaxEnt (maximum entropy) TES algorithm is proposed, which extends the MaxEnt TES algorithm proposed by Barducci. Maximum entropy estimation is the basic framework of both algorithms, so both can separate temperature and emissivity without empirical information derived from special databases. As a result, both algorithms can be applied to solve for the temperature and emissivity spectrum of completely unknown targets. What distinguishes the two algorithms is that the alpha spectrum derived by the ADE (alpha derived emissivity) method is added as prior information in the new MaxEnt TES algorithm. Based on the Wien approximation, the ADE method calculates an alpha spectrum whose distribution is similar to that of the true emissivity spectrum. With this addition, the new MaxEnt TES algorithm keeps a simpler mathematical formalism and provides faster computation for large volumes of data (i.e. hyperspectral images of the Earth). Numerical simulations have been performed; the results show that the maximum RMSE of the emissivity estimate is 0.017 and the maximum absolute error of the temperature estimate is 0.62 K. Added with Gaussian white noise in which the signal-to-noise ratio is measured
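The key property behind the alpha spectrum is that, under the Wien approximation, multiplying the log-radiance by wavelength turns the temperature dependence into an additive constant across bands, which a mean subtraction removes. A minimal sketch (the radiation constants are the standard c1 = 2hc^2 and c2 = hc/k in micrometer units; the band positions and emissivities are made-up test values, not the paper's data):

```python
import numpy as np

C1 = 1.19104e8   # 2*h*c^2 in W um^4 m^-2 sr^-1 (wavelength in um)
C2 = 1.4388e4    # h*c/k in um K

def wien_radiance(emis, wav_um, T):
    # Wien approximation to the Planck function (drops the "-1" term)
    return emis * C1 * wav_um**-5 * np.exp(-C2 / (wav_um * T))

def alpha_spectrum(L, wav_um):
    # lambda*ln(L) = lambda*ln(eps) + lambda*ln(C1) - 5*lambda*ln(lambda) - C2/T;
    # after moving the known lambda terms across, the -C2/T piece is the same
    # constant in every band, so subtracting the mean removes the temperature.
    x = wav_um * np.log(L) - wav_um * np.log(C1) + 5 * wav_um * np.log(wav_um)
    return x - x.mean()
```

The resulting alpha spectrum equals the mean-removed lambda*ln(emissivity), independent of the (unknown) temperature, which is why it can serve as temperature-free prior information about the emissivity shape.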
Interstellar CN and CH+ in Diffuse Molecular Clouds: 12C/13C Ratios and CN Excitation
Ritchey, A M; Lambert, D L
2010-01-01
We present very high signal-to-noise ratio absorption-line observations of CN and CH+ along 13 lines of sight through diffuse molecular clouds. The data are examined to extract precise isotopologic ratios of 12CN/13CN and 12CH+/13CH+ in order to assess predictions of diffuse cloud chemistry. Our results on 12CH+/13CH+ confirm that this ratio does not deviate from the ambient 12C/13C ratio in local interstellar clouds, as expected if the formation of CH+ involves nonthermal processes. We find that 12CN/13CN, however, can be significantly fractionated away from the ambient value. The dispersion in our sample of 12CN/13CN ratios is similar to that found in recent surveys of 12CO/13CO. For sight lines where both ratios have been determined, the 12CN/13CN ratios are generally fractionated in the opposite sense compared to 12CO/13CO. Chemical fractionation in CO results from competition between selective photodissociation and isotopic charge exchange. An inverse relationship between 12CN/13CN and 12CO/13CO follows ...
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of plier grip spans on total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated: ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, a 50-mm grip span for pliers might be recommended, as it provides maximum exertion in gripping tasks as well as lower ratios of cutting force to maximum grip strength in cutting tasks.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package, built on the PyEvolve toolkit, that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring divergence, Vestige allows the definition of a phylogenetic footprint to be expanded to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genomes of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. For that reason a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, even more so for LSB galaxies. Matters h...
Parametric optimization of thermoelectric elements footprint for maximum power generation
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
Development studies of thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...
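A classical result behind footprint-ratio optimization is that, for fixed leg length, the product of internal electrical resistance and thermal conductance is minimized (and the device figure of merit maximized) at a particular area ratio of the n- and p-legs. The sketch below verifies this by brute force; the material property values are illustrative placeholders, not data from the study.

```python
import numpy as np

# hypothetical material properties
rho_n, rho_p = 1.0e-5, 1.4e-5     # electrical resistivities, ohm m
kap_n, kap_p = 1.4, 1.2           # thermal conductivities, W/m/K

# For fixed leg length, R*K is proportional to
# (rho_n/r + rho_p) * (kap_n*r + kap_p), with r = A_n/A_p.
ratios = np.linspace(0.2, 3.0, 10000)
RK = (rho_n / ratios + rho_p) * (kap_n * ratios + kap_p)
best = ratios[np.argmin(RK)]

# Analytic optimum: A_n/A_p = sqrt(rho_n*kap_p / (rho_p*kap_n))
analytic = np.sqrt(rho_n * kap_p / (rho_p * kap_n))
```

Setting the derivative of the bracketed product to zero gives the square-root expression directly, which is the usual starting point for matching n- and p-element footprints.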
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, for both continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given load is discussed. A circuit-oriented model is developed, and the MPPT effectiveness of the various converter systems is then verified through simulations. The proposed theory and analysis are validated through experimental investigations.
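For the buck topology, the load-value condition has a compact form: in continuous conduction a buck converter reflects its load to the input as R_in = R_load / D^2, so matching the array's MPP resistance fixes the duty ratio and bounds the feasible loads. A minimal sketch under those standard assumptions (the function name and example MPP values are hypothetical):

```python
def buck_duty_for_mpp(v_mpp, i_mpp, r_load):
    """Duty ratio D that places a CCM buck converter's input at the array MPP.
    The buck reflects its load as R_in = R_load / D**2; matching R_in to
    R_mpp = V_mpp / I_mpp gives D = sqrt(R_load / R_mpp).
    Feasible only when R_load <= R_mpp, since D cannot exceed 1."""
    r_mpp = v_mpp / i_mpp
    if r_load > r_mpp:
        return None  # load falls outside the range trackable by a buck stage
    return (r_load / r_mpp) ** 0.5
```

This makes concrete the paper's point that loads outside the optimal range pull the operating point away from the true MPP: for a buck stage, any R_load larger than R_mpp cannot be matched at all.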
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyze the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.
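Both analyzed algorithms are First-Fit run over a sorted item sequence; under the maximum-resource objective, more bins is better, and increasing order tends to open more bins than decreasing order. A small sketch with a made-up item list (the instance is illustrative, not from the paper):

```python
def first_fit(items, cap=1.0):
    """First-Fit: place each item in the first open bin with room,
    opening a new bin only when the item fits in no open bin."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= cap + 1e-12:  # small tolerance for float sums
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

items = [0.15, 0.2, 0.35, 0.4, 0.45, 0.55, 0.6, 0.7]
ffi = first_fit(sorted(items))                 # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))   # First-Fit-Decreasing
```

On this instance FFI uses five bins while FFD uses four: packing small items first fills early bins with clutter, so later large items force new bins, which is exactly the behavior one wants when maximizing the number of bins used.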
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.
Empirical determination of Einstein A-coefficient ratios of bright [Fe II] lines
Giannini, T; Nisini, B; Lorenzetti, D; Alcala', J M; Bacciotti, F; Bonito, R; Podio, L; Stelzer, B
2014-01-01
The Einstein spontaneous rates (A-coefficients) of Fe^+ lines have been computed by several authors, with results that differ from each other by up to 40%. Consequently, models for line emissivities suffer from uncertainties which in turn affect the determination of the physical conditions at the base of line excitation. We provide an empirical determination of the A-coefficient ratios of bright [Fe II] lines, which would represent both a valid benchmark for theoretical computations and a reference for the physical interpretation of the observed lines. With the ESO-VLT X-shooter instrument between 3000 Å and 24700 Å, we obtained a spectrum of the bright Herbig-Haro object HH1. We detect around 100 [Fe II] lines, some with a signal-to-noise ratio > 100. Among the latter, we selected those emitted by the same level, whose de-reddened intensity ratio is a direct function of the Einstein A-coefficient ratios. From the same X-shooter spectrum, we got an accurate estimate of the extinction toward HH1 thr...
A SPARSITY AND COMPRESSION RATIO JOINT ADJUSTMENT METHOD FOR COLLABORATIVE SPECTRUM SENSING
Chi Jingxiu; Zhang Jianwu; Xu Xiaorong
2012-01-01
Spectrum sensing is the fundamental task for Cognitive Radio (CR). To overcome the challenge of the high sampling rate in traditional spectral estimation methods, Compressed Sensing (CS) theory has been developed. A sparsity and compression ratio joint adjustment algorithm for compressed spectrum sensing in a CR network is investigated, under the hypothesis that the sparsity level is not known a priori at the CR terminals. As perfect spectrum reconstruction is not necessarily required during the spectrum detection process, the proposed algorithm performs only a rough estimate of the sparsity level. Meanwhile, in order to further reduce the sensing measurement, different compression ratios for CR terminals with varying Signal-to-Noise Ratio (SNR) are considered. The proposed algorithm, which optimizes the compression ratio as well as the estimated sparsity level, can greatly reduce the sensing measurement without degrading the detection performance. It also requires fewer iteration steps for convergence. Corroborating simulation results are presented to verify the effectiveness of the proposed algorithm for collaborative spectrum sensing.
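The idea of recovering a sparse spectrum from few measurements without knowing the sparsity level in advance can be sketched with a greedy solver that stops on the residual rather than on a fixed sparsity count. This is a generic compressed-sensing illustration (standard Orthogonal Matching Pursuit with random Gaussian measurements), not the paper's joint-adjustment algorithm; the dimensions and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 64, 4               # spectrum bins, compressed measurements, occupancy
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # occupied channels
Phi = rng.standard_normal((m, n)) / np.sqrt(m)             # measurement matrix
y = Phi @ x                                                # compressed samples

def omp(y, Phi, tol=1e-6, max_iter=20):
    """Orthogonal Matching Pursuit with a residual-based stopping rule,
    so only a rough upper bound on the sparsity level is needed."""
    r, support = y.copy(), []
    for _ in range(max_iter):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))  # best-matching atom
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(y, Phi)
```

With m well below n, the occupied channels are still identified, which is the premise that lets a sensing terminal trade compression ratio against detection reliability as its SNR varies.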
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Pratt, M. J.; Shen, W.; Wiens, D.; Winberry, J. P.; Anandakrishnan, S.
2016-12-01
Horizontal-to-vertical (H/V) ellipticity ratios of Rayleigh waves have been used to determine shallow structure (reflection imaging shows a deeper sedimentary package that extends to an unknown depth). It is also known that the frictional properties of the WIS ice-bed interface at 700 m depth are highly heterogeneous, including stick-spots of high friction, possibly a result of compacted sediment or bedrock, and active subglacial lakes where frictional coefficients are effectively zero. Ambient noise cross-correlations are calculated between all station pairs, restricting the minimum interstation distance to 20 km, and constraining valid H/V ratios of radial and vertical sources between the same station pair to wave energy with good signal-to-noise between 6 s and 20 s, which is sensitive to the shear velocity of the shallowest sedimentary layers beneath the ice stream and is combined with the average phase and group velocities of the area to help constrain the inversion. H/V ratio modeling suggests that the ratios are highly sensitive to sedimentary layer thickness. Ratios also increase over the observed frequency band in the presence of a shallow, saturated sedimentary layer with high Vp/Vs. In preliminary results, we observe an increase in H/V ratio towards the grounding line, as well as at stations where the hydro-potential surface is high. These higher ratios can be attributed to higher water content within sediments or an increase in sedimentary layer thickness.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate of the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Huerta, E A; Brown, Duncan A
2012-01-01
The LIGO detector is undergoing a major upgrade that will increase its sensitivity by a factor of 10 and extend its bandwidth on the lower frequency end from 40 Hz down to 10 Hz, while also allowing for high-frequency operation due to its tunability. This advanced LIGO (aLIGO) detector will extend the mass range at which compact binaries may be detected by a factor of four or more at a fixed signal-to-noise ratio [1]. The inspirals of stellar-mass compact objects into intermediate-mass black holes (IMBHs) of 50-350 solar masses will lie in the frequency band of aLIGO [2]. GW searches for these types of events will provide conclusive evidence for the existence of IMBHs and explore the dynamics of cluster environments. To realize this science we need to develop waveform templates that accurately capture the dynamical evolution of such events before aLIGO begins observations. Implementing gravitational self-force (SF) corrections in templates for compact binaries with mass ratios 1:10-1:1000 will be ess...
EMPIRICAL DETERMINATION OF EINSTEIN A-COEFFICIENT RATIOS OF BRIGHT [Fe II] LINES
Giannini, T.; Antoniucci, S.; Nisini, B.; Lorenzetti, D. [INAF-Osservatorio Astronomico di Roma, Via Frascati 33, I-00040 Monte Porzio Catone (Italy); Alcalá, J. M. [INAF-Osservatorio Astronomico di Capodimonte, Via Moiariello 16, I-80131 Napoli (Italy); Bacciotti, F.; Podio, L. [INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125 Firenze (Italy); Bonito, R.; Stelzer, B., E-mail: teresa.giannini@oa-roma.inaf.it [INAF-Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, I-90134 Palermo (Italy)
2015-01-01
The Einstein spontaneous rates (A-coefficients) of Fe{sup +} lines have been computed by several authors with results that differ from each other by up to 40%. Consequently, models for line emissivities suffer from uncertainties that in turn affect the determination of the physical conditions at the base of line excitation. We provide an empirical determination of the A-coefficient ratios of bright [Fe II] lines that would represent both a valid benchmark for theoretical computations and a reference for the physical interpretation of the observed lines. With the ESO-Very Large Telescope X-shooter instrument between 3000 Å and 24700 Å, we obtained a spectrum of the bright Herbig-Haro object HH 1. We detect around 100 [Fe II] lines, some with signal-to-noise ratios ≥100. Among these latter lines, we selected those emitted by the same level, whose dereddened intensity ratios are direct functions of the Einstein A-coefficient ratios. From the same X-shooter spectrum, we obtained an accurate estimate of the extinction toward HH 1 through intensity ratios of atomic species, H I recombination lines and H{sub 2} ro-vibrational transitions. We provide seven reliable A-coefficient ratios between bright [Fe II] lines, which are compared with the literature determinations. In particular, the A-coefficient ratios involving the brightest near-infrared lines (λ12570/λ16440 and λ13209/λ16440) are in better agreement with the predictions of the Quinet et al. relativistic Hartree-Fock model. However, none of the theoretical models predicts A-coefficient ratios in agreement with all of our determinations. We also show that literature data on near-infrared intensity ratios agree better with our determinations than with theoretical expectations.
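The relation exploited above is simple enough to sketch: for two lines emitted from the same upper level, the photon emission rates scale directly with the Einstein A-coefficients, so the dereddened energy-flux ratio is I1/I2 = (A1/A2)(λ2/λ1), since each photon carries hc/λ. The intensity value below is purely illustrative, not the measured HH 1 ratio.

```python
def a_ratio_from_intensities(i_ratio, lam1, lam2):
    """A1/A2 for two lines sharing the same upper level, given the dereddened
    energy-intensity ratio I1/I2 and the wavelengths (same units, e.g. Å).
    Follows from I1/I2 = (A1/A2) * (lam2/lam1)."""
    return i_ratio * lam1 / lam2

# Hypothetical dereddened λ12570/λ16440 intensity ratio of 1.11:
print(a_ratio_from_intensities(1.11, 12570.0, 16440.0))
```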
Back Work Ratio of Brayton Cycle
Malaver de la Fuente M.
2010-07-01
Full Text Available This paper analyzes the relation among temperatures, back work ratio and net work of the Brayton cycle, the cycle that describes gas turbine engine performance. The application of computational software helps to show the influence of the back work ratio, or coupling ratio, and the compressor and turbine inlet temperatures in an ideal thermodynamic cycle. The results lead to the deduction that the maximum value reached by the back work ratio depends on the range between the maximum and minimum temperatures of the Brayton cycle.
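The temperature dependence described above can be made concrete with the ideal air-standard relations: for isentropic compression and expansion, T2 = T1·r^((γ-1)/γ) and T4 = T3/r^((γ-1)/γ), so the back work ratio (compressor work over turbine work) reduces to (T1/T3)·r^((γ-1)/γ). A minimal sketch, with illustrative temperatures and pressure ratio:

```python
def brayton_ideal(T1, T3, r_p, gamma=1.4, cp=1.005):
    """Ideal (isentropic) air-standard Brayton cycle.
    T1: compressor inlet [K], T3: turbine inlet [K], r_p: pressure ratio,
    cp in kJ/(kg K). Returns (back work ratio, net specific work [kJ/kg])."""
    k = (gamma - 1.0) / gamma
    T2 = T1 * r_p**k          # temperature after isentropic compression
    T4 = T3 / r_p**k          # temperature after isentropic expansion
    w_comp = cp * (T2 - T1)   # compressor work input
    w_turb = cp * (T3 - T4)   # turbine work output
    return w_comp / w_turb, w_turb - w_comp

# Illustrative values: 300 K inlet, 1400 K turbine inlet, pressure ratio 10.
bwr, w_net = brayton_ideal(T1=300.0, T3=1400.0, r_p=10.0)
print(bwr, w_net)
```

Raising the turbine inlet temperature T3 at fixed T1 and r_p lowers the back work ratio and raises the net work, matching the trend the abstract describes.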
Christoph Nick
2015-07-01
Full Text Available Improving the interface between electrodes and neurons has been the focus of research for the last decade. Neuroelectrodes should show a small geometrical surface area and low impedance for measuring, and high charge injection capacities for stimulation. Increasing the electrochemically active surface area by using nanoporous electrode material or by integrating nanostructures onto planar electrodes is a common approach to improve this interface. In this paper a simulation approach for the characteristics of neuroelectrodes with integrated high aspect ratio nanostructures, based on a point-contact model, is presented. The results are compared with experimental findings obtained with real nanostructured microelectrodes. In particular, the effects of carbon nanotubes and gold nanowires integrated onto microelectrodes are described. Simulated and measured impedance properties are presented and their effects on the transfer function between the neural membrane potential and the amplifier output signal are studied based on the point-contact model. Simulations show, in good agreement with experimental results, that electrode impedances can be dramatically reduced by the integration of high aspect ratio nanostructures such as gold nanowires and carbon nanotubes. This lowers thermal noise and improves the signal-to-noise ratio for measuring electrodes. It also may increase the adhesion of cells to the substrate and thus increase measurable signal amplitudes.
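The thermal-noise benefit mentioned above follows from the Johnson-Nyquist relation v_rms = sqrt(4·k_B·T·R·Δf): lowering the real part of the electrode impedance directly lowers the noise floor. A quick sketch with hypothetical impedance values (the specific numbers are assumptions, not from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def thermal_noise_vrms(R, bandwidth, T=300.0):
    """RMS Johnson-Nyquist voltage noise of a resistance R [ohm]
    over a recording bandwidth [Hz] at temperature T [K]."""
    return math.sqrt(4.0 * K_B * T * R * bandwidth)

# Hypothetical comparison: a planar microelectrode with 1 MOhm real impedance
# vs. a nanostructured one reduced to 100 kOhm, over a 10 kHz bandwidth.
for r in (1e6, 1e5):
    print(f"R = {r:.0e} ohm -> v_n = {thermal_noise_vrms(r, 1e4) * 1e6:.1f} uV rms")
```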
Stellar mass-to-light ratios from galaxy spectra: how accurate can they be?
Gallazzi, Anna
2009-01-01
Stellar masses play a crucial role in the exploration of galaxy properties and the evolution of the galaxy population. In this paper, we explore the minimum possible uncertainties in stellar mass-to-light (M/L) ratios from the assumed star formation history (SFH) and metallicity distribution, with the goals of providing a minimum set of requirements for observational studies. We use a large Monte Carlo library of SFHs to study as a function of galaxy spectral type and signal-to-noise ratio (S/N) the statistical uncertainties of M/L values using either absorption-line data or broad band colors. The accuracy of M/L estimates can be significantly improved by using metal-sensitive indices in combination with age-sensitive indices, in particular for galaxies with intermediate-age or young stellar populations. While M/L accuracy clearly depends on the spectral S/N ratio, there is no significant gain in improving the S/N much above 50/pix and limiting uncertainties of 0.03 dex are reached. Assuming that dust is accu...
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped-beam approach. During runs in 2008-10, PEN acquired over 2×10⁷ π_e2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure-CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for five processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay in flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
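The per-event likelihood construction described above can be illustrated with a toy two-process version (not the PEN analysis itself): each event's observable is scored under known per-process PDFs, and the process fraction is the value maximizing the total log-likelihood. All shapes and numbers below are hypothetical stand-ins.

```python
import math
import random

random.seed(1)

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pdf_sig(x):  # hypothetical narrow "signal" peak in the observable
    return gauss(x, 70.0, 2.0)

def pdf_bkg(x):  # hypothetical broad "background" shape
    return gauss(x, 50.0, 10.0)

# Simulate 2000 events with a true signal fraction of 0.3.
events = [random.gauss(70.0, 2.0) if random.random() < 0.3 else random.gauss(50.0, 10.0)
          for _ in range(2000)]

def nll(f):
    """Negative log-likelihood of the mixture with signal fraction f."""
    return -sum(math.log(f * pdf_sig(x) + (1.0 - f) * pdf_bkg(x)) for x in events)

# Coarse grid search over the fraction (a real analysis would use a minimizer).
f_hat = min((i / 100.0 for i in range(1, 100)), key=nll)
print(f_hat)
```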
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form, together with guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
Full Text Available We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Krämer, Martin; Herrmann, Karl-Heinz; Biermann, Judith; Reichenbach, Jurgen R
2014-08-01
To demonstrate radial golden-ratio-based cardiac cine imaging using interspersed one-dimensional (1D) navigators. The 1D navigators were interspersed into the acquisition of radial spokes, which were continuously rotated by an angle increment based on the golden ratio. By performing correlation analysis between the 1D navigator projections, time points corresponding to the same cardiac motion phases were automatically identified and used to retrospectively combine golden-ratio-rotated radial spokes from multiple data windows. Data windows were shifted consecutively for dynamic reconstruction of different cardiac motion frames. Experiments were performed during a single breathhold. By artificially reducing the amount of input data, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and artifact level were evaluated for different breathhold durations. Analysis of the 1D navigator data provided a detailed correlation function revealing cardiac motion over time. Imaging results were comparable to images reconstructed from a temporally synchronized ECG. Cardiac cine images with a low artifact level and good image quality in terms of SNR and CNR were reconstructed from volunteer data, achieving a CNR between the myocardium and the left ventricular cavity of 50 for the longest breathhold duration of 26 s. CNR remained higher than 30 for acquisition times as low as 10 s. Combining radial golden-ratio-based imaging with an intrinsic navigator is a promising and robust method for performing high-quality cardiac cine imaging. © 2013 Wiley Periodicals, Inc.
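The key property exploited by such retrospective gating is that golden-ratio rotation (an increment of 180°/φ ≈ 111.25°) leaves any window of consecutive spokes with near-uniform angular coverage. A minimal sketch, assuming projection angles are equivalent modulo 180°:

```python
import math

GOLDEN_RATIO = (1 + math.sqrt(5)) / 2
GOLDEN_ANGLE = 180.0 / GOLDEN_RATIO   # ~111.246 deg increment between spokes

def spoke_angles(n_spokes, start=0.0):
    """Projection angles (deg, mod 180) of consecutive golden-ratio-rotated spokes."""
    return [(start + i * GOLDEN_ANGLE) % 180.0 for i in range(n_spokes)]

# Any consecutive window yields near-uniform coverage; Fibonacci counts (here 34)
# are especially even. This is what lets spokes from arbitrary cardiac phases be
# regrouped retrospectively without leaving large angular gaps.
angles = sorted(spoke_angles(34))
gaps = [b - a for a, b in zip(angles, angles[1:])]
print(round(GOLDEN_ANGLE, 3), round(max(gaps), 2))
```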
Comparison of image deconvolution algorithms on simulated and laboratory infrared images
Proctor, D. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
We compare Maximum Likelihood, Maximum Entropy, Accelerated Lucy-Richardson, Weighted Goodness of Fit, and Pixon reconstructions of simple scenes as a function of signal-to-noise ratio for simulated images with randomly generated noise. Reconstruction results of infrared images taken with the TAISIR (Temperature and Imaging System InfraRed) are also discussed.
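Of the algorithms compared above, the Lucy-Richardson iteration is compact enough to sketch. The update multiplies the current estimate by the back-projected ratio of the observed image to the current blurred estimate, which is the maximum likelihood iteration under Poisson noise. This is a plain (unaccelerated) 1D sketch, not the authors' implementation:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Minimal 1D Richardson-Lucy deconvolution. psf is assumed normalized;
    'same'-size convolutions keep all arrays the length of the observation."""
    est = np.full_like(observed, observed.mean())  # flat nonnegative start
    psf_flip = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est

# Toy scene: two point sources blurred by a Gaussian PSF.
truth = np.zeros(64); truth[20] = 5.0; truth[40] = 3.0
x = np.arange(-8, 9)
psf = np.exp(-0.5 * (x / 2.0) ** 2); psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=200)
print(np.argmax(restored))
```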
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization is given of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we describe the subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, whereas MVMED optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps: the first step solves our optimization problem without considering the equal margin posteriors from the two views, and the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Maoinser Mohd Azuwan
2014-07-01
Full Text Available Drilling hybrid fiber reinforced polymer (HFRP) composite is a novel approach in fiber reinforced polymer (FRP) composite machining studies, as this material combines two different fibers in a single matrix, yielding considerable improvements in mechanical properties and cost savings compared with conventional fiber composite materials. This study presents the development of an optimized way of drilling HFRP composite at various drilling parameters, such as drill point angle, feed rate and cutting speed, using a full factorial design of experiments combined with analysis of variance (ANOVA) and signal-to-noise (S/N) ratio analysis. The results identified optimum drilling parameters for the HFRP composite: a small drill point angle at low feed rate and medium cutting speed, which resulted in lower thrust force.
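In Taguchi-style S/N analysis of this kind, thrust force is a "smaller-the-better" response, scored as S/N = -10·log10(mean(y²)); the parameter level with the highest S/N is preferred. A minimal sketch with hypothetical thrust-force replicates (not the study's measurements):

```python
import math

def sn_smaller_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio [dB]:
    S/N = -10 * log10(mean(y^2)). A higher S/N means lower,
    more consistent thrust force."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical thrust-force replicates [N] at two parameter settings:
print(sn_smaller_better([52.0, 55.0, 50.0]))   # higher forces -> lower S/N
print(sn_smaller_better([31.0, 29.0, 30.0]))   # lower forces  -> higher S/N
```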
A method of measuring the [α/Fe] ratios from the spectra of the LAMOST survey
Li, Ji; Han, Chen; Xiang, Mao-Sheng; Shi, Jian-Rong; Zhao, Jing-Kun; Liu, Xiao-Wei; Zhang, Hua-Wei; Yuan, Hai-Bo; Ci, Xuan; Zhang, Xiao-Feng; Wang, Yue-Xiang; Huang, Yang; Zhang, Yong; Hou, Yong-Hui; Wang, Yue-Fei; Cao, Zi-Huang
2016-07-01
The [α/Fe] ratios in stars are good tracers to probe the formation history of stellar populations and the chemical evolution of the Galaxy. The spectroscopic survey of LAMOST provides a good opportunity to determine [α/Fe] for millions of stars in the Galaxy. We present a method of measuring the [α/Fe] ratios from LAMOST spectra using the template-matching technique of the LSP3 pipeline. We use three test samples of stars selected from the ELODIE and MILES libraries, as well as the LEGUE survey, to validate our method. Based on the test results, we conclude that our method is valid for measuring [α/Fe] from low-resolution spectra acquired by the LAMOST survey. Within the range of stellar parameters T_eff = [5000, 7500] K, log g = [1.0, 5.0] dex and [Fe/H] = [-1.5, +0.5] dex, our [α/Fe] measurements are consistent with values derived from high-resolution spectra, and the accuracy of our [α/Fe] measurements from LAMOST spectra is better than 0.1 dex for spectral signal-to-noise ratios higher than 20.
Expected gain in the pyramid wavefront sensor with limited Strehl ratio
Viotto, V.; Ragazzoni, R.; Bergomi, M.; Magrin, D.; Farinato, J.
2016-09-01
Context. One of the main properties of the pyramid wavefront sensor is that, once the loop is closed and the reference star image shrinks on the pyramid pin, the wavefront estimation signal-to-noise ratio can improve considerably. This has been shown to translate into a gain in limiting magnitude when compared with the Shack-Hartmann wavefront sensor, in which the sampling of the wavefront is performed before the light is split into four quadrants, which does not allow the quality of the focused spot to increase. Since this property is strictly related to the size of the re-imaged spot on the pyramid pin, the better the wavefront correction, the higher the gain. Aims: The goal of this paper is to extend the descriptive and analytical computation of this gain, given in a previous paper, to partial wavefront correction conditions, which are representative of most wide-field-correction adaptive optics systems. Methods: After focusing on the low Strehl ratio regime, we analyze the minimum spatial sampling required for the wavefront correction to still experience a considerable gain in sensitivity between the pyramid and the Shack-Hartmann wavefront sensors. Results: We find that the gain can be described as a function of the sampling in terms of the Fried parameter.
Hadjtaieb, Amir
2013-09-12
In this paper, we propose an incremental multinode relaying protocol with arbitrary N relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on feedback from the destination. The use of incremental relaying yields a higher spectral efficiency. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays, which implies that each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols by combining both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots; (ii) compared to schemes without MRC at the relays, the proposed scheme can achieve approximately a 3 dB gain.
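The stopping rule at the destination can be sketched simply: MRC adds branch SNRs, so relays are activated one by one only until the accumulated SNR crosses the threshold. This toy model (with hypothetical Rayleigh-faded branch SNRs) illustrates the mechanism, not the paper's full protocol with MRC at the relays:

```python
import random

random.seed(7)

def incremental_relaying(snr_direct, relay_snrs, threshold):
    """Sketch of incremental relaying with MRC at the destination:
    relays transmit one per time slot until the accumulated (MRC-combined)
    SNR exceeds the threshold. Returns (time slots used, accumulated SNR)."""
    acc = snr_direct
    slots = 1                       # slot 1: direct source transmission
    for snr in relay_snrs:
        if acc >= threshold:
            break                   # destination feeds back "success"
        acc += snr                  # MRC adds branch SNRs (linear scale)
        slots += 1
    return slots, acc

# Hypothetical exponentially distributed branch SNRs (Rayleigh fading), mean 5.
branches = [random.expovariate(1 / 5.0) for _ in range(4)]
print(incremental_relaying(random.expovariate(1 / 5.0), branches, threshold=12.0))
```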
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Birzvalk, Yu.
1978-01-01
The shunting ratio and the local shunting ratio, pertaining to currents induced by a magnetic field in a flow channel, are properly defined and systematically reviewed on the basis of the Lagrange criterion. Their definition is based on the energy balance and related to the dimensionless parameters characterizing an MHD flow, which derive from the Hartmann number, the hydrodynamic and magnetic Reynolds numbers, and the Lundquist number. These shunting ratios, of the current density in the core of a stream (uniform) or the equivalent mean current density to the short-circuit (maximum) current density, are given here for a slot channel with nonconducting or conducting walls, for a conduction channel with heavy side rails, and for MHD flow around bodies. 5 references, 1 figure.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty about the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between the classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling the mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
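The quantity being maximized above, I(Y; R) = H(Y) - H(Y|R), is easy to compute for discrete labels and responses from their joint distribution. A minimal illustration (not the paper's entropy estimator, which handles continuous responses):

```python
import math

def mutual_information(joint):
    """Mutual information [bits] of a discrete joint distribution, given as a
    nested list joint[label][response] of probabilities summing to 1."""
    px = [sum(row) for row in joint]          # marginal over labels
    py = [sum(col) for col in zip(*joint)]    # marginal over responses
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

# A classifier whose response determines a balanced binary label perfectly
# carries 1 bit about it; an uninformative classifier carries 0 bits.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # -> 1.0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # -> 0.0
```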
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
Kriegerowski, Marius; Cesca, Simone; Krüger, Frank; Dahm, Torsten; Horálek, Josef
2016-04-01
Aside from the propagation velocity of seismic waves, their attenuation can provide a direct measure of rock properties in the sampled subspace. We present a new attenuation tomography approach exploiting relative amplitude spectral ratios of earthquake pairs. We focus our investigation on North West Bohemia, a region characterized by intense earthquake swarm activity in a confined source region. The inter-event distances are small compared to the epicentral distances to the receivers, meeting a fundamental requirement of the method. Because the event locations are similar, the ray paths are also very similar. Consequently, the relative spectral ratio is affected mostly by rock properties along the path of the vector distance and is thus representative of the focal region. To exclude effects of the seismic source spectra, only the high-frequency content beyond the corner frequency is taken into consideration. This requires high-quality records with high sampling rates. Future improvements in that respect can be expected from the ICDP proposal "Eger rift", which includes plans to install borehole monitoring in the investigated region. 1D and 3D synthetic tests show the feasibility of the presented method. Furthermore, we demonstrate the influence of perturbations in source locations and travel time estimates on the determination of Q. Errors in Q scale linearly with errors in the differential travel times. These sources of error can be attributed to the complex velocity structure of the investigated region. A critical aspect is the signal-to-noise ratio, which imposes a strong limitation and emphasizes the demand for high-quality recordings. Hence, the presented method is expected to benefit from borehole installations. Since we focus our analysis on the NW Bohemia case study example, a synthetic earthquake catalog incorporating source characteristics deduced from preceding moment tensor inversions coupled with a realistic velocity model provides us with a realistic
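The core measurement behind such spectral-ratio methods is a straight-line fit: above the corner frequency, the log spectral ratio of two co-located events at the same station is linear in frequency, ln(A1/A2)(f) = const - π·f·Δt*, with Δt* = t1/Q1 - t2/Q2 carrying the differential attenuation. A sketch with synthetic values (the numbers are assumptions, not Bohemian data):

```python
import numpy as np

def dt_star_from_ratio(freqs, log_ratio):
    """Differential attenuation dt* [s] from a linear fit of the log spectral
    ratio versus frequency: ln(A1/A2) = const - pi * f * dt*."""
    slope, _ = np.polyfit(freqs, log_ratio, 1)
    return -slope / np.pi

# Synthetic check with a known dt* of 0.02 s and mild noise:
f = np.linspace(10.0, 40.0, 60)   # Hz, beyond the assumed corner frequency
log_ratio = 0.7 - np.pi * f * 0.02 + np.random.default_rng(0).normal(0, 0.05, f.size)
print(dt_star_from_ratio(f, log_ratio))
```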
Minimum length-maximum velocity
Panes, Boris
2012-03-01
We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.
Quantitative analysis of autoradiographic image intensification using Thiourea-S35
Askins, B. S.; Odell, C. R.
1980-01-01
Photographic images enhanced by the method of Thiourea-S35 autoradiography are evaluated in terms of signal-to-noise ratio, detective quantum efficiency (DQE), and Wiener spectrum analysis using digitized images. It is determined that the original signal-to-noise ratio is not degraded by the intensification process which allows an increase in the practical working DQE as a function of density. These results apply at all spatial frequencies that were tested. The advantage given by autoradiography is the ability to produce usable images from emulsions originally exposed to the low densities corresponding to maximum DQE and movement of faint image densities above the level of the threshold for detection.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when the data are processed with the optimal polarimetric matched filter.
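The eigenvalue formulation mentioned above is, generically, the maximization of a ratio of two quadratic forms: for class covariances A and B, the optimal weight vector is the principal eigenvector of B⁻¹A. A minimal sketch with toy diagonal covariances (illustrative assumptions, not the authors' polarimetric data):

```python
import numpy as np

def optimal_contrast_filter(A, B):
    """Weight vector maximizing the power ratio (w^H A w) / (w^H B w).

    A and B are Hermitian covariance matrices of the two scattering
    classes (B positive definite).  The maximizer is the eigenvector
    of B^{-1} A with the largest eigenvalue; that eigenvalue is the
    achievable contrast ratio.
    """
    vals, vecs = np.linalg.eig(np.linalg.inv(B) @ A)
    k = np.argmax(vals.real)
    w = vecs[:, k]
    return w / np.linalg.norm(w), vals[k].real

# Toy 2x2 example with diagonal class covariances.
A = np.diag([4.0, 1.0])   # hypothetical target-class covariance
B = np.diag([1.0, 1.0])   # hypothetical clutter-class covariance
w, contrast = optimal_contrast_filter(A, B)
print(contrast)           # -> 4.0: best ratio lies along the first channel
```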
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Spectral ratio method for measuring emissivity
Watson, K.
1992-01-01
The spectral ratio method is based on the concept that although the spectral radiances are very sensitive to small changes in temperature, the ratios are not. Only an approximate estimate of temperature is required; thus, for example, we can determine the emissivity ratio to an accuracy of 1% with a temperature estimate that is only accurate to 12.5 K. Selecting the maximum value of the channel brightness temperatures provides an unbiased estimate. Laboratory and field spectral data are easily converted into spectral ratio plots. The ratio method is limited by the system signal-to-noise ratio and spectral bandwidth. The images can appear quite noisy because ratios enhance high frequencies and may require spatial filtering. Atmospheric effects tend to rescale the ratios and require using an atmospheric model or a calibration site.
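The claimed insensitivity of radiance ratios to temperature can be checked with a small Planck-function sketch. The two thermal-IR channel wavelengths below are illustrative assumptions, not taken from the paper:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2 * H * C ** 2 / lam ** 5) / math.expm1(H * C / (lam * K * T))

def channel_ratio(lam1, lam2, T):
    """Ratio of radiances in two channels at the same temperature."""
    return planck(lam1, T) / planck(lam2, T)

# Thermal-IR channels near 10 and 11 micrometres.
r300 = channel_ratio(10e-6, 11e-6, 300.0)
r312 = channel_ratio(10e-6, 11e-6, 312.5)   # 12.5 K temperature error
frac = abs(r312 - r300) / r300
print(round(frac, 3))  # fractional change in the ratio stays small (order 1%)
```

The individual radiances change by tens of percent over 12.5 K; the ratio moves only on the order of a percent, which is the method's central premise.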
From Fibonacci Sequence to the Golden Ratio
Alberto Fiorenza
2013-01-01
We consider the well-known characterization of the Golden ratio as limit of the ratio of consecutive terms of the Fibonacci sequence, and we give an explanation of this property in the framework of the Difference Equations Theory. We show that the Golden ratio coincides with this limit not because it is the root with maximum modulus and multiplicity of the characteristic polynomial, but, from a more general point of view, because it is the root with maximum modulus and multiplicity of a restricted set of roots, which in this special case coincides with the two roots of the characteristic polynomial. This new perspective is the heart of the characterization of the limit of ratio of consecutive terms of all linear homogeneous recurrences with constant coefficients, without any assumption on the roots of the characteristic polynomial, which may be, in particular, also complex and not real.
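The limiting behaviour described above is easy to observe numerically; a minimal sketch:

```python
def fib_ratios(n):
    """Ratios of consecutive Fibonacci terms, F(k+1)/F(k)."""
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append(b / a)
        a, b = b, a + b
    return out

golden = (1 + 5 ** 0.5) / 2          # dominant root of x^2 = x + 1
ratios = fib_ratios(30)
print(abs(ratios[-1] - golden))      # tiny: the ratios converge to the Golden ratio
```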
Rudolph Spangler
BACKGROUND: PCR in principle can detect a single target molecule in a reaction mixture. Contaminating bacterial DNA in reagents creates a practical limit on the use of PCR to detect dilute bacterial DNA in environmental or public health samples. The most pernicious source of contamination is microbial DNA in DNA polymerase preparations. Importantly, all commercial Taq polymerase preparations inevitably contain contaminating microbial DNA. Removal of DNA from an enzyme preparation is problematical. METHODOLOGY/PRINCIPAL FINDINGS: This report demonstrates that the background of contaminating DNA detected by quantitative PCR with broad host range primers can be decreased greater than 10-fold through the simple expedient of Taq enzyme dilution, without altering detection of target microbes in samples. The general method is: For any thermostable polymerase used for high-sensitivity detection, do a dilution series of the polymerase crossed with a dilution series of DNA or bacteria that work well with the test primers. For further work use the concentration of polymerase that gave the least signal in its negative control (H2O) while also not changing the threshold cycle for dilutions of spiked DNA or bacteria compared to higher concentrations of Taq polymerase. CONCLUSIONS/SIGNIFICANCE: It is clear from the studies shown in this report that a straightforward procedure of optimizing the Taq polymerase concentration achieved "treatment-free" attenuation of interference by contaminating bacterial DNA in Taq polymerase preparations. This procedure should facilitate detection and quantification with broad host range primers of a small number of bona fide bacteria (as few as one) in a sample.
Precise observations of the 12C/13C ratios of HC3N in the low-mass star-forming region L1527
Araki, Mitsunori; Sakai, Nami; Yamamoto, Satoshi; Oyama, Takahiro; Kuze, Nobuhiko; Tsukiyama, Koichi
2016-01-01
Using the Green Bank 100 m telescope and the Nobeyama 45 m telescope, we have observed the rotational emission lines of the three 13C isotopic species of HC3N in the 3 and 7 mm bands toward the low-mass star-forming region L1527 in order to explore their anomalous 12C/13C ratios. The column densities of the 13C isotopic species are derived from the intensities of the J = 5-4 lines observed at high signal-to-noise ratios. The abundance ratios are determined to be 1.00:1.01 ± 0.02:1.35 ± 0.03:86.4 ± 1.6 for [H13CCCN]:[HC13CCN]:[HCC13CN]:[HCCCN], where the errors represent one standard deviation. The ratios are very similar to those reported for the starless cloud, Taurus Molecular Cloud-1 Cyanopolyyne Peak (TMC-1 CP). These ratios cannot be explained by thermal equilibrium, but likely reflect the production pathways of this molecule. We have shown the equality of the abundances of H13CCCN and HC13CCN at a high-confidence level, which supports the production pathways of HC3N via C2H2 and C2H2+. The average 12...
DFT-based spatial multiplexing and maximum ratio transmission for mm-wave large MIMO
Phan-Huy, D.-T.; Tölli, A.; Rajatheva, N.;
2014-01-01
By using large point-to-point multiple input multiple output (MIMO), spatial multiplexing of a large number of data streams in wireless communications using millimeter-waves (mm-waves) can be achieved. However, according to the antenna spacing and transmitter-receiver distance, the MIMO channel...
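Maximum ratio transmission itself can be sketched in a few lines: the transmit weights are the conjugated channel normalized to unit power, which phase-aligns the contributions at the receiver and delivers the full array gain ||h||². The 4-antenna channel vector below is a made-up example, not from the paper:

```python
import math

def mrt_weights(h):
    """Maximum ratio transmission weights for a MISO channel vector h.

    The transmitter sends w * s with w = conj(h) / ||h||, which aligns
    the signal phases at the receiver; the post-combining SNR is then
    proportional to ||h||^2 (the array gain).
    """
    norm = math.sqrt(sum(abs(x) ** 2 for x in h))
    return [x.conjugate() / norm for x in h]

# Hypothetical 4-antenna channel.
h = [1 + 1j, 0.5 - 0.2j, -0.3 + 0.8j, 0.9 + 0.1j]
w = mrt_weights(h)
rx = sum(hi * wi for hi, wi in zip(h, w))    # effective scalar channel h^T w
gain = sum(abs(x) ** 2 for x in h)           # ||h||^2
print(abs(rx) ** 2, gain)                    # equal: MRT achieves the full gain
```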
Liu, Runna; Hu, Hong; Xu, Shanshan; Huo, Rui; Wang, Supin; Wan, Mingxi
2015-06-01
The quality of ultrafast active cavitation imaging (UACI) using plane wave transmission is hindered by low transmission pressure, which is necessary to prevent bubble destruction. In this study, a UACI method that combined wavelet transform with pulse inversion (PI) was proposed to enhance the contrast between the cavitation bubbles and surrounding tissues. The main challenge in using the wavelet transform is the selection of the optimum mother wavelet. A mother wavelet named "cavitation bubble wavelet", constructed according to the Rayleigh-Plesset-Noltingk-Neppiras-Poritsky model, was expected to obtain a high correlation between the bubbles and beamformed echoes. The method was validated by in vitro experiments. Results showed that the image quality was associated with the initial radius of the bubble and the scale. The signal-to-noise ratio (SNR) of the optimal cavitation bubble wavelet transform (CBWT) mode image was improved by 3.2 dB compared with that of the B-mode image in free-field experiments. The cavitation-to-tissue ratio of the optimal PI-based CBWT mode image was improved by 2.3 dB compared with that of the PI-based B-mode image in tissue experiments. Furthermore, the SNR versus initial radius curve had the potential to estimate the size distribution of cavitation bubbles.
Hosaka, Makoto; Ishii, Toshiki; Tanaka, Asato; Koga, Shogo; Hoshizawa, Taku
2013-09-01
We developed an iterative method for optimizing the exposure schedule to obtain a constant signal-to-scatter ratio (SSR) to accommodate various recording conditions and achieve high-density recording. 192 binary images were recorded in the same location of a medium in approximately 300×300 µm2 using an experimental system embedded with a blue laser diode with a 405 nm wavelength and an objective lens with a 0.85 numerical aperture. The recording density of this multiplexing corresponds to 1 Tbit/in.2. The recording exposure time was optimized through the iteration of a three-step sequence consisting of total reproduced intensity measurement, target signal calculation, and recording energy density calculation. The SSR of pages recorded with this method was almost constant throughout the entire range of the reference beam angle. The signal-to-noise ratio of the sampled pages was over 2.9 dB, which is higher than the reproducible limit of 1.5 dB in our experimental system.
Chinnaraj Geetha
2014-03-01
The aim of the present study was to evaluate the use of the envelope difference index (EDI) and log-likelihood ratio (LLR) to quantify the independent and interactive effects of wide dynamic range compression, digital noise reduction and directionality, and to carry out self-rated quality measures. A recorded sentence embedded in speech spectrum noise at +5 dB signal-to-noise ratio was presented to a four-channel digital hearing aid and the output was recorded with different combinations of algorithms at 30, 45 and 70 dB HL levels of presentation through a 2 cc coupler. EDI and LLR were obtained in comparison with the original signal using MATLAB software. In addition, thirty participants with normal hearing sensitivity rated the output on the loudness and clarity parameters of quality. The results revealed that the temporal changes happening at the output are independent of the number of algorithms activated together in a hearing aid. However, at a higher level of presentation, temporal cues are better preserved if all of these algorithms are deactivated. The spectral components of speech tend to be affected by the presentation level. The results also indicate the importance of quality rating, as this helps in considering whether the spectral and/or temporal deviations created in the hearing aid are desirable or not.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
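The Levinson recursion referred to above solves the Toeplitz normal equations for the prediction-error filter, with the reflection coefficient bounded by 1 in modulus for a valid autocorrelation sequence (the stability property the abstract relies on). A minimal sketch, exercised on a textbook AR(1) autocorrelation rather than the authors' seismic data:

```python
def levinson_durbin(r, order):
    """Levinson recursion for the prediction-error filter.

    Given autocorrelations r[0..order], iteratively solves the Toeplitz
    system; returns the filter taps a (a[0] = 1) and the final
    prediction-error power.
    """
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / err                      # reflection coefficient, |k| < 1
        new_a = a[:]
        for j in range(1, m):
            new_a[j] = a[j] + k * a[m - j]
        new_a[m] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err

# AR(1) process x[n] = 0.5*x[n-1] + e[n]: autocorrelation r[k] = 0.5**k.
r = [0.5 ** k for k in range(4)]
a, err = levinson_durbin(r, 2)
print(a, err)   # filter taps recover the AR(1) coefficient; error power 0.75
```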
ICP-MS with hexapole collision cell for isotope ratio measurements of Ca, Fe, and Se.
Boulyga, S F; Becker, J S
2001-07-01
To avoid mass interferences on analyte ions caused by argon ions and argon molecular ions via reactions with collision gases, an rf hexapole filled with helium and hydrogen has been used in inductively coupled plasma mass spectrometry (ICP-MS), and its performance has been studied. Up to tenfold improvement in sensitivity was observed for heavy elements (m > 100 u), because of better ion transmission through the hexapole ion guide. A reduction of argon ions Ar+ and the molecular ions of argon ArX+ (X = O, Ar) by up to three orders of magnitude was achieved in a hexapole collision cell of an ICP-MS ("Platform ICP", Micromass, Manchester, UK) as a result of gas-phase reactions with hydrogen when the hexapole bias (HB) was set to 0 V; at an HB of 1.6 V, argon and argon-based ions of masses 40 u, 56 u, and 80 u were reduced by approximately four, two, and five orders of magnitude, respectively. The signal-to-noise ratio 80Se/40Ar2+ was improved by more than five orders of magnitude under optimized experimental conditions. Dependence of mass discrimination on collision-cell properties was studied in the mass range 10 u (boron) to 238 u (uranium). Isotopic analysis of the elements affected by mass-spectrometric interference, Ca, Fe, and Se, was performed using a Meinhard nebulizer and an ultrasonic nebulizer (USN). The measured isotope ratios were comparable with tabulated values from IUPAC. Precision of 0.26%, 0.19%, and 0.12%, respectively, and accuracy of 0.13%, 0.25%, and 0.92%, respectively, was achieved for the isotope ratios 44Ca/40Ca and 56Fe/57Fe in 10 microg L(-1) solution nebulized by means of a USN and for 78Se/80Se in 100 microg L(-1) solution nebulized by means of a Meinhard nebulizer.
A tropospheric ozone maximum over the equatorial Southern Indian Ocean
L. Zhang
2012-05-01
We examine the distribution of tropical tropospheric ozone (O_{3}) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O_{3} during 2005 to 2009 reveal a distinct, persistent O_{3} maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O_{3} observations from the Ozone Mapping Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O_{3} maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O_{3} maximum is dominated by O_{3} production driven by lightning nitrogen oxide (NO_{x}) emissions, which accounts for 62% of the tropospheric column O_{3} in May 2006. We find the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O_{3} maximum are rather small. O_{3} production in the lightning outflow from Central Africa and South America peaks in May in both regions and is directly responsible for the O_{3} maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O_{3} maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones, followed by northward transport to the ESIO.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are found by maximizing the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity — voltage of maximum power, current of maximum power, and maximum power — is plotted as a function of the time of day.
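The differentiation step can be illustrated with a single-diode cell model: the maximum of P(V) = V·I(V) sits where dP/dV = 0, which can be located numerically. All parameter values below are hypothetical assumptions, not from the project:

```python
import math

# Hypothetical single-diode cell parameters (not from the article).
I_L = 5.0        # photocurrent, A
I_0 = 1e-9       # diode saturation current, A
V_T = 0.0257     # thermal voltage near 25 C, V

def current(v):
    """Single-diode I-V curve: I(V) = I_L - I_0 * (exp(V/V_T) - 1)."""
    return I_L - I_0 * math.expm1(v / V_T)

def power(v):
    return v * current(v)

def dpower(v):
    """Analytic derivative dP/dV = I(V) + V * dI/dV."""
    return current(v) - v * (I_0 / V_T) * math.exp(v / V_T)

# dP/dV is positive at 0 V and negative near open circuit,
# so bisection brackets the maximum power point.
lo, hi = 0.0, 0.7
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dpower(mid) > 0:
        lo = mid
    else:
        hi = mid
v_mp = 0.5 * (lo + hi)
print(round(v_mp, 3), round(power(v_mp), 3))  # voltage and power at the MPP
```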
Influence of maximum decking charge on intensity of blasting vibration
Anonymous
2006-01-01
Based on the character of short-time non-stationary random signals, the relationship between the maximum decking charge and the energy distribution of blasting vibration signals was investigated by means of the wavelet packet method. Firstly, the characteristics of the wavelet transform and wavelet packet analysis were described. Secondly, the blasting vibration signals were analyzed by wavelet packet using MATLAB, and the energy distribution curves in different frequency bands were obtained. Finally, the law by which the energy distribution of blasting vibration signals changes with the maximum decking charge was analyzed. The results show that with the increase of decking charge, the ratio of high-frequency energy to total energy decreases, the dominant frequency bands of blasting vibration signals tend towards low frequency, and blasting vibration does not depend on the maximum decking charge alone.
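A band-energy distribution of the kind described can be sketched with a full wavelet-packet tree. The paper's wavelet family and MATLAB code are not specified, so the Haar wavelet below is an assumption chosen for brevity; each split is energy-preserving, so the band energies sum to the signal energy:

```python
import math

def haar_split(x):
    """One Haar analysis step: (approximation, detail), energy-preserving."""
    s = 1 / math.sqrt(2)
    approx = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return approx, detail

def wavelet_packet_energies(x, levels):
    """Relative energy per frequency band of a full Haar wavelet-packet
    tree (both branches split at every level)."""
    bands = [x]
    for _ in range(levels):
        nxt = []
        for b in bands:
            a, d = haar_split(b)
            nxt.extend([a, d])
        bands = nxt
    energies = [sum(v * v for v in b) for b in bands]
    total = sum(energies)
    return [e / total for e in energies]

# Low-frequency test signal: energy should concentrate in the lowest band.
n = 256
sig = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
ratios = wavelet_packet_energies(sig, 3)   # 8 bands
print([round(r, 3) for r in ratios])
```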
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters, where the maximum load supported by a column exceeds the Euler static force, is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
Optimal Portfolio Strategy under Rolling Economic Maximum Drawdown Constraints
Xiaojian Yu; Siyu Xie; Weijun Xu
2014-01-01
This paper deals with the problem of optimal portfolio strategy under constraints on rolling economic maximum drawdown. In contrast to existing models, a more practical strategy is developed by using the rolling Sharpe ratio to compute the allocation proportion. Besides, another novel strategy named "REDP strategy" is further proposed, which replaces the rolling economic drawdown of the portfolio with the rolling economic drawdown of the risky asset. The simulation tests prove that REDP stra...
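A rolling Sharpe ratio of the kind used for the allocation proportion can be sketched directly; the window length and return series below are illustrative, and no annualization or risk-free adjustment beyond a constant `rf` is applied:

```python
import statistics

def rolling_sharpe(returns, window, rf=0.0):
    """Rolling Sharpe ratio: mean excess return divided by standard
    deviation within each trailing window."""
    out = []
    for i in range(window, len(returns) + 1):
        w = [r - rf for r in returns[i - window:i]]
        mu = statistics.fmean(w)
        sd = statistics.stdev(w)
        out.append(mu / sd)
    return out

# Hypothetical daily returns.
rets = [0.01, -0.005, 0.02, 0.003, -0.01, 0.015, 0.007, -0.002]
s = rolling_sharpe(rets, 4)
print(len(s), round(s[0], 3))   # 5 trailing windows for 8 observations
```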
Study of maximum pressure for composite hepta-tubular powders
M. C. Gupta
1959-10-01
In this paper, expressions for the positions at which maximum pressure occurs in the case of composite hepta-tubular powders used in conventional guns, and the corresponding conditions, have been derived under certain assumptions, viz., the value of n, the ratio of specific heats, has been assumed to be the same for both charges, and the covolume corrections have not been neglected.
Spatio-temporal observations of tertiary ozone maximum
V. F. Sofieva
2009-03-01
We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.
The distributions obtained from GOMOS data have specific features, which vary from year to year. In particular, due to the long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by theory), TOM can also be observed at very high latitudes, not only at the beginning and end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.
Since ozone in the mesosphere is very sensitive to HO_{x} concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HO_{x} enhancement from the increased ionization.
Tănase Alin-Eliodor
2014-08-01
This article focuses on techniques for computing financial key ratios from trial balance data. Activity, liquidity, solvency, and profitability key ratios are presented. A three-step computing methodology based on a trial balance is then described.
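A minimal sketch of one ratio from each of the four categories, computed from trial balance totals; the account names and figures are hypothetical, not from the article:

```python
# Hypothetical trial balance totals (illustrative figures only).
trial_balance = {
    "current_assets": 120000.0,
    "current_liabilities": 60000.0,
    "inventory": 30000.0,
    "total_assets": 400000.0,
    "total_liabilities": 180000.0,
    "net_income": 36000.0,
    "sales": 300000.0,
}

def key_ratios(tb):
    return {
        # liquidity
        "current_ratio": tb["current_assets"] / tb["current_liabilities"],
        "quick_ratio": (tb["current_assets"] - tb["inventory"])
                       / tb["current_liabilities"],
        # solvency
        "debt_to_assets": tb["total_liabilities"] / tb["total_assets"],
        # activity
        "asset_turnover": tb["sales"] / tb["total_assets"],
        # profitability
        "net_margin": tb["net_income"] / tb["sales"],
    }

ratios = key_ratios(trial_balance)
print(ratios["current_ratio"], ratios["asset_turnover"])  # -> 2.0 0.75
```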
Collins, Mimi
1997-01-01
Explores how human resource professionals, with above average offer/acceptance ratios, streamline their recruitment efforts. Profiles company strategies with internships, internal promotion, cooperative education programs, and how to get candidates to accept offers. Also discusses how to use the offer/acceptance ratio as a measure of program…
Akkerman, J. W.
1982-01-01
New mechanism alters the compression ratio of an internal-combustion engine according to load so that the engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. The mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.
Wyer, J C; Salzinger, F H
1983-01-01
Many common management techniques have little use in managing a medical group practice. Ratio analysis, however, can easily be adapted to the group practice setting. Acting as broad-gauge indicators, financial ratios provide an early warning of potential problems and can be very useful in planning for future operations. The author has gathered a collection of financial ratios which were developed by participants at an education seminar presented for the Virginia Medical Group Management Association. Classified according to the human element, system component, and financial factor, the ratios provide a good sampling of measurements relevant to medical group practices and can serve as an example for custom-tailoring a ratio analysis system for your medical group.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
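A static maximum-flow routine is the basic building block behind such algorithms; the dynamic-flow and minimum-dynamic-cut machinery of the abstract is more involved, so the Edmonds-Karp sketch below only illustrates the underlying flow/cut computation on a toy graph:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow on an adjacency-matrix capacity graph."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximum
            return total
        # Bottleneck capacity along the path, then augment.
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Classic 4-node example: source 0, sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # -> 5, matching the minimum cut
```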
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Quantitative visually lossless compression ratio determination of JPEG2000 in digitized mammograms.
Georgiev, Verislav T; Karahaliou, Anna N; Skiadopoulos, Spyros G; Arikidis, Nikos S; Kazantzi, Alexandra D; Panayiotakis, George S; Costaridou, Lena I
2013-06-01
The current study presents a quantitative approach towards visually lossless compression ratio (CR) threshold determination of JPEG2000 in digitized mammograms. This is achieved by identifying quantitative image quality metrics that reflect radiologists' visual perception in distinguishing between original and wavelet-compressed mammographic regions of interest containing microcalcification clusters (MCs) and normal parenchyma, originating from 68 images from the Digital Database for Screening Mammography. Specifically, image quality of wavelet-compressed mammograms (CRs, 10:1, 25:1, 40:1, 70:1, 100:1) is evaluated quantitatively by means of eight image quality metrics of different computational principles and qualitatively by three radiologists employing a five-point rating scale. The accuracy of the objective metrics is investigated in terms of (1) their correlation (r) with qualitative assessment and (2) ROC analysis (A_z index), employing pooled radiologists' rating scores as ground truth. The quantitative metrics mean square error, mean absolute error, peak signal-to-noise ratio, and structural similarity demonstrated strong correlation with pooled radiologists' ratings (r, 0.825, 0.823, -0.825, and -0.826, respectively) and the highest area under ROC curve (A_z, 0.922, 0.920, 0.922, and 0.922, respectively). For each quantitative metric, the highest accuracy values of corresponding ROC curves were used to define metric cut-off values. The metrics cut-off values were subsequently used to suggest a visually lossless CR threshold, estimated to be between 25:1 and 40:1 for the dataset analyzed. Results indicate the potential of the quantitative metrics approach in predicting visually lossless CRs in case of MCs in mammography.
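Two of the quantitative metrics named above, mean square error and peak signal-to-noise ratio, can be sketched directly; the 2×2 pixel patches below are toy values, not mammography data:

```python
import math

def mse(a, b):
    """Mean square error between two equally sized grayscale images
    given as nested lists of pixel values."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

# Toy 2x2 "original" and "compressed" patches.
orig = [[100, 110], [120, 130]]
comp = [[101, 108], [120, 131]]
print(mse(orig, comp), round(psnr(orig, comp), 2))  # -> 1.5 46.37
```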
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Svendsen, Anders Jørgen; Holmskov, U; Bro, Peter
1995-01-01
hitherto unnoted differences between controls and patients with either rheumatoid arthritis or systemic lupus erythematosus. For this we use simple, but unconventional, graphic representations of the data, based on difference plots and ratio plots. Differences between patients with Burkitt's lymphoma ... and systemic lupus erythematosus from another previously published study (Macanovic, M. and Lachmann, P.J. (1979) Clin. Exp. Immunol. 38, 274) are also represented using ratio plots. Our observations indicate that analysis by regression may often be misleading...
Different methods to alter surface morphology of high aspect ratio structures
Leber, M., E-mail: moritz.leber@utah.edu [Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT (United States); Shandhi, M.M.H. [Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT (United States); Hogan, A. [Blackrock Microsystems, Salt Lake City, UT (United States); Solzbacher, F. [Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT (United States); Bhandari, R.; Negi, S. [Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT (United States); Blackrock Microsystems, Salt Lake City, UT (United States)
2016-03-01
Graphical abstract: Surface engineering of high aspect ratio silicon structures. - Highlights: • Multiple roughening techniques for high aspect ratio devices were investigated. • Modification of surface morphology of high aspect ratio silicon devices (1:15). • Decrease of 76% in impedance proves significant increase in surface area. - Abstract: In various applications such as neural prostheses or solar cells, there is a need to alter the surface morphology of high aspect ratio structures so that the real surface area is greater than the geometrical area. The change in surface morphology enhances the device's functionality. One application of altering the surface morphology is in neural implants such as the Utah electrode array (UEA) that communicate with single neurons by charge injection induced stimulation or by recording electrical neural signals. For high selectivity between single cells of the nervous system, the electrode surface area is required to be as small as possible, while the impedance is required to be as low as possible for good signal to noise ratios (SNR) during neural recording. For stimulation, high charge injection and charge transfer capacities of the electrodes are required, which increase with the electrode surface. Traditionally, researchers have worked with either increasing the roughness of the existing metallization (platinum grey, black) or other materials such as iridium oxide and PEDOT. All of these previously investigated methods lead to more complicated metal deposition processes that are difficult to control and often have a critical impact on the mechanical properties of the metal films. Therefore, a modification of the surface underneath the electrode's coating will increase its surface area while maintaining the standard and well controlled metal deposition process. In this work, the surfaces of the silicon micro-needles were engineered by creating a defined microstructure on the electrode's surface using several
Classification between normal and tumor tissues based on the pair-wise gene expression ratio
Wong YC
2004-10-01
Full Text Available Abstract Background Precise classification of cancer types is critically important for early cancer diagnosis and treatment. Numerous efforts have been made to use gene expression profiles to improve precision of tumor classification. However, reliable cancer-related signals are generally lacking. Method Using recent datasets on colon and prostate cancer, a data transformation procedure from single gene expression to pair-wise gene expression ratio is proposed. Making use of the internal consistency of each expression profiling dataset, this transformation improves the signal to noise ratio of the dataset and uncovers new relevant cancer-related signals (features. The efficiency in using the transformed dataset to perform normal/tumor classification was investigated using feature partitioning with informative features (gene annotation as discriminating axes (single gene expression or pair-wise gene expression ratio. Classification results were compared to the original datasets for up to 10-feature model classifiers. Results 82 and 262 genes that have high correlation to tissue phenotype were selected from the colon and prostate datasets respectively. Remarkably, data transformation of the highly noisy expression data successfully lowered the coefficient of variation (CV for the within-class samples and improved the correlation with tissue phenotypes. The transformed dataset exhibited lower CV when compared to that of single gene expression. In the colon cancer set, the minimum CV decreased from 45.3% to 16.5%. In prostate cancer, comparable CV was achieved with and without transformation. This improvement in CV, coupled with the improved correlation between the pair-wise gene expression ratio and tissue phenotypes, yielded higher classification efficiency, especially with the colon dataset – from 87.1% to 93.5%. Over 90% of the top ten discriminating axes in both datasets showed significant improvement after data transformation.
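The single-gene-to-pairwise-ratio transformation and the within-class CV comparison described here can be sketched on toy data (hypothetical values with multiplicative per-sample noise; not the colon/prostate datasets):

```python
import numpy as np

def pairwise_ratios(X):
    """Transform a (samples x genes) expression matrix into all pairwise
    gene-expression ratios g_i / g_j (i < j), one column per gene pair."""
    i, j = np.triu_indices(X.shape[1], k=1)
    return X[:, i] / X[:, j]

def cv_percent(cols):
    """Within-class coefficient of variation of each column, in percent."""
    return 100.0 * np.std(cols, axis=0) / np.mean(cols, axis=0)

# Toy example: two co-regulated genes whose true ratio is fixed at 2:1;
# multiplicative sample-to-sample noise cancels in the ratio, so the
# transformed feature has a far lower CV than either gene alone.
rng = np.random.default_rng(1)
scale = rng.lognormal(0.0, 0.4, size=(50, 1))   # per-sample technical noise
X = scale * np.array([[100.0, 50.0]])           # samples x 2 genes
R = pairwise_ratios(X)
assert cv_percent(R)[0] < cv_percent(X).min()
```

The cancellation of shared multiplicative noise is the "internal consistency" the abstract exploits to improve the signal-to-noise ratio.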
Constraining lowermost mantle structure with PcP/P amplitude ratios from large aperture arrays
Ventosa, S.; Romanowicz, B. A.
2015-12-01
Observations of weak short-period teleseismic body waves help to resolve lowermost mantle structure at short wavelengths, which is essential for understanding mantle dynamics and the interactions between the mantle and core. Their limited amount and uneven distribution are, however, major obstacles to solving for the volumetric structure of the D" region, topography of the core-mantle boundary (CMB) and D" discontinuity, and the trade-offs among them. While PcP-P differential travel times provide important information, there are trade-offs between velocity structure and core-mantle boundary topography, which PcP/P amplitude ratios can help resolve, as long as lateral variations in attenuation and biases due to focusing are small or can be corrected for. Dense broadband seismic networks help to improve signal-to-noise ratio (SNR) of the target phases and signal-to-interference ratio (SIR) of other mantle phases when the slowness difference is large enough. To improve SIR and SNR of teleseismic PcP data, we have introduced the slant-stacklet transform to define coherent-guided filters able to separate and enhance signals according to their slowness, time of arrival and frequency content. We thus obtain optimal PcP/P amplitude ratios in the least-square sense using two short sliding windows to match the P signal with a candidate PcP signal. This method allows us to dramatically increase the number of high-quality observations of short-period PcP/P amplitude ratios by allowing for smaller events and wider epicentral distance and depth ranges. We present measurements of PcP/P amplitude ratios, sampling regions around the Pacific using dense arrays in North America and Japan. We observe that short-period P waves traveling through slabs are strongly affected by focusing, in agreement with the bias due to mantle heterogeneities that we have observed and corrected for in PcP-P travel-time differences. In Central America, this bias is by far the strongest anomaly we observe
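For a pure amplitude ratio, the least-squares matching of a P window against a candidate PcP window reduces to a projection; a minimal sketch with synthetic pulses (simplified relative to the slant-stacklet processing described above):

```python
import numpy as np

def ls_amplitude_ratio(p, q):
    """Amplitude ratio a minimizing ||q - a*p||^2 over two equal-length
    windows: a = (p . q) / (p . p)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(p @ q / (p @ p))

rng = np.random.default_rng(3)
p = np.sin(2 * np.pi * 5 * np.linspace(0.0, 1.0, 200))   # stand-in P pulse
q = 0.3 * p + rng.normal(0.0, 0.02, p.size)              # scaled, noisy "PcP"
assert abs(ls_amplitude_ratio(p, q) - 0.3) < 0.02
```

Projecting onto the P waveform averages down uncorrelated noise, which is why the windowed least-squares ratio tolerates weaker PcP arrivals than a peak-amplitude pick.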
Garimella, Sandilya V B; Hamid, Ahmed M; Deng, Liulin; Ibrahim, Yehia M; Webb, Ian K; Baker, Erin S; Prost, Spencer A; Norheim, Randolph V; Anderson, Gordon A; Smith, Richard D
2016-12-06
In this work we report an approach for spatial and temporal gas-phase ion population manipulation, wherein we collapse ion distributions in ion mobility (IM) separations into tighter packets providing higher sensitivity measurements in conjunction with mass spectrometry (MS). We do this for ions moving from a conventional traveling wave (TW)-driven region to a region where the TW is intermittently halted or "stuttered". This approach causes the ion packets spanning a number of TW-created traveling traps (TT) to be redistributed into fewer TT, resulting in spatial compression. The degree of spatial compression is controllable and determined by the ratio of stationary time of the TW in the second region to its moving time. This compression ratio ion mobility programming (CRIMP) approach has been implemented using "structures for lossless ion manipulations" (SLIM) in conjunction with MS. CRIMP with the SLIM-MS platform is shown to provide increased peak intensities, reduced peak widths, and improved signal-to-noise (S/N) ratios with MS detection. CRIMP also provides a foundation for extremely long path length and multipass IM separations in SLIM providing greatly enhanced IM resolution by reducing the detrimental effects of diffusional peak broadening and increasing peak widths.
Maximum Work of Free-Piston Stirling Engine Generators
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
George Marsaglia
2006-05-01
Full Text Available This article extends and amplifies on results from a paper of over forty years ago. It provides software for evaluating the density and distribution functions of the ratio z/w for any two jointly normal variates z,w, and provides details on methods for transforming a general ratio z/w into a standard form, (a+x)/(b+y), with x and y independent standard normal and a, b non-negative constants. It discusses handling general ratios when, in theory, none of the moments exist yet practical considerations suggest there should be approximations whose adequacy can be verified by means of the included software. These approximations show that many of the ratios of normal variates encountered in practice can themselves be taken as normally distributed. A practical rule is developed: If a < 2.256 and 4 < b then the ratio (a+x)/(b+y) is itself approximately normally distributed with mean μ = a/(1.01b − .2713) and variance σ² = (a² + 1)/(b² + .108b − 3.795) − μ².
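Marsaglia's practical rule quoted above is easy to check by simulation; a minimal sketch (the values of a and b are arbitrary choices satisfying the rule's conditions):

```python
import numpy as np

# Monte Carlo check of the rule: for a < 2.256 and b > 4, the ratio
# (a + x)/(b + y), with x and y standard normal, is approximately normal
# with mean mu = a/(1.01*b - 0.2713) and
# variance s2 = (a**2 + 1)/(b**2 + 0.108*b - 3.795) - mu**2.
a, b = 1.5, 5.0
mu = a / (1.01 * b - 0.2713)
s2 = (a ** 2 + 1) / (b ** 2 + 0.108 * b - 3.795) - mu ** 2

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
r = (a + x) / (b + y)
assert abs(np.mean(r) - mu) < 0.01   # empirical mean matches the rule
assert abs(np.var(r) - s2) < 0.01    # empirical variance matches the rule
```

With b well above 4, the denominator rarely approaches zero, which is why the moment approximations behave despite the exact ratio distribution having no moments.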
Gkinis, Vasileios; Holme, Christian; Morris, Valerie; Thayer, Abigail Grace; Vaughn, Bruce; Kjaer, Helle Astrid; Vallelonga, Paul; Simonsen, Marius; Jensen, Camilla Marie; Svensson, Anders; Maffrezzoli, Niccolo; Vinther, Bo; Dallmayr, Remi
2017-04-01
We present a performance comparison study between two state-of-the-art Cavity Ring Down Spectrometers (Picarro L2310-i, L2140-i). The comparison took place during the Continuous Flow Analysis (CFA) campaign for the measurement of the Renland ice core, over a period of three months. Instant and complete vaporisation of the ice core melt stream, as well as of in-house water reference materials, is achieved by accurate control of microflows of liquid into a homemade calibration system following simple principles of the Hagen-Poiseuille law. Both instruments share the same vaporisation unit in a configuration that minimises sample preparation discrepancies between the two analyses. We describe our SMOW-SLAP calibration and measurement protocols for such a CFA application and present quality control metrics acquired during the full period of the campaign on a daily basis. The results indicate an unprecedented performance for all 3 isotopic ratios (δ2H, δ17O, δ18O) in terms of precision, accuracy and resolution. We also comment on the precision and accuracy of the second order excess parameters of HD16O and H217O over H218O (Dxs, Δ17O). To our knowledge these are the first reported CFA measurements at this level of precision and accuracy for all three isotopic ratios. Differences in the performance of the two instruments are carefully assessed during the measurement and reported here. Our quality control protocols extend to the area of low water mixing ratios, a regime in which atmospheric vapour measurements often take place and Cavity Ring Down Analysers show poorer performance due to the lower signal to noise ratios. We address such issues and propose calibration protocols from which water vapour isotopic analyses can benefit.
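The Hagen-Poiseuille law invoked above relates a capillary's pressure drop to its volumetric flow; a minimal sketch with hypothetical tubing dimensions (not the campaign's actual values):

```python
import math

def hagen_poiseuille_flow(delta_p, radius, length, viscosity):
    """Laminar volumetric flow through a cylindrical capillary:
    Q = pi * r^4 * dP / (8 * mu * L)."""
    return math.pi * radius ** 4 * delta_p / (8.0 * viscosity * length)

# Hypothetical microflow line: 1 bar across 1 m of 50-um-radius tubing,
# water at room temperature (mu ~ 1.0e-3 Pa s).
q = hagen_poiseuille_flow(delta_p=1.0e5, radius=50e-6, length=1.0,
                          viscosity=1.0e-3)
q_ul_per_min = q * 1e9 * 60.0   # m^3/s -> microlitres per minute
assert 10.0 < q_ul_per_min < 20.0   # tens of uL/min, a typical CFA scale
```

The strong r⁴ dependence is what makes capillary radius (and a stable pressure head) the practical control knob for such microflows.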
Kjærgaard, Søren; Canudas-Romo, Vladimir
2017-01-01
... the prospective potential support ratio usually focuses on the current mortality schedule, or period life expectancy. Instead, in this paper we look at the actual mortality experienced by cohorts in a population, using cohort life tables. We analyse differences between the two perspectives using mortality models, historical data, and forecasted data. Cohort life expectancy takes future mortality improvements into account, unlike period life expectancy, leading to a higher prospective potential support ratio. Our results indicate that using cohort instead of period life expectancy returns around 0.5 extra younger...
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
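The appeal of working in the dual is that one iterates on a few Lagrange multipliers rather than on the full image. A toy one-constraint sketch of that idea (illustrative only; not the paper's Fourier-synthesis algorithm):

```python
import numpy as np

# Toy dual-space maximum entropy: find the maxent distribution p on {0,...,9}
# with a prescribed mean by iterating on the single Lagrange multiplier
# (the dual variable) instead of on the distribution/image itself.
x = np.arange(10.0)
target_mean = 6.0

lam = 0.0
for _ in range(200):
    p = np.exp(lam * x)
    p /= p.sum()                  # maxent form: p_i proportional to exp(lam*x_i)
    grad = target_mean - p @ x    # dual gradient = constraint mismatch
    lam += 0.05 * grad            # gradient ascent on the concave dual

assert abs(p @ x - target_mean) < 1e-6
```

With many data constraints the same pattern holds: the primal variable (the image) is recovered in closed exponential form from the multipliers, so the minimization involves far fewer parameters than working in image space.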
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
Optimal Portfolio Strategy under Rolling Economic Maximum Drawdown Constraints
Xiaojian Yu
2014-01-01
Full Text Available This paper deals with the problem of optimal portfolio strategy under the constraints of rolling economic maximum drawdown. A more practical strategy is developed by using rolling Sharpe ratio in computing the allocation proportion in contrast to existing models. Besides, another novel strategy named “REDP strategy” is further proposed, which replaces the rolling economic drawdown of the portfolio with the rolling economic drawdown of the risky asset. The simulation tests prove that REDP strategy can ensure the portfolio to satisfy the drawdown constraint and outperforms other strategies significantly. An empirical comparison research on the performances of different strategies is carried out by using the 23-year monthly data of SPTR, DJUBS, and 3-month T-bill. The investment cases of single risky asset and two risky assets are both studied in this paper. Empirical results indicate that the REDP strategy successfully controls the maximum drawdown within the given limit and performs best in both return and risk.
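The rolling drawdown quantity such constraints are built on can be sketched directly (hypothetical wealth series; the paper's REDP allocation rule itself is not reproduced):

```python
import numpy as np

def rolling_max_drawdown(wealth, window):
    """Drawdown of a wealth series inside each trailing window:
    1 - wealth_t / max(wealth over the last `window` points)."""
    w = np.asarray(wealth, dtype=float)
    out = np.empty(len(w))
    for t in range(len(w)):
        peak = w[max(0, t - window + 1): t + 1].max()
        out[t] = 1.0 - w[t] / peak
    return out

wealth = [100, 110, 99, 105, 120, 90]
dd = rolling_max_drawdown(wealth, window=3)
assert abs(dd[2] - (1 - 99 / 110)) < 1e-12   # drop from the recent peak 110
assert dd[1] == 0.0                          # at a running peak: no drawdown
```

A drawdown-constrained strategy then scales the risky allocation down whenever this rolling quantity approaches the permitted limit.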
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to search for the distribution functions of physical quantities. MENT naturally takes into account the maximum-entropy requirement, the characteristics of the system, and the connection conditions. This makes it possible to apply MENT to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states, and states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from the skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was considered as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
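Demarking-point analysis is conventionally computed from the opposite sex's mean and standard deviation; a sketch on simulated measurements (hypothetical normal data loosely matching the reported means; the ±3 SD convention is an assumption, not stated in this abstract):

```python
import numpy as np

# Simulated maximum femoral lengths in mm (NOT the study's measurements).
rng = np.random.default_rng(4)
male = rng.normal(452.0, 18.0, 136)
female = rng.normal(417.0, 16.0, 48)

# Demarking points per the conventional mean +/- 3 SD rule (an assumption):
dp_male = female.mean() + 3.0 * female.std(ddof=1)   # longer: definitely male
dp_female = male.mean() - 3.0 * male.std(ddof=1)     # shorter: definitely female

pct_males_identified = 100.0 * np.mean(male > dp_male)
pct_females_identified = 100.0 * np.mean(female < dp_female)
assert dp_female < dp_male   # only the distribution tails are classified
```

Because the thresholds sit three standard deviations beyond the opposite sex's mean, only a small tail of each sex is classified "definitely", which matches the modest identification percentages the study reports.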
Miles, T. R.; Haslum, M. N.; Wheeler, T. J.
1998-01-01
A study involving 11,804 British children (age 10) found that when specified criteria for dyslexia were used, 269 children qualified as dyslexic. These included 223 boys and 46 girls, for a ratio of 4.51 to 1. Difficulties in interpreting these data are discussed and a defense of the criteria is provided. (Author/CR)
PO de Wet
2005-06-01
Full Text Available The rectilinear Steiner ratio was shown to be 3/2 by Hwang [Hwang FK, 1976, On Steiner minimal trees with rectilinear distance, SIAM Journal on Applied Mathematics, 30, pp. 104–114.]. We use continuity and introduce restricted point sets to obtain an alternative, short and self-contained proof of this result.
The biochemical composition of plankton in a subsurface chlorophyll maximum
Dortch, Quay
1987-06-01
The biochemical composition of plankton at a station with a deep, subsurface chlorophyll maximum (SCM) below a nitrogen-depleted surface layer off the Washington coast was determined in order to answer long-standing questions about the nature and causes of SCM. The chlorophyll maximum did not correspond to a protein-biomass maximum, and chlorophyll: protein ratios indicate that only in the SCM were phytoplankton a major constituent of the total biomass. Ratios of free amino acids: protein in the particulate matter were high at all depths in the euphotic zone. From this it can be concluded that phytoplankton in the SCM are N-sufficient, since they make up 80-90% of the biomass there. Above and below the SCM, where non-phytoplankton predominate, the state of N deficiency or sufficiency of the phytoplankton cannot be ascertained until more is known about how the chemical composition of phytoplankton, zooplankton and bacteria are related. However, if it is assumed that very N-sufficient zooplankton and bacteria would not coexist with very N-deficient phytoplankton, then it seems likely that the phytoplankton were also N-sufficient or nearly so. Thus, the biochemical indicators do not support the hypothesis that the SCM forms because it represents the only layer in the water column with adequate N and light for phytoplankton growth. Comparison of the chlorophyll: protein ratios with those from cultures and from other regions suggests that oligotrophic areas have a much higher proportion of non-phytoplankton biomass than do eutrophic areas.
郇浩; 陶选如; 陶然; 程小康; 董朝; 李鹏飞
2014-01-01
To reach a compromise between efficient dynamic performance and high tracking accuracy of the carrier tracking loop in high-dynamic circumstances, which produce a large Doppler frequency and Doppler frequency rate-of-change, a fast maximum likelihood estimation method for the Doppler frequency rate-of-change is proposed in this paper, and the estimated value is used to aid the carrier tracking loop. First, it is pointed out that the maximum likelihood estimation of Doppler frequency and Doppler frequency rate-of-change is equivalent to the Fractional Fourier Transform (FrFT). Second, an estimation method for the Doppler frequency rate-of-change, combining instant self-correlation with a segmental Discrete Fourier Transform (DFT), is proposed to avoid the large computational cost of the two-dimensional search over Doppler frequency and Doppler frequency rate-of-change, and the resulting coarse estimate is used to narrow down the search range. Finally, the estimated value is used in the carrier tracking loop to reduce the dynamic stress and improve the tracking accuracy. Theoretical analysis and computer simulation show that the search computation falls to 5.25 percent of the original amount at a Signal to Noise Ratio (SNR) of -30 dB, the Root Mean Square Error (RMSE) of the tracked frequency is only 8.46 Hz/s, and compared with the traditional carrier tracking method the tracking sensitivity is improved by more than 3 dB.
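The instant self-correlation idea can be sketched in simplified form: for a chirp, a fixed-lag self-correlation collapses the quadratic phase into a single tone whose frequency encodes the chirp rate, so one DFT replaces a 2-D search (illustrative parameters; not the paper's segmental-DFT estimator):

```python
import numpy as np

# For s(t) = exp(j*pi*mu*t^2), the lag-L self-correlation s[n+L]*conj(s[n])
# has linear phase, i.e. it is a pure tone at frequency mu * (L/fs) Hz,
# so an FFT peak yields the chirp rate mu (Doppler rate-of-change) directly.
fs = 8192.0                      # sample rate, Hz
n = np.arange(8192) / fs         # 1 s of data
mu_true = 300.0                  # chirp rate, Hz/s
s = np.exp(1j * np.pi * mu_true * n ** 2)

lag = 256                        # samples; tau = lag / fs
r = s[lag:] * np.conj(s[:-lag])  # instant self-correlation at fixed lag
spec = np.abs(np.fft.fft(r, 1 << 16))
freqs = np.fft.fftfreq(1 << 16, d=1.0 / fs)
mu_hat = freqs[np.argmax(spec)] / (lag / fs)
assert abs(mu_hat - mu_true) < 5.0
```

The resolution of the recovered rate is the FFT bin width divided by the lag, so the coarse estimate from this step is then refined over a narrowed search range, as the abstract describes.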
Signal-To-Noise Ratio Improvement in NMR via Receiver Hardware Optimization.
Duensing, George Randall
The goal of this research was to increase available signal-to-noise ratio (SNR) in magnetic resonance imaging (MRI) by applying specific knowledge of the imaging system to improve receiver probes (coils) and receiving hardware. A brief history of improvements in MRI receiver and coil design is presented, including the transition from large linear volume coils to local surface coils and quadrature volume coils. Then quadrature surface coils are introduced and finally multi-coil arrays with independent acquisition systems. The research covers improvements in these areas and begins with a surface coil which is adjustable in size to optimize performance given the region of interest. By careful design of trombone-like coil elements, physical adjustment can be made without electrical adjustment. Second, new understanding of noise correlation and crosstalk between coils is developed and applied to multi-coil arrays. This provides the ability to increase available SNR for such systems. Third, a method for optimally combining multiple coils in a transverse (extending perpendicular to the static magnetic field) array into a single channel by proper signal combination is presented. This method is termed generalized quadrature because of the similarity of the method to standard quadrature combination, but with freedom in weighting and phasing in the combination process. Fourth, several methods of manipulating the multiple signals from an array to allow separation after acquisition are presented. These methods require new hardware demands but allow significant improvements in SNR for either transverse or longitudinal arrays. Fifth, several novel design methods are demonstrated, including an algorithm for impedance matching, a generalized quadrature combination method, transmission synchronized rf shielding and a bird-cage surface coil. Finally, the potential future applications and benefits of this research are presented.
GA-BASED MAXIMUM POWER DISSIPATION ESTIMATION OF VLSI SEQUENTIAL CIRCUITS OF ARBITRARY DELAY MODELS
Lu Junming; Lin Zhenghui
2002-01-01
In this paper, the glitching activity and process variations are incorporated into the maximum power dissipation estimation of CMOS circuits. Given a circuit and the gate library, a new Genetic Algorithm (GA)-based technique is developed to determine the maximum power dissipation from a statistical point of view. Simulation on the ISCAS-89 benchmarks shows that the ratio of the maximum power dissipation with glitching activity over the maximum power under a zero-delay model ranges from 1.18 to 4.02. Compared with the traditional Monte Carlo-based technique, the new approach presented in this paper is more effective.
Maximum Likelihood Joint Tracking and Association in Strong Clutter
Leonid I. Perlovsky
2013-01-01
Full Text Available We have developed a maximum likelihood formulation for a joint detection, tracking and association problem. An efficient non-combinatorial algorithm for this problem is developed in case of strong clutter for radar data. By using an iterative procedure of the dynamic logic process “from vague-to-crisp” explained in the paper, the new tracker overcomes the combinatorial complexity of tracking in highly-cluttered scenarios and results in an orders-of-magnitude improvement in signal-to-clutter ratio.
Park, Hyunbin; Sim, Minseob; Kim, Shiho
2015-06-01
We propose a way of achieving maximum power and power-transfer efficiency from thermoelectric generators by optimized selection of maximum-power-point-tracking (MPPT) circuits composed of a boost-cascaded-with-buck converter. We investigated the effect of switch resistance on the MPPT performance of thermoelectric generators. The on-resistances of the switches decrease the conversion gain and reduce the maximum obtainable output power. Although the incremental values of the switch resistances are small, the resulting difference between the duty ratios at which the input and output powers reach their maxima is significant. For an MPPT controller composed of a boost converter with a practical nonideal switch, we need to monitor the output power instead of the input power to track the maximum power point of the thermoelectric generator. We provide a design strategy for MPPT controllers by considering the compromise in which a decrease in switch resistance causes an increase in the parasitic capacitance of the switch.
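The tracking loop itself can be sketched with a perturb-and-observe scheme on an idealized boost converter, whose input resistance seen by the generator is R_load·(1−d)². The TEG values are hypothetical, and the lossless-switch assumption is a deliberate simplification of the nonideal switches the abstract analyzes (with an ideal switch, tracking input power suffices):

```python
def teg_input_power(duty, v_oc=5.0, r_int=1.0, r_load=10.0):
    # Ideal boost converter: the TEG sees an effective input resistance
    # R_load * (1 - d)^2; maximum power transfer occurs when it equals r_int.
    r_in = r_load * (1.0 - duty) ** 2
    i = v_oc / (r_int + r_in)
    return i * i * r_in   # power delivered into the converter

def perturb_and_observe(steps=200, d=0.5, delta=0.005):
    p_prev = teg_input_power(d)
    direction = 1
    for _ in range(steps):
        d = min(max(d + direction * delta, 0.01), 0.95)
        p = teg_input_power(d)
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return d

d_star = perturb_and_observe()
# Optimum at (1 - d)^2 = r_int / r_load, i.e. d ~ 0.684 for these values
```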
Modeling the Maximum Spreading of Liquid Droplets Impacting Wetting and Nonwetting Surfaces.
Lee, Jae Bong; Derome, Dominique; Guyer, Robert; Carmeliet, Jan
2016-02-09
Droplet impact has been imaged on different rigid, smooth, and rough substrates for three liquids with different viscosity and surface tension, with special attention to the lower impact velocity range. Of all studied parameters, only surface tension and viscosity, thus the liquid properties, clearly play a role in terms of the attained maximum spreading ratio of the impacting droplet. Surface roughness and type of surface (steel, aluminum, and parafilm) slightly affect the dynamic wettability and maximum spreading at low impact velocity. The dynamic contact angle at maximum spreading has been identified to properly characterize this dynamic spreading process, especially at low impact velocity where dynamic wetting plays an important role. The dynamic contact angle is found to be generally higher than the equilibrium contact angle, showing that statically wetting surfaces can become less wetting or even nonwetting under dynamic droplet impact. An improved energy balance model for the maximum spreading ratio is proposed based on a correct analytical modeling of the time at maximum spreading, which determines the viscous dissipation. Experiments show that the time at maximum spreading decreases with impact velocity depending on the surface tension of the liquid, and a scaling with maximum spreading diameter and surface tension is proposed. A second improvement is based on the use of the dynamic contact angle at maximum spreading, instead of quasi-static contact angles, to describe the dynamic wetting process at low impact velocity. This improved model showed good agreement with experiments for the maximum spreading ratio versus impact velocity for different liquids, and a better prediction compared to other models in the literature. In particular, scaling according to We^(1/2) is found invalid for low velocities, since the curves bend over to higher maximum spreading ratios due to the dynamic wetting process.
Efficiency at Maximum Power of Low-Dissipation Carnot Engines
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; van den Broeck, Christian
2010-10-01
We study the efficiency at maximum power, η*, of engines performing finite-time Carnot cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For engines reaching Carnot efficiency ηC=1-Tc/Th in the reversible limit (long cycle time, zero dissipation), we find in the limit of low dissipation that η* is bounded from above by ηC/(2-ηC) and from below by ηC/2. These bounds are reached when the ratio of the dissipation during the cold and hot isothermal phases tends, respectively, to zero or infinity. For symmetric dissipation (ratio one) the Curzon-Ahlborn efficiency ηCA = 1 - √(Tc/Th) is recovered.
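The stated bounds are easy to check numerically. A minimal sketch (the reservoir temperatures are arbitrary example values, not from the paper):

```python
def carnot_bounds(t_c, t_h):
    """Low-dissipation bounds on efficiency at maximum power (Esposito et al.)."""
    eta_c = 1.0 - t_c / t_h                 # Carnot efficiency
    lower = eta_c / 2.0                     # dissipation ratio -> infinity
    upper = eta_c / (2.0 - eta_c)           # dissipation ratio -> 0
    eta_ca = 1.0 - (t_c / t_h) ** 0.5       # Curzon-Ahlborn, symmetric dissipation
    return lower, eta_ca, upper

# Example: Tc = 300 K, Th = 600 K => eta_C = 0.5
lo, ca, hi = carnot_bounds(300.0, 600.0)
# lo = 0.25, hi = 1/3, and the Curzon-Ahlborn value ~0.293 lies between them
```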
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn; Schmidt, Burkhard; Yang, Yonggang
2014-01-01
We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to ...
Kano, Y.; Tadokoro, K.; Nishigami, K.; Mori, J.
2006-12-01
We measured the seismic attenuation of the rock mass surrounding the Nojima fault, Japan, by estimating the P-wave quality factor, Qp, using spectral ratios derived from a multi-depth (800 m and 1800 m) seismometer array. We detected an increase of Qp in 2003-2006 compared to 1999-2000. Following the 1995 Kobe earthquake, the project "Fault Zone Probe" drilled three boreholes to depths of 500 m, 800 m, and 1800 m in Toshima, along the southern part of the Nojima fault. The 1800-m borehole was reported to reach the fault surface. One seismometer (TOS1) was installed at the bottom of the 800-m borehole in 1996 and another (TOS2) at the bottom of the 1800-m borehole in 1997. The sampling rate of the seismometers is 100 Hz. The slope of the spectral ratios for the two stations plotted on a linear-log plot is -π t^{*}, where t^{*} is the travel time divided by the Qp for the path difference between the stations. For the estimation of Qp, we used events recorded by both TOS1 and TOS2 for the periods 1999-2000 and 2003-2006. To improve the signal-to-noise ratio of the spectral ratios, we first calculated spectral ratios between TOS1 and TOS2 for each event and averaged the values over the earthquakes for each period. We used the events that occurred within 10 km of TOS2; the numbers of events are 74 for 1999-2000 and 105 for 2003-2006. Magnitudes of the events range from M0.5 to M3.1. The average value of Qp for 2003-2006 increased significantly compared to 1999-2000. The attenuation of the rock mass surrounding the fault in 2003-2006 is smaller than that in 1999-2000, which suggests that the fault zone became stiffer after the earthquake. At the Nojima fault, permeability measured by repeated pumping tests decreased with time after the Kobe earthquake, implying that crack closure and a fault healing process occurred. The increase of Qp is another piece of evidence for the healing process of the Nojima fault zone.
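The slope-to-Qp conversion stated above (slope = −π t*, with t* the differential travel time divided by Qp) can be sketched directly. The slope and travel-time values below are hypothetical examples, not the measurements at Nojima:

```python
import math

def qp_from_spectral_ratio(slope, delta_travel_time):
    """Qp from the slope of ln(spectral ratio) vs frequency.

    slope: d[ln(spectral ratio)]/df in s (per Hz), fitted on a linear-log plot.
    delta_travel_time: P travel-time difference over the inter-sensor path, in s.
    """
    t_star = -slope / math.pi          # slope = -pi * t*
    return delta_travel_time / t_star  # t* = delta_t / Qp  =>  Qp = delta_t / t*

# Hypothetical numbers: a 0.25 s differential travel time and a fitted slope
# of -0.0157 s gives t* ~ 0.005 s and hence Qp ~ 50.
qp = qp_from_spectral_ratio(-0.0157, 0.25)
```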
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC, and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm, and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
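The moment bound described above (maximum seismic moment = modulus of rigidity × injected volume) converts readily into a magnitude estimate. The rigidity and injected volume below are illustrative values, and the Hanks-Kanamori magnitude conversion is a standard addition, not part of this abstract:

```python
import math

def max_seismic_moment(volume_m3, rigidity_pa=3.0e10):
    # McGarr's bound: maximum seismic moment (N*m) limited to
    # rigidity times the total injected fluid volume.
    return rigidity_pa * volume_m3

def moment_magnitude(m0_nm):
    # Standard Hanks-Kanamori moment magnitude, with M0 in N*m.
    return (math.log10(m0_nm) - 9.1) / 1.5

# Illustrative: 100,000 m^3 of injected wastewater with a typical crustal
# rigidity of 3e10 Pa bounds M0 at 3e15 N*m, i.e. roughly Mw 4.25.
m0 = max_seismic_moment(1.0e5)
mw = moment_magnitude(m0)
```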
Daprà, M; Murphy, M T; Ubachs, W
2015-01-01
Molecular hydrogen transitions in the sub-damped Lyman alpha absorber at redshift z = 2.69, toward the background quasar SDSS J123714.60+064759.5, were analyzed in order to search for a possible variation of the proton-to-electron mass ratio mu over a cosmological time-scale. The system is composed of three absorbing clouds where 137 H2 and HD absorption features were detected. The observations were taken with the Very Large Telescope/Ultraviolet and Visual Echelle Spectrograph with a signal-to-noise ratio of 32 per 2.5 km/s pixel, covering the wavelengths from 356.6 to 409.5 nm. A comprehensive fitting method was used to fit all the absorption features at once. Systematic effects of distortions to the wavelength calibrations were analyzed in detail from measurements of asteroid and `solar twin' spectra, and were corrected for. The final constraint on the relative variation in mu between the absorber and the current laboratory value is dmu/mu = (-5.4 \pm 6.3 stat \pm 4.0 syst) x 10^(-6), consistent with no variation ...
Melendez, Jorge
2009-01-01
High-resolution (R ~ 100 000), high signal-to-noise spectra of M71 giants have been obtained with HIRES at the KeckI Telescope in order to measure their Mg isotopic ratios, as well as elemental abundances of C, N, O, Na, Mg, Al, Si, Ca, Ti, Ni, Zr and La. We demonstrate that M71 has two populations, the first having weak CN, normal O, Na, Mg, and Al, and a low ratio of 26Mg/Mg (~4%) consistent with models of galactic chemical evolution with no contribution from AGB stars. The Galactic halo could have been formed from the dissolution of globular clusters prior to their intermediate mass stars reaching the AGB. The second population has enhanced Na and Al accompanied by lower O and by higher 26Mg/Mg (~8%), consistent with models which do incorporate ejecta from AGB stars via normal stellar winds. All the M71 giants have identical [Fe/H], [Si/Fe], [Ca/Fe], [Ti/Fe] and [Ni/Fe] to within sigma = 0.04 dex (10%). We therefore infer that the timescale for formation of the first generation of stars we see today in this ...
Daprà, M.; Niu, M. L.; Salumbides, E. J.; Murphy, M. T.; Ubachs, W.
2016-08-01
Carbon monoxide (CO) absorption in the sub-damped Lyα absorber at redshift z_abs ≃ 2.69 toward the background quasar SDSS J123714.60+064759.5 (J1237+0647) was investigated for the first time in order to search for a possible variation of the proton-to-electron mass ratio, μ, over a cosmological timescale. The observations were performed with the Very Large Telescope/Ultraviolet and Visual Echelle Spectrograph with a signal-to-noise ratio of 40 per 2.5 km s⁻¹ pixel at ~5000 Å. Thirteen CO vibrational bands in this absorber are detected: the A¹Π-X¹Σ⁺(v′,0) bands for v′ = 0-8, the B¹Σ⁺-X¹Σ⁺(0,0), C¹Σ⁺-X¹Σ⁺(0,0), and E¹Π-X¹Σ⁺(0,0) singlet-singlet bands, and the d³Δ-X¹Σ⁺(5,0) singlet-triplet band. An updated database including the most precise molecular inputs needed for a μ-variation analysis is presented for rotational levels J = 0-5, consisting of transition wavelengths, oscillator strengths, natural lifetime damping parameters, and sensitivity coefficients to a variation of the proton-to-electron mass ratio. A comprehensive fitting method was used to fit all the CO bands at once, and an independent constraint of Δμ/μ = (0.7 ± 1.6(stat) ± 0.5(syst)) × 10⁻⁵ was derived from CO only. A combined analysis using both molecular hydrogen and CO in the same J1237+0647 absorber returned a final constraint on the relative variation of Δμ/μ = (-5.6 ± 5.6(stat) ± 3.1(syst)) × 10⁻⁶, which is consistent with no variation over a look-back time of ~11.4 Gyr.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that achievable in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
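The Wiener index that the tree problem above maximizes is simply the sum of shortest-path distances over all vertex pairs. A small BFS-based sketch, using the 4-vertex path and star as standard examples (among 4-vertex trees, the path attains the larger index):

```python
from collections import deque

def wiener_index(adj):
    """Sum of shortest-path distances over unordered vertex pairs.

    adj: dict mapping node -> list of neighbours (unweighted graph or tree).
    """
    total = 0
    for src in adj:
        # BFS distances from src
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2          # each unordered pair was counted twice

# Path P4 (0-1-2-3) has Wiener index 10; the star K_{1,3} has Wiener index 9.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```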
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced ...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find ...
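The First-Fit mechanics underlying First-Fit-Increasing and First-Fit-Decreasing can be sketched as follows; the item sizes are arbitrary example values, and in the maximum resource variant a larger bin count is the desired outcome (here the increasing order opens more bins than the decreasing order):

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first open bin with room; open a new bin otherwise."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-9:
                b.append(size)
                break
        else:
            bins.append([size])   # no open bin fits: open a new one
    return bins

items = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
ffi = first_fit(sorted(items))                 # First-Fit-Increasing: 4 bins
ffd = first_fit(sorted(items, reverse=True))   # First-Fit-Decreasing: 3 bins
```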
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions ...
Decoding OvTDM with sphere-decoding algorithm
Anonymous
2008-01-01
Overlapped time division multiplexing (OvTDM) is a new type of transmission scheme with high spectrum efficiency and low threshold signal-to-noise ratio (SNR). In this article, the structure of OvTDM is introduced and a complex-domain sphere-decoding algorithm is proposed for OvTDM. Simulations demonstrate that the proposed algorithm can achieve maximum likelihood (ML) decoding with lower complexity compared to traditional maximum likelihood sequence demodulation (MLSD) or the Viterbi algorithm (VA).
Envera Variable Compression Ratio Engine
Charles Mendler
2011-03-15
the compression ratio can be raised (to as much as 18:1), providing high engine efficiency. It is important to recognize that for a well-designed VCR engine, cylinder pressure does not need to be higher than found in current production turbocharged engines. As such, there is no need for a stronger crankcase, bearings, and other load-bearing parts within the VCR engine. The Envera VCR mechanism uses an eccentric carrier approach to adjust engine compression ratio. The crankshaft main bearings are mounted in this eccentric carrier or 'crankshaft cradle', and pivoting the eccentric carrier 30 degrees adjusts compression ratio from 9:1 to 18:1. The eccentric carrier is made up of a casting that provides rigid support for the main bearings, and removable upper bearing caps. Oil feed to the main bearings transits through the bearing cap fastener sockets. The eccentric carrier design was chosen for its low cost and rigid support of the main bearings. A control shaft and connecting links are used to pivot the eccentric carrier. The control shaft mechanism features compression ratio lock-up at minimum and maximum compression ratio settings. The control shaft method of pivoting the eccentric carrier was selected due to its lock-up capability. The control shaft can be rotated by a hydraulic actuator or an electric motor. The engine shown in Figures 3 and 4 has a hydraulic actuator that was developed under the current program. In-line 4-cylinder engines are significantly less expensive than V engines because an entire cylinder head can be eliminated. The cost savings from eliminating cylinders and an entire cylinder head will notably offset the added cost of the VCR and supercharging. Replacing V6 and V8 engines with in-line VCR 4-cylinder engines will provide high fuel economy at low cost. Numerous enabling technologies exist which have the potential to increase engine efficiency. The greatest efficiency gains are realized when the right combination of advanced and new
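The 9:1-to-18:1 range corresponds to a simple clearance-volume relation, CR = (Vd + Vc)/Vc, so raising the ratio shrinks the clearance volume the mechanism must leave at top dead center. A minimal sketch; the 500 cc per-cylinder displacement is a hypothetical figure, not Envera's, and the Envera mechanism moves the crankshaft cradle rather than the head, though the volume relation is the same:

```python
def clearance_volume(displacement_cc, compression_ratio):
    # From CR = (Vd + Vc) / Vc  =>  Vc = Vd / (CR - 1)
    return displacement_cc / (compression_ratio - 1.0)

vd = 500.0                               # hypothetical 500 cc per cylinder
vc_low = clearance_volume(vd, 9.0)       # 62.5 cc at 9:1
vc_high = clearance_volume(vd, 18.0)     # ~29.4 cc at 18:1
# Pivoting from 9:1 to 18:1 removes vc_low - vc_high ~ 33 cc of clearance volume.
```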
Dependence of maximum concentration from chemical accidents on release duration
Hanna, Steven; Chang, Joseph
2017-01-01
Chemical accidents often involve releases of a total mass, Q, of stored material in a tank over a time duration, td, of less than a few minutes. The value of td is usually uncertain because of lack of knowledge of key information, such as the size and location of the hole and the pressure and temperature of the chemical. In addition, it is rare that eyewitnesses or video cameras are present at the time of the accident. For inhalation hazards, serious health effects (such as damage to the respiratory system) are determined by short-term averages. Examples of pressurized liquefied chlorine releases from tanks are given, focusing on scenarios from the Jack Rabbit I (JR I) field experiment. The analytical calculations and the predictions of the SLAB dense gas dispersion model agree that the ratio of maximum C for two different td's is greatest (as much as a factor of ten) near the source. At large distances (beyond a few km for the JR I scenarios), where the travel time tt exceeds both td's, the ratio of maximum C approaches unity.
IDENTIFICATION OF IDEOTYPES BY CANONICAL ANALYSIS IN Panicum maximum
Janaina Azevedo Martuscello
2015-04-01
Grouping of genotypes by canonical variable analysis is an important tool in breeding. It allows the grouping of individuals with similar characteristics that are associated with superior agronomic performance and may indicate the ideal profile of a plant for the region. The objective of the present study was to define, by canonical analysis, the agronomic profile of Panicum maximum plants adapted to the Agreste region. The experiment was conducted in a completely randomized design with 28 treatments, 22 genotypes of Panicum maximum, and cultivars Mombasa, Tanzania, Massai, Milenio, BRS Zuri, and BRS Tamani in triplicate in 4-m² plots. Plots were harvested five times and the following traits were evaluated: plant height; total, leaf, stem, and dead dry matter yields; leaf:stem ratio; leaf percentage; and volumetric density of forage. The analysis of canonical variables was performed based on the phenotypic means of the evaluated traits and on the residual variance and covariance matrix. Genotype PM34 showed higher mean leaf dry matter yield under the conditions of the Agreste of Alagoas (on average 53% higher than cultivars Mombasa, Tanzania, Milenio, and Massai). It was possible to summarize the variation observed in eight agronomic characteristics in only two canonical variables accounting for 81.44% of the data variation. The ideotype plant adapted to the conditions of the Agreste should be tall and present high leaf yield, leaf percentage, and leaf:stem ratio, and intermediate values of volumetric density of forage.
Maximum tunneling velocities in symmetric double well potentials
Manz, Jörn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China); Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schild, Axel [Institut für Chemie und Biochemie, Freie Universität Berlin, Takustr. 3, 14195 Berlin (Germany); Schmidt, Burkhard, E-mail: burkhard.schmidt@fu-berlin.de [Institut für Mathematik, Freie Universität Berlin, Arnimallee 6, 14195 Berlin (Germany); Yang, Yonggang, E-mail: ygyang@sxu.edu.cn [State Key Laboratory of Quantum Optics and Quantum Optics Devices, Institute of Laser Spectroscopy, Shanxi University, 92, Wucheng Road, Taiyuan 030006 (China)
2014-10-17
Highlights: • Coherent tunneling in one-dimensional symmetric double well potentials. • Potentials for analytical estimates in the deep tunneling regime. • Maximum velocities scale as the square root of the ratio of barrier height and mass. • In chemical physics maximum tunneling velocities are in the order of a few km/s. - Abstract: We consider coherent tunneling of one-dimensional model systems in non-cyclic or cyclic symmetric double well potentials. Generic potentials are constructed which allow for analytical estimates of the quantum dynamics in the non-relativistic deep tunneling regime, in terms of the tunneling distance, barrier height and mass (or moment of inertia). For cyclic systems, the results may be scaled to agree well with periodic potentials for which semi-analytical results in terms of Mathieu functions exist. Starting from a wavepacket which is initially localized in one of the potential wells, the subsequent periodic tunneling is associated with tunneling velocities. These velocities (or angular velocities) are evaluated as the ratio of the flux densities versus the probability densities. The maximum velocities are found under the top of the barrier where they scale as the square root of the ratio of barrier height and mass (or moment of inertia), independent of the tunneling distance. They are applied exemplarily to several prototypical molecular models of non-cyclic and cyclic tunneling, including ammonia inversion, Cope rearrangement of semibullvalene, torsions of molecular fragments, and rotational tunneling in strong laser fields. Typical maximum velocities and angular velocities are in the order of a few km/s and from 10 to 100 THz for our non-cyclic and cyclic systems, respectively, much faster than time-averaged velocities. Even for the more extreme case of an electron tunneling through a barrier of height of one Hartree, the velocity is only about one percent of the speed of light. Estimates of the corresponding time scales for
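The square-root scaling of the maximum velocity can be checked against the electron-through-one-Hartree example quoted in the abstract. The factor of 2 in the sketch is a classical-style convention for the prefactor, an assumption on our part rather than a claim about the paper's exact coefficient:

```python
import math

HARTREE_J = 4.35974e-18        # one Hartree in joules
M_ELECTRON_KG = 9.10938e-31    # electron mass
C_M_S = 2.99792458e8           # speed of light

def tunneling_velocity_scale(barrier_height_j, mass_kg):
    # Scaling from the abstract: maximum velocity ~ sqrt(barrier height / mass).
    # The factor of 2 is a classical-style convention, not from the paper.
    return math.sqrt(2.0 * barrier_height_j / mass_kg)

v = tunneling_velocity_scale(HARTREE_J, M_ELECTRON_KG)
fraction_of_c = v / C_M_S
# v comes out near 3e6 m/s, i.e. about one percent of the speed of light,
# consistent with the estimate quoted in the abstract.
```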
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order or the number of faces of G, respectively. Polyhedral graphs that attain these bounds are constructed.
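The quantity being bounded can be made concrete on a tiny example. The brute force below is my own sketch (the paper derives analytical bounds, not an algorithm): it enumerates simple cycles of the tetrahedron graph K4, the smallest polyhedral graph, and finds the largest edge-disjoint cycle packing, which for K4 has cardinality 1 since any two cycles share an edge.

```python
from itertools import combinations

def simple_cycles(edges, n):
    """Enumerate simple cycles (each as a frozenset of 2-vertex frozensets)."""
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    cycles = set()

    def dfs(path):
        for nxt in adj[path[-1]]:
            if nxt == path[0] and len(path) >= 3:
                cycles.add(frozenset(frozenset(e) for e in zip(path, path[1:] + [path[0]])))
            elif nxt not in path:
                dfs(path + [nxt])

    for v in range(n):
        dfs([v])
    return list(cycles)

def max_edge_disjoint_packing(edges, n):
    """Cardinality of a maximum edge-disjoint cycle packing, by exhaustion."""
    cycs = simple_cycles(edges, n)
    for k in range(len(cycs), 0, -1):
        for combo in combinations(cycs, k):
            used = [e for c in combo for e in c]
            if len(used) == len(set(used)):  # no edge reused
                return k
    return 0

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(max_edge_disjoint_packing(K4, 4))  # → 1
```

Exhaustive search like this is only feasible for toy graphs; the point of bounds such as the paper's is precisely to avoid it.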
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
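For readers unfamiliar with the problem the abstract refers to, a naive exponential-time formulation is easy to state. This is my own generic sketch, not the paper's reduction technique:

```python
from itertools import combinations

def max_clique_size(vertices, edges):
    """Size of a largest clique, by checking all vertex subsets (exponential)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(len(list(vertices)), 0, -1):
        for subset in combinations(vertices, k):
            if all(frozenset(p) in edge_set for p in combinations(subset, 2)):
                return k
    return 0

# A 5-cycle with one chord (0-2): the maximum clique is the triangle {0, 1, 2}.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
print(max_clique_size(list(range(5)), edges))  # → 3
```

The exponential subset enumeration is exactly what reduction techniques of the kind the paper studies try to sidestep, by shrinking the instance in polynomial time first.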
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
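Maximum-likelihood estimation of a distribution's parameters, the general technique the abstract builds on, can be shown in miniature. This is a generic sketch, not the authors' search-cost model: it fits the rate of an exponential distribution by maximizing the log-likelihood over a grid, and compares against the known closed-form MLE (one over the sample mean) as a check.

```python
import math
import random

random.seed(0)
true_rate = 2.0
data = [random.expovariate(true_rate) for _ in range(10_000)]

def log_likelihood(rate, xs):
    """Log-likelihood of i.i.d. exponential data: sum of log(r) - r*x."""
    return sum(math.log(rate) - rate * x for x in xs)

grid = [0.01 * i for i in range(1, 501)]  # candidate rates 0.01 .. 5.00
mle = max(grid, key=lambda r: log_likelihood(r, data))
closed_form = 1.0 / (sum(data) / len(data))

print(f"grid MLE ≈ {mle:.2f}, closed form ≈ {closed_form:.2f}")
```

In structural models like the paper's there is no closed form, so the likelihood is maximized numerically, which is what the grid search stands in for here.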
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}=\mathcal{O}(300\ \text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage S_rad becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
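The quoted relation can be checked at the order-of-magnitude level. The inputs below are my own assumed values (T_BBN ≈ 1 MeV, the non-reduced Planck mass, and y_e inferred from the electron mass and the observed vev); the paper fixes these quantities in its own way, so this is only a plausibility sketch of the formula, which indeed lands at O(300 GeV).

```python
import math

M_PL = 1.22e19    # assumed non-reduced Planck mass, in GeV
T_BBN = 1e-3      # assumed BBN temperature of ~1 MeV, in GeV
V_OBS = 246.0     # observed Higgs vacuum expectation value, in GeV
M_E = 0.511e-3    # electron mass, in GeV

# Electron Yukawa from m_e = y_e * v / sqrt(2)
y_e = math.sqrt(2) * M_E / V_OBS

# The abstract's relation: v_h ~ T_BBN^2 / (M_pl * y_e^5)
v_h = T_BBN**2 / (M_PL * y_e**5)
print(f"y_e ≈ {y_e:.2e}, predicted v_h ≈ {v_h:.0f} GeV")  # O(300 GeV)
```

The strong y_e^5 dependence means the estimate is sensitive to the exact inputs, so agreement at the order-of-magnitude level is all that should be read into it.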
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i