WorldWideScience

Sample records for amplitude-based estimation method

  1. Correlation-Based Amplitude Estimation of Coincident Partials in Monaural Musical Signals

    Directory of Open Access Journals (Sweden)

    Jayme Garcia Arnal Barbedo

    2010-01-01

    Full Text Available This paper presents a method for estimating the amplitude of coincident partials generated by harmonic musical sources (instruments and vocals). It was developed as an alternative to the commonly used interpolation approach, which has several limitations in terms of performance and applicability. The strategy is based on the following observations: (a) the parameters of partials vary with time; (b) such variation tends to be correlated when the partials belong to the same source; (c) the presence of an interfering coincident partial reduces the correlation; and (d) such a reduction is proportional to the relative amplitude of the interfering partial. Besides the improved accuracy, the proposed technique has other advantages over its predecessors: it works properly even if the sources have the same fundamental frequency; it is able to estimate the first partial (fundamental), which is not possible using the conventional interpolation method; it can estimate the amplitude of a given partial even if its neighbors suffer intense interference from other sources; it works properly under noisy conditions; and it is immune to intraframe permutation errors. Experimental results show that the strategy clearly outperforms the interpolation approach.
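Observations (b)-(d) above can be illustrated with a toy numerical sketch. Everything below is a synthetic stand-in (illustrative envelopes, modulation rates, and noise levels), not the paper's estimator: amplitude trajectories of partials from one source correlate strongly, and an uncorrelated coincident partial lowers that correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = np.arange(200)

# Amplitude envelopes of two partials from the same source share a
# common modulation (observation (b)), so they are strongly correlated.
common = 1.0 + 0.3 * np.sin(2 * np.pi * frames / 50)
p1 = common * (1.0 + 0.02 * rng.standard_normal(frames.size))
p2 = 0.6 * common * (1.0 + 0.02 * rng.standard_normal(frames.size))

def corr(a, b):
    """Pearson correlation between two amplitude trajectories."""
    return np.corrcoef(a, b)[0, 1]

r_clean = corr(p1, p2)

# An interfering coincident partial from another source adds an
# uncorrelated component, lowering the correlation (observations (c)
# and (d)): the drop grows with the interferer's relative amplitude.
interferer = 0.6 * (1.0 + 0.3 * np.sin(2 * np.pi * frames / 17))
r_coincident = corr(p1, p2 + interferer)

print(r_clean, r_coincident)
```

The drop from `r_clean` to `r_coincident` is the cue the paper exploits to detect and quantify interference.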

  2. A General Method to Estimate Earthquake Moment and Magnitude using Regional Phase Amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Pasyanos, M E

    2009-11-19

    This paper presents a general method of estimating earthquake magnitude using regional phase amplitudes, called regional M0 or regional Mw. Conceptually, this method uses an earthquake source model along with an attenuation model and geometrical spreading, which account for the propagation, to utilize regional phase amplitudes of any phase and frequency. Amplitudes are corrected to yield a source term from which one can estimate the seismic moment. Moment magnitudes can then be reliably determined with sets of observed phase amplitudes rather than predetermined ones, and afterwards averaged to robustly determine this parameter. We first examine several events in detail to demonstrate the methodology. We then look at various ensembles of phases and frequencies, and compare results to existing regional methods. We find regional M0 to be a stable estimator of earthquake size that has several advantages over other methods. Because of its versatility, it is applicable to many more events, particularly smaller events. We make moment estimates for earthquakes ranging from magnitude 2 to as large as 7. Even with diverse input amplitude sources, we find magnitude estimates to be more robust than typical magnitudes and existing regional methods, and they might be tuned further to improve upon them. The method yields a more meaningful quantity, the seismic moment, which can be recast as Mw. Lastly, it is applied here to the Middle East region using an existing calibration model, but it would be easy to transport to any region with suitable attenuation calibration.
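The final recasting of seismic moment as Mw follows the standard moment-magnitude relation (IASPEI convention, M0 in N·m). The sketch below shows only that last step; the preceding amplitude corrections for spreading and attenuation are the paper's calibrated models and are not reproduced here.

```python
import math

# Standard moment-magnitude relation (IASPEI convention), M0 in N*m.
# In the paper's workflow this is the last step, applied after regional
# phase amplitudes are corrected for geometrical spreading and
# attenuation and averaged into a seismic-moment estimate.
def moment_to_mw(m0):
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# A moment of 4e16 N*m corresponds to roughly magnitude 5:
mw = moment_to_mw(4.0e16)
print(round(mw, 2))
```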

  3. A fast and reliable method for simultaneous waveform, amplitude and latency estimation of single-trial EEG/MEG data.

    Directory of Open Access Journals (Sweden)

    Wouter D Weeda

    Full Text Available The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a-priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.
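A much-reduced illustration of the core idea: for a single known basis waveform, amplitude and latency can be estimated by scanning candidate shifts and solving linear least squares at each one. The Gaussian basis, noise level, and true values below are hypothetical; the authors' method goes further (multiple data-derived basis functions, iterated jointly), which this toy does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-signal toy: estimate amplitude and latency of a known waveform
# by scanning shifts and solving least squares at each candidate shift.
n = 300
base = np.exp(-0.5 * ((np.arange(n) - 150) / 12.0) ** 2)  # Gaussian basis

true_amp, true_shift = 2.0, 17
trial = true_amp * np.roll(base, true_shift) + 0.05 * rng.standard_normal(n)

best = None
for shift in range(-40, 41):
    b = np.roll(base, shift)
    amp = (b @ trial) / (b @ b)          # LS amplitude for this latency
    resid = np.sum((trial - amp * b) ** 2)
    if best is None or resid < best[0]:
        best = (resid, shift, amp)

_, shift_hat, amp_hat = best
print(shift_hat, round(amp_hat, 2))
```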

  4. An Alternative Method for Tilecal Signal Detection and Amplitude Estimation

    CERN Document Server

    Sotto-Maior Peralva, B; The ATLAS collaboration; Manhães de Andrade Filho, L; Manoel de Seixas, J

    2011-01-01

    The Barrel Hadronic calorimeter of ATLAS (Tilecal) is a detector used in the reconstruction of hadrons, jets, muons and missing transverse energy from the proton-proton collisions at the Large Hadron Collider (LHC). It comprises 10,000 channels in four readout partitions, and each calorimeter cell is made of two readout channels for redundancy. The energy deposited by the particles produced in the collisions is read out by several readout channels, and its value is estimated by an optimal filtering algorithm, which reconstructs the amplitude and the time of the digitized signal pulse sampled every 25 ns. This work deals with signal detection and amplitude estimation for the Tilecal under low signal-to-noise ratio (SNR) conditions. It explores the applicability (at the cell level) of a Matched Filter (MF), which is known to be the optimal signal detector in terms of the SNR. Moreover, it investigates the impact of signal detection when summing both signals from the same cell before estimating the amplitude, ...
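For a known pulse shape s and noise covariance C, the matched-filter amplitude estimate is â = sᵀC⁻¹x / (sᵀC⁻¹s). A minimal sketch under a white-noise assumption; the 7-sample pulse values are illustrative stand-ins, not the actual Tilecal pulse shape or measured noise covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 7-sample pulse shape (samples every 25 ns in Tilecal);
# white noise is assumed here, so C is the identity.
s = np.array([0.0, 0.1, 0.6, 1.0, 0.7, 0.3, 0.1])
cov = np.eye(s.size)

def matched_filter_amplitude(x, s, cov):
    """MF estimate a_hat = s^T C^-1 x / (s^T C^-1 s)."""
    w = np.linalg.solve(cov, s)
    return (w @ x) / (w @ s)

true_a = 50.0
x = true_a * s + 0.1 * rng.standard_normal(s.size)  # noisy observation
a_hat = matched_filter_amplitude(x, s, cov)
print(round(a_hat, 1))
```

With a measured (non-white) noise covariance, the same expression applies unchanged; only `cov` differs.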

  5. Analytical estimations of limit cycle amplitude for delay-differential equations

    Directory of Open Access Journals (Sweden)

    Tamás Molnár

    2016-09-01

    Full Text Available The amplitude of limit cycles arising from Hopf bifurcation is estimated for nonlinear delay-differential equations by means of analytical formulas. An improved analytical estimation is introduced, which allows more accurate quantitative prediction of periodic solutions than the standard approach that formulates the amplitude as a square-root function of the bifurcation parameter. The improved estimation is based on special global properties of the system: the method can be applied if the limit cycle blows up and disappears at a certain value of the bifurcation parameter. As an illustrative example, the improved analytical formula is applied to the problem of stick balancing.
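The "square-root" estimate the abstract contrasts against can be illustrated on the Hopf normal form, used here as a simple ODE stand-in rather than the paper's delay-differential setting: for r' = μr − ar³ with a > 0, the limit-cycle amplitude is r* = √(μ/a).

```python
import numpy as np

# Hopf normal form (radial part): r' = mu*r - a*r**3 with a > 0.
# The standard square-root amplitude estimate is r* = sqrt(mu / a);
# forward-Euler integration converges to the same value.
mu, a = 0.2, 1.0
r_analytic = np.sqrt(mu / a)

r, dt = 0.5, 1e-3
for _ in range(200_000):              # integrate to t = 200
    r += dt * (mu * r - a * r**3)

print(round(r, 4), round(r_analytic, 4))
```

The paper's improved formula corrects this square-root scaling using global properties of the system (the blow-up of the limit cycle), which a local normal-form sketch like this cannot capture.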

  6. Fringe image analysis based on the amplitude modulation method.

    Science.gov (United States)

    Gai, Shaoyan; Da, Feipeng

    2010-05-10

    A novel phase-analysis method is proposed. To obtain the fringe order of a fringe image, an amplitude-modulation fringe pattern is employed, combined with the phase-shift method. The primary phase value is obtained by a phase-shift algorithm, and the fringe-order information is encoded in the amplitude-modulation fringe pattern. Different from other methods, the amplitude-modulation fringe identifies the fringe order by the amplitude of the fringe pattern. In an amplitude-modulation fringe pattern, each fringe has its own amplitude; thus, the order information is integrated in one fringe pattern, and the absolute fringe phase can be calculated correctly and quickly with the amplitude-modulation fringe image. The detailed algorithm is given, and the error analysis of this method is also discussed. Experimental results are presented by a full-field shape measurement system where the data has been processed using the proposed algorithm. (c) 2010 Optical Society of America.
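The decoding step can be sketched as nearest-level quantization of the measured fringe amplitude, after which the absolute phase is the wrapped phase plus 2πk for fringe order k. The amplitude levels below are illustrative, not the paper's design.

```python
import numpy as np

# Each fringe carries a distinct amplitude (illustrative levels), so
# quantizing the measured amplitude yields the fringe order k, and the
# absolute phase is phi_abs = phi_wrapped + 2*pi*k.
amplitude_levels = np.array([0.2, 0.4, 0.6, 0.8, 1.0])  # one per fringe

def fringe_order(measured_amplitude):
    """Nearest-level quantization of the measured fringe amplitude."""
    return int(np.argmin(np.abs(amplitude_levels - measured_amplitude)))

def absolute_phase(phi_wrapped, measured_amplitude):
    return phi_wrapped + 2.0 * np.pi * fringe_order(measured_amplitude)

phi = absolute_phase(1.3, 0.58)   # amplitude nearest 0.6 -> order 2
print(round(phi, 3))
```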

  7. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.

  8. Study on modulation amplitude stabilization method for PEM based on FPGA in atomic magnetometer

    Science.gov (United States)

    Wang, Qinghua; Quan, Wei; Duan, Lihong

    2017-10-01

    Atomic magnetometers, which use atoms as sensitive elements, have ultra-high precision and wide applications in scientific research. The photoelastic modulation method based on a photoelastic modulator (PEM) is used in the atomic magnetometer to detect the small optical rotation angle of a linearly polarized light. However, the modulation amplitude of the PEM drifts due to environmental factors, which reduces the precision and long-term stability of the atomic magnetometer. Consequently, stabilizing the PEM's modulation amplitude is essential for precision measurement. In this paper, a modulation amplitude stabilization method for the PEM based on a Field Programmable Gate Array (FPGA) is proposed. The designed control system contains an optical setup and an electrical part. The optical setup is used to measure the PEM's modulation amplitude. The FPGA chip, with the PID control algorithm implemented in it, is used as the electrical part's microcontroller. The closed-loop control method based on the photoelastic modulation detection system can directly measure the PEM's modulation amplitude in real time, without additional optical devices. In addition, the operating speed of the modulation amplitude stabilization control system is greatly improved by the FPGA's parallel computing, and the PID control algorithm provides the flexibility to meet different set values of the PEM's modulation amplitude. The Modelsim simulation results show the correctness of the PID control algorithm, and the long-term stability of the PEM's modulation amplitude reaches 0.35% in a 3-hour continuous measurement.
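The control law itself is a conventional discrete PID loop. A minimal software sketch (illustrative gains, time step, and a first-order toy plant, not the FPGA implementation) shows the modulation amplitude being driven to its set value:

```python
# Minimal discrete PID loop of the kind used to hold the PEM's
# modulation amplitude at a set value. All gains and the plant
# model are illustrative assumptions.
def pid_step(error, state, kp=0.8, ki=0.4, kd=0.05, dt=0.01):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    control = kp * error + ki * integral + kd * derivative
    return control, (integral, error)

setpoint, amplitude = 1.0, 0.0
state = (0.0, 0.0)
for _ in range(5000):
    control, state = pid_step(setpoint - amplitude, state)
    amplitude += 0.01 * (control - amplitude)  # first-order toy plant

print(abs(setpoint - amplitude) < 0.01)
```

On the FPGA, the same difference equations run in fixed-point arithmetic every control cycle; the integral term is what removes the steady-state drift.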

  9. Statistical amplitude scale estimation for quantization-based watermarking

    NARCIS (Netherlands)

    Shterev, I.D.; Lagendijk, I.L.; Heusdens, R.

    2004-01-01

    Quantization-based watermarking schemes are vulnerable to amplitude scaling. Therefore the scaling factor has to be accounted for either at the encoder, or at the decoder, prior to watermark decoding. In this paper we derive the marginal probability density model for the watermarked and attacked

  10. Amplitude Models for Discrimination and Yield Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, William Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-01

    This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.

  11. Tsunami Amplitude Estimation from Real-Time GNSS.

    Science.gov (United States)

    Jeffries, C.; MacInnes, B. T.; Melbourne, T. I.

    2017-12-01

    Tsunami early warning systems currently comprise modeling of observations from the global seismic network, deep-ocean DART buoys, and a global distribution of tide gauges. While these tools work well for tsunamis traveling teleseismic distances, saturation of seismic magnitude estimation in the near field can result in significant underestimation of tsunami excitation for local warning. Moreover, DART buoy and tide gauge observations cannot be used to rectify the underestimation in the available time, typically 10-20 minutes, before local runup occurs. Real-time GNSS measurements of coseismic offsets may be used to estimate finite faulting within 1-2 minutes and, in turn, tsunami excitation for local warning purposes. We describe here a tsunami amplitude estimation algorithm, implemented for the Cascadia subduction zone, that uses continuous GNSS position streams to estimate finite faulting. The system is based on a time-domain convolution of fault slip that uses a pre-computed catalog of hydrodynamic Green's functions generated with the GeoClaw shallow-water wave simulation software. It maps seismic slip along each section of the fault to points located off the Cascadia coast in 20 m of water depth and relies on the principle of linearity of tsunami wave propagation. The system draws continuous slip estimates from a message broker and convolves the slip with the appropriate Green's functions, which are then superimposed to produce the wave amplitude at each coastal location. The maximum amplitude and its arrival time are then passed into a database for subsequent monitoring and display. We plan on testing this system using a suite of synthetic earthquakes calculated for Cascadia whose ground motions are simulated at 500 existing Cascadia GPS sites, as well as real earthquakes for which we have continuous GNSS time series and surveyed runup heights, including Maule, Chile 2010 and Tohoku, Japan 2011.
This system has been implemented in the CWU Geodesy Lab for the Cascadia
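The convolution-and-superposition step described above can be sketched with toy arrays. The slip histories and hydrodynamic Green's functions below are illustrative stand-ins, not GeoClaw output; only the structure (convolve per fault section, then sum by linearity, then take the peak and its time) mirrors the algorithm.

```python
import numpy as np

# Superposition step: the response at one coastal point is the sum over
# fault sections of slip convolved with that section's precomputed
# hydrodynamic Green's function (linearity of tsunami propagation).
dt = 1.0                                      # sample interval, s (toy)
greens = [np.array([0.0, 0.5, 1.0, 0.5, 0.1]),
          np.array([0.0, 0.2, 0.8, 0.9, 0.3])]
slip = [np.array([1.0, 1.0, 0.0]),
        np.array([0.5, 0.5, 0.0])]

wave = sum(np.convolve(s, g) for s, g in zip(slip, greens))
peak_amplitude = wave.max()
peak_time = wave.argmax() * dt                # for monitoring/display
print(peak_amplitude, peak_time)
```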

  12. Estimation of Multiple Pitches in Stereophonic Mixtures using a Codebook-based Approach

    DEFF Research Database (Denmark)

    Hansen, Martin Weiss; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2017-01-01

    In this paper, a method for multi-pitch estimation of stereophonic mixtures of multiple harmonic signals is presented. The method is based on a signal model which takes the amplitude and delay panning parameters of the sources in a stereophonic mixture into account. Furthermore, the method is based...... on the extended invariance principle (EXIP), and a codebook of realistic amplitude vectors. For each fundamental frequency candidate in each of the sources, the amplitude estimates are mapped to entries in the codebook, and the pitch and model order are estimated jointly. The performance of the proposed method...

  13. Phase-Inductance-Based Position Estimation Method for Interior Permanent Magnet Synchronous Motors

    Directory of Open Access Journals (Sweden)

    Xin Qiu

    2017-12-01

    Full Text Available This paper presents a phase-inductance-based position estimation method for interior permanent magnet synchronous motors (IPMSMs). According to the characteristics of the phase inductance of IPMSMs, the corresponding relationship between the rotor position and the phase inductance is obtained. In order to eliminate the effect of the zero-sequence component of the phase inductance and reduce the rotor position estimation error, the phase inductance difference is employed. With the iterative computation of inductance vectors, the position plane is further subdivided, and the rotor position is extracted by comparing the amplitudes of the inductance vectors. To decrease the consumption of computing resources and increase practicability, a simplified implementation is also investigated. In this method, the rotor position information is obtained easily, with several basic math operations and logical comparisons of phase inductances, without any coordinate transformation or trigonometric function calculation. Based on this position estimation method, the field-oriented control (FOC) strategy is established, and the detailed implementation is also provided. A series of experimental results from a prototype demonstrate the correctness and feasibility of the proposed method.

  14. A Kalman-based Fundamental Frequency Estimation Algorithm

    DEFF Research Database (Denmark)

    Shi, Liming; Nielsen, Jesper Kjær; Jensen, Jesper Rindom

    2017-01-01

    Fundamental frequency estimation is an important task in speech and audio analysis. Harmonic model-based methods typically have superior estimation accuracy. However, such methods usually assume that the fundamental frequency and amplitudes are stationary over a short time frame. In this pape...

  15. Dictionary-Based Stochastic Expectation–Maximization for SAR Amplitude Probability Density Function Estimation

    OpenAIRE

    Moser , Gabriele; Zerubia , Josiane; Serpico , Sebastiano B.

    2006-01-01

    International audience; In remotely sensed data analysis, a crucial problem is represented by the need to develop accurate models for the statistics of the pixel intensities. This paper deals with the problem of probability density function (pdf) estimation in the context of synthetic aperture radar (SAR) amplitude data analysis. Several theoretical and heuristic models for the pdfs of SAR data have been proposed in the literature, which have been proved to be effective for different land-cov...

  16. Ray Tracing for Dispersive Tsunamis and Source Amplitude Estimation Based on Green's Law: Application to the 2015 Volcanic Tsunami Earthquake Near Torishima, South of Japan

    Science.gov (United States)

    Sandanbata, Osamu; Watada, Shingo; Satake, Kenji; Fukao, Yoshio; Sugioka, Hiroko; Ito, Aki; Shiobara, Hajime

    2018-04-01

    Ray tracing, which has been widely used for seismic waves, has also been applied to tsunamis to examine bathymetry effects during propagation, but it was limited to linear shallow-water waves. Green's law, which is based on the conservation of energy flux, has been used to estimate tsunami amplitude along ray paths. In this study, we first propose a new ray tracing method extended to dispersive tsunamis. By using an iterative algorithm to map two-dimensional tsunami velocity fields at different frequencies, ray paths at each frequency can be traced. We then show that Green's law is valid only outside the source region and that an extension of Green's law is needed for source amplitude estimation. As an application example, we analyzed tsunami waves generated by an earthquake that occurred at a submarine volcano, Smith Caldera, near Torishima, Japan, in 2015. The ray-tracing results reveal that the ray paths are strongly dependent on frequency, particularly in deep oceans. The validity of our frequency-dependent ray tracing is confirmed by comparing arrival angles and travel times with those of observed tsunami waveforms at an array of ocean bottom pressure gauges. The tsunami amplitude at the source is nearly twice, or more, that estimated just outside the source from the array tsunami data by Green's law.
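Green's law itself follows directly from conservation of energy flux: A2 = A1 (h1/h2)^(1/4). A one-line sketch with illustrative depths (the 4000 m and 20 m values are examples, not the study's bathymetry):

```python
# Green's law: shallow-water wave amplitudes at two depths are related
# by conservation of energy flux, A2 = A1 * (h1 / h2) ** 0.25.
# As the paper shows, this holds only outside the source region.
def greens_law_amplitude(a1, h1, h2):
    return a1 * (h1 / h2) ** 0.25

# Illustrative: a 0.1 m wave at 4000 m depth shoaling to 20 m depth.
a_coast = greens_law_amplitude(0.1, 4000.0, 20.0)
print(round(a_coast, 3))
```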

  17. Estimation of inhalation flow profile using audio-based methods to assess inhaler medication adherence

    Science.gov (United States)

    Lacalle Muls, Helena; Costello, Richard W.; Reilly, Richard B.

    2018-01-01

    Asthma and chronic obstructive pulmonary disease (COPD) patients are required to inhale forcefully and deeply to receive medication when using a dry powder inhaler (DPI). There is a clinical need to objectively monitor the inhalation flow profile of DPIs in order to remotely monitor patient inhalation technique. Audio-based methods have been previously employed to accurately estimate flow parameters such as the peak inspiratory flow rate of inhalations; however, these methods required multiple calibration inhalation audio recordings. In this study, an audio-based method is presented that accurately estimates the inhalation flow profile using only one calibration inhalation audio recording. Twenty healthy participants were asked to perform 15 inhalations through a placebo Ellipta™ DPI at a range of inspiratory flow rates. Inhalation flow signals were recorded using a pneumotachograph spirometer while inhalation audio signals were recorded simultaneously using the Inhaler Compliance Assessment device attached to the inhaler. The acoustic (amplitude) envelope was estimated from each inhalation audio signal. Using only one recording, linear and power law regression models were employed to determine which model best described the relationship between the inhalation acoustic envelope and the flow signal. Each model was then employed to estimate the flow signals of the remaining 14 inhalation audio recordings. This process was repeated until each of the 15 recordings had been employed to calibrate single models while testing on the remaining 14 recordings. It was observed that power law models generated the highest average flow estimation accuracy across all participants (90.89±0.9% for power law models and 76.63±2.38% for linear models). The method also generated sufficient accuracy in estimating inhalation parameters such as peak inspiratory flow rate and inspiratory capacity within the presence of noise. Estimating inhaler inhalation flow profiles using audio-based methods may be
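The power-law calibration reduces to a straight-line fit in log-log space: for Q = a·env^b, log Q = log a + b·log env. The sketch below uses synthetic envelope/flow data with illustrative coefficients; the actual study fit recorded inhalation audio against spirometer flow.

```python
import numpy as np

rng = np.random.default_rng(2)

# Power-law calibration between acoustic envelope and flow, Q = a*env**b:
# fit a line in log-log space from one calibration recording, then
# invert the model on new envelopes. Data here are synthetic.
true_a, true_b = 120.0, 0.55
env = np.linspace(0.1, 1.0, 50)
flow = true_a * env ** true_b * (1 + 0.01 * rng.standard_normal(env.size))

b_hat, log_a_hat = np.polyfit(np.log(env), np.log(flow), 1)
a_hat = np.exp(log_a_hat)
print(round(a_hat, 1), round(b_hat, 2))
```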

  18. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    Science.gov (United States)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows for imaging the response parameter content as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure, which are used to estimate the response function based on the HHT time-frequency spectrum, are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting estimates minimise the bias caused by the non-stationary characteristics of the MT data.

  19. Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    The problem of vibration rejection in adaptive optics systems is still present in the literature. These undesirable signals emerge from shaking of the system structure, the tracking process, etc., and they are usually damped sinusoidal signals. There are mechanical solutions to reduce such signals, but they are not very effective. Among software solutions, adaptive methods are very popular. An AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibration parameters: the values of frequency, amplitude and phase are essential to produce and adjust a proper signal to reduce or eliminate the vibration signals. This paper presents a fast (below 10 ms) and accurate method for estimating the frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase AO system performance. The method's accuracy depends on several parameters: CiR, the number of signal periods in the measurement window; N, the number of samples in the FFT procedure; H, the time window order; SNR; THD; b, the number of A/D converter bits in a real-time system; γ, the damping ratio of the tested signal; and φ, the phase of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The value of the systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on the systematic errors and on the effect of the signal phase and of the value of γ on the results.
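The three parameters the AVC method needs can, in the simplest case, be read off a single DFT bin. The sketch below assumes an undamped tone with an integer number of cycles in the window (CiR integer), so the estimates are exact; real damped vibrations and fractional CiR are precisely what make the paper's interpolation and windowing machinery necessary.

```python
import numpy as np

# Single-bin DFT estimation of a sinusoid's frequency, amplitude, and
# phase. With an integer number of cycles in the window, the peak bin
# gives X[k] = (N/2) * a * exp(j*phi) for x = a*cos(2*pi*f*t + phi).
N, fs = 64, 640.0
a_true, f_true, phi_true = 1.5, 50.0, 0.7   # 5 cycles in the window
t = np.arange(N) / fs
x = a_true * np.cos(2 * np.pi * f_true * t + phi_true)

X = np.fft.rfft(x)
k = int(np.argmax(np.abs(X[1:])) + 1)       # skip the DC bin
f_hat = k * fs / N
a_hat = 2.0 * np.abs(X[k]) / N
phi_hat = np.angle(X[k])
print(round(f_hat, 1), round(a_hat, 3), round(phi_hat, 3))
```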

  20. Multi-Pitch Estimation of Audio Recordings Using a Codebook-Based Approach

    DEFF Research Database (Denmark)

    Hansen, Martin Weiss; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    ), and a codebook consisting of realistic amplitude vectors. A nonlinear least squares (NLS) cost function is formed based on the observed signal and a parametric model of the signal, for a set of fundamental frequency candidates. For each of these, amplitude estimates are computed. The magnitudes...... of these estimates are quantized according to a codebook, and an updated cost function is used to estimate the fundamental frequencies of the sources. The performance of the proposed estimator is evaluated using synthetic and real mixtures, and the results show that the proposed method is able to estimate multiple...

  1. Digital double random amplitude image encryption method based on the symmetry property of the parametric discrete Fourier transform

    Science.gov (United States)

    Bekkouche, Toufik; Bouguezel, Saad

    2018-03-01

    We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.

  2. Spectrum estimation method based on marginal spectrum

    International Nuclear Information System (INIS)

    Cai Jianhua; Hu Weiwen; Wang Xianchun

    2011-01-01

    The FFT method cannot meet the basic requirements of power spectrum estimation for non-stationary and short signals. A new spectrum estimation method based on the marginal spectrum from the Hilbert-Huang transform (HHT) was proposed. The procedure for obtaining the marginal spectrum in the HHT method was given and the linear property of the marginal spectrum was demonstrated. Compared with the FFT method, the physical meaning and the frequency resolution of the marginal spectrum were further analyzed. Then the Hilbert spectrum estimation algorithm was discussed in detail, and simulation results were given at last. The theory and simulations show that, for short and non-stationary signals, the frequency resolution and estimation precision of the HHT method are better than those of the FFT method. (authors)
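A minimal numpy-only sketch of a marginal spectrum: compute the analytic signal, then accumulate instantaneous amplitude into instantaneous-frequency bins. A single clean tone stands in for one IMF here (an assumption; the full HHT first requires empirical mode decomposition, which this sketch omits).

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (Hilbert transform); N assumed even."""
    N = x.size
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:N // 2] = 2.0
    h[N // 2] = 1.0
    return np.fft.ifft(X * h)

# Marginal spectrum sketch: weight instantaneous-frequency samples by
# instantaneous amplitude. A 65 Hz tone stands in for a single IMF.
fs, N = 1000.0, 2048
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 65.0 * t)

z = analytic_signal(x)
amp = np.abs(z)
inst_freq = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)

bins = np.arange(0, 501, 10.0)
marginal, _ = np.histogram(inst_freq, bins=bins, weights=amp[:-1])
peak_bin = bins[np.argmax(marginal)]
print(peak_bin)   # lower edge of the bin containing 65 Hz
```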

  3. Contributions of contour frequency, amplitude, and luminance to the watercolor effect estimated by conjoint measurement.

    Science.gov (United States)

    Gerardin, Peggy; Devinck, Frédéric; Dojat, Michel; Knoblauch, Kenneth

    2014-04-10

    The watercolor effect is a long-range, assimilative, filling-in phenomenon induced by a pair of distant, wavy contours of different chromaticities. Here, we measured joint influences of the contour frequency and amplitude and the luminance of the interior contour on the strength of the effect. Contour pairs, each enclosing a circular region, were presented with two of the dimensions varying independently across trials (luminance/frequency, luminance/amplitude, frequency/amplitude) in a conjoint measurement paradigm (Luce & Tukey, 1964). In each trial, observers judged which of the stimuli evoked the strongest fill-in color. Control stimuli were identical except that the contours were intertwined and generated little filling-in. Perceptual scales were estimated by a maximum likelihood method (Ho, Landy, & Maloney, 2008). An additive model accounted for the joint contributions of any pair of dimensions. As shown previously using difference scaling (Devinck & Knoblauch, 2012), the strength increases with luminance of the interior contour. The strength of the phenomenon was nearly independent of the amplitude of modulation of the contour but increased with its frequency up to an asymptotic level. On average, the strength of the effect was similar along a given dimension regardless of the other dimension with which it was paired, demonstrating consistency of the underlying estimated perceptual scales.

  4. A Channelization-Based DOA Estimation Method for Wideband Signals

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2016-07-01

    Full Text Available In this paper, we propose a novel direction of arrival (DOA estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR using direct wideband radio frequency (RF digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method.

  5. Taking into account latency, amplitude, and morphology: improved estimation of single-trial ERPs by wavelet filtering and multiple linear regression.

    Science.gov (United States)

    Hu, L; Liang, M; Mouraux, A; Wise, R G; Hu, Y; Iannetti, G D

    2011-12-01

    Across-trial averaging is a widely used approach to enhance the signal-to-noise ratio (SNR) of event-related potentials (ERPs). However, across-trial variability of ERP latency and amplitude may contain physiologically relevant information that is lost by across-trial averaging. Hence, we aimed to develop a novel method that uses 1) wavelet filtering (WF) to enhance the SNR of ERPs and 2) a multiple linear regression with a dispersion term (MLR(d)) that takes into account shape distortions to estimate the single-trial latency and amplitude of ERP peaks. Using simulated ERP data sets containing different levels of noise, we provide evidence that, compared with other approaches, the proposed WF+MLR(d) method yields the most accurate estimate of single-trial ERP features. When applied to a real laser-evoked potential data set, the WF+MLR(d) approach provides reliable estimation of single-trial latency, amplitude, and morphology of ERPs and thereby allows performing meaningful correlations at single-trial level. We obtained three main findings. First, WF significantly enhances the SNR of single-trial ERPs. Second, MLR(d) effectively captures and measures the variability in the morphology of single-trial ERPs, thus providing an accurate and unbiased estimate of their peak latency and amplitude. Third, intensity of pain perception significantly correlates with the single-trial estimates of N2 and P2 amplitude. These results indicate that WF+MLR(d) can be used to explore the dynamics between different ERP features, behavioral variables, and other neuroimaging measures of brain activity, thus providing new insights into the functional significance of the different brain processes underlying the brain responses to sensory stimuli.

  6. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    Directory of Open Access Journals (Sweden)

    Sekhar S Chandra

    2004-01-01

    We address the problem of estimating the instantaneous frequency (IF) of a real-valued, constant-amplitude, time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation with a low-order polynomial over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF, which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum-MSE IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed-window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD) based IF estimators at different signal-to-noise ratios (SNRs).
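
    The zero-crossing principle underlying the method can be sketched for the simplest case of a constant-frequency sinusoid (the polynomial IF fitting and adaptive window selection of the paper are not reproduced here):

```python
import numpy as np

# Consecutive zero crossings of a sinusoid are half a period apart, so the
# mean crossing interval gives the frequency directly.
fs, f0 = 1000.0, 5.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * f0 * t)

s = np.signbit(x)
idx = np.flatnonzero(s[1:] != s[:-1])          # samples where the sign flips
# linear interpolation for sub-sample crossing times
tc = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])
f_est = 1.0 / (2.0 * np.mean(np.diff(tc)))
```

    For a time-varying IF, the same crossing times would instead be fed into a local polynomial fit over an adaptively chosen window, as the paper describes.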

  7. Transversity Amplitudes in Hypercharge Exchange Processes; Amplitudes de transversidad en procesos de intercambio de hipercarga

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar Benitez de Lugo, M.

    1979-07-01

    In this work we present several techniques developed for the extraction of the transversity amplitudes governing quasi-two-body meson-baryon reactions with hypercharge exchange. We review the methods used in processes having a pure spin configuration, as well as the more relevant results obtained with data from K p and Tp interactions at intermediate energies. The predictions of the additive quark model and the ones following from exchange degeneracy and exoticity are discussed. We present a formalism for amplitude analysis developed for reactions with mixed spin configurations and discuss the methods of parametric estimation of the moduli and phases of the amplitudes, as well as the various tests employed to check the goodness of the fits. The calculation of the generalized joint density matrices is given, and we propose a method based on the generalization of the idea of multipole moments, which allows one to investigate the structure of the decay angular correlations and establish the quality of the fits and the validity of the simplifying assumptions currently used in this type of studies. (Author) 43 refs.

  8. COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. A. Shevkunov

    2015-01-01

    Experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the stability requirements of the installation. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions, which are recorded as the recording sensor is moved along the optical axis. The obtained data are used sequentially for wavefront reconstruction in an iterative procedure, during which the wavefront is numerically propagated between the planes. Thus, phase information of the wavefront is retained in every plane, and the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison proved that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is shown that, as the object amplitude is reduced, the holographic method for reconstructing the complex amplitude of the object is the best among those considered.
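
    The angular spectrum propagation used by the second iterative approach can be sketched as follows; grid size, wavelength, and pixel pitch are illustrative assumptions, and at this sampling every spectral component is propagating, so forward plus backward propagation recovers the field:

```python
import numpy as np

# Angular spectrum method: filter the field's 2D spectrum with the exact
# free-space transfer function exp(i*kz*dz); evanescent components are masked.
def angular_spectrum(field, dz, wavelength, pixel):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    kz2 = k ** 2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    H = np.where(kz2 > 0, np.exp(1j * np.sqrt(np.maximum(kz2, 0.0)) * dz), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(1)
u0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
u1 = angular_spectrum(u0, 1e-3, 633e-9, 10e-6)        # propagate 1 mm forward
u0_back = angular_spectrum(u1, -1e-3, 633e-9, 10e-6)  # and back again
```

    In the iterative scheme, this propagator carries the field between measurement planes, where the calculated amplitude is replaced by the measured one on each pass.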

  9. Advanced methods for scattering amplitudes in gauge theories

    International Nuclear Information System (INIS)

    Peraro, Tiziano

    2014-01-01

    We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to the Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and elaborate algebraic approaches, such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique by means of Groebner bases.

  10. Advanced methods for scattering amplitudes in gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Peraro, Tiziano

    2014-09-24

    We present new techniques for the evaluation of multi-loop scattering amplitudes and their application to gauge theories, with relevance to the Standard Model phenomenology. We define a mathematical framework for the multi-loop integrand reduction of arbitrary diagrams, and elaborate algebraic approaches, such as the Laurent expansion method, implemented in the software Ninja, and the multivariate polynomial division technique by means of Groebner bases.

  11. A simple optical method for measuring the vibration amplitude of a speaker

    OpenAIRE

    UEDA, Masahiro; YAMAGUCHI, Toshihiko; KAKIUCHI, Hiroki; SUGA, Hiroshi

    1999-01-01

    A simple optical method has been proposed for measuring the vibration amplitude of a speaker vibrating with a frequency of approximately 10 kHz. The method is based on a multiple reflection between a vibrating speaker plane and a mirror parallel to that speaker plane. The multiple reflection can magnify a dispersion of the laser beam caused by the vibration, and easily make a measurement of the amplitude. The measuring sensitivity ranges between sub-microns and 1 mm. A preliminary experim...

  12. Estimation of shear velocity contrast for dipping or anisotropic medium from transmitted Ps amplitude variation with ray-parameter

    Science.gov (United States)

    Kumar, Prakash

    2015-12-01

    Amplitude versus offset analysis of P-to-P reflections is often used in exploration seismology for hydrocarbon exploration. In the present work, the feasibility of estimating crustal velocity structure from the variation of transmitted P-to-S wave amplitude with ray-parameter has been investigated separately for dipping-layer and anisotropic media. First, for a horizontal and isotropic medium, an approximation of the P-to-S conversion amplitude is used that is expressed as a linear form in terms of slowness. Next, the intercept of the linear regression is used to estimate the shear wave velocity contrast (δβ) across an interface. The formulation holds good for an isotropic, horizontally layered medium; applying it to data from an anisotropic medium or a dipping layer may lead to erroneous estimates of δβ. In order to overcome this problem, a method has been proposed to compensate the SV amplitude using a shifted version of the SH amplitude, and subsequently to transform the SV amplitudes into their equivalents from an isotropic or horizontally layered medium, as the case may be. Once this transformation has been done, δβ can be estimated using the isotropic horizontal-layer formula. The shifts required in SH for the compensation are π/2 and π/4 for a dipping layer and an anisotropic medium, respectively. The effectiveness of the approach has been demonstrated using various synthetic data sets. The methodology is also tested on real data from the HI-CLIMB network in the Himalaya, where the presence of a dipping Moho has already been reported. The result reveals that the average shear wave velocity contrast across the Moho is larger towards the Indian side than in the higher Himalayan and Tibetan regions.
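
    The regression step can be sketched generically: if the transmitted-amplitude measure is approximately linear in ray-parameter, the least-squares intercept isolates the slowness-independent term that is related to δβ. The coefficients below are synthetic, not a real crustal model:

```python
import numpy as np

# Synthetic amplitude-vs-ray-parameter data: intercept + slope * p + noise.
p = np.linspace(0.04, 0.08, 20)           # ray-parameter (s/km), assumed range
true_intercept, true_slope = 0.15, 1.2    # hypothetical values
rng = np.random.default_rng(2)
amp = true_intercept + true_slope * p + 0.005 * rng.standard_normal(p.size)

# Least-squares line fit; the intercept is the quantity tied to the velocity
# contrast in the paper's isotropic horizontal-layer formulation.
slope, intercept = np.polyfit(p, amp, 1)
```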

  13. Correction of Misclassifications Using a Proximity-Based Estimation Method

    Directory of Open Access Journals (Sweden)

    Shmulevich Ilya

    2004-01-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
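
    The sliding-window, proximity-matrix idea can be sketched as a nominal-class filter in which the output class minimizes the summed proximity cost to the classes observed in the window. The 3x3 proximity matrix here is illustrative, not one from the paper's case studies:

```python
import numpy as np

# Illustrative proximity matrix: prox[i, j] is the cost of outputting class i
# when class j was observed. Classes remain nominal; no ordering is assumed.
prox = np.array([[0.0, 1.0, 2.0],
                 [1.0, 0.0, 1.0],
                 [2.0, 1.0, 0.0]])

def correct(labels, prox, half=1):
    labels = np.asarray(labels)
    out = labels.copy()
    for i in range(labels.size):
        w = labels[max(0, i - half): i + half + 1]   # sliding window
        costs = prox[:, w].sum(axis=1)               # cost of each candidate class
        out[i] = int(np.argmin(costs))               # generalized mode/median
    return out

noisy = [0, 0, 0, 2, 0, 0, 1, 1, 1, 1]   # isolated misclassification at index 3
clean = correct(noisy, prox)
```

    With the identity-like cost structure above this reduces to a weighted mode filter; a learned proximity matrix (as in the paper's second case study) changes which corrections are preferred.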

  14. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single-plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.

  15. Transversity Amplitudes in Hypercharge Exchange Processes

    International Nuclear Information System (INIS)

    Aguilar Benitez de Lugo, M.

    1979-01-01

    In this work we present several techniques developed for the extraction of the transversity amplitudes governing quasi-two-body meson-baryon reactions with hypercharge exchange. We review the methods used in processes having a pure spin configuration, as well as the more relevant results obtained with data from K p and Tp interactions at intermediate energies. The predictions of the additive quark model and the ones following from exchange degeneracy and exoticity are discussed. We present a formalism for amplitude analysis developed for reactions with mixed spin configurations and discuss the methods of parametric estimation of the moduli and phases of the amplitudes, as well as the various tests employed to check the goodness of the fits. The calculation of the generalized joint density matrices is given, and we propose a method based on the generalization of the idea of multipole moments, which allows one to investigate the structure of the decay angular correlations and establish the quality of the fits and the validity of the simplifying assumptions currently used in this type of studies. (Author) 43 refs

  16. Measurement of absolute displacement-amplitude of ultrasonic wave using piezo-electric detection method

    Energy Technology Data Exchange (ETDEWEB)

    Park, Seong Hyun; Kim, Jong Beom; Jhang, Kyung Young [Hanyang University, Seoul (Korea, Republic of)

    2017-02-15

    A nonlinear ultrasonic parameter is defined by the ratio of displacement amplitude of the fundamental frequency component to that of the second-order harmonic frequency component. In this study, the ultrasonic displacement amplitude of an SUS316 specimen was measured via a piezo-electric-based method to identify the validity of piezo-electric detection method. For comparison, the ultrasonic displacement was also determined via a laser-based Fabry-Pérot interferometer. The experimental results for both measurements were in good agreement. Additionally, the stability of the repeated test results from the piezo-electric method exceeded that of the laser-interferometric method. This result indicated that the piezo-electric detection method can be utilized to measure a nonlinear ultrasonic parameter due to its excellent stability although it involves a complicated process.

  17. Measurement of absolute displacement-amplitude of ultrasonic wave using piezo-electric detection method

    International Nuclear Information System (INIS)

    Park, Seong Hyun; Kim, Jong Beom; Jhang, Kyung Young

    2017-01-01

    A nonlinear ultrasonic parameter is defined by the ratio of displacement amplitude of the fundamental frequency component to that of the second-order harmonic frequency component. In this study, the ultrasonic displacement amplitude of an SUS316 specimen was measured via a piezo-electric-based method to identify the validity of piezo-electric detection method. For comparison, the ultrasonic displacement was also determined via a laser-based Fabry-Pérot interferometer. The experimental results for both measurements were in good agreement. Additionally, the stability of the repeated test results from the piezo-electric method exceeded that of the laser-interferometric method. This result indicated that the piezo-electric detection method can be utilized to measure a nonlinear ultrasonic parameter due to its excellent stability although it involves a complicated process.
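
    Once absolute displacement amplitudes are available, forming the relative nonlinear parameter from the fundamental and second-harmonic spectral amplitudes can be sketched as follows (scaling constants omitted; all signal parameters are synthetic):

```python
import numpy as np

# Relative nonlinear parameter beta' ~ A2 / A1**2, with A1 and A2 read off the
# FFT of a displacement record. An integer number of cycles avoids leakage.
fs, f0 = 100e6, 5e6                  # sample rate, fundamental (Hz)
t = np.arange(0, 20e-6, 1 / fs)      # 100 cycles -> exact FFT bins
A1_true, A2_true = 1e-9, 2e-11       # displacement amplitudes (m)
u = A1_true * np.sin(2 * np.pi * f0 * t) \
    + A2_true * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(u)) * 2 / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
A1 = spec[np.argmin(np.abs(freqs - f0))]
A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]
beta_rel = A2 / A1 ** 2
```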

  18. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  19. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi' an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
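
    The stratified-sampling ingredient can be sketched with a plain (non-orthogonal-array) Latin hypercube and a toy acceptability region; the Box-Cox transformation and OA construction of the paper are omitted:

```python
import numpy as np
from scipy.special import ndtri   # inverse standard normal CDF

# Latin hypercube: each dimension is split into n strata with exactly one
# sample per stratum, then the strata are shuffled independently per dimension.
def latin_hypercube(n, dims, rng):
    samples = np.empty((n, dims))
    for d in range(dims):
        strata = (np.arange(n) + rng.uniform(size=n)) / n
        samples[:, d] = rng.permutation(strata)
    return samples

rng = np.random.default_rng(3)
u = latin_hypercube(2000, 2, rng)
x = ndtri(u)                                  # map to normal disturbances
passed = np.all(np.abs(x) < 1.96, axis=1)     # toy acceptability region
yield_est = passed.mean()                     # true value: 0.95**2 = 0.9025
```

    The variance reduction relative to crude Monte Carlo comes from the per-dimension stratification; the paper adds orthogonal arrays to also stratify low-dimensional projections.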

  20. A Review of Spectral Methods for Variable Amplitude Fatigue Prediction and New Results

    Science.gov (United States)

    Larsen, Curtis E.; Irvine, Tom

    2013-01-01

    A comprehensive review of the available methods for estimating fatigue damage from variable amplitude loading is presented. The dependence of fatigue damage accumulation on power spectral density (psd) is investigated for random processes relevant to real structures such as in offshore or aerospace applications. Beginning with the Rayleigh (or narrow band) approximation, attempts at improved approximations or corrections to the Rayleigh approximation are examined by comparison to rainflow analysis of time histories simulated from psd functions representative of simple theoretical and real world applications. Spectral methods investigated include corrections by Wirsching and Light, Ortiz and Chen, the Dirlik formula, and the Single-Moment method, among other more recent proposed methods. Good agreement is obtained between the spectral methods and the time-domain rainflow identification for most cases, with some limitations. Guidelines are given for using the several spectral methods to increase confidence in the damage estimate.
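
    The baseline from which the reviewed corrections start, the Rayleigh (narrow-band) approximation, can be sketched from the spectral moments of a one-sided PSD; the S-N parameters b and C below are assumed, not taken from the paper:

```python
import math
import numpy as np

# Spectral moments m_n = integral of f**n * G(f) df for a narrow-band PSD,
# then the Rayleigh damage rate D = nu0/C * (sqrt(2*m0))**b * Gamma(1 + b/2).
f = np.linspace(0.1, 50, 5000)                    # Hz
G = np.where((f > 9.0) & (f < 11.0), 1.0, 0.0)    # unit-level band 9-11 Hz

def moment(n):
    return np.trapz(f ** n * G, f)

m0, m2 = moment(0), moment(2)
nu0 = math.sqrt(m2 / m0)                  # zero-upcrossing rate (Hz)
b, C = 3.0, 1e9                           # assumed S-N curve: N = C / S**b
D_rate = nu0 / C * math.sqrt(2 * m0) ** b * math.gamma(1 + b / 2)
```

    For broad-band spectra this estimate is conservative, which is exactly what the Wirsching-Light, Dirlik, and single-moment corrections reviewed in the paper address.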

  1. Quantitative measurement of phase variation amplitude of ultrasonic diffraction grating based on diffraction spectral analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Meiyan, E-mail: yphantomohive@gmail.com; Zeng, Yingzhi; Huang, Zuohua, E-mail: zuohuah@163.com [Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, Guangdong 510006 (China)

    2014-09-15

    A new method based on diffraction spectral analysis is proposed for the quantitative measurement of the phase variation amplitude of an ultrasonic diffraction grating. For a traveling wave, the phase variation amplitude of the grating depends on the intensity of the zeroth- and first-order diffraction waves. By contrast, for a standing wave, this amplitude depends on the intensity of the zeroth-, first-, and second-order diffraction waves. The proposed method is verified experimentally. The measured phase variation amplitude ranges from 0 to 2π, with a relative error of approximately 5%. A nearly linear relation exists between the phase variation amplitude and driving voltage. Our proposed method can also be applied to ordinary sinusoidal phase grating.
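
    For a traveling wave in the Raman-Nath regime, the order intensities follow I_n = J_n(phi)^2, so the phase-variation amplitude can be recovered from the measured ratio I1/I0 alone (valid while J0(phi) is nonzero, i.e. phi below about 2.4). A sketch under that assumption:

```python
from scipy.optimize import brentq
from scipy.special import jv   # Bessel function of the first kind

# Invert (J1(phi)/J0(phi))**2 = I1/I0 for phi; the ratio is monotonic on
# (0, 2.4), so the root is unique there.
def phi_from_ratio(r10):
    return brentq(lambda p: (jv(1, p) / jv(0, p)) ** 2 - r10, 1e-6, 2.3)

phi_true = 1.0
I0, I1 = jv(0, phi_true) ** 2, jv(1, phi_true) ** 2   # simulated intensities
phi_est = phi_from_ratio(I1 / I0)
```

    The standing-wave case of the paper additionally needs the second-order intensity, but the inversion idea is the same.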

  2. Comparison of methods for extracting annual cycle with changing amplitude in climate science

    Science.gov (United States)

    Deng, Q.; Fu, Z.

    2017-12-01

    Changes in the annual cycle have gained growing attention recently. The basic hypothesis regards the annual cycle as constant, and the climatological mean within a time period is usually used to depict it. Obviously, this hypothesis contradicts the fact that the annual cycle changes every year. For lack of a unified definition of the annual cycle, the approaches adopted to extract it are various and may lead to different results; the precision and validity of these methods need to be examined. In this work, numerical experiments with a known monofrequent annual cycle are set up to evaluate five popular extraction methods: fitting sinusoids, complex demodulation, Ensemble Empirical Mode Decomposition (EEMD), Nonlinear Mode Decomposition (NMD), and Seasonal-trend decomposition based on loess (STL). Three different types of changing amplitude are generated: steady, linearly increasing, and nonlinearly varying. Comparing the annual cycle extracted by these methods with the generated annual cycle, we find that (1) NMD performs best in depicting the annual cycle itself and its amplitude change; (2) the fitting-sinusoids, complex-demodulation, and EEMD methods are more sensitive to long-term memory (LTM) of the generated time series, which leads to an overfitted annual cycle and an overly noisy amplitude, whereas STL underestimates the amplitude variation; and (3) all of them can represent the amplitude trend correctly on long time scales, but errors due to noise and LTM are common in some methods over short time scales.
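
    The fitting-sinusoids baseline among the compared methods can be sketched as a sliding-window regression on annual cos/sin regressors, whose coefficient norm gives the amplitude; the series below is synthetic with a known constant amplitude:

```python
import numpy as np

# Daily series over several years with a constant-amplitude annual cycle.
n_years, fs = 10, 365
t = np.arange(n_years * fs) / fs                 # time in years
x = 2.0 * np.cos(2 * np.pi * t + 0.3)            # annual cycle, amplitude 2

# One-year window: regress on cos/sin at the annual frequency plus a constant;
# the annual-cycle amplitude is the norm of the two harmonic coefficients.
def amplitude(win_t, win_x):
    X = np.column_stack([np.cos(2 * np.pi * win_t),
                         np.sin(2 * np.pi * win_t),
                         np.ones_like(win_t)])
    c = np.linalg.lstsq(X, win_x, rcond=None)[0]
    return np.hypot(c[0], c[1])

amps = [amplitude(t[i:i + fs], x[i:i + fs]) for i in range(0, t.size - fs, fs)]
```

    With noise or long-term memory added to x, the window-to-window scatter of amps illustrates the overfitting behavior the comparison identifies.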

  3. Motion estimation using point cluster method and Kalman filter.

    Science.gov (United States)

    Senesh, M; Wolf, A

    2009-05-01

    The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and the joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted by adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
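
    The benefit of Kalman prefiltering can be illustrated with a minimal constant-velocity filter on a noisy angle measurement; the model, noise levels, and motion below are illustrative only, not the paper's pendulum setup:

```python
import numpy as np

# Constant-velocity Kalman filter: state [angle, angular velocity], with only
# the angle measured. Q and R are assumed noise covariances.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-5, 1e-2])
R = np.array([[0.01]])

rng = np.random.default_rng(4)
t = np.arange(0, 5, dt)
truth = 0.5 * np.sin(2 * np.pi * 0.5 * t)          # true angle (rad)
meas = truth + 0.1 * rng.standard_normal(t.size)    # noisy measurement

x, P, est = np.zeros(2), np.eye(2), []
for z in meas:
    x = F @ x                                       # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # update
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    est.append(x[0])

rmse_raw = float(np.sqrt(np.mean((meas - truth) ** 2)))
rmse_kf = float(np.sqrt(np.mean((np.array(est) - truth) ** 2)))
```

    The filtered angle is smoother than the raw measurement, which is the property the study exploits before applying the PCT step.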

  4. Phase difference estimation method based on data extension and Hilbert transform

    International Nuclear Information System (INIS)

    Shen, Yan-lin; Tu, Ya-qing; Chen, Lin-jun; Shen, Ting-ao

    2015-01-01

    To improve the precision and anti-interference performance of phase difference estimation for non-integer periods of sampling signals, a phase difference estimation method based on data extension and Hilbert transform is proposed. Estimated phase difference is obtained by means of data extension, Hilbert transform, cross-correlation, auto-correlation, and weighted phase average. Theoretical analysis shows that the proposed method suppresses the end effects of Hilbert transform effectively. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of phase difference estimation and has better performance of phase difference estimation than the correlation, Hilbert transform, and data extension-based correlation methods, which contribute to improving the measurement precision of the Coriolis mass flowmeter. (paper)
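
    The Hilbert-transform core of such estimators can be sketched as follows: analytic signals are formed, and the phase difference is taken as the angle of the averaged cross-product, with the record ends discarded to sidestep the end effects that the paper's data extension is designed to suppress:

```python
import numpy as np
from scipy.signal import hilbert

# Two same-frequency sinusoids over a non-integer number of periods.
fs, f0, dphi = 1000.0, 13.3, np.pi / 6
t = np.arange(0, 0.737, 1 / fs)
x1 = np.sin(2 * np.pi * f0 * t + dphi)
x2 = np.sin(2 * np.pi * f0 * t)

# Analytic signals; the mean of z1 * conj(z2) has phase equal to the
# phase difference. Trim 10% at each end against Hilbert end effects.
z1, z2 = hilbert(x1), hilbert(x2)
core = slice(t.size // 10, -(t.size // 10))
est = np.angle(np.mean(z1[core] * np.conj(z2[core])))
```

    The paper's contribution is to extend the data before the transform so that this trimming (and its loss of samples) becomes unnecessary.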

  5. Relative amplitude preservation processing utilizing surface consistent amplitude correction. Part 4; Surface consistent amplitude correction wo mochiita sotai shinpuku hozon shori. 4

    Energy Technology Data Exchange (ETDEWEB)

    Saeki, T [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1997-10-22

    Discussions were given on seismic exploration from the ground surface using the reflection method, specifically on surface consistent amplitude correction for the effects imposed by the ground surface and the surface layer. The amplitude distribution in the reflection wave zone is complex; therefore, multiple items must be considered in an analysis, such as estimation of the spherical divergence effect and the exponential attenuation effect, not only the amplitude change through the surface layer. If all of these items are taken into consideration, the workload becomes excessive. As a method to solve this problem, utilization of the first-break amplitude of a diffraction wave may be conceived. The distribution of first-break diffraction-wave amplitudes shows values relatively close to the distribution over the source and receiver points. The reason is thought to be that the characteristics of the source and receiver points, related to ray paths in the vicinity of the ground surface, differ little between the diffraction waves and the reflection waves. The lecture described in this paper introduces an attempt to improve the efficiency of the surface consistent amplitude correction by utilizing analysis of the first-break amplitude of the diffraction wave. 4 refs., 2 figs.

  6. Stress estimation in reservoirs using an integrated inverse method

    Science.gov (United States)

    Mazuyer, Antoine; Cupillard, Paul; Giot, Richard; Conin, Marianne; Leroy, Yves; Thore, Pierre

    2018-05-01

    Estimating the stress in reservoirs and their surroundings prior to the production is a key issue for reservoir management planning. In this study, we propose an integrated inverse method to estimate such initial stress state. The 3D stress state is constructed with the displacement-based finite element method assuming linear isotropic elasticity and small perturbations in the current geometry of the geological structures. The Neumann boundary conditions are defined as piecewise linear functions of depth. The discontinuous functions are determined with the CMA-ES (Covariance Matrix Adaptation Evolution Strategy) optimization algorithm to fit wellbore stress data deduced from leak-off tests and breakouts. The disregard of the geological history and the simplified rheological assumptions mean that only the stress field, statically admissible and matching the wellbore data should be exploited. The spatial domain of validity of this statement is assessed by comparing the stress estimations for a synthetic folded structure of finite amplitude with a history constructed assuming a viscous response.

  7. Vector method for strain estimation in phase-sensitive optical coherence elastography

    Science.gov (United States)

    Matveyev, A. L.; Matveev, L. A.; Sovetsky, A. A.; Gelikonov, G. V.; Moiseev, A. A.; Zaitsev, V. Y.

    2018-06-01

    A noise-tolerant approach to strain estimation in phase-sensitive optical coherence elastography, robust to decorrelation distortions, is discussed. The method is based on evaluation of interframe phase-variation gradient, but its main feature is that the phase is singled out at the very last step of the gradient estimation. All intermediate steps operate with complex-valued optical coherence tomography (OCT) signals represented as vectors in the complex plane (hence, we call this approach the ‘vector’ method). In comparison with such a popular method as least-square fitting of the phase-difference slope over a selected region (even in the improved variant with amplitude weighting for suppressing small-amplitude noisy pixels), the vector approach demonstrates superior tolerance to both additive noise in the receiving system and speckle-decorrelation caused by tissue straining. Another advantage of the vector approach is that it obviates the usual necessity of error-prone phase unwrapping. Here, special attention is paid to modifications of the vector method that make it especially suitable for processing deformations with significant lateral inhomogeneity, which often occur in real situations. The method’s advantages are demonstrated using both simulated and real OCT scans obtained during reshaping of a collagenous tissue sample irradiated by an IR laser beam producing complex spatially inhomogeneous deformations.
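
    The central "vector" idea, keeping everything complex-valued and extracting the angle only at the very last step, can be sketched in one dimension; the speckle-like frames and noise level below are synthetic:

```python
import numpy as np

# Two complex "OCT frames" related by a depth-dependent interframe phase ramp.
rng = np.random.default_rng(5)
n = 400
phase_ramp = 0.02 * np.arange(n)                       # wraps past 2*pi
a1 = np.exp(1j * rng.uniform(0, 2 * np.pi, n))         # speckle-like frame
noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
a2 = a1 * np.exp(1j * phase_ramp) + noise

b = a2 * np.conj(a1)              # interframe phase variation, kept complex
g = b[1:] * np.conj(b[:-1])       # axial phase gradient, still complex
grad_est = np.angle(np.mean(g))   # phase singled out only at the last step
```

    Because the averaging happens on complex vectors, the wrapped phase ramp never needs unwrapping, and low-amplitude noisy pixels contribute little to the mean.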

  8. Optical asymmetric cryptography based on amplitude reconstruction of elliptically polarized light

    Science.gov (United States)

    Cai, Jianjun; Shen, Xueju; Lei, Ming

    2017-11-01

    We propose a novel optical asymmetric image encryption method based on amplitude reconstruction of elliptically polarized light, which is free from the silhouette problem. The original image is first analytically separated into two phase-only masks, and then the two masks are encoded into the amplitudes of the orthogonal polarization components of an elliptically polarized light beam. Finally, the elliptically polarized light propagates through a linear polarizer, and the output intensity distribution is recorded by a CCD camera to obtain the ciphertext. The whole encryption procedure can be implemented with commonly used optical elements, and it combines a diffusion process and a confusion process. As a result, the proposed method achieves high robustness against iterative-algorithm-based attacks. Simulation results are presented to prove the validity of the proposed cryptography.

  9. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    Energy Technology Data Exchange (ETDEWEB)

    Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
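
    An EM iteration of this general kind (not necessarily the authors' exact algorithm) can be sketched for a right-censored Poisson model lambda_i = a*s_i + b_i with known shape s and background b: censored bins are replaced by their conditional means (E-step) and the amplitude is re-fit from the score equation (M-step). All shapes, levels, and the saturation threshold below are synthetic:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import poisson

rng = np.random.default_rng(6)
i = np.arange(200)
s = np.exp(-0.5 * ((i - 100) / 15.0) ** 2)   # known transient shape
b = np.full(i.size, 5.0)                     # known background counts
a_true, C = 40.0, 30                         # true amplitude, saturation level
x = rng.poisson(a_true * s + b)
d = np.minimum(x, C)                         # right-censored measurements
cens = d >= C

# M-step: solve the complete-data score equation for a.
def m_step(xhat):
    score = lambda a: np.sum(xhat * s / (a * s + b)) - s.sum()
    return brentq(score, 1e-6, 1e4)

a_naive = m_step(d.astype(float))            # biased low: ignores censoring
a = a_naive
for _ in range(200):
    lam = a * s + b
    # E-step: E[X | X >= C] = lam * P(X >= C-1) / P(X >= C) for Poisson.
    ex = lam[cens] * poisson.sf(C - 2, lam[cens]) / poisson.sf(C - 1, lam[cens])
    xhat = d.astype(float)
    xhat[cens] = ex
    a = m_step(xhat)
```

    The naive fit treats saturated bins as exact counts and so underestimates the amplitude; the EM iterations pull the estimate back up toward the true value.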

  10. A service based estimation method for MPSoC performance modelling

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer; Madsen, Jan; Jensen, Bjørn Sand

    2008-01-01

    This paper presents an abstract service based estimation method for MPSoC performance modelling which allows fast, cycle-accurate design space exploration of complex architectures, including multiprocessor configurations, at a very early stage in the design phase. The modelling method uses a service-oriented model of computation based on Hierarchical Colored Petri Nets and allows the modelling of both software and hardware in one unified model. To illustrate the potential of the method, a small MPSoC system, developed at Bang & Olufsen ICEpower a/s, is modelled and performance estimates are produced...

  11. Computationally Efficient Amplitude Modulated Sinusoidal Audio Coding using Frequency-Domain Linear Prediction

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jensen, Søren Holdt

    2006-01-01

    A method for amplitude modulated sinusoidal audio coding is presented that has low complexity and low delay. It is based on a subband processing system where, in each subband, the signal is modeled as an amplitude-modulated sum of sinusoids. The envelopes are estimated using frequency-domain linear prediction and the prediction coefficients are quantized. As a proof of concept, we evaluate different configurations in a subjective listening test, and this shows that the proposed method offers significant improvements in sinusoidal coding. Furthermore, the properties of the frequency...

  12. Blind third-order dispersion estimation based on fractional Fourier transformation for coherent optical communication

    Science.gov (United States)

    Yang, Lin; Guo, Peng; Yang, Aiying; Qiao, Yaojun

    2018-02-01

    In this paper, we propose a blind third-order dispersion estimation method based on the fractional Fourier transformation (FrFT) for optical fiber communication systems. By measuring the chromatic dispersion (CD) at different wavelengths, the method can estimate the dispersion slope and from it calculate the third-order dispersion. Simulation results demonstrate that the estimation error is less than 2% in 28-GBaud dual-polarization quadrature phase-shift keying (DP-QPSK) and 28-GBaud dual-polarization 16-ary quadrature amplitude modulation (DP-16QAM) systems. The simulations also show the proposed method to be robust against nonlinearity and amplified spontaneous emission (ASE) noise. In addition, to reduce the computational complexity, the optimal FrFT order is searched with coarse and then fine step granularity. The FrFT-based method can thus be used to monitor the third-order dispersion in optical fiber systems.
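
The final conversion step, from CD values measured at two wavelengths to the dispersion slope and the third-order dispersion beta_3, can be sketched as follows. The wavelength pair and CD values are illustrative numbers, not those of the paper, and the FrFT-based CD measurement itself is not reproduced here.

```python
import numpy as np

c = 299792458.0                       # speed of light [m/s]
lam1, lam2 = 1545e-9, 1555e-9         # probe wavelengths [m]
# CD measured (e.g., by searching the optimal FrFT order) at each wavelength,
# in SI units: 16.4 and 16.8 ps/(nm km) = 16.4e-6 and 16.8e-6 s/m^2
D1, D2 = 16.4e-6, 16.8e-6

S = (D2 - D1) / (lam2 - lam1)         # dispersion slope dD/dlambda [s/m^3]
lam0, D0 = (lam1 + lam2) / 2, (D1 + D2) / 2
# Standard relation: beta3 = (lam^2 / 2 pi c)^2 * (S + 2 D / lam)
beta3 = (lam0 ** 2 / (2 * np.pi * c)) ** 2 * (S + 2 * D0 / lam0)   # [s^3/m]
```

For these standard-fiber numbers the result lands near 1e-40 s^3/m (about 0.1 ps^3/km), which is the expected order of magnitude.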

  13. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
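
The idea of an adjustable pump model driven by converter estimates can be roughly sketched with the affinity laws and datasheet-fitted characteristic curves. The polynomial coefficients below are invented placeholders, not parameters from the paper.

```python
import numpy as np

# Hypothetical nominal-speed characteristic curves (would be fitted to the
# pump datasheet): head H(Q) in m and shaft power P(Q) in kW vs flow in m^3/h.
n0 = 1450.0                                   # nominal speed [rpm]
H_poly = np.poly1d([-0.002, 0.01, 20.0])      # H(Q) at n0
P_poly = np.poly1d([0.004, 0.05, 1.2])        # P(Q) at n0, monotone increasing

def pump_state(n_est, P_est):
    """Flow rate and head from converter estimates of speed and shaft power."""
    # Affinity laws: Q ~ n, H ~ n^2, P ~ n^3. Refer the power to nominal speed.
    P0 = P_est * (n0 / n_est) ** 3
    # Invert the nominal power curve; keep the physically meaningful root.
    roots = (P_poly - P0).roots
    Q0 = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return Q0 * (n_est / n0), H_poly(Q0) * (n_est / n0) ** 2

# Converter reports 1200 rpm and the shaft power of a 30 m^3/h operating point
flow, head = pump_state(1200.0, P_poly(30.0) * (1200.0 / 1450.0) ** 3)
```

In this self-consistent example the recovered flow is simply 30 m^3/h scaled by the speed ratio, illustrating how external flow meters could be replaced by the model.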

  14. Explosive Yield Estimation using Fourier Amplitude Spectra of Velocity Histories

    Science.gov (United States)

    Steedman, D. W.; Bradley, C. R.

    2016-12-01

    The Source Physics Experiment (SPE) is a series of explosive shots of various size detonated at varying depths in a borehole in jointed granite. The testbed includes an extensive array of accelerometers for measuring the shock environment close-in to the explosive source. One goal of SPE is to develop greater understanding of the explosion phenomenology in all regimes: from near-source, non-linear response to the far-field linear elastic region, and connecting the analyses from the respective regimes. For example, near-field analysis typically involves review of kinematic response (i.e., acceleration, velocity and displacement) in the time domain and looks at various indicators (e.g., peaks, pulse duration) to facilitate comparison among events. Review of far-field data more often is based on study of response in the frequency domain to facilitate comparison of event magnitudes. To try to "bridge the gap" between approaches, we have developed a scaling law for Fourier amplitude spectra of near-field velocity histories that successfully collapses data from a wide range of yields (100 kg to 5000 kg) and range to sensors in jointed granite. Moreover, we show that we can apply this scaling law to data from a new event to accurately estimate the explosive yield of that event. This approach presents a new way of working with near-field data that will be more compatible with traditional methods of analysis of seismic data and should serve to facilitate end-to-end event analysis. The goal is that this new approach to data analysis will eventually result in improved methods for discrimination of event type (i.e., nuclear or chemical explosion, or earthquake) and magnitude.
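
The kind of spectral collapse described can be illustrated with textbook cube-root yield scaling of a velocity spectrum. The W^(1/3) exponents below are the classical explosion-scaling values, assumed for illustration; the exponents actually fitted in the SPE work may differ.

```python
import numpy as np

def scaled_velocity_spectrum(vel, dt, yield_kg):
    """Fourier amplitude spectrum of a velocity history under cube-root scaling.

    Assumes times and lengths scale as W^(1/3), so frequency is multiplied by
    W^(1/3) and the velocity spectral amplitude (velocity * time) is divided
    by W^(1/3). Spectra from different yields then plot on common scaled axes.
    """
    f = np.fft.rfftfreq(len(vel), dt)
    amp = np.abs(np.fft.rfft(vel)) * dt
    w = yield_kg ** (1.0 / 3.0)
    return f * w, amp / w

f_s, a_s = scaled_velocity_spectrum(np.sin(np.linspace(0, 10, 100)), 0.01, 1000.0)
```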

  15. A comparison of efficient methods for the computation of Born gluon amplitudes

    International Nuclear Information System (INIS)

    Dinsdale, Michael; Ternick, Marko; Weinzierl, Stefan

    2006-01-01

    We compare four different methods for the numerical computation of the pure gluonic amplitudes in the Born approximation. We are in particular interested in the efficiency of the various methods as the number n of the external particles increases. In addition we investigate the numerical accuracy in critical phase space regions. The methods considered are based on (i) Berends-Giele recurrence relations, (ii) scalar diagrams, (iii) MHV vertices and (iv) BCF recursion relations

  16. Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE

    Science.gov (United States)

    Itai, Akitoshi; Yasukawa, Hiroshi

    This paper proposes a background noise estimation method based on the tensor product expansion with a median and a Monte Carlo simulation. We have previously shown that the tensor product expansion with absolute error (TPE-AE) is effective for estimating background noise; however, the conventional method does not always estimate it properly. In this paper, it is shown that the estimation accuracy can be improved by the proposed methods.

  17. M-Arctan estimator based on the trust-region method

    Energy Technology Data Exchange (ETDEWEB)

    Hassaine, Yacine; Delourme, Benoit; Panciatici, Patrick [Gestionnaire du Reseau de Transport d Electricite Departement Methodes et appui Immeuble Le Colbert 9, Versailles Cedex (France); Walter, Eric [Laboratoire des signaux et systemes (L2S) Supelec, Gif-sur-Yvette (France)

    2006-11-15

    In this paper a new approach is proposed to increase the robustness of the classical L{sub 2}-norm state estimation. To achieve this task, a new formulation of the Levenberg-Marquardt algorithm based on the trust-region method is applied to a new M-estimator, which we call M-Arctan. Results obtained on IEEE networks of up to 300 buses are presented. (author)
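
The flavor of such an estimator can be sketched with SciPy, whose `least_squares` solver combines a trust-region reflective method (`method='trf'`) with an `arctan` robust loss. This is only an illustration of the M-Arctan idea on a toy linear measurement model, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Toy linear 'state estimation' problem z = H x + noise, with gross errors.
H = rng.standard_normal((40, 2))
x_true = np.array([1.0, -2.0])
z = H @ x_true + 0.01 * rng.standard_normal(40)
z[[3, 17, 25]] += 5.0                 # bad data (gross measurement errors)

# Plain least-squares start, then robust refinement: loss='arctan' bounds
# the influence of any single residual, in the spirit of an M-Arctan
# estimator, while 'trf' performs the trust-region iterations.
x0 = np.linalg.lstsq(H, z, rcond=None)[0]
sol = least_squares(lambda x: H @ x - z, x0=x0, method='trf', loss='arctan')
```

The three corrupted measurements barely move the robust solution, whereas they visibly bias the plain least-squares starting point.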

  18. Rankin-Selberg methods for closed string amplitudes

    CERN Document Server

    Pioline, Boris

    2014-01-01

    After integrating over supermoduli and vertex operator positions, scattering amplitudes in superstring theory at genus $h\\leq 3$ are reduced to an integral of a Siegel modular function of degree $h$ on a fundamental domain of the Siegel upper half plane. A direct computation is in general unwieldy, but becomes feasible if the integrand can be expressed as a sum over images under a suitable subgroup of the Siegel modular group: if so, the integration domain can be extended to a simpler domain at the expense of keeping a single term in each orbit -- a technique known as the Rankin-Selberg method. Motivated by applications to BPS-saturated amplitudes, Angelantonj, Florakis and I have applied this technique to one-loop modular integrals where the integrand is the product of a Siegel-Narain theta function times a weakly, almost holomorphic modular form. I survey our main results, and take some steps in extending this method to genus greater than one.

  19. A new method of on-line multiparameter amplitude analysis with compression

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    1996-01-01

    An algorithm of on-line multidimensional amplitude analysis with compression using a fast adaptive orthogonal transform is presented in the paper. The method is based on a direct modification of the multiplication coefficients of the signal flow graph of the fast Cooley-Tukey algorithm. The coefficients are modified according to a reference vector representing the processed data. The method has been tested by compressing three-parameter experimental nuclear data. The efficiency of the derived adaptive transform is compared with classical orthogonal transforms. (orig.)

  20. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and to a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to the model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted-error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and the SNR, than those of the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones.
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less

  1. Improved vertical streambed flux estimation using multiple diurnal temperature methods in series

    Science.gov (United States)

    Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.

    2017-01-01

    Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.
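
The amplitude-ratio (Ar) step that these solutions build on reduces to a one-dimensional root find. The formula below is a Hatch-type Ar solution, and the diffusivity, sensor spacing, and velocity values are illustrative assumptions, not values from the paper or from VFLUX.

```python
import numpy as np
from scipy.optimize import brentq

def ar_front_velocity(Ar, dz, kappa_e, period=86400.0):
    """Thermal front velocity v (m/s, positive downward) from the diurnal
    amplitude ratio Ar = A_deep / A_shallow of two sensors spaced dz (m)
    apart, given the effective thermal diffusivity kappa_e (m^2/s)."""
    def f(v):
        alpha = np.sqrt(v ** 4 + (8 * np.pi * kappa_e / period) ** 2)
        return np.log(Ar) - dz / (2 * kappa_e) * (v - np.sqrt((alpha + v ** 2) / 2))
    return brentq(f, -1e-3, 1e-3)     # bracket of roughly +/- 86 m/day

# Forward consistency check: generate Ar from a known velocity, then invert.
kappa_e, dz = 1e-6, 0.05
v_known = 2e-6                        # about 0.17 m/day downward
alpha = np.sqrt(v_known ** 4 + (8 * np.pi * kappa_e / 86400.0) ** 2)
Ar = np.exp(dz / (2 * kappa_e) * (v_known - np.sqrt((alpha + v_known ** 2) / 2)))
v_est = ar_front_velocity(Ar, dz, kappa_e)
```

Converting the thermal front velocity to a Darcy flux additionally requires the ratio of the bulk to water volumetric heat capacities, which is omitted here.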

  2. Available pressure amplitude of linear compressor based on phasor triangle model

    Science.gov (United States)

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.

    2017-12-01

    Linear compressors for cryocoolers possess the advantages of long-life operation, high efficiency, low vibration and compact structure. It is significant to study the match mechanisms between the compressor and the cold finger, which determine the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated, since they are affected by many interacting parameters. The existing matching methods are simplified and mainly focus on the compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on an analysis of the forces on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with experimental measurements. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides an intuitive understanding of the match mechanism with a faster computational process. The model can also explain the experimentally observed proportionality between the output pressure amplitude and the piston displacement. By further model analysis, this phenomenon is identified as an expression of an unmatched design of the compressor. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.

  3. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    International Nuclear Information System (INIS)

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-01-01

    Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated

  4. Guideline for Bayesian Net based Software Fault Estimation Method for Reactor Protection System

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Park, Gee Yong; Jang, Seung Cheol

    2011-01-01

    The purpose of this paper is to provide a preliminary guideline for the estimation of software faults in safety-critical software, for example, the software of a reactor protection system. As the fault estimation method is based on a Bayesian net, which makes intensive use of subjective probability and informal data, it is necessary to define a formal procedure for the method to minimize the variability of the results. The guideline describes the assumptions, limitations and uncertainties, and the products of the fault estimation method. The procedure for conducting the software fault estimation is then outlined, highlighting the major tasks involved. The contents of the guideline are based on our own experience and a review of research guidelines developed for PSA

  5. Perceptual and statistical analysis of cardiac phase and amplitude images

    International Nuclear Information System (INIS)

    Houston, A.; Craig, A.

    1991-01-01

    A perceptual experiment was conducted using cardiac phase and amplitude images. Estimates of statistical parameters were derived from the images and the diagnostic potential of human and statistical decisions compared. Five methods were used to generate the images from 75 gated cardiac studies, 39 of which were classified as pathological. The images were presented to 12 observers experienced in nuclear medicine. The observers rated the images using a five-category scale based on their confidence of an abnormality presenting. Circular and linear statistics were used to analyse phase and amplitude image data, respectively. Estimates of mean, standard deviation (SD), skewness, kurtosis and the first term of the spatial correlation function were evaluated in the region of the left ventricle. A receiver operating characteristic analysis was performed on both sets of data and the human and statistical decisions compared. For phase images, circular SD was shown to discriminate better between normal and abnormal than experienced observers, but no single statistic discriminated as well as the human observer for amplitude images. (orig.)

  6. Evaluation of Model Based State of Charge Estimation Methods for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongyue Zou

    2014-08-01

    Full Text Available Four model-based State of Charge (SOC) estimation methods for lithium-ion (Li-ion) batteries are studied and evaluated in this paper. Unlike the existing literature, this work evaluates different aspects of the SOC estimation, such as the estimation error distribution, the estimation rise time and the estimation time consumption. The equivalent model of the battery is introduced and the state function of the model is deduced. The four model-based SOC estimation methods are analyzed first. Simulations and experiments are then set up to evaluate the four methods. Urban dynamometer driving schedule (UDDS) current profiles are applied to simulate the driving situations of an electrified vehicle, and a genetic algorithm is utilized to identify the optimal parameters of the Li-ion battery model. Simulations with and without disturbance are carried out and the results are analyzed. A battery test workbench is established and a Li-ion battery is used in a hardware-in-the-loop experiment. Experimental results are plotted and analyzed according to the four aspects to evaluate the four model-based SOC estimation methods.
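
One of the model-based estimators typically included in such comparisons is the (extended) Kalman filter. A deliberately simplified scalar sketch, with a linear OCV curve, no RC branch, and invented parameter values, shows the predict/correct structure being evaluated:

```python
import numpy as np

# Invented cell parameters and a deliberately simple linear OCV(SOC) curve.
Q_cap = 2.0 * 3600.0                 # capacity [As]
R0 = 0.05                            # ohmic resistance [ohm]
ocv = lambda s: 3.2 + 0.9 * s        # open-circuit voltage [V]
docv = 0.9                           # dOCV/dSOC (constant for linear OCV)

def kf_soc(current, voltage, dt, soc0=0.5):
    """Coulomb-counting prediction corrected by the voltage model (scalar KF;
    with a nonlinear OCV curve this becomes an extended Kalman filter)."""
    soc, P = soc0, 0.1
    Qn, Rn = 1e-7, 1e-3              # process / measurement noise variances
    est = []
    for i_k, v_k in zip(current, voltage):
        soc -= dt * i_k / Q_cap                   # predict (discharge positive)
        P += Qn
        K = P * docv / (docv * P * docv + Rn)     # Kalman gain
        soc += K * (v_k - (ocv(soc) - R0 * i_k))  # correct with terminal voltage
        P *= (1.0 - K * docv)
        est.append(soc)
    return np.array(est)

# Constant-current discharge, noisy voltage readings, wrong initial SOC guess.
rng = np.random.default_rng(3)
n, dt, i_dis = 2000, 1.0, 1.0
soc_true = 0.9 - dt * i_dis * np.arange(1, n + 1) / Q_cap
v_meas = ocv(soc_true) - R0 * i_dis + 0.005 * rng.standard_normal(n)
soc_est = kf_soc(np.full(n, i_dis), v_meas, dt)
```

The voltage correction pulls the estimate from the wrong initial guess of 0.5 onto the true trajectory within a few samples, which is exactly the "rise time" aspect the paper evaluates.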

  7. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise, for example, in biology, engineering and chemistry. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs; they can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and the stopping condition, and they lack robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when approximating the PDE numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with a smaller computational burden has been proposed in engineering fields, based on so-called modulating functions. In this dissertation, we propose to mathematically and numerically analyze the modulating function-based approaches, and to extend these approaches to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), including its well-posedness, statistical properties and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters

  8. Exploiting of the Compression Methods for Reconstruction of the Antenna Far-Field Using Only Amplitude Near-Field Measurements

    Directory of Open Access Journals (Sweden)

    J. Puskely

    2010-06-01

    Full Text Available The novel approach exploits the principle of conventional two-plane amplitude measurements for the reconstruction of the unknown electric field distribution on the antenna aperture. The method combines a global optimization with a compression method: the global optimization (GO) method is used to minimize the functional, and the compression method is used to reduce the number of unknown variables. The algorithm employs the Real-Coded Genetic Algorithm (RCGA) as the global optimization approach, while the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are applied to reduce the number of unknown variables. The pros and cons of these methods for the solution of the problem are investigated and reported. In order to make the algorithm faster, exploitation of amplitudes from a single scanning plane is also discussed. First, the algorithm is used to obtain an initial estimate; subsequently, the common Fourier iterative algorithm is used to reach the global minimum with sufficient accuracy. The method is examined by measurements of a dish antenna.

  9. Channel estimation in DFT-based offset-QAM OFDM systems.

    Science.gov (United States)

    Zhao, Jian

    2014-10-20

    Offset quadrature amplitude modulation (offset-QAM) orthogonal frequency division multiplexing (OFDM) exhibits enhanced net data rates compared to conventional OFDM, and reduced complexity compared to Nyquist FDM (N-FDM). However, channel estimation in discrete-Fourier-transform (DFT) based offset-QAM OFDM is different from that in conventional OFDM and requires particular study. In this paper, we derive a closed-form expression for the demultiplexed signal in DFT-based offset-QAM systems and show that although the residual crosstalk is orthogonal to the decoded signal, its existence degrades the channel estimation performance when the conventional least-square method is applied. We propose and investigate four channel estimation algorithms for offset-QAM OFDM that vary in terms of performance, complexity, and tolerance to system parameters. It is theoretically and experimentally shown that simple channel estimation can be realized in offset-QAM OFDM with the achieved performance close to the theoretical limit. This, together with the existing advantages over conventional OFDM and N-FDM, makes this technology very promising for optical communication systems.

  10. An Amplitude Spectral Capon Estimator with a Variable Filter Length

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Smaragdis, Paris; Christensen, Mads Græsbøll

    2012-01-01

    The filter bank methods have been a popular non-parametric way of computing the complex amplitude spectrum. So far, the length of the filters in these filter banks has been set to some constant value independently of the data. In this paper, we take the first step towards considering the filter...

  11. The method of contour rotations and the three particle amplitudes

    International Nuclear Information System (INIS)

    Brinati, J.R.

    1980-01-01

    The application of the method of contour rotations to the solution of the Faddeev-Lovelace equations and the calculation of the break-up and stripping amplitudes in a system of three distinct particles is reviewed. A relationship between the masses of the particles is obtained, which permits the break-up amplitude to be calculated from a single iteration of the final integral equation. (Author) [pt

  12. Maximum Likelihood-Based Methods for Target Velocity Estimation with Distributed MIMO Radar

    Directory of Open Access Journals (Sweden)

    Zhenxin Cao

    2018-02-01

    Full Text Available The target velocity estimation problem is addressed in this paper for the scenario of a distributed multiple-input multiple-output (MIMO) radar system. A maximum likelihood (ML)-based estimation method is derived for the case where the target position is known. Then, for the scenario without knowledge of the target position, an iterative method is proposed that estimates the target velocity by updating the position information iteratively. Moreover, the Cramér-Rao Lower Bounds (CRLBs) for both scenarios are derived, and the performance degradation of velocity estimation without position information is quantified. Simulation results show that the proposed estimation methods can approach the CRLBs, and that the velocity estimation performance can be further improved by increasing either the number of radar antennas or the accuracy of the target position information. Furthermore, compared with existing methods, a better estimation performance is achieved.

  13. Inverse amplitude method and Adler zeros

    International Nuclear Information System (INIS)

    Gomez Nicola, A.; Pelaez, J. R.; Rios, G.

    2008-01-01

    The inverse amplitude method is a powerful unitarization technique to enlarge the energy applicability region of effective Lagrangians. It has been widely used to describe resonances in hadronic physics, combined with chiral perturbation theory, as well as in the strongly interacting symmetry breaking sector. In this work we show how it can be slightly modified to also account for the subthreshold region, incorporating correctly the Adler zeros required by chiral symmetry and eliminating spurious poles. These improvements produce negligible effects on the physical region.

  14. Analytical method for estimating the thermal expansion coefficient of metals at high temperature

    International Nuclear Information System (INIS)

    Takamoto, S; Izumi, S; Nakata, T; Sakai, S; Oinuma, S; Nakatani, Y

    2015-01-01

    In this paper, we propose an analytical method for estimating the thermal expansion coefficient (TEC) of metals at high-temperature ranges. Although the conventional method based on quasiharmonic approximation (QHA) shows good results at low temperatures, anharmonic effects caused by large-amplitude thermal vibrations reduces its accuracy at high temperatures. Molecular dynamics (MD) naturally includes the anharmonic effect. However, since the computational cost of MD is relatively high, in order to make an interatomic potential capable of reproducing TEC, an analytical method is essential. In our method, analytical formulation of the radial distribution function (RDF) at finite temperature realizes the estimation of the TEC. Each peak of the RDF is approximated by the Gaussian distribution. The average and variance of the Gaussian distribution are formulated by decomposing the fluctuation of interatomic distance into independent elastic waves. We incorporated two significant anharmonic effects into the method. One is the increase in the averaged interatomic distance caused by large amplitude vibration. The second is the variation in the frequency of elastic waves. As a result, the TECs of fcc and bcc crystals estimated by our method show good agreement with those of MD. Our method enables us to make an interatomic potential that reproduces the TEC at high temperature. We developed the GEAM potential for nickel. The TEC of the fitted potential showed good agreement with experimental data from room temperature to 1000 K. As compared with the original potential, it was found that the third derivative of the wide-range curve was modified, while the zeroth, first and second derivatives were unchanged. This result supports the conventional theory of solid state physics. We believe our analytical method and developed interatomic potential will contribute to future high-temperature material development. (paper)

  15. Vehicle Speed Estimation and Forecasting Methods Based on Cellular Floating Vehicle Data

    Directory of Open Access Journals (Sweden)

    Wei-Kuang Lai

    2016-02-01

    Full Text Available Traffic information estimation and forecasting methods based on cellular floating vehicle data (CFVD) are proposed to analyze the signals (e.g., handovers (HOs), call arrivals (CAs), normal location updates (NLUs) and periodic location updates (PLUs)) from cellular networks. For traffic information estimation, analytic models are proposed to estimate the traffic flow from the numbers of HOs and NLUs and to estimate the traffic density from the numbers of CAs and PLUs. The vehicle speed can then be estimated from the estimated traffic flows and densities. For vehicle speed forecasting, a back-propagation neural network is used to predict the future vehicle speed from the current traffic information (i.e., the vehicle speeds estimated from CFVD). In the experimental environment, this study adopted practical traffic information (i.e., traffic flow and vehicle speed) from the Taiwan Area National Freeway Bureau as the input characteristics of the traffic simulation program, and referred to the mobile station (MS) communication behaviors from Chunghwa Telecom to simulate the traffic information and communication records. The experimental results show that the average accuracy of the vehicle speed forecasting method is 95.72%. Therefore, the proposed methods based on CFVD are suitable for an intelligent transportation system.
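
    The core of the estimation step above is the fundamental traffic relation v = q/k. The sketch below assumes hypothetical calibration constants (a mobile-phone penetration rate and a cell length) standing in for the paper's analytic models of HO, NLU, CA and PLU counts:

```python
def flow_from_handovers(ho_count, window_h, penetration):
    """Traffic flow (veh/h) from handover counts at a cell boundary;
    `penetration` is the assumed fraction of vehicles carrying an
    active mobile station (hypothetical calibration constant)."""
    return ho_count / (window_h * penetration)

def density_from_plu(plu_count, cell_len_km, penetration):
    """Traffic density (veh/km) from periodic location updates
    registered inside a cell of length `cell_len_km`."""
    return plu_count / (cell_len_km * penetration)

def estimate_speed(flow, density):
    # Fundamental relation of traffic flow theory: v = q / k
    return flow / density

q = flow_from_handovers(ho_count=360, window_h=1.0, penetration=0.2)  # veh/h
k = density_from_plu(plu_count=4, cell_len_km=1.0, penetration=0.2)   # veh/km
v = estimate_speed(q, k)                                              # km/h
```

    With these illustrative numbers, 360 handovers per hour at a 20% penetration rate give a flow of 1800 veh/h and a density of 20 veh/km, hence 90 km/h.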

  16. Recovery of seismic attributes using true-amplitude zero-offset migration; Recuperacao de atributos sismicos utilizando a migracao para afastamento nulo em verdadeira amplitude

    Energy Technology Data Exchange (ETDEWEB)

    Vasquez, Angela Cristina Romero

    1999-07-01

    In the present work, a method was developed to extract reflection coefficients after applying true-amplitude migration to zero offset (TA MZO) to synthetic seismic data composed of several common-offset sections. Sorting to the common midpoint (CMP) domain directly provides the conventional amplitude-versus-offset (AVO) curve. A second MZO application with different weights provides an estimate of the incidence angles, transforming AVO into amplitude versus angle (AVA). Four models were developed for this purpose, whose basic difference is their structural complexity. One of these models is based on a Brazilian turbidite reservoir of Neo-Albian age and demonstrates the wide applicability of this methodology to reservoir characterization. Finally, the AVA results were compared with the theoretical AVA, quantifying the relative errors between them.

  17. Bandwidth efficient channel estimation method for airborne hyperspectral data transmission in sparse doubly selective communication channels

    Science.gov (United States)

    Vahidi, Vahid; Saberinia, Ebrahim; Regentova, Emma E.

    2017-10-01

    A channel estimation (CE) method based on compressed sensing (CS) is proposed to estimate the sparse and doubly selective (DS) channel for hyperspectral image transmission from unmanned aircraft vehicles to ground stations. The proposed method comprises three steps: (1) an a priori estimate of the channel by orthogonal matching pursuit (OMP); (2) calculation of the linear minimum mean square error (LMMSE) estimate of the received pilots given the estimated channel; and (3) estimation of the complex amplitudes and Doppler shifts of the channel from the enhanced received pilot data by applying a second round of a CS algorithm. The proposed method is named DS-LMMSE-OMP, and its performance is evaluated by simulating transmission of AVIRIS hyperspectral data over the communication channel and assessing their fidelity for automated analysis after demodulation. The performance of the DS-LMMSE-OMP approach is compared with that of two other state-of-the-art CE methods. The simulation results show up to an 8 dB improvement in bit error rate and a 50% improvement in hyperspectral image classification accuracy.
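
    Step (1) above is a standard orthogonal matching pursuit. The toy sketch below uses a unitary DFT dictionary in place of the paper's pilot-based measurement matrix, and does not reproduce the LMMSE refinement or Doppler estimation of steps (2) and (3):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of the
    dictionary A that best explain y, refitting by least squares."""
    residual = y.astype(complex)
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

# Recover a 2-sparse "channel" through a unitary DFT dictionary (toy case)
n = 16
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)
true = np.zeros(n, dtype=complex)
true[3], true[11] = 1.0 + 0.5j, -0.7j
x_hat = omp(F, F @ true, k=2)
```

    Because the dictionary is unitary, the correlation step reads off the sparse coefficients directly and OMP recovers the channel taps exactly; with real pilot matrices recovery is only approximate and depends on coherence.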

  18. A Copula-Based Method for Estimating Shear Strength Parameters of Rock Mass

    Directory of Open Access Journals (Sweden)

    Da Huang

    2014-01-01

    Full Text Available The shear strength parameters (i.e., the internal friction coefficient f and cohesion c) are very important in rock engineering, especially for the stability analysis and reinforcement design of slopes and underground caverns. In this paper, a probabilistic, Copula-based method is proposed for estimating the shear strength parameters of rock mass. The optimal Copula functions between rock mass quality Q and f, and between Q and c, for marbles are established based on correlation analyses of the results of 12 sets of in situ tests in the exploration adits of the Jinping I-Stage Hydropower Station. Although the Copula functions are derived from in situ tests on marbles, they can be extended to other types of rock mass with similar geological and mechanical properties. For another 9 sets of in situ tests, used as an extended application, the values of f and c estimated by the Copula-based method achieve better accuracy than the results from the Hoek-Brown criterion. Therefore, the proposed Copula-based method is an effective tool for estimating rock strength parameters.

  19. Scattering Amplitudes via Algebraic Geometry Methods

    DEFF Research Database (Denmark)

    Søgaard, Mads

    This thesis develops an enhanced analytic framework for computing multiloop scattering amplitudes in generic gauge theories, including QCD, without Feynman diagrams. The study of multiloop scattering amplitudes is crucial for the new era of precision phenomenology at the Large Hadron Collider (LHC) at CERN. Loop-level scattering amplitudes can be reduced to a basis of linearly independent integrals whose coefficients are extracted from generalized unitarity cuts.

  20. Scattering Amplitudes via Algebraic Geometry Methods

    CERN Document Server

    Søgaard, Mads; Damgaard, Poul Henrik

    This thesis describes recent progress in the understanding of the mathematical structure of scattering amplitudes in quantum field theory. The primary purpose is to develop an enhanced analytic framework for computing multiloop scattering amplitudes in generic gauge theories including QCD without Feynman diagrams. The study of multiloop scattering amplitudes is crucial for the new era of precision phenomenology at the Large Hadron Collider (LHC) at CERN. Loop-level scattering amplitudes can be reduced to a basis of linearly independent integrals whose coefficients are extracted from generalized unitarity cuts. We take advantage of principles from algebraic geometry in order to extend the notion of maximal cuts to a large class of two- and three-loop integrals. This allows us to derive unique and surprisingly compact formulae for the coefficients of the basis integrals. Our results are expressed in terms of certain linear combinations of multivariate residues and elliptic integrals computed from products of ...

  1. Limitations of the time slide method of background estimation

    International Nuclear Information System (INIS)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis

    2010-01-01

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.

  2. Limitations of the time slide method of background estimation

    Energy Technology Data Exchange (ETDEWEB)

    Was, Michal; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Leroy, Nicolas; Robinet, Florent; Vavoulidis, Miltiadis, E-mail: mwas@lal.in2p3.f [LAL, Universite Paris-Sud, CNRS/IN2P3, Orsay (France)

    2010-10-07

    Time shifting the output of gravitational wave detectors operating in coincidence is a convenient way of estimating the background in a search for short-duration signals. In this paper, we show how non-stationary data affect the background estimation precision. We present a method of measuring the fluctuations of the data and computing its effects on a coincident search. In particular, we show that for fluctuations of moderate amplitude, time slides larger than the fluctuation time scales can be used. We also recall how the false alarm variance saturates with the number of time shifts.

  3. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation relating the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The quality of the estimator, in terms of whether it is biased or unbiased, is discussed. The scale factors, which may be used to evaluate the NLOS propagation level, can be estimated by the proposed method, and the AOA seen at base stations may be corrected to some degree. The performance of the proposed method is compared with other hybrid location methods under different NLOS error models and two cell-layout scenarios. It is found that the proposed method deals with NLOS errors effectively, and it is attractive for location estimation in cellular networks.

  4. An improved Q estimation approach: the weighted centroid frequency shift method

    Science.gov (United States)

    Li, Jingnan; Wang, Shangxu; Yang, Dengfeng; Dong, Chunhui; Tao, Yonghui; Zhou, Yatao

    2016-06-01

    Seismic wave propagation in subsurface media suffers from absorption, which can be quantified by the quality factor Q. Accurate estimation of the Q factor is of great importance for the resolution enhancement of seismic data, precise imaging and interpretation, and reservoir prediction and characterization. The centroid frequency shift method (CFS) is currently one of the most commonly used Q estimation methods. However, for seismic data that contain noise, the accuracy and stability of Q extracted using CFS depend on the choice of frequency band. In order to reduce the influence of the frequency band choice and obtain Q with greater precision and robustness, we present an improved CFS Q measurement approach, the weighted CFS method (WCFS), which incorporates a Gaussian weighting coefficient into the calculation procedure of the conventional CFS. The basic idea is to enhance the proportion of advantageous frequencies in the amplitude spectrum and reduce the weight of disadvantageous frequencies. In this method, we first construct a Gaussian function from the centroid frequency and variance of the reference wavelet. Then we employ it as the weighting coefficient for the amplitude spectrum of the original signal. Finally, the conventional CFS is applied to the weighted amplitude spectrum to extract the Q factor. Numerical tests on noise-free synthetic data demonstrate that the WCFS is feasible and efficient, and produces more accurate results than the conventional CFS. Tests on noisy synthetic data indicate that the new method has better anti-noise capability than the CFS. The application to field vertical seismic profile (VSP) data further demonstrates its validity.
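
    The weighting idea can be sketched numerically: build a Gaussian weight from the reference spectrum's centroid and variance, apply it to both spectra, then use the classical centroid-frequency-shift relation Q = pi * t * var / (fc_ref - fc_tgt), which is exact for a Gaussian source spectrum. The 60 Hz centre frequency, bandwidth and Q = 50 below are illustrative, not values from the paper:

```python
import numpy as np

def centroid_and_variance(f, S):
    """Spectral centroid and variance of an amplitude spectrum S(f)."""
    fc = np.sum(f * S) / np.sum(S)
    var = np.sum((f - fc) ** 2 * S) / np.sum(S)
    return fc, var

def q_wcfs(f, S_ref, S_tgt, t):
    """WCFS sketch: Gaussian-weight both spectra using the reference
    centroid/variance, then apply the centroid-frequency-shift relation."""
    fc0, var0 = centroid_and_variance(f, S_ref)
    w = np.exp(-(f - fc0) ** 2 / (2.0 * var0))   # Gaussian weighting
    fc_r, var_r = centroid_and_variance(f, w * S_ref)
    fc_t, _ = centroid_and_variance(f, w * S_tgt)
    return np.pi * t * var_r / (fc_r - fc_t)

# Synthetic check: Gaussian spectrum attenuated with Q = 50 over t = 0.5 s
f = np.linspace(0.0, 200.0, 2001)
S_ref = np.exp(-(f - 60.0) ** 2 / (2.0 * 15.0 ** 2))
Q_true, t = 50.0, 0.5
S_tgt = S_ref * np.exp(-np.pi * f * t / Q_true)   # t* = t/Q attenuation
Q_est = q_wcfs(f, S_ref, S_tgt, t)
```

    For a Gaussian spectrum the Gaussian weight halves the variance but leaves the relation self-consistent, so the synthetic test recovers Q close to 50.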

  5. Construction of multi-Regge amplitudes by the Van Hove--Durand method

    International Nuclear Information System (INIS)

    Morrow, R.A.

    1978-01-01

    The Van Hove--Durand method of deriving Regge amplitudes by summing Feynman tree diagrams is extended to the multi-Regge domain. Using previously developed vertex functions for particles of arbitrary spins, single-, double-, and triple-Regge amplitudes incorporating signature are obtained. Criteria necessary to arrive at unique Regge-pole terms are found. It is also shown how external spins can be included.

  6. Estimating misclassification error: a closer look at cross-validation based methods

    Directory of Open Access Journals (Sweden)

    Ounpraseuth Songthip

    2012-11-01

    Full Text Available Abstract Background To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variance alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.
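
    For reference, the k-fold CV error estimate that the authors recommend can be sketched in a few lines. The nearest-centroid classifier below is a stand-in for illustration, not the classifier used in the study:

```python
import numpy as np

def kfold_error(X, y, k, fit, predict):
    """Estimate misclassification error by k-fold cross-validation:
    average the held-out error rate over k disjoint folds."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        model = fit(X[train], y[train])
        errs.append(np.mean(predict(model, X[f]) != y[f]))
    return float(np.mean(errs))

# Toy classifier: assign each point to the nearest class centroid
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[np.argmin(d, axis=0)]

# Two well-separated Gaussian classes in 2-D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)
err = kfold_error(X, y, k=5, fit=fit, predict=predict)
```

    With this separation the held-out error is small; the point of the record above is that this estimates the classifier's *generalization* error, not the fixed conditional error of one trained classifier.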

  7. Digital baseline estimation method for multi-channel pulse height analyzing

    International Nuclear Information System (INIS)

    Xiao Wuyun; Wei Yixiang; Ai Xianyun

    2005-01-01

    The basic features of digital baseline estimation for multi-channel pulse height analysis are introduced. The weight function of the minimum-noise baseline filter is deduced with functional variational calculus. The frequency response of this filter is also deduced via the Fourier transform, and the influence of its parameters on the amplitude-frequency response characteristics is discussed. With MATLAB software, the noise voltage signal from the charge-sensitive preamplifier is simulated, and the processing effect of minimum-noise digital baseline estimation is verified. According to the results of this research, the digital baseline estimation method can estimate the baseline optimally, and it is well suited to digital multi-channel pulse height analysis. (authors)

  8. Comparison of Prevalence- and Smoking Impact Ratio-Based Methods of Estimating Smoking-Attributable Fractions of Deaths

    Directory of Open Access Journals (Sweden)

    Kyoung Ae Kong

    2016-04-01

    Full Text Available Background: Smoking is a major modifiable risk factor for premature mortality. Estimating the smoking-attributable burden is important for public health policy. Typically, prevalence-based or smoking impact ratio (SIR)-based methods are used to derive estimates, but there is controversy over which method is more appropriate for country-specific estimates. We compared smoking-attributable fractions (SAFs) of deaths estimated by these two methods. Methods: To estimate SAFs in 2012, we used several different prevalence-based approaches using no lag and 10- and 20-year lags. For the SIR-based method, we obtained lung cancer mortality rates from the Korean Cancer Prevention Study (KCPS) and from the United States-based Cancer Prevention Study-II (CPS-II). The relative risks for the diseases associated with smoking were also obtained from these cohort studies. Results: For males, SAFs obtained using KCPS-derived SIRs were similar to those obtained using prevalence-based methods. For females, SAFs obtained using KCPS-derived SIRs were markedly greater than all prevalence-based SAFs. Differences in prevalence-based SAFs by time-lag period were minimal among males, but SAFs obtained using longer-lagged prevalence periods were significantly larger among females. SAFs obtained using CPS-II-based SIRs were lower than KCPS-based SAFs by >15 percentage points for most diseases, with the exceptions of lung cancer and chronic obstructive pulmonary disease. Conclusions: SAFs obtained using prevalence- and SIR-based methods were similar for males. However, neither prevalence-based nor SIR-based methods yielded precise SAFs among females. The characteristics of the study population should be carefully considered when choosing a method to estimate SAFs.
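
    Both families of methods ultimately plug a smoking prevalence into Levin's attributable-fraction formula; the SIR-based variant replaces the observed prevalence with a synthetic one derived from lung cancer mortality rates in a reference cohort. A sketch with illustrative rates (not values from the study):

```python
def levin_saf(p, rr):
    """Levin's attributable fraction for a single exposure level:
    SAF = p*(RR - 1) / (p*(RR - 1) + 1)."""
    x = p * (rr - 1.0)
    return x / (x + 1.0)

def smoking_impact_ratio(c_pop, c_ns, c_s):
    """SIR: synthetic smoking prevalence from lung cancer mortality
    rates of the target population (c_pop), never-smokers (c_ns)
    and smokers (c_s) in a reference cohort."""
    return (c_pop - c_ns) / (c_s - c_ns)

# Illustrative mortality rates per 100,000 (hypothetical numbers)
p_syn = smoking_impact_ratio(c_pop=60.0, c_ns=15.0, c_s=165.0)  # 0.3
saf = levin_saf(p_syn, rr=3.0)                                  # 0.375
```

    The prevalence-based method would use a measured (possibly lagged) prevalence in place of `p_syn`; the two approaches diverge exactly when the synthetic and surveyed prevalences differ, as the record reports for females.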

  9. Exact solution to the Coulomb wave using the linearized phase-amplitude method

    Directory of Open Access Journals (Sweden)

    Shuji Kiyokawa

    2015-08-01

    Full Text Available The author shows that the amplitude equation from the phase-amplitude method of calculating continuum wave functions can be linearized into a 3rd-order differential equation. Using this linearized equation, in the case of the Coulomb potential, the author also shows that the amplitude function has an analytically exact solution represented by means of an irregular confluent hypergeometric function. Furthermore, it is shown that the exact solution for the Coulomb potential reproduces the wave function for free space expressed by the spherical Bessel function. The amplitude equation for the large component of the Dirac spinor is also shown to be the linearized 3rd-order differential equation.

  10. Least Squares Estimate of the Initial Phases in STFT based Speech Enhancement

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Krawczyk-Becker, Martin; Gerkmann, Timo

    2015-01-01

    In this paper, we consider single-channel speech enhancement in the short time Fourier transform (STFT) domain. We suggest improving an STFT phase estimate by estimating the initial phases. The method is based on the harmonic model and a model for the phase evolution over time. The initial phases are estimated by setting up a least squares problem between the noisy phase and the model for phase evolution. Simulations on synthetic and speech signals show a decreased error on the phase when an estimate of the initial phase is included, compared to using the noisy phase as an initialisation. The error on the phase is decreased at input SNRs from -10 to 10 dB. Reconstructing the signal using the clean amplitude, the mean squared error is decreased and the PESQ score is increased.

  11. ON MEASURING AMPLITUDES AND PERIODS OF PHYSICAL PENDULUM MICRO-SWINGS WITH ROLLING-CONTACT BEARING

    Directory of Open Access Journals (Sweden)

    N. N. Riznookaya

    2011-01-01

    Full Text Available The paper considers a method and an instrument for measuring the amplitudes and periods of physical pendulum oscillations with a rolling-contact bearing in the micro-swing regime, where the oscillation amplitude is significantly less than the elastic contact angle. It has been established that the main factors limiting the measuring accuracy are noise in the measuring circuit, base vibration and analog-to-digital conversion. A new measuring methodology based on original data-processing algorithms and well-known methods for statistical processing of a measuring signal is proposed in the paper. The paper contains error estimates for measuring oscillation amplitudes caused by the discreteness of signal conversion in the photoelectric receiver and by the influence of measuring-circuit noise. The applied methodology makes it possible to measure amplitudes with an error of 0.2 arc seconds and periods with an error of 10⁻⁴ s. The original measuring instrument, including mechanical and optical devices and an electric circuit for optical-to-electrical conversion of the measuring signal, is described in the paper.

  12. On the Rankin-Selberg method for higher genus string amplitudes

    CERN Document Server

    Florakis, Ioannis

    2017-01-01

    Closed string amplitudes at genus $h\\leq 3$ are given by integrals of Siegel modular functions on a fundamental domain of the Siegel upper half-plane. When the integrand is of rapid decay near the cusps, the integral can be computed by the Rankin-Selberg method, which consists of inserting an Eisenstein series $E_h(s)$ in the integrand, computing the integral by the orbit method, and finally extracting the residue at a suitable value of $s$. String amplitudes, however, typically involve integrands with polynomial or even exponential growth at the cusps, and a renormalization scheme is required to treat infrared divergences. Generalizing Zagier's extension of the Rankin-Selberg method at genus one, we develop the Rankin-Selberg method for Siegel modular functions of degree 2 and 3 with polynomial growth near the cusps. In particular, we show that the renormalized modular integral of the Siegel-Narain partition function of an even self-dual lattice of signature $(d,d)$ is proportional to a residue of the Langla...

  13. Time-of-flight trigger based on the use of the time-to-amplitude converter

    International Nuclear Information System (INIS)

    Ladygin, V.P.; Man'yakov, P.K.; Reznikov, S.G.

    2000-01-01

    A method for realizing a time-of-flight trigger based on a time-to-amplitude converter is described. Such a trigger has a short decision time and a high efficiency of useful-event selection. (author)

  14. Analytic expressions of amplitudes by the cross-ratio identity method

    International Nuclear Information System (INIS)

    Zhou, Kang

    2017-01-01

    In order to obtain the analytic expression of an amplitude from a generic CHY-integrand, a new algorithm based on the so-called cross-ratio identities has been proposed recently. In this paper, we apply this new approach to a variety of theories including the non-linear sigma model, special Galileon theory, pure Yang-Mills theory, pure gravity, Born-Infeld theory, Dirac-Born-Infeld theory and its extension, Yang-Mills-scalar theory, and Einstein-Maxwell and Einstein-Yang-Mills theory. CHY-integrands of these theories which contain higher-order poles can be calculated conveniently by using the cross-ratio identity method, and all results above have been verified numerically. (orig.)

  15. Analytic expressions of amplitudes by the cross-ratio identity method

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Kang [Zhejiang University, Zhejiang Institute of Modern Physics, Hangzhou (China)

    2017-06-15

    In order to obtain the analytic expression of an amplitude from a generic CHY-integrand, a new algorithm based on the so-called cross-ratio identities has been proposed recently. In this paper, we apply this new approach to a variety of theories including the non-linear sigma model, special Galileon theory, pure Yang-Mills theory, pure gravity, Born-Infeld theory, Dirac-Born-Infeld theory and its extension, Yang-Mills-scalar theory, and Einstein-Maxwell and Einstein-Yang-Mills theory. CHY-integrands of these theories which contain higher-order poles can be calculated conveniently by using the cross-ratio identity method, and all results above have been verified numerically. (orig.)

  16. Laser beam complex amplitude measurement by phase diversity.

    Science.gov (United States)

    Védrenne, Nicolas; Mugnier, Laurent M; Michau, Vincent; Velluet, Marie-Thérèse; Bierent, Rudolph

    2014-02-24

    The control of the optical quality of a laser beam requires a complex amplitude measurement able to deal with strong modulus variations and potentially highly perturbed wavefronts. The method proposed here is an extension of phase diversity to complex amplitude measurements that is effective for highly perturbed beams. Named CAMELOT (Complex Amplitude MEasurement by a Likelihood Optimization Tool), it relies on the acquisition and processing of a few images of the beam section taken along the optical path. The complex amplitude of the beam is retrieved from the images by minimization of a maximum a posteriori error metric between the images and a model of the beam propagation. The analytical formalism of the method and its experimental validation are presented. The modulus of the beam is compared to a measurement of the beam profile, and the phase of the beam is compared to a conventional phase diversity estimate. The precision of the experimental measurements is investigated by numerical simulations.

  17. Perturbative versus Schwinger-propagator method for the calculation of amplitudes in a magnetic field

    International Nuclear Information System (INIS)

    Nieves, Jose F.; Pal, Palash B.

    2006-01-01

    We consider the calculation of amplitudes for processes that take place in a constant background magnetic field, first using the standard method for the calculation of an amplitude in an external field, and second utilizing the Schwinger propagator for charged particles in a magnetic field. We show that there are processes for which the Schwinger-propagator method does not yield the total amplitude. We explain why the two methods yield equivalent results in some cases and indicate when we can expect the equivalence to hold. We show these results in fairly general terms and illustrate them with specific examples as well.

  18. Estimating and correcting the amplitude radiation pattern of a virtual source

    NARCIS (Netherlands)

    Van der Neut, J.; Bakulin, A.

    2009-01-01

    In the virtual source (VS) method, we crosscorrelate seismic recordings at two receivers to create a new data set as if one of these receivers were a virtual source and the other a receiver. We focus on the amplitudes and kinematics of VS data generated by an array of active sources at the surface.

  19. The Software Cost Estimation Method Based on Fuzzy Ontology

    Directory of Open Access Journals (Sweden)

    Plecka Przemysław

    2014-12-01

    Full Text Available In the course of the sales process for Enterprise Resource Planning (ERP) systems, it often turns out that the standard system must be extended or changed (modified) according to specific customer requirements. Therefore, suppliers face the problem of determining the cost of the additional work. Most cost estimation methods bring satisfactory results only at the stage of pre-implementation analysis, but suppliers need to know the estimated cost as early as the stage of trade talks. During contract negotiations, they expect not only information about the cost of the work, but also about the risk of exceeding this cost and the margin of safety. One method that gives more accurate results at the stage of trade talks is based on an ontology of implementation costs. This paper proposes a modification of that method involving the use of fuzzy attributes, classes, instances and relations in the ontology. The result provides not only the expected value of the work, but also the minimum and maximum expected cost and the most likely range of costs. This solution allows suppliers to negotiate contracts effectively and increases the chances of successful completion of the project.

  20. Adaptive Spectral Doppler Estimation

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Jensen, Jørgen Arendt

    2009-01-01

    In this paper, 2 adaptive spectral estimation techniques are analyzed for spectral Doppler ultrasound. The purpose is to minimize the observation window needed to estimate the spectrogram, to provide a better temporal resolution and gain more flexibility when designing the data acquisition sequence. The methods can also provide better quality of the estimated power spectral density (PSD) of the blood signal. Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window is very short. The 2 adaptive techniques are tested and compared with the averaged periodogram (Welch’s method). The blood power spectral capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The blood amplitude and phase estimation technique (BAPES) is based on finding a set...

  1. Amplitude differences least squares method applied to temporal cardiac beat alignment

    International Nuclear Information System (INIS)

    Correa, R O; Laciar, E; Valentinuzzi, M E

    2007-01-01

    High-resolution averaged ECG is an important diagnostic technique in post-infarction and/or chagasic patients with a high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested on high-resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results show that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power-line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
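
    The LSAD criterion can be sketched directly: slide the template over the record and keep the lag that minimizes the sum of squared amplitude differences. The synthetic Gaussian "QRS" lobe and noise level below are illustrative only:

```python
import numpy as np

def lsad_align(record, template):
    """LSAD alignment: choose the lag that minimizes the sum of
    squared amplitude differences between record and template."""
    n, m = len(record), len(template)
    costs = [np.sum((record[s:s + m] - template) ** 2)
             for s in range(n - m + 1)]
    return int(np.argmin(costs))

# Synthetic beat: a Gaussian "QRS" lobe embedded at sample offset 7
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
template = np.exp(-((t - 0.5) ** 2) / 0.01)
record = np.concatenate([np.zeros(7), template, np.zeros(10)])
record += rng.normal(0.0, 0.01, record.size)   # additive white noise
lag = lsad_align(record, template)             # expected offset: 7
```

    Cross-correlation would instead maximize the inner product at each lag; LSAD additionally penalizes amplitude mismatch, which is the property the record above exploits under noise.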

  2. A fast pulse phase estimation method for X-ray pulsar signals based on epoch folding

    Directory of Open Access Journals (Sweden)

    Xue Mengfan

    2016-06-01

    Full Text Available X-ray pulsar-based navigation (XPNAV) is an attractive method for autonomous deep-space navigation in the future. Pulse phase estimation is a key task in XPNAV and its accuracy directly determines the navigation accuracy. State-of-the-art pulse phase estimation techniques either suffer from poor estimation accuracy or involve the maximization of a generally non-convex objective function, resulting in a large computational cost. In this paper, a fast pulse phase estimation method based on epoch folding is presented. The statistical properties of the observed profile obtained through epoch folding are developed. Based on these, we take the joint probability distribution of the observed profile as the likelihood function and utilize a fast Fourier transform-based procedure to estimate the pulse phase. The computational complexity of the proposed estimator is analyzed as well. Experimental results show that the proposed estimator significantly outperforms the currently used cross-correlation (CC) and nonlinear least squares (NLS) estimators, while significantly reducing the computational complexity compared with the NLS and maximum likelihood (ML) estimators.
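
    The folding and FFT-based phase extraction can be illustrated on a noise-free toy profile: the phase offset between the folded profile and the template appears as the argument of the first harmonic of their cross-spectrum. This sketches the general idea, not the paper's exact maximum-likelihood estimator:

```python
import numpy as np

def fold(toas, period, nbins):
    """Epoch folding: histogram photon arrival times modulo the
    pulse period into phase bins."""
    phases = (toas % period) / period
    prof, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return prof.astype(float)

def phase_shift_fft(profile, template):
    """Estimate the pulse phase from the argument of the first
    harmonic of the circular cross-spectrum (one FFT, no grid search)."""
    X = np.fft.rfft(profile)
    T = np.fft.rfft(template)
    return float(-np.angle(X[1] * np.conj(T[1])) / (2.0 * np.pi))

nbins = 64
phi = np.linspace(0.0, 1.0, nbins, endpoint=False)
template = 1.0 + 0.8 * np.cos(2.0 * np.pi * phi)
observed = 1.0 + 0.8 * np.cos(2.0 * np.pi * (phi - 0.15))  # true shift 0.15
est = phase_shift_fft(observed, template)
```

    For a single-harmonic profile the first-harmonic argument recovers the shift exactly; real pulsar profiles would combine several harmonics, weighted as the likelihood dictates.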

  3. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    Science.gov (United States)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) code decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has yielded as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a Pilot-Guided estimation method, a Blind estimation method, and a Simulation-Based Look-Up table. In the Pilot-Guided estimation method, the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and the known sequence, the attached synchronization marker (ASM), and the signal variance is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. The Blind estimation method's maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which suits faster-changing channels better than the previous method; however, it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine the signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value, and the magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with the average magnitude of the deviation.
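The Pilot-Guided estimator described above can be sketched directly from its two moment formulas: amplitude as the mean inner product with the known ASM symbols, variance as the mean squared sample minus the squared amplitude, and the combining ratio as their quotient (per the AWGN definition given in the abstract). This is an illustrative sketch assuming ±1 ASM symbols, not flight code.

```python
import numpy as np

def pilot_guided_combining_ratio(received, asm_symbols):
    """Pilot-guided ML estimates: amplitude = mean inner product of the
    received samples with the known +/-1 ASM symbols; variance = mean
    squared sample minus squared amplitude; ratio = amplitude/variance."""
    amplitude = np.mean(received * asm_symbols)
    variance = np.mean(received ** 2) - amplitude ** 2
    return amplitude, variance, amplitude / variance
```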

  4. Fetal movement detection based on QRS amplitude variations in abdominal ECG recordings.

    Science.gov (United States)

    Rooijakkers, M J; de Lau, H; Rabotti, C; Oei, S G; Bergmans, J W M; Mischi, M

    2014-01-01

    Evaluation of fetal motility can give insight into fetal health, as a strong decrease can be seen as a precursor to fetal death. Typically, the assessment of fetal health by fetal movement detection relies on the maternal perception of fetal activity. The percentage of detected movements is strongly subject-dependent and, even with the undivided attention of the mother, varies between 37% and 88%. Various methods to assist in fetal movement detection exist based on a wide spectrum of measurement techniques. However, these are typically unsuitable for ambulatory or long-term observation. In this paper, a novel method for fetal motion detection is presented based on amplitude and shape changes in the abdominally recorded fetal ECG. The proposed method has a sensitivity and specificity of 0.67 and 0.90, respectively, outperforming alternative fetal ECG-based methods from the literature.

  5. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
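The tapered Pareto form named above can be illustrated by its survival function (probability of exceeding a given amplitude), a power law tapered by an exponential roll-off at the corner amplitude. The parameter names below are illustrative; the paper's exact parameterization may differ.

```python
import math

def tapered_pareto_sf(a, a_min, beta, a_corner):
    """Survival function of a tapered Pareto distribution with power-law
    exponent `beta` and corner amplitude `a_corner`, defined for
    a >= a_min (the observational threshold)."""
    return (a_min / a) ** beta * math.exp((a_min - a) / a_corner)
```

At the threshold the survival probability is 1, and the exponential taper makes amplitudes far beyond the corner rapidly improbable, which is why short catalogs rarely sample them.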

  6. Comparison of evaluation results of piping thermal fatigue evaluation method based on equivalent stress amplitude

    International Nuclear Information System (INIS)

    Suzuki, Takafumi; Kasahara, Naoto

    2012-01-01

    In recent years, reports of failures caused by high-cycle thermal fatigue have increased at both light water reactors and fast breeder reactors. One cause of these failures is turbulent mixing at Tee-junctions of coolant systems, where hot and cold fluids mix. In order to prevent thermal fatigue failures at Tee-junctions, the Japan Society of Mechanical Engineers published a guideline providing an evaluation method for high-cycle thermal fatigue damage in nuclear pipes. In order to justify the safety margin and make the procedure of the guideline concise, this paper proposes a new evaluation method of thermal fatigue damage based on the 'equivalent stress amplitude.' Because this new method makes the evaluation procedure clear and concise, it will contribute to improving the guideline for thermal fatigue evaluation. (author)

  7. Relationship between eruption plume heights and seismic source amplitudes of eruption tremors and explosion events

    Science.gov (United States)

    Mori, A.; Kumagai, H.

    2016-12-01

    It is crucial to analyze and interpret eruption tremors and explosion events for estimating eruption size and understanding eruption phenomena. Kumagai et al. (EPS, 2015) estimated the seismic source amplitudes (As) and cumulative source amplitudes (Is) for eruption tremors and explosion events at Tungurahua, Ecuador, by the amplitude source location (ASL) method, based on the assumption of isotropic S-wave radiation in a high-frequency band (5-10 Hz). They found scaling relations between As and Is for eruption tremors and explosion events. However, the universality of these relations is yet to be verified, and the physical meanings of As and Is are not clear. In this study, we estimated As and Is by the ASL method and analyzed the relations between them for eruption tremors and explosion events at active volcanoes in Japan. We obtained power-law relations between As and Is, in which the powers differed between eruption tremors and explosion events. These relations were consistent with the scaling relations at Tungurahua volcano. We then compared As with the maximum eruption plume heights (H) during the eruption tremors analyzed in this study, and found that H was proportional to the 0.21 power of As. This relation is similar to the plume height model based on the physical process of plume rise, in which H is proportional to the 0.25 power of the volumetric flow rate for plinian eruptions. This suggests that As may correspond to the volumetric flow rate. If we assume a seismic source with volume changes and far-field S-waves, As is proportional to the source volume rate. This proportionality and the plume height model together give the relation that H is proportional to the 0.25 power of As. These results suggest that we may be able to estimate plume heights in real time by estimating As from seismic observations during eruptions.
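The proposed power-law relation can be written as a one-line model. The calibration constant `k` below is hypothetical (it would have to be fit to joint plume-height and As observations); only the 0.25 exponent comes from the abstract's argument.

```python
def plume_height(As, k, exponent=0.25):
    """Plume height H = k * As**0.25 implied by the proportionality
    between H and the 0.25 power of the seismic source amplitude As;
    k is a hypothetical calibration constant."""
    return k * As ** exponent
```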

  8. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    Science.gov (United States)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually require a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.

  9. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, as the failure probability can then be easily obtained by integration over the failure domain. However, efficiently estimating the PDF remains an open problem. The existing fractional-moment-based maximum entropy method provides a very advanced approach to PDF estimation, but its main shortcoming is that it restricts the reliability analysis to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; the UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations required by the proposed method in reliability analysis, which is determined by the UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
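The UT moment computation can be sketched generically: sigma points built from a square root of the (possibly correlated) input covariance are pushed through the performance function and their weighted |g(x)|^q values approximate the fractional moment. This is the standard unscented transformation, not necessarily the paper's exact UT variant.

```python
import numpy as np

def ut_fractional_moment(mean, cov, g, q, kappa=0.0):
    """Approximate the fractional moment E[|g(X)|**q] for Gaussian inputs
    X ~ N(mean, cov) using 2n+1 standard UT sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)  # scaled covariance square root
    pts = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w = np.r_[kappa / (n + kappa), np.full(2 * n, 0.5 / (n + kappa))]
    return float(w @ np.array([np.abs(g(p)) ** q for p in pts]))
```

Note that for a linear g and integer q = 2 the UT reproduces the second moment exactly, which makes it easy to sanity-check an implementation.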

  10. Application of Machine Learning Techniques for Amplitude and Phase Noise Characterization

    DEFF Research Database (Denmark)

    Zibar, Darko; de Carvalho, Luis Henrique Hecker; Piels, Molly

    2015-01-01

    In this paper, tools from the machine learning community, such as Bayesian filtering and expectation-maximization parameter estimation, are presented and employed for laser amplitude and phase noise characterization. We show that phase noise estimation based on Bayesian filtering outperforms...

  11. A METHOD USING GNSS LH-REFLECTED SIGNALS FOR SOIL ROUGHNESS ESTIMATION

    Directory of Open Access Journals (Sweden)

    Y. Jia

    2018-04-01

    Full Text Available Global Navigation Satellite System Reflectometry (GNSS-R) is based on the concept of receiving GPS signals reflected by the ground using a passive receiver. The receiver can be on the ground or installed on a small aircraft or UAV, and collects the electromagnetic field scattered from the surface of the Earth. The received signals are then analyzed to determine the characteristics of the surface. Much research has been reported showing the capability of the GNSS-R technique. However, the way surface roughness impacts the phase and amplitude of the received signals is still a worthwhile subject of study. This paper presents a method that can be used with GNSS-R to estimate surface roughness. First, the data were calculated for specular reflection under the assumption of a flat surface with different permittivities, since the power reflectivity can be evaluated as the ratio of the left-hand (LH) reflected signal to the direct right-hand (RH) signal. Then a semi-empirical roughness model was applied to the data for testing. The results showed that the method can distinguish water from soil surfaces. The sensitivity of the parameters was also analyzed. This indicates that the method can be used for soil roughness estimation from GNSS-R LH reflected signals. As a next step, several experiments need to be done to improve the model and further explore the estimation approach.

  12. A Method Using GNSS LH-Reflected Signals for Soil Roughness Estimation

    Science.gov (United States)

    Jia, Y.; Li, W.; Chen, Y.; Lv, H.; Pei, Y.

    2018-04-01

    Global Navigation Satellite System Reflectometry (GNSS-R) is based on the concept of receiving GPS signals reflected by the ground using a passive receiver. The receiver can be on the ground or installed on a small aircraft or UAV, and collects the electromagnetic field scattered from the surface of the Earth. The received signals are then analyzed to determine the characteristics of the surface. Much research has been reported showing the capability of the GNSS-R technique. However, the way surface roughness impacts the phase and amplitude of the received signals is still a worthwhile subject of study. This paper presents a method that can be used with GNSS-R to estimate surface roughness. First, the data were calculated for specular reflection under the assumption of a flat surface with different permittivities, since the power reflectivity can be evaluated as the ratio of the left-hand (LH) reflected signal to the direct right-hand (RH) signal. Then a semi-empirical roughness model was applied to the data for testing. The results showed that the method can distinguish water from soil surfaces. The sensitivity of the parameters was also analyzed. This indicates that the method can be used for soil roughness estimation from GNSS-R LH reflected signals. As a next step, several experiments need to be done to improve the model and further explore the estimation approach.

  13. Data Based Parameter Estimation Method for Circular-scanning SAR Imaging

    Directory of Open Access Journals (Sweden)

    Chen Gong-bo

    2013-06-01

    Full Text Available The circular-scanning Synthetic Aperture Radar (SAR) is a novel working mode whose image quality is closely related to the accuracy of the imaging parameters, especially given inaccuracies in the platform's actual speed. According to the characteristics of the circular-scanning mode, a new data-based method for estimating the velocities of the radar platform and the scanning angle of the radar antenna is proposed in this paper. Drawing on the basic concept of the Doppler navigation technique, the mathematical model and formulations for the parameter estimation are first developed. The optimal parameter approximation based on the least-squares criterion is then obtained by solving the equations derived from the data processing. The simulation results verify the validity of the proposed scheme.

  14. Estimation of Circadian Body Temperature Rhythm Based on Heart Rate in Healthy, Ambulatory Subjects.

    Science.gov (United States)

    Sim, Soo Young; Joo, Kwang Min; Kim, Han Byul; Jang, Seungjin; Kim, Beomoh; Hong, Seungbum; Kim, Sungwan; Park, Kwang Suk

    2017-03-01

    Core body temperature is a reliable marker for circadian rhythm. As characteristics of the circadian body temperature rhythm change during diverse health problems, such as sleep disorder and depression, body temperature monitoring is often used in clinical diagnosis and treatment. However, the use of current thermometers for circadian rhythm monitoring is impractical in daily life. As heart rate is a physiological signal relevant to thermoregulation, we investigated the feasibility of heart rate monitoring for estimating the circadian body temperature rhythm. Various heart rate parameters and core body temperature were simultaneously acquired in 21 healthy, ambulatory subjects during their routine life. The performance of regression analysis and of the extended Kalman filter on daily body temperature and circadian indicator (mesor, amplitude, and acrophase) estimation was evaluated. For daily body temperature estimation, the mean R-R interval (RRI), mean heart rate (MHR), or normalized MHR provided a mean root-mean-square error of approximately 0.40 °C with both techniques. For mesor estimation, regression analysis showed better performance than the extended Kalman filter. However, the extended Kalman filter, combined with RRI or MHR, provided better accuracy in terms of amplitude and acrophase estimation. We suggest that this noninvasive and convenient method for estimating the circadian body temperature rhythm could reduce discomfort during body temperature monitoring in daily life. This, in turn, could facilitate more clinical studies based on the circadian body temperature rhythm.
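The three circadian indicators named above (mesor, amplitude, acrophase) are conventionally obtained with a cosinor fit, which is linear least squares once the 24 h period is fixed. This is a standard technique, not necessarily the authors' exact estimator.

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Least-squares cosinor fit y ~ M + a*cos(wt) + b*sin(wt);
    returns (mesor, amplitude, acrophase in radians)."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return m, float(np.hypot(a, b)), float(np.arctan2(b, a))
```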

  15. Fast LCMV-based Methods for Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Glentis, George-Othon; Christensen, Mads Græsbøll

    2013-01-01

    ...peaks and require matrix inversions for each point in the search grid. In this paper, we therefore consider fast implementations of LCMV-based fundamental frequency estimators, exploiting the estimators' inherently low displacement rank of the used Toeplitz-like data covariance matrices, using as such either the classic time-domain averaging covariance matrix estimator or, if aiming for an increased spectral resolution, the covariance matrix resulting from the application of the recent iterative adaptive approach (IAA). The proposed exact implementations reduce the required computational complexity by several orders of magnitude, but, as we show, further computational savings can be obtained by the adoption of an approximative IAA-based data covariance matrix estimator, reminiscent of the recently proposed Quasi-Newton IAA technique. Furthermore, it is shown how the considered pitch estimators can...

  16. Accurate Frequency Estimation Based On Three-Parameter Sine-Fitting With Three FFT Samples

    Directory of Open Access Journals (Sweden)

    Liu Xin

    2015-09-01

    Full Text Available This paper presents a simple DFT-based golden section searching algorithm (DGSSA) for single-tone frequency estimation. Because of truncation and discreteness in signal samples, the Fast Fourier Transform (FFT) and Discrete Fourier Transform (DFT) inevitably cause spectral leakage and the fence effect, which lead to low estimation accuracy. This method can improve the estimation accuracy under conditions of a low signal-to-noise ratio (SNR) and a low resolution. The method first uses three FFT samples to determine the frequency searching scope; then, besides the frequency, the estimated values of amplitude, phase and dc component are obtained by minimizing the least-squares (LS) fitting error of a three-parameter sine fit. By setting reasonable stop conditions or the number of iterations, accurate frequency estimation can be realized. The accuracy of this method, when applied to observed single-tone sinusoid samples corrupted by white Gaussian noise, is investigated against the unbiased Cramér-Rao Lower Bound (CRLB) and compared with other methods. The simulation results show that the root-mean-square error (RMSE) of the frequency estimate follows the tendency of the CRLB as the SNR increases, even for a small number of samples. The average RMSE of the frequency estimate is less than 1.5 times the CRLB with SNR = 20 dB and N = 512.
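The core loop of such a method can be sketched as a golden-section search over frequency, where each trial frequency is scored by the residual of a three-parameter (amplitude, phase, dc) linear LS sine fit. This is a sketch of the DGSSA idea under the assumption that the FFT-based bracketing step has already supplied the search interval [f_lo, f_hi].

```python
import numpy as np

def sine_fit_residual(f, t, y):
    """Residual energy of a three-parameter LS sine fit at frequency f."""
    X = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t),
                         np.ones_like(t)])
    r = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return float(r @ r)

def golden_section_freq(t, y, f_lo, f_hi, tol=1e-6):
    """Golden-section search for the frequency minimizing the fit residual."""
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = f_lo, f_hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if sine_fit_residual(c, t, y) < sine_fit_residual(d, t, y):
            b = d
        else:
            a = c
    return 0.5 * (a + b)
```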

  17. Inertial sensor-based methods in walking speed estimation: a systematic review.

    Science.gov (United States)

    Yang, Shuozhi; Li, Qingguo

    2012-01-01

    Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to the improved performance and decreasing cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was done in the following electronic engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.

  18. Inertial Sensor-Based Methods in Walking Speed Estimation: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Qingguo Li

    2012-05-01

    Full Text Available Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to the improved performance and decreasing cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was done in the following electronic engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.

  19. Estimation of the flow resistances exerted in coronary arteries using a vessel length-based method.

    Science.gov (United States)

    Lee, Kyung Eun; Kwon, Soon-Sung; Ji, Yoon Cheol; Shin, Eun-Seok; Choi, Jin-Ho; Kim, Sung Joon; Shim, Eun Bo

    2016-08-01

    Flow resistances exerted in the coronary arteries are the key parameters for the image-based computer simulation of coronary hemodynamics. The resistances depend on the anatomical characteristics of the coronary system. A simple and reliable estimation of the resistances is a compulsory procedure to compute the fractional flow reserve (FFR) of stenosed coronary arteries, an important clinical index of coronary artery disease. The cardiac muscle volume reconstructed from computed tomography (CT) images has been used to assess the resistance of the feeding coronary artery (muscle volume-based method). In this study, we estimate the flow resistances exerted in coronary arteries by using a novel method. Based on a physiological observation that longer coronary arteries have more daughter branches feeding a larger mass of cardiac muscle, the method measures the vessel lengths from coronary angiogram or CT images (vessel length-based method) and predicts the coronary flow resistances. The underlying equations are derived from the physiological relation among flow rate, resistance, and vessel length. To validate the present estimation method, we calculate the coronary flow division over coronary major arteries for 50 patients using the vessel length-based method as well as the muscle volume-based one. These results are compared with the direct measurements in a clinical study. Further proving the usefulness of the present method, we compute the coronary FFR from the images of optical coherence tomography.

  20. Power System Real-Time Monitoring by Using PMU-Based Robust State Estimation Method

    DEFF Research Database (Denmark)

    Zhao, Junbo; Zhang, Gexiang; Das, Kaushik

    2016-01-01

    Accurate real-time states provided by the state estimator are critical for power system reliable operation and control. This paper proposes a novel phasor measurement unit (PMU)-based robust state estimation method (PRSEM) to real-time monitor a power system under different operation conditions...... the system real-time states with good robustness and can address several kinds of BD.......-based bad data (BD) detection method, which can handle the smearing effect and critical measurement errors, is presented. We evaluate PRSEM by using IEEE benchmark test systems and a realistic utility system. The numerical results indicate that, in short computation time, PRSEM can effectively track...

  1. An extensive study on a simple method estimating response spectrum based on a simulated spectrum

    International Nuclear Information System (INIS)

    Sato, H.; Komazaki, M.; Ohori, M.

    1977-01-01

    The basic procedure is briefly described in the paper. Corresponding to each peak of the response spectrum of the earthquake motion, the component at the respective predominant ground period was taken. The acceleration amplification factor of a building structure for each such predominant period was obtained from the spectrum of a simulated earthquake with a single predominant period. The weight of each component in summing these amplification factors was chosen to satisfy the ratio among the magnitudes of the peaks of the spectrum. The summation was made by the principle of the square root of the sum of squares. The procedure was easily applied to estimate the spectrum of a building appendage structure. The method is then extended to a multi-storey building structure and its appendage. Analysis is made for a two-storey structural system whose first mode has an upper-to-lower mass amplitude ratio of 2 to 1, so that the mode shape is an inverted triangle. The behavior of the system is treated in normal coordinates. The amplification factors due to two predominant ground periods are estimated for the system at its first natural frequency; in this procedure the method developed for the single-degree-of-freedom system is directly applicable. The same method is used for the system at its second natural frequency. The amplification factors thus estimated for the modes of the respective natural frequencies are then combined, again by the principle of the square root of the sum of squares, after multiplying each mode's factor by its excitation coefficient.
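The square-root-of-sum-of-squares (SRSS) combination used at both summation steps above is a one-liner:

```python
import math

def srss_combine(factors):
    """Square-root-of-sum-of-squares (SRSS) combination of modal
    amplification factors, the summation principle used in the procedure."""
    return math.sqrt(sum(f * f for f in factors))
```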

  2. Pilot Test of a Novel Method for Assessing Community Response to Low-Amplitude Sonic Booms

    Science.gov (United States)

    Fidell, Sanford; Horonjeff, Richard D.; Harris, Michael

    2012-01-01

    A pilot test of a novel method for assessing residents' annoyance to sonic booms was performed. During a two-week period, residents of the base housing area at Edwards Air Force Base provided data on their reactions to sonic booms using Smartphone-based interviews. Noise measurements were conducted at the same time. The report presents information about the data collection methods and about the test participants' reactions to low-amplitude sonic booms. The latter information should not be viewed as definitive for several reasons: it may not be reliably generalized to the wider U.S. residential population (because it was not derived from a representative random sample), and the sample itself was not large.

  3. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    Science.gov (United States)

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

    Small molecules interact with their protein targets in surface cavities known as binding pockets. Pocket-based approaches are very useful in all phases of drug design. Their first step is estimating the binding pocket from the protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and for developing pharmacological profiling models. We found that pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results, which can have an impact on the detected correspondence between pocket and ligand profiles. We thus highlight the importance of the choice of pocket-estimation method in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A comparative study of amplitude calibrations for the East Asia VLBI Network: A priori and template spectrum methods

    Science.gov (United States)

    Cho, Ilje; Jung, Taehyun; Zhao, Guang-Yao; Akiyama, Kazunori; Sawada-Satoh, Satoko; Kino, Motoki; Byun, Do-Young; Sohn, Bong Won; Shibata, Katsunori M.; Hirota, Tomoya; Niinuma, Kotaro; Yonekura, Yoshinori; Fujisawa, Kenta; Oyama, Tomoaki

    2017-12-01

    We present the results of a comparative study of amplitude calibrations for the East Asia VLBI Network (EAVN) at 22 and 43 GHz using two different methods of an "a priori" and a "template spectrum", particularly on lower declination sources. Using observational data sets of early EAVN observations, we investigated the elevation-dependence of the gain values at seven stations of the KaVA (KVN and VERA Array) and three additional telescopes in Japan (Takahagi 32 m, Yamaguchi 32 m, and Nobeyama 45 m). By comparing the independently obtained gain values based on these two methods, we found that the gain values from each method were consistent within 10% at elevations higher than 10°. We also found that the total flux densities of two images produced from the different amplitude calibrations were in agreement within 10% at both 22 and 43 GHz. By using the template spectrum method, furthermore, the additional radio telescopes can participate in KaVA (i.e., EAVN), giving a notable sensitivity increase. Therefore, our results will constrain the detailed conditions in order to measure the VLBI amplitude reliably using EAVN, and discuss the potential of possible expansion to telescopes comprising EAVN.

  5. Separation of musical instruments based on amplitude and frequency comodulation

    Science.gov (United States)

    Jacobson, Barry D.; Cauwenberghs, Gert; Quatieri, Thomas F.

    2002-05-01

    In previous work, amplitude comodulation was investigated as a basis for monaural source separation. Amplitude comodulation refers to similarities in amplitude envelopes of individual spectral components emitted by particular types of sources. In many types of musical instruments, amplitudes of all resonant modes rise/fall, and start/stop together during the course of normal playing. We found that under certain well-defined conditions, a mixture of constant frequency, amplitude comodulated sources can unambiguously be decomposed into its constituents on the basis of these similarities. In this work, system performance was improved by relaxing the constant frequency requirement. String instruments, for example, which are normally played with vibrato, are both amplitude and frequency comodulated sources, and could not be properly tracked under the constant frequency assumption upon which our original algorithm was based. Frequency comodulation refers to similarities in frequency variations of individual harmonics emitted by these types of sources. The analytical difficulty is in defining a representation of the source which properly tracks frequency varying components. A simple, fixed filter bank can only track an individual spectral component for the duration in which it is within the passband of one of the filters. Alternatives are therefore explored which are amenable to real-time implementation.
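The amplitude-comodulation cue described above, grouping spectral components whose envelopes rise and fall together, can be sketched as a greedy clustering on envelope correlation. This is an illustrative sketch, not the authors' algorithm, and the threshold is a hypothetical tuning parameter.

```python
import numpy as np

def group_by_envelope_correlation(envelopes, threshold=0.9):
    """Greedily assign a common-source label to spectral components whose
    amplitude envelopes are strongly correlated; returns one label per
    component."""
    n = len(envelopes)
    labels = [-1] * n
    next_label = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = next_label
        for j in range(i + 1, n):
            if labels[j] == -1 and \
               np.corrcoef(envelopes[i], envelopes[j])[0, 1] >= threshold:
                labels[j] = next_label
        next_label += 1
    return labels
```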

  6. Two-Loop Splitting Amplitudes

    International Nuclear Information System (INIS)

    Bern, Z.

    2004-01-01

    Splitting amplitudes govern the behavior of scattering amplitudes as the momenta of external legs become collinear. In this talk we outline the calculation of two-loop splitting amplitudes via the unitarity sewing method. This method retains the simple factorization properties of light-cone gauge, but avoids the need for prescriptions such as the principal value or Mandelstam-Leibbrandt ones. The encountered loop momentum integrals are then evaluated using integration-by-parts and Lorentz invariance identities. We outline a variety of applications for these splitting amplitudes.

  7. Two-loop splitting amplitudes

    International Nuclear Information System (INIS)

    Bern, Z.; Dixon, L.J.; Kosower, D.A.

    2004-01-01

    Splitting amplitudes govern the behavior of scattering amplitudes as the momenta of external legs become collinear. In this talk we outline the calculation of two-loop splitting amplitudes via the unitarity sewing method. This method retains the simple factorization properties of light-cone gauge, but avoids the need for prescriptions such as the principal value or Mandelstam-Leibbrandt ones. The encountered loop momentum integrals are then evaluated using integration-by-parts and Lorentz invariance identities. We outline a variety of applications for these splitting amplitudes.

  8. Spectrophotometric method for estimation of amiloride in bulk and tablet dosage form

    Directory of Open Access Journals (Sweden)

    Aitha Vijaya Lakshmi

    2015-01-01

    Full Text Available Introduction: Amiloride is chemically 3,5-diamino-6-chloro-N-(diaminomethylene)pyrazine-2-carboxamide. It is used in the management of congestive heart failure and is available as Amifru and Amimide tablets. It can cause adverse effects such as nausea, diarrhea and dizziness. Materials: 0.1 N hydrochloric acid, 0.1 N sodium hydroxide and a 1 mg/ml amiloride drug solution were required. Spectral and absorbance measurements were made using an ELICO UV-160 double beam spectrophotometer. Method: Amiloride drug solutions in the concentration range of 25 to 125 µg/ml in 0.1 N HCl medium were scanned over the wavelength range of 235-320 nm against a blank prepared in 0.1 N NaOH solution. Two wavelengths are selected, one at the positive peak (245 nm) and another at the negative peak (290 nm), and the amplitude is calculated from these values. Results and Discussion: The sum of the absolute absorbance values at these two wavelengths is called the amplitude, and the amplitude is proportional to the amount of drug. High accuracy, reproducibility and low t-values were obtained from the calibration curve of amplitude versus amount of drug. The proposed method is therefore simple and rapid, and can be successfully adopted for the estimation of amiloride.
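
    The amplitude-versus-concentration calibration described above is an ordinary linear fit. A minimal sketch in Python, with made-up absorbance values standing in for the calibration data (the record gives no numbers):

    ```python
    import numpy as np

    # Hypothetical calibration data: amplitude = |A(245 nm)| + |A(290 nm)|
    conc = np.array([25.0, 50.0, 75.0, 100.0, 125.0])           # µg/ml
    amplitude = np.array([0.112, 0.221, 0.335, 0.448, 0.553])   # invented sums of absolute absorbances

    # Least-squares calibration line: amplitude = slope * conc + intercept
    slope, intercept = np.polyfit(conc, amplitude, 1)

    def estimate_conc(amp):
        """Invert the calibration line to estimate the drug amount from an amplitude."""
        return (amp - intercept) / slope

    print(estimate_conc(0.335))   # should land close to 75 µg/ml
    ```

    Because the amplitude is proportional to the amount of drug, the inverse of the fitted line recovers the unknown concentration from a measured amplitude.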

  9. A postprocessing method based on high-resolution spectral estimation for FDTD calculation of phononic band structures

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li Jianbao; Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-05-15

    If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by the postprocessing of sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on the high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.
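
    The Yule-Walker postprocessing idea can be sketched independently of FDTD: fit an autoregressive (AR) model to a short time series and read the eigenfrequencies off the peaks of the AR spectrum. The synthetic signal and the model order below are arbitrary choices for illustration, not values from the paper:

    ```python
    import numpy as np

    def yule_walker(x, order):
        """Estimate AR coefficients by solving the Yule-Walker normal equations."""
        x = x - x.mean()
        n = len(x)
        # Biased autocorrelation estimates r[0..order]
        r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:])

    def ar_spectrum(a, freqs, fs):
        """AR power spectral density (up to a scale factor)."""
        w = 2 * np.pi * freqs / fs
        k = np.arange(1, len(a) + 1)
        denom = np.abs(1 - np.exp(-1j * np.outer(w, k)) @ a) ** 2
        return 1.0 / denom

    fs = 1000.0
    t = np.arange(2000) / fs
    rng = np.random.default_rng(0)
    # Two "eigenfrequencies" at 123 Hz and 287 Hz buried in a little noise
    x = np.sin(2 * np.pi * 123 * t) + 0.3 * np.sin(2 * np.pi * 287 * t)
    x += 0.05 * rng.normal(size=t.size)

    a = yule_walker(x, order=8)
    freqs = np.linspace(1, 499, 4000)
    psd = ar_spectrum(a, freqs, fs)
    peak = freqs[np.argmax(psd)]          # dominant eigenfrequency estimate
    print(peak)
    ```

    The sharp AR poles are what give the method its resolution advantage over a plain FFT periodogram when the time series is short.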

  10. A postprocessing method based on high-resolution spectral estimation for FDTD calculation of phononic band structures

    International Nuclear Information System (INIS)

    Su Xiaoxing; Li Jianbao; Wang Yuesheng

    2010-01-01

    If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by the postprocessing of sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on the high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.

  11. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
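
    The core ER iteration alternates between a Fourier-magnitude constraint and a known-samples constraint. A 1-D toy sketch under simplifying assumptions (the magnitude is taken as known exactly, and the paper's patch-selection and magnitude-estimation steps are omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true = rng.normal(size=32)                        # toy "patch"
    known = np.ones(32, bool)
    known[10:15] = False                              # five missing samples
    mag = np.abs(np.fft.fft(true))                    # Fourier magnitude, assumed known here

    x = np.where(known, true, 0.0)                    # initial guess: zeros in the gap
    x0 = x.copy()
    for _ in range(500):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))            # Fourier constraint: impose the magnitude
        x = np.real(np.fft.ifft(X))
        x[known] = true[known]                        # object constraint: keep known samples

    gap_err0 = np.abs(x0[~known] - true[~known]).max()
    gap_err = np.abs(x[~known] - true[~known]).max()
    print(gap_err0, gap_err)
    ```

    Each pass projects onto one constraint set and then the other, so the residual is non-increasing; in the paper the magnitude itself is additionally estimated from similar known patches rather than given.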

  12. Experimental demonstration of OFDM/OQAM transmission with DFT-based channel estimation for visible laser light communications

    Science.gov (United States)

    He, Jing; Shi, Jin; Deng, Rui; Chen, Lin

    2017-08-01

    Recently, visible light communication (VLC) based on light-emitting diodes (LEDs) has been considered as a candidate technology for fifth-generation (5G) communications: VLC is free of electromagnetic interference and can simplify the integration of VLC into heterogeneous wireless networks. Because the data rate of a VLC system is limited by the low pumping efficiency, small output power and narrow modulation bandwidth of LEDs, visible laser light communication (VLLC) with laser diodes (LDs) has attracted more attention. In addition, orthogonal frequency division multiplexing/offset quadrature amplitude modulation (OFDM/OQAM) is currently attracting attention in optical communications. Because it requires no cyclic prefix (CP) and uses pulse shapes that are well localized in the time-frequency domain, it can achieve high spectral efficiency. Moreover, OFDM/OQAM has lower out-of-band power leakage, which increases the system's robustness against inter-carrier interference (ICI) and frequency offset. In this paper, a discrete Fourier transform (DFT)-based channel estimation scheme combined with the interference approximation method (IAM) is proposed and experimentally demonstrated for a VLLC OFDM/OQAM system. The performance of the VLLC OFDM/OQAM system with and without DFT-based channel estimation is investigated. Moreover, the proposed DFT-based channel estimation scheme and the intra-symbol frequency-domain averaging (ISFA)-based method are also compared for the VLLC OFDM/OQAM system. The experimental results show that the EVM performance using the DFT-based channel estimation scheme is improved by about 3 dB compared with the conventional IAM method. In addition, the DFT-based channel estimation scheme suppresses channel noise more effectively than the ISFA-based method.

  13. A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.

    Science.gov (United States)

    Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang

    2015-11-13

    Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a means to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short range wireless communication are another effective way to perform vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results were obtained with the International Telecommunications Union (ITU) vehicular multipath channel, and show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
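
    The correlate-then-threshold detector can be illustrated with a toy baseband sketch; the ±1 preamble, the noise level, and the skewness-scaled threshold rule are stand-ins, not the 802.11p specifics, and the group-summing step is skipped for this single-path example:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    preamble = rng.choice([-1.0, 1.0], size=64)      # stand-in for the short preamble
    delay = 137                                      # true propagation delay in samples
    rx = np.zeros(1000)
    rx[delay:delay + 64] = preamble
    rx += 0.3 * rng.normal(size=rx.size)             # additive channel noise

    # Step 1: cross-correlate the received signal with the known preamble
    corr = np.abs(np.correlate(rx, preamble, mode="valid"))

    # Step 2 (summing correlated results in groups) is omitted in this single-path sketch.

    # Step 3: dynamic threshold derived from the skewness of the correlation values
    m, s = corr.mean(), corr.std()
    skew = np.mean(((corr - m) / s) ** 3)
    thresh = m + (3 + skew) * s                      # hypothetical skewness-scaled rule
    candidates = np.flatnonzero(corr > thresh)
    est_delay = candidates[np.argmax(corr[candidates])]
    print(est_delay)
    ```

    A skewness-driven threshold adapts to how heavy-tailed the correlation noise floor is, which is why this style of detector holds up at low SNR.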

  14. A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks

    Directory of Open Access Journals (Sweden)

    Xuerong Cui

    2015-11-01

    Full Text Available Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a means to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short range wireless communication are another effective way to perform vehicle positioning or vehicle ranging. IEEE 802.11p is a new real-time short range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results were obtained with the International Telecommunications Union (ITU) vehicular multipath channel, and show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.

  15. Feature-Based Correlation and Topological Similarity for Interbeat Interval Estimation Using Ultrawideband Radar.

    Science.gov (United States)

    Sakamoto, Takuya; Imasaka, Ryohei; Taki, Hirofumi; Sato, Toru; Yoshioka, Mototaka; Inoue, Kenichi; Fukuda, Takeshi; Sakai, Hiroyuki

    2016-04-01

    The objectives of this paper are to propose a method that can accurately estimate the human heart rate (HR) using an ultrawideband (UWB) radar system, and to determine the performance of the proposed method through measurements. The proposed method uses the feature points of a radar signal to estimate the HR efficiently and accurately. Fourier- and periodicity-based methods are inappropriate for estimation of instantaneous HRs in real time because heartbeat waveforms are highly variable, even within the beat-to-beat interval. We define six radar waveform features that enable correlation processing to be performed quickly and accurately. In addition, we propose a feature topology signal that is generated from a feature sequence without using amplitude information. This feature topology signal is used to find unreliable feature points, and thus, to suppress inaccurate HR estimates. Measurements were taken using UWB radar, while simultaneously performing electrocardiography measurements in an experiment that was conducted on nine participants. The proposed method achieved an average root-mean-square error in the interbeat interval of 7.17 ms for the nine participants. The results demonstrate the effectiveness and accuracy of the proposed method. The significance of this study for biomedical research is that the proposed method will be useful in the realization of a remote vital signs monitoring system that enables accurate estimation of HR variability, which has been used in various clinical settings for the treatment of conditions such as diabetes and arterial hypertension.

  16. Method for improving the gamma-transition cascade spectra amplitude resolution during coincidence code computerized processing

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.

    1984-01-01

    A method of unfolding the differential γ-cascade spectra during radiative capture of slow neutrons, based on the computerized processing of measurements performed by means of a spectrometer with two Ge(Li) detectors, is suggested. The efficiency of the method is illustrated using the spectrum of the ³⁵Cl(n,γ) reaction corresponding to the 8580 keV peak. It is shown that the above approach makes it possible to improve the resolution by a factor of 1.2-2.6 without decreasing the registration efficiency within the framework of the method of coincidence pulse amplitude summation.

  17. An improved principal component analysis based region matching method for fringe direction estimation

    Science.gov (United States)

    He, A.; Quan, C.

    2018-04-01

    The principal component analysis (PCA) and region matching combined method is effective for fringe direction estimation. However, its mask construction algorithm for region matching fails in some circumstances, and its algorithm for conversion of orientation to direction in mask areas is computationally heavy and not optimized. We propose an improved PCA based region matching method for fringe direction estimation, which includes an improved and robust mask construction scheme, and a fast, optimized orientation-to-direction conversion algorithm for the mask areas. Along with the estimated fringe direction map, the fringe pattern filtered by automatic selective reconstruction modification and enhanced fast empirical mode decomposition (ASRm-EFEMD) is used for the Hilbert spiral transform (HST) to demodulate the phase. Subsequently, the windowed Fourier ridge (WFR) method is used to refine the phase. The robustness and effectiveness of the proposed method are demonstrated on both simulated and experimental fringe patterns.

  18. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  19. Amplitude-modulated fiber-ring laser

    DEFF Research Database (Denmark)

    Caputo, J. G.; Clausen, Carl A. Balslev; Sørensen, Mads Peter

    2000-01-01

    Soliton pulses generated by a fiber-ring laser are investigated by numerical simulation and perturbation methods. The mathematical modeling is based on the nonlinear Schrödinger equation with perturbative terms. We show that active mode locking with an amplitude modulator leads to self-starting of stable solitonic pulses from small random noise, provided the modulation depth is small. The perturbative analysis leads to a nonlinear coupled return map for the amplitude, phase, and position of the soliton pulses circulating in the fiber-ring laser. We established the validity of this approach...

  20. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is shown that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect safety trip functions. The software reliability can then be estimated from a model equation, and one method of obtaining the software reliability is proposed.

  1. Euclidean to Minkowski Bethe-Salpeter amplitude and observables

    International Nuclear Information System (INIS)

    Carbonell, J.; Frederico, T.; Karmanov, V.A.

    2017-01-01

    We propose a method to reconstruct the Bethe-Salpeter amplitude in Minkowski space given the Euclidean Bethe-Salpeter amplitude - or alternatively the light-front wave function - as input. The method is based on the numerical inversion of the Nakanishi integral representation and computation of the corresponding weight function. This inversion procedure is, in general, rather unstable, and we propose several ways to considerably reduce the instabilities. In terms of the Nakanishi weight function, one can easily compute the BS amplitude, the LF wave function and the electromagnetic form factor. The latter are very stable in spite of residual instabilities in the weight function. This procedure allows both to continue the Euclidean BS solution into Minkowski space and to obtain a BS amplitude from a LF wave function. (orig.)

  2. Euclidean to Minkowski Bethe-Salpeter amplitude and observables

    Energy Technology Data Exchange (ETDEWEB)

    Carbonell, J. [Universite Paris-Sud, IN2P3-CNRS, Institut de Physique Nucleaire, Orsay Cedex (France); Frederico, T. [Instituto Tecnologico de Aeronautica, DCTA, Sao Jose dos Campos (Brazil); Karmanov, V.A. [Lebedev Physical Institute, Moscow (Russian Federation)

    2017-01-15

    We propose a method to reconstruct the Bethe-Salpeter amplitude in Minkowski space given the Euclidean Bethe-Salpeter amplitude - or alternatively the light-front wave function - as input. The method is based on the numerical inversion of the Nakanishi integral representation and computation of the corresponding weight function. This inversion procedure is, in general, rather unstable, and we propose several ways to considerably reduce the instabilities. In terms of the Nakanishi weight function, one can easily compute the BS amplitude, the LF wave function and the electromagnetic form factor. The latter are very stable in spite of residual instabilities in the weight function. This procedure allows both to continue the Euclidean BS solution into Minkowski space and to obtain a BS amplitude from a LF wave function. (orig.)

  3. An encryption scheme based on phase-shifting digital holography and amplitude-phase disturbance

    International Nuclear Information System (INIS)

    Hua Li-Li; Xu Ning; Yang Geng

    2014-01-01

    In this paper, we propose an encryption scheme based on phase-shifting digital interferometry. Starting from the original system framework, we add a random amplitude mask and replace the Fourier transform by the Fresnel transform. We develop a mathematical model and give a discrete formula based on the scheme, which makes the scheme easy to implement in computer programming. The experimental results show that the improved system performs better in terms of security than the original encryption method. Moreover, it demonstrates good robustness against noise and shearing.

  4. Double logarithmic asymptotics of quark amplitudes with flavour exchange

    International Nuclear Information System (INIS)

    Kirschner, R.

    1982-01-01

    Results on the quark scattering and annihilation amplitudes in the Regge region are presented. The perturbative contributions to those amplitudes in the double logarithmic approximation are calculated. In the calculations a method based on dispersion relations and gauge invariance is used. (M.F.W.)

  5. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien; Claudel, Christian G.

    2015-01-01

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  6. System and method for traffic signal timing estimation

    KAUST Repository

    Dumazert, Julien

    2015-12-30

    A method and system for estimating traffic signals. The method and system can include constructing trajectories of probe vehicles from GPS data emitted by the probe vehicles, estimating traffic signal cycles, combining the estimates, and computing the traffic signal timing by maximizing a scoring function based on the estimates. Estimating traffic signal cycles can be based on transition times of the probe vehicles starting after a traffic signal turns green.

  7. Amplitude Modulated Sinusoidal Signal Decomposition for Audio Coding

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jacobson, A.; Andersen, S. V.

    2006-01-01

    In this paper, we present a decomposition for sinusoidal coding of audio, based on an amplitude modulation of sinusoids via a linear combination of arbitrary basis vectors. The proposed method, which incorporates a perceptual distortion measure, is based on a relaxation of a nonlinear least-squares minimization. Rate-distortion curves and listening tests show that, compared to a constant-amplitude sinusoidal coder, the proposed decomposition offers perceptually significant improvements in critical transient signals.

  8. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL) systems. The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, a maximum a posteriori probability (MAP) metric based on a priori information is used for its refinement, and least squares (LS) or minimum mean square error (MMSE) estimation is then used for the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate than other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard, which utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.

  9. A New Method for Estimating the Number of Harmonic Components in Noise with Application in High Resolution Radar

    Directory of Open Access Journals (Sweden)

    Radoi Emanuel

    2004-01-01

    Full Text Available In order to operate properly, superresolution methods based on orthogonal subspace decomposition, such as multiple signal classification (MUSIC) or estimation of signal parameters by rotational invariance techniques (ESPRIT), need an accurate estimate of the signal subspace dimension, that is, of the number of harmonic components that are superimposed and corrupted by noise. This estimation is particularly difficult when the S/N ratio is low and the statistical properties of the noise are unknown. Moreover, in some applications such as radar imagery, it is very important to avoid underestimating the number of harmonic components, which are associated with the target scattering centers. In this paper, we propose an effective method for estimating the signal subspace dimension which is able to operate against colored noise with performance superior to that exhibited by the classical information theoretic criteria of Akaike and Rissanen. The capabilities of the new method are demonstrated through computer simulations, and it is shown that, compared to three other methods, it achieves the best trade-off from four points of view: S/N ratio in white noise, frequency band of colored noise, dynamic range of the harmonic component amplitudes, and computing time.
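
    The classical information-theoretic baseline the paper measures against (the Wax-Kailath MDL rule applied to covariance eigenvalues) can be sketched as follows; the signal, the noise level, and the subarray length are arbitrary illustration choices:

    ```python
    import numpy as np

    def mdl_order(eigvals, n_snapshots):
        """Wax-Kailath MDL estimate of the number of harmonic components."""
        p = len(eigvals)
        lam = np.sort(eigvals)[::-1]
        scores = []
        for k in range(p):
            tail = lam[k:]                       # candidate noise eigenvalues
            geo = np.exp(np.mean(np.log(tail)))  # geometric mean
            L = -n_snapshots * (p - k) * np.log(geo / np.mean(tail))
            penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
            scores.append(L + penalty)
        return int(np.argmin(scores))

    rng = np.random.default_rng(3)
    n = np.arange(2000)
    # Two complex exponentials in white noise: the true model order is 2
    x = np.exp(2j * np.pi * 0.11 * n) + 0.8 * np.exp(2j * np.pi * 0.27 * n)
    x += 0.1 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))

    p = 8                                        # subarray (covariance) dimension
    X = x.reshape(-1, p)                         # non-overlapping snapshots
    R = X.conj().T @ X / len(X)                  # sample covariance matrix
    eig = np.linalg.eigvalsh(R).real
    print(mdl_order(eig, len(X)))
    ```

    The MDL score trades the flatness of the trailing eigenvalues against a complexity penalty; the paper's contribution is a criterion that keeps working when the noise is colored, where this rule degrades.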

  10. Validity and feasibility of a satellite imagery-based method for rapid estimation of displaced populations.

    Science.gov (United States)

    Checchi, Francesco; Stewart, Barclay T; Palmer, Jennifer J; Grundy, Chris

    2013-01-23

    Estimating the size of forcibly displaced populations is key to documenting their plight and allocating sufficient resources to their assistance, but is often not done, particularly during the acute phase of displacement, due to methodological challenges and inaccessibility. In this study, we explored the potential use of very high resolution satellite imagery to remotely estimate forcibly displaced populations. Our method consisted of multiplying (i) manual counts of assumed residential structures on a satellite image and (ii) estimates of the mean number of people per structure (structure occupancy) obtained from publicly available reports. We computed population estimates for 11 sites in Bangladesh, Chad, Democratic Republic of Congo, Ethiopia, Haiti, Kenya and Mozambique (six refugee camps, three internally displaced persons' camps and two urban neighbourhoods with a mixture of residents and displaced), ranging in population from 1,969 to 90,547, and compared these to "gold standard" reference population figures from census or other robust methods. Structure counts by independent analysts were reasonably consistent. Between one and 11 occupancy reports were available per site, and most of these reported people per household rather than per structure. The imagery-based method's precision relative to reference population figures varied with site layout. For each site, estimates were produced in 2-5 working person-days. In settings with clearly distinguishable individual structures, the remote, imagery-based method had reasonable accuracy for the purposes of rapid estimation, was simple and quick to implement, and would likely perform better in more current applications. However, it may have insurmountable limitations in settings featuring connected buildings or shelters, a complex pattern of roofs and multi-level buildings. Based on these results, we discuss possible ways forward for the method's development.
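
    The estimation arithmetic itself is just the product of the two quantities, plus a range from the occupancy uncertainty. A sketch with invented numbers (no figures from the study are reused):

    ```python
    # Hypothetical inputs, not taken from the study
    structures = 3200                    # manual count of assumed residential structures
    occupancy_mean = 5.2                 # mean persons per structure, from occupancy reports
    occupancy_low, occupancy_high = 4.5, 6.0   # plausible occupancy range

    # Point estimate and bounds: population = structures x occupancy
    estimate = structures * occupancy_mean
    bounds = (structures * occupancy_low, structures * occupancy_high)
    print(round(estimate), bounds)       # 16640 (14400.0, 19200.0)
    ```

    Carrying the occupancy range through makes explicit that occupancy, not the structure count, usually dominates the uncertainty.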

  11. Estimation of citicoline sodium in tablets by difference spectrophotometric method

    Directory of Open Access Journals (Sweden)

    Sagar Suman Panda

    2013-01-01

    Full Text Available Aim: The present work deals with the development and validation of a novel, precise, and accurate spectrophotometric method for the estimation of citicoline sodium (CTS) in tablets. The spectrophotometric method is based on the principle that CTS exhibits two different forms whose absorption spectra differ in basic and acidic media. Materials and Methods: The present work was carried out on a Shimadzu 1800 double beam UV-visible spectrophotometer. Difference spectra were generated using 10 mm quartz cells over the range of 200-400 nm. The solvents used were 0.1 M NaOH and 0.1 M HCl. Results: The maximum and minimum in the difference spectrum of CTS were found at 239 nm and 283 nm, respectively. The amplitude was calculated from the maximum and minimum of the spectrum. The drug follows linearity in the range of 1-50 μg/ml (R² = 0.999). The average % recovery from the tablet formulation was found to be 98.47%. The method was validated as per the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use guideline ICH Q2(R1), Validation of Analytical Procedures: Text and Methodology. Conclusion: This method is simple and inexpensive, and hence it can be applied for determination of the drug in pharmaceutical dosage forms.

  12. Estimating and correcting the amplitude radiation pattern of a virtual source

    OpenAIRE

    Van der Neut, J.; Bakulin, A.

    2009-01-01

    In the virtual source (VS) method we crosscorrelate seismic recordings at two receivers to create a new data set as if one of these receivers were a virtual source and the other a receiver. We focus on the amplitudes and kinematics of VS data, generated by an array of active sources at the surface and recorded by an array of receivers in a borehole. The quality of the VS data depends on the radiation pattern of the virtual source, which in turn is controlled by the spatial aperture of the sur...

  13. Costate Estimation of PMP-Based Control Strategy for PHEV Using Legendre Pseudospectral Method

    Directory of Open Access Journals (Sweden)

    Hanbing Wei

    2016-01-01

    Full Text Available The costate value plays a significant role in the application of PMP-based control strategies for PHEVs. It is critical for the terminal SOC of the battery at the destination and the corresponding equivalent fuel consumption. However, it is not convenient to choose an appropriate costate under real driving conditions. In this paper, the optimal control problem of a PHEV based on PMP is converted into a nonlinear programming (NLP) problem. By means of the KKT conditions, the costate can be approximated as the KKT multipliers of the NLP divided by the LGL weights. In this way, a general costate estimation approach is proposed for a predefined driving condition. A dynamic model has been established in Matlab/Simulink in order to prove the effectiveness of the method. Simulation results demonstrate that the method presented in the paper yields values closer to the global optimum than a constant initial costate value. This approach can be used for initial costate and jump condition estimation in PMP-based control strategies for PHEVs.

  14. Using GRACE Amplitude Data in Conjunction with the Spatial Distribution of Groundwater Recharge to Estimate the Components of the Terrestrial Water Storage Anomaly across the Contiguous United States

    Science.gov (United States)

    Sanford, W. E.; Reitz, M.; Zell, W.

    2017-12-01

    The GRACE satellite project by NASA has been mapping the terrestrial water storage anomaly (TWSA) across the globe since 2002. To date most of the studies using this data have focused on estimating long-term storage declines in groundwater aquifers or the cryosphere. In this study we are focusing on using the amplitude of the seasonal storage signal to estimate the sources and values of the different water components that are contributing to the TWSA signal across the contiguous United States (CONUS). Across the CONUS the TWSA seasonal amplitude observed by GRACE varies by a factor of ten or more (from 1 to 10+ cm of liquid water equivalent). For a seasonal sinusoidal recharge rate, the change in storage in either the soil (unsaturated zone beneath the root zone) or groundwater (by water-table fluctuation) is limited to the amplitude of the recharge rate divided by π or 2π, respectively. We compiled the GRACE signal for the 18 major HUC watersheds across the CONUS and compared them to estimates of seasonal recharge-rate amplitudes based on a recent map of recharge rates generated by the USGS. The ratios of the recharge to GRACE amplitudes suggest that all but two of the HUCs must have other substantial sources of storage change in addition to soil or groundwater. The most likely additional sources are (1) winter snowpack, (2) seasonal irrigation withdrawals, and/or (3) surface water (rivers or reservoirs). Estimates of the seasonal amplitudes of these three signals across the CONUS suggest they can explain the remaining GRACE seasonal signal that cannot be explained by soil or groundwater fluctuations. Each of these signals has its own unique spatial distribution, with snowpack limited to the northern states, surface water limited to large rivers or reservoirs, and irrigation as a dominant signal limited to arid to semi-arid agricultural regions. Use of the GRACE seasonal signal shows promise in constraining the hydraulic diffusivities of surficial aquifer
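
    The π and 2π factors quoted above follow from integrating a sinusoidal recharge rate over one annual cycle. A quick numerical check (the amplitude value is arbitrary):

    ```python
    import numpy as np

    T = 1.0                                  # annual period, years
    A = 10.0                                 # recharge-rate amplitude, cm/yr (arbitrary)
    t = np.linspace(0.0, T, 100001)
    R = A * np.sin(2 * np.pi * t / T)        # sinusoidal recharge rate

    # Stored water is the running integral of recharge
    S = np.cumsum(R) * (t[1] - t[0])
    half_amp = (S.max() - S.min()) / 2       # fluctuation half-amplitude

    # Half-amplitude is A/(2*pi); the peak-to-peak swing S.max()-S.min() is A/pi
    print(half_amp, A / (2 * np.pi))
    ```

    So a 10 cm/yr recharge amplitude can produce at most about 1.6 cm of half-amplitude storage fluctuation, which is why several of the HUCs require additional storage components (snowpack, irrigation, surface water) to explain the observed GRACE amplitudes.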

  15. Fatigue life assessment under multiaxial variable amplitude loading

    International Nuclear Information System (INIS)

    Morilhat, P.; Kenmeugne, B.; Vidal-Salle, E.; Robert, J.L.

    1996-06-01

    A variable amplitude multiaxial fatigue life prediction method is presented in this paper. It is a stress-based approach whose input data are the stress tensor histories, which may be calculated by FEM analysis or measured directly on the structure during service loading. The different steps of the method are first presented; its experimental validation is then carried out for long and finite fatigue lives through biaxial variable amplitude loading tests on cruciform steel samples. (authors). 9 refs., 7 figs

  16. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Full Text Available Hanger cables in suspension bridges are partly constrained by horizontal clamps, so existing tension estimation methods based on a single-cable model become increasingly error-prone as the cable gets shorter and more sensitive to flexural rigidity. Inverse analysis and system identification methods based on finite element models have therefore been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression methods for calculating tension, in terms of natural frequency errors. However, in model-based methods the estimation error of the tension varies with the accuracy of the finite element model; in particular, the boundary conditions affect the results more profoundly as the cable gets shorter. It is therefore important to identify the boundary conditions experimentally whenever possible. The FE model-based tension estimation method using system identification can take various boundary conditions into account, and since it is not sensitive to the number of natural frequency inputs, its applicability is high.
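
    The string-theory baseline that the abstract compares against can be sketched in a few lines. The numbers below are hypothetical, and the formula deliberately ignores the flexural rigidity and clamp constraints that make it unreliable for short hangers:

```python
def tension_from_frequency(f1_hz, length_m, mass_per_m):
    """Taut-string theory (no flexural rigidity): f_n = (n/(2L))*sqrt(T/m),
    so from the first natural frequency, T = m * (2*L*f1)**2."""
    return mass_per_m * (2.0 * length_m * f1_hz) ** 2

# hypothetical hanger: f1 = 5 Hz, L = 10 m, m = 50 kg/m  ->  T = 500 kN
t_n = tension_from_frequency(5.0, 10.0, 50.0)
```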

  17. True amplitude wave equation migration arising from true amplitude one-way wave equations

    Science.gov (United States)

    Zhang, Yu; Zhang, Guanquan; Bleistein, Norman

    2003-10-01

    One-way wave operators are powerful tools for use in forward modelling and inversion. Their implementation, however, involves introduction of the square root of an operator as a pseudo-differential operator. Furthermore, a simple factoring of the wave operator produces one-way wave equations that yield the same travel times as the full wave equation, but do not yield accurate amplitudes except for homogeneous media and for almost all points in heterogeneous media. Here, we present augmented one-way wave equations. We show that these equations yield solutions for which the leading order asymptotic amplitude as well as the travel time satisfy the same differential equations as the corresponding functions for the full wave equation. Exact representations of the square-root operator appearing in these differential equations are elusive, except in cases in which the heterogeneity of the medium is independent of the transverse spatial variables. Here, we address the fully heterogeneous case. Singling out depth as the preferred direction of propagation, we introduce a representation of the square-root operator as an integral in which a rational function of the transverse Laplacian appears in the integrand. This allows us to carry out explicit asymptotic analysis of the resulting one-way wave equations. To do this, we introduce an auxiliary function that satisfies a lower dimensional wave equation in transverse spatial variables only. We prove that ray theory for these one-way wave equations leads to one-way eikonal equations and the correct leading order transport equation for the full wave equation. We then introduce appropriate boundary conditions at z = 0 to generate waves at depth whose quotient leads to a reflector map and an estimate of the ray theoretical reflection coefficient on the reflector. Thus, these true amplitude one-way wave equations lead to a 'true amplitude wave equation migration' (WEM) method. In fact, we prove that applying the WEM imaging condition
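
    The "simple factoring" mentioned above can be written, for a medium whose wave speed c does not vary with depth z, as the following standard sketch (not the authors' augmented form):

```latex
\left(\partial_z^2 + \Delta_\perp + \frac{\omega^2}{c^2}\right)u
  = \left(\partial_z - i\Lambda\right)\left(\partial_z + i\Lambda\right)u,
\qquad
\Lambda = \sqrt{\frac{\omega^2}{c^2} + \Delta_\perp},
```

    with the two first-order factors governing down- and up-going waves. When c varies with z, the square-root operator Λ no longer commutes with ∂_z and commutator correction terms appear; accounting for these is what restores the correct leading-order amplitudes in the augmented equations.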

  18. State Estimation for Tensegrity Robots

    Science.gov (United States)

    Caluwaerts, Ken; Bruce, Jonathan; Friesen, Jeffrey M.; Sunspiral, Vytas

    2016-01-01

    Tensegrity robots are a class of compliant robots that have many desirable traits when designing mass efficient systems that must interact with uncertain environments. Various promising control approaches have been proposed for tensegrity systems in simulation. Unfortunately, state estimation methods for tensegrity robots have not yet been thoroughly studied. In this paper, we present the design and evaluation of a state estimator for tensegrity robots. This state estimator will enable existing and future control algorithms to transfer from simulation to hardware. Our approach is based on the unscented Kalman filter (UKF) and combines inertial measurements, ultra wideband time-of-flight ranging measurements, and actuator state information. We evaluate the effectiveness of our method on the SUPERball, a tensegrity based planetary exploration robotic prototype. In particular, we conduct tests for evaluating both the robot's success in estimating global position in relation to fixed ranging base stations during rolling maneuvers as well as local behavior due to small-amplitude deformations induced by cable actuation.

  19. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    Science.gov (United States)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement, and require the oversight of an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes, and second uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
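
    The squared envelope spectrum used in the second step is a standard tool and can be sketched with numpy alone (FFT-based analytic signal; the 7 Hz modulation below is a synthetic stand-in for a bearing fault repetition rate):

```python
import numpy as np

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum: amplitude demodulation via an FFT-based
    analytic signal, then a spectrum of the squared envelope, revealing
    cyclostationary repetition rates such as bearing fault frequencies."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # analytic-signal weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    env2 = np.abs(np.fft.ifft(X * h)) ** 2          # squared envelope
    spec = np.abs(np.fft.rfft(env2 - env2.mean()))  # drop DC, take spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, spec

# synthetic check: 100 Hz carrier amplitude-modulated at 7 Hz
fs = 2000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 100 * t)
freqs, spec = squared_envelope_spectrum(x, fs)
peak_hz = freqs[np.argmax(spec)]   # the 7 Hz modulation dominates
```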

  20. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    Science.gov (United States)

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglasses frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, people used the rule-based algorithms to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied Simulated Annealing (SA) technique to optimize the parameters of GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both ICE2 and UBIRIS dataset, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
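
    The core idea of classifying each pixel as valid iris texture or occlusion by comparing class-conditional likelihoods can be illustrated with a one-component Gaussian per class; this is a deliberate simplification of the FJ-GMMs used in the paper, and the feature values below are synthetic, not real Gabor filter responses:

```python
import numpy as np

def fit_gaussian(samples):
    """Fit a single Gaussian (mean, std) to one class of pixel features."""
    return samples.mean(), samples.std() + 1e-12

def is_valid_iris(feature, valid_g, invalid_g):
    """Label a pixel by comparing class-conditional Gaussian log-likelihoods
    (a one-component stand-in for the paper's FJ-GMMs)."""
    def loglik(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)
    return loglik(feature, *valid_g) >= loglik(feature, *invalid_g)

rng = np.random.default_rng(0)
valid_feats = rng.normal(0.2, 0.05, 1000)     # stand-in: clean iris texture
invalid_feats = rng.normal(0.7, 0.10, 1000)   # stand-in: eyelash/eyelid pixels
vg, ig = fit_gaussian(valid_feats), fit_gaussian(invalid_feats)
```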

  1. Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity

    NARCIS (Netherlands)

    Pellikaan, P.; van der Krogt, Marjolein; Carbone, Vincenzo; Fluit, René; Vigneron, L.M.; van Deun, J.; Verdonschot, Nicolaas Jacobus Joseph; Koopman, Hubertus F.J.M.

    2014-01-01

    To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based

  2. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations linear

  3. QCD-based pion distribution amplitudes confronting experimental data

    International Nuclear Information System (INIS)

    Bakulev, A.P.; Mikhajlov, S.V.; Stefanis, N.G.

    2001-01-01

    We use QCD sum rules with nonlocal condensates to recalculate more accurately the moments and their confidence intervals of the twist-2 pion distribution amplitude including radiative corrections. We are thus able to construct an admissible set of pion distribution amplitudes which define a reliability region in the (a_2, a_4) plane of the Gegenbauer polynomial expansion coefficients. We emphasize that models like that of Chernyak and Zhitnitsky, as well as the asymptotic solution, are excluded from this set. We show that the determined (a_2, a_4) region strongly overlaps with that extracted from the CLEO data by Schmedding and Yakovlev and that this region is also not far from the results of the first direct measurement of the pion valence quark momentum distribution by the Fermilab E791 collaboration. Comparisons with recent lattice calculations and instanton-based models are briefly discussed
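
    The (a_2, a_4) parametrization refers to the standard conformal (Gegenbauer) expansion of the twist-2 pion distribution amplitude, truncated here after the first two non-trivial terms:

```latex
\varphi_\pi(x,\mu^2) \;=\; 6x(1-x)\left[\,1
  + a_2(\mu^2)\,C_2^{3/2}(2x-1)
  + a_4(\mu^2)\,C_4^{3/2}(2x-1) + \cdots\right],
```

    where 6x(1-x) is the asymptotic distribution amplitude and the C_n^{3/2} are Gegenbauer polynomials; the two-parameter analysis assumes the higher coefficients are negligible at the relevant scale.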

  4. Physics-based, Bayesian sequential detection method and system for radioactive contraband

    Science.gov (United States)

    Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E

    2014-03-18

    A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing, in which a radionuclide is represented as a decomposition into monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence-interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not; if not, the process is repeated for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
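
    The two-threshold sequential decision described above is the classic Wald sequential probability ratio test. A minimal sketch (generic SPRT, not the patent's specific likelihood model; the 0.8 increments are a made-up stream of target-like events):

```python
import math

def sprt(log_lr_stream, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test: accumulate per-event
    log-likelihood-ratio increments until one of two thresholds is crossed."""
    upper = math.log((1.0 - beta) / alpha)    # declare: target radionuclide
    lower = math.log(beta / (1.0 - alpha))    # declare: not the target
    total = 0.0
    for k, inc in enumerate(log_lr_stream, start=1):
        total += inc
        if total >= upper:
            return "target", k
        if total <= lower:
            return "not target", k
    return "undecided", len(log_lr_stream)

decision, n_events = sprt([0.8] * 20)   # consistently target-like events
```

    With alpha = beta = 0.01 the upper threshold is ln(99) ≈ 4.6, so six 0.8-increments suffice to declare the target; ambiguous event streams simply keep the test running.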

  5. A Novel Method Based on Oblique Projection Technology for Mixed Sources Estimation

    Directory of Open Access Journals (Sweden)

    Weijian Si

    2014-01-01

    Full Text Available Reducing the computational complexity of near-field and far-field source localization algorithms has been considered a serious problem in the field of array signal processing. A novel algorithm for mixed-source location estimation based on oblique projection is proposed in this paper. The sources are estimated at two different stages, and the sensor noise power is estimated and eliminated from the covariance, which improves the accuracy of the estimation of mixed sources. Using the idea of compression, the range information of near-field sources is obtained by searching a partial area instead of the whole Fresnel area, which reduces the processing time. Compared with traditional algorithms, the proposed algorithm has lower computational complexity and is able to resolve two closely-spaced sources with high resolution and accuracy. The duplication of range estimation is also avoided. Finally, simulation results are provided to demonstrate the performance of the proposed method.
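
    The oblique projection operator at the heart of such separation schemes is textbook linear algebra: it projects onto range(A) along range(B), so it passes one subspace's component and annihilates the other's. A toy numpy sketch (the vectors are illustrative, not array steering vectors):

```python
import numpy as np

def oblique_projector(A, B):
    """Oblique projection onto range(A) along range(B):
    E = A (A^H Pb_perp A)^+ A^H Pb_perp,  Pb_perp = I - B B^+.
    E A = A and E B = 0, so E separates the two subspace components."""
    Pb_perp = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    return A @ np.linalg.pinv(A.conj().T @ Pb_perp @ A) @ A.conj().T @ Pb_perp

A = np.array([[1.0], [1.0], [0.0]])   # toy "near-field" subspace
B = np.array([[0.0], [1.0], [1.0]])   # toy "far-field" subspace
E = oblique_projector(A, B)
y = 2 * A + 3 * B                     # mixture of the two components
a_part = E @ y                        # recovers the A component, 2*A
```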

  6. A probabilistic method for the estimation of ocean surface currents from short time series of HF radar data

    Science.gov (United States)

    Guérin, Charles-Antoine; Grilli, Stéphan T.

    2018-01-01

    We present a new method for inverting ocean surface currents from beam-forming HF radar data. In contrast with the classical method, which inverts radial currents based on shifts of the main Bragg line in the radar Doppler spectrum, the method works in the temporal domain and inverts currents from the amplitude modulation of the I and Q radar time series. Based on this principle, we propose a Maximum Likelihood approach, which can be combined with a Bayesian inference method assuming a prior current distribution, to infer values of the radial surface currents. We assess the method performance by using synthetic radar signal as well as field data, and systematically comparing results with those of the Doppler method. The new method is found advantageous for its robustness to noise at long range, its ability to accommodate shorter time series, and the possibility to use a priori information to improve the estimates. Limitations are related to current sign errors at far-ranges and biased estimates for small current values and very short samples. We apply the new technique to a data set from a typical 13.5 MHz WERA radar, acquired off of Vancouver Island, BC, and show that it can potentially improve standard synoptic current mapping.
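
    The Maximum Likelihood step can be illustrated with a grid search in the time domain: for each candidate radial current, correlate the complex (I + jQ) time series against the correspondingly Doppler-shifted Bragg tone and keep the best match. This is a simplified noiseless sketch with made-up numbers, not the paper's full estimator (no Bayesian prior, single tone):

```python
import numpy as np

def ml_radial_current(z, fs, wavelength, f_bragg, v_grid):
    """Grid-search ML estimate of the radial current from a short complex
    radar time series: pick the velocity whose Doppler-shifted Bragg tone
    best matches the data (matched-filter power)."""
    t = np.arange(len(z)) / fs
    best_v, best_p = v_grid[0], -np.inf
    for v in v_grid:
        f = f_bragg + 2.0 * v / wavelength              # current shifts the Bragg line
        p = np.abs(np.vdot(np.exp(2j * np.pi * f * t), z))
        if p > best_p:
            best_v, best_p = v, p
    return best_v

# synthetic check: 0.3 m/s current, 13.5 MHz radar (wavelength ~22.2 m)
fs, lam, fb = 4.0, 22.2, 0.375
t = np.arange(0.0, 64.0, 1.0 / fs)
z = np.exp(2j * np.pi * (fb + 2 * 0.3 / lam) * t)       # noiseless I+jQ series
v_hat = ml_radial_current(z, fs, lam, fb, np.linspace(-1.0, 1.0, 201))
```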

  7. Complex-based OCT angiography algorithm recovers microvascular information better than amplitude- or phase-based algorithms in phase-stable systems.

    Science.gov (United States)

    Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K

    2017-12-19

    Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is low. The complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.

  8. Adhesive bond strength evaluation in composite materials by laser-generated high amplitude ultrasound

    International Nuclear Information System (INIS)

    Perton, M; Blouin, A; Monchalin, J-P

    2011-01-01

    Adhesive bonding of composites laminates is highly efficient but is not used for joining primary aircraft structures, since there is presently no nondestructive inspection technique to ensure the quality of the bond. We are developing a technique based on the propagation of high amplitude ultrasonic waves to evaluate the adhesive bond strength. Large amplitude compression waves are generated by a short pulse powerful laser under water confinement and are converted after reflection by the assembly back surface into tensile waves. The resulting tensile stresses can cause a delamination inside the laminates or at the bond interfaces. The adhesion strength is evaluated by increasing the laser pulse energy until disbond. A good bond is unaffected by a certain level of stress whereas a weaker one is damaged. The method is shown completely non invasive throughout the whole composite assembly. The sample back surface velocity is measured by an optical interferometer and used to estimate stress history inside the sample. The depth and size of the disbonds are revealed by a post-test inspection by the well established laser-ultrasonic technique. Experimental results show that the proposed method is able to differentiate weak bond from strong bonds and to estimate quantitatively their bond strength.

  9. Color guided amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Broedel, Johannes [Stanford Institute for Theoretical Physics and Department of Physics, Stanford University, Stanford, CA (United States); Dixon, Lance J. [SLAC National Accelerator Laboratory, Stanford University, Stanford, CA (United States)

    2012-07-01

    Amplitudes in gauge theories obtain contributions from color and kinematics. While these two parts of the amplitude seem to exhibit different symmetry structures, it turns out that they can be reorganized so as to behave equally, which leads to the so-called color-kinematics dual representations of amplitudes. Astonishingly, the existence of those representations allows amplitudes in related gravitational theories to be obtained right away by squaring. Contrary to the Kawai-Lewellen-Tye relations, which have previously been used to relate gauge theories and gravity, this method is applicable not only to tree amplitudes but also at loop level. In this talk, the basic technique is introduced, followed by a discussion of the existence of color-kinematics dual representations for amplitudes derived from gauge theory actions deformed by higher-operator insertions. In addition, the implications for deformed gravitational theories are commented on.

  10. An efficient modularized sample-based method to estimate the first-order Sobol' index

    International Nuclear Information System (INIS)

    Li, Chenzhao; Mahadevan, Sankaran

    2016-01-01

    Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable of computing the first-order index when only input–output samples are available and the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method contributes to filling this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimate the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handle both uncorrelated and correlated model inputs.
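
    The quantity being estimated is S_i = Var_x(E[y|x_i]) / Var(y). A generic sample-based sketch using simple quantile binning (a stand-in for the paper's modularized estimator; like it, it needs only input–output samples and no model re-evaluation):

```python
import numpy as np

def first_order_sobol(x, y, n_bins=20):
    """Sample-based first-order Sobol' index S_i = Var_x(E[y|x]) / Var(y),
    estimated by binning the input samples into quantile bins and taking
    the weighted variance of the per-bin conditional means."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

rng = np.random.default_rng(1)
x1 = rng.uniform(size=200000)
x2 = rng.uniform(size=200000)
y = x1 + 0.1 * x2          # analytic S_1 = Var(x1)/Var(y) = 1/1.01 ~ 0.99
s1 = first_order_sobol(x1, y)
```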

  11. Hidden beauty in multiloop amplitudes

    International Nuclear Information System (INIS)

    Cachazo, Freddy; Spradlin, Marcus; Volovich, Anastasia

    2006-01-01

    Planar L-loop maximally helicity violating amplitudes in N = 4 supersymmetric Yang-Mills theory are believed to possess the remarkable property of satisfying iteration relations in L. We propose a simple new method for studying iteration relations for four-particle amplitudes which involves the use of certain linear differential operators and eliminates the need to fully evaluate any loop integrals. We carry out this procedure in explicit detail for the two-loop amplitude and prove that this method can be applied to any multiloop integral, allowing a conjectured iteration relation for any given amplitude to be tested up to polynomials in logarithms

  12. Real-Time Estimation for Cutting Tool Wear Based on Modal Analysis of Monitored Signals

    Directory of Open Access Journals (Sweden)

    Yongjiao Chi

    2018-05-01

    Full Text Available There is a growing body of literature that recognizes the importance of product safety and quality problems during processing. An accidental breakdown of a cutting tool may lead to project delay and cost overrun, and tool wear is crucial to processing precision in mechanical manufacturing; therefore, this study contributes to this growing area of research by monitoring tool condition and estimating wear. In this research, an effective method for tool wear estimation was constructed, in which the signal features of the machining process were extracted by ensemble empirical mode decomposition (EEMD) and used to estimate the tool wear. Based on signal analysis, vibration signals that had a better linear relationship with the tool wearing process were decomposed; then the intrinsic mode functions (IMFs), the frequency spectra of the IMFs, and the features relating to amplitude changes of the frequency spectrum were obtained. The trend of tool wear with these features was fitted by a Gaussian fitting function to estimate the tool wear. An experimental investigation was used to verify the effectiveness of this method, and the results illustrate the correlation between tool wear and the modal features of the monitored signals.
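
    The Gaussian fitting step can be sketched with numpy only, via a least-squares parabola in log-space (a simple stand-in for the paper's fitting procedure; it assumes strictly positive feature values, and the data below are synthetic):

```python
import numpy as np

def gaussian_fit(x, y):
    """Fit y ~ a*exp(-((x-b)/c)^2) by least-squares on log(y):
    log y = p2*x^2 + p1*x + p0 with p2 < 0, then back out (a, b, c)."""
    p2, p1, p0 = np.polyfit(x, np.log(y), 2)
    c = np.sqrt(-1.0 / p2)
    b = p1 * c ** 2 / 2.0
    a = np.exp(p0 + (b / c) ** 2)
    return a, b, c

# synthetic feature-vs-wear trend with known parameters (3.0, 4.0, 2.5)
x = np.linspace(0.0, 10.0, 50)
y = 3.0 * np.exp(-((x - 4.0) / 2.5) ** 2)
a, b, c = gaussian_fit(x, y)
```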

  13. A Lossy Counting-Based State of Charge Estimation Method and Its Application to Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2015-12-01

    Full Text Available Estimating the residual capacity or state-of-charge (SoC) of commercial batteries on-line, without destroying them or interrupting the power supply, is quite a challenging task for electric vehicle (EV) designers. Many Coulomb counting-based methods have been used to calculate the remaining capacity of EV batteries and other portable devices. The main disadvantages of these methods are the cumulative error and the time-varying Coulombic efficiency, which are greatly influenced by the operating state (SoC, temperature, and current). To deal with this problem, we propose a lossy counting-based Coulomb counting method for estimating the available capacity or SoC. The initial capacity of the tested battery is obtained from the open circuit voltage (OCV). The charging/discharging efficiencies, used for compensating the Coulombic losses, are calculated by the lossy counting-based method. The measurement drift resulting from the current sensor is corrected with the distorted Coulombic efficiency matrix. Simulations and experimental results show that the proposed method is both effective and convenient.
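
    The underlying Coulomb-counting update is simple to state in code. A minimal sketch with hypothetical battery numbers; the fixed charging efficiency here stands in for the lossy-counting-derived, state-dependent efficiencies that are the paper's actual contribution:

```python
def coulomb_count(soc0, currents_a, dt_s, capacity_ah, charge_eff=0.98):
    """Coulomb counting: SoC_k = SoC_{k-1} + eta * I * dt / (3600 * C).
    Positive current = charging; a constant eta on charge stands in for the
    state-dependent Coulombic efficiency of the paper."""
    soc = soc0
    for i in currents_a:
        eta = charge_eff if i > 0 else 1.0   # Coulombic losses mainly on charge
        soc += eta * i * dt_s / (3600.0 * capacity_ah)
        soc = min(max(soc, 0.0), 1.0)        # clamp to physical range
    return soc

# hypothetical 20 Ah pack: discharge 10 A for 360 s = 1 Ah = 5% of capacity
soc = coulomb_count(0.5, [-10.0] * 360, 1.0, 20.0)
```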

  14. Nonlinear estimation-based dipole source localization for artificial lateral line systems

    International Nuclear Information System (INIS)

    Abdulsadda, Ahmad T; Tan Xiaobo

    2013-01-01

    As a flow-sensing organ, the lateral line system plays an important role in various behaviors of fish. An engineering equivalent of a biological lateral line is of great interest to the navigation and control of underwater robots and vehicles. A vibrating sphere, also known as a dipole source, can emulate the rhythmic movement of fins and body appendages, and has been widely used as a stimulus in the study of biological lateral lines. Dipole source localization has also become a benchmark problem in the development of artificial lateral lines. In this paper we present two novel iterative schemes, referred to as Gauss–Newton (GN) and Newton–Raphson (NR) algorithms, for simultaneously localizing a dipole source and estimating its vibration amplitude and orientation, based on the analytical model for a dipole-generated flow field. The performance of the GN and NR methods is first confirmed with simulation results and the Cramer–Rao bound (CRB) analysis. Experiments are further conducted on an artificial lateral line prototype, consisting of six millimeter-scale ionic polymer–metal composite sensors with intra-sensor spacing optimized with CRB analysis. Consistent with simulation results, the experimental results show that both GN and NR schemes are able to simultaneously estimate the source location, vibration amplitude and orientation with comparable precision. Specifically, the maximum localization error is less than 5% of the body length (BL) when the source is within the distance of one BL. Experimental results have also shown that the proposed schemes are superior to the beamforming method, one of the most competitive approaches reported in literature, in terms of accuracy and computational efficiency. (paper)
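
    The Gauss–Newton scheme mentioned above is a generic nonlinear least-squares iteration; the paper applies it to the analytical dipole flow-field model. A self-contained sketch on a toy 2-D range-only localization problem (the sensor layout and measurement model are ours, purely for illustration):

```python
import numpy as np

def gauss_newton(residual, theta0, n_iter=50, h=1e-6):
    """Gauss-Newton: theta <- theta - (J^T J)^{-1} J^T r(theta),
    with a forward-difference Jacobian J of the residual vector r."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = np.empty((len(r), len(theta)))
        for j in range(len(theta)):
            tp = theta.copy()
            tp[j] += h
            J[:, j] = (residual(tp) - r) / h
        theta = theta - np.linalg.solve(J.T @ J, J.T @ r)
    return theta

# toy problem: locate a 2-D source from noiseless distance measurements
sensors = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 0.5]], dtype=float)
true_src = np.array([0.6, 0.3])
meas = np.linalg.norm(sensors - true_src, axis=1)
res = lambda th: np.linalg.norm(sensors - th, axis=1) - meas
src_hat = gauss_newton(res, [0.1, 0.9])
```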

  15. Evaluation of a morphing based method to estimate muscle attachment sites of the lower extremity.

    Science.gov (United States)

    Pellikaan, P; van der Krogt, M M; Carbone, V; Fluit, R; Vigneron, L M; Van Deun, J; Verdonschot, N; Koopman, H F J M

    2014-03-21

    To generate subject-specific musculoskeletal models for clinical use, the location of muscle attachment sites needs to be estimated with accurate, fast and preferably automated tools. For this purpose, an automatic method was used to estimate the muscle attachment sites of the lower extremity, based on the assumption of a relation between the bone geometry and the location of muscle attachment sites. The aim of this study was to evaluate the accuracy of this morphing based method. Two cadaver dissections were performed to measure the contours of 72 muscle attachment sites on the pelvis, femur, tibia and calcaneus. The geometry of the bones including the muscle attachment sites was morphed from one cadaver to the other and vice versa. For 69% of the muscle attachment sites, the mean distance between the measured and morphed muscle attachment sites was smaller than 15 mm. Furthermore, the muscle attachment sites that had relatively large distances had shown low sensitivity to these deviations. Therefore, this morphing based method is a promising tool for estimating subject-specific muscle attachment sites in the lower extremity in a fast and automated manner. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Method of summation of amplitudes of coinciding pulses from Ge(Li) detectors used to study cascades of gamma-transitions in the (n,γ) reaction

    International Nuclear Information System (INIS)

    Bogdzel', A.A.; Vasil'eva, Eh.V.; Elizarov, O.I.

    1982-01-01

    The main performance characteristics and peculiarities of a spectrometer based on the summation of amplitudes of coincident pulses, containing two Ge(Li) detectors combined with a transmission neutron spectrometer (the IBR-30 pulsed reactor), are considered. It is shown on the 35Cl(n,γ) reaction that the method of summation of amplitudes of coinciding pulses from the Ge(Li) detectors can be used to study cascades of two γ-transitions with a total energy close to the neutron binding energy. The shape of the response function of this spectrometer was also studied as a function of the energies of the γ-transition cascades

  17. eAMI: A Qualitative Quantification of Periodic Breathing Based on Amplitude of Oscillations

    Science.gov (United States)

    Fernandez Tellez, Helio; Pattyn, Nathalie; Mairesse, Olivier; Dolenc-Groselj, Leja; Eiken, Ola; Mekjavic, Igor B.; Migeotte, P. F.; Macdonald-Nethercott, Eoin; Meeusen, Romain; Neyt, Xavier

    2015-01-01

    Study Objectives: Periodic breathing is sleep disordered breathing characterized by instability in the respiratory pattern that exhibits an oscillatory behavior. Periodic breathing is associated with increased mortality, and it is observed in a variety of situations, such as acute hypoxia, chronic heart failure, and damage to respiratory centers. The standard quantification for the diagnosis of sleep related breathing disorders is the apnea-hypopnea index (AHI), which measures the proportion of apneic/hypopneic events during polysomnography. Determining the AHI is labor-intensive and requires the simultaneous recording of airflow and oxygen saturation. In this paper, we propose an automated, simple, and novel methodology for the detection and qualification of periodic breathing: the estimated amplitude modulation index (eAMI). Patients or Participants: Antarctic cohort (3,800 meters): 13 normal individuals. Clinical cohort: 39 different patients suffering from diverse sleep-related pathologies. Measurements and Results: When tested in a population with high levels of periodic breathing (Antarctic cohort), eAMI was closely correlated with AHI (r = 0.95). Citation: Fernandez Tellez H, Pattyn N, Mairesse O, Dolenc-Groselj L, Eiken O, Mekjavic IB, Migeotte PF, Macdonald-Nethercott E, Meeusen R, Neyt X. eAMI: a qualitative quantification of periodic breathing based on amplitude of oscillations. SLEEP 2015;38(3):381–389. PMID:25581914
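
    An amplitude-modulation index of this kind can be illustrated with a simple proxy: extract the envelope of the breathing signal, then measure how much of the envelope's power falls in the slow periodic-breathing band. This is our illustrative stand-in for eAMI, not the paper's published algorithm, and the signal is synthetic:

```python
import numpy as np

def amplitude_modulation_index(x, fs, band=(0.01, 0.05)):
    """Illustrative AM index: envelope via an FFT-based analytic signal,
    then the fraction of (non-DC) envelope power in the slow band where
    periodic-breathing oscillations live."""
    n = len(x)
    X = np.fft.fft(x - x.mean())
    h = np.zeros(n)                 # analytic-signal weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    env = np.abs(np.fft.ifft(X * h))
    E = np.abs(np.fft.rfft(env - env.mean()))
    f = np.fft.rfftfreq(n, 1.0 / fs)
    inband = (f >= band[0]) & (f <= band[1])
    return np.sqrt(np.sum(E[inband] ** 2) / (np.sum(E[1:] ** 2) + 1e-12))

# 0.25 Hz breathing whose depth waxes and wanes over a 50 s cycle (0.02 Hz)
fs = 4.0
t = np.arange(0.0, 600.0, 1.0 / fs)
pb = (1.0 + 0.9 * np.sin(2 * np.pi * 0.02 * t)) * np.sin(2 * np.pi * 0.25 * t)
ami = amplitude_modulation_index(pb, fs)   # close to 1 for strong modulation
```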

  18. Vce-based methods for temperature estimation of high power IGBT modules during power cycling - A comparison

    DEFF Research Database (Denmark)

    Amoiridis, Anastasios; Anurag, Anup; Ghimire, Pramod

    2015-01-01

    Temperature estimation is of great importance for the performance and reliability of IGBT power modules in converter operation as well as in active power cycling tests. It is commonly estimated through thermo-sensitive electrical parameters such as the forward voltage drop (Vce) of the chip. This experimental work evaluates the validity and accuracy of two Vce-based methods applied to high power IGBT modules during power cycling tests. The first method estimates the chip temperature when a low sense current is applied, and the second method when the normal load current is present. Finally, a correction factor ...
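
    The low-sense-current TSEP approach rests on a near-linear Vce–temperature relation (on the order of -2 mV/K for an IGBT chip), calibrated once and then inverted during the test. A sketch with hypothetical calibration points, not measured data:

```python
def calibrate_tsep(vce_mv, temps_c):
    """Least-squares line through (Vce, T) calibration points, taken at a
    fixed low sense current; returns (slope, intercept) of T = k*Vce + b."""
    n = len(vce_mv)
    mx = sum(vce_mv) / n
    my = sum(temps_c) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(vce_mv, temps_c))
    sxx = sum((x - mx) ** 2 for x in vce_mv)
    k = sxy / sxx
    return k, my - k * mx

def junction_temp(vce_mv, k, b):
    """Invert the calibration: estimated junction temperature from Vce."""
    return k * vce_mv + b

# hypothetical calibration: Vce falls ~2 mV/K at the sense current
k, b = calibrate_tsep([520.0, 480.0, 440.0], [25.0, 45.0, 65.0])
t_est = junction_temp(460.0, k, b)   # 55 degC for these made-up numbers
```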

  19. On the Methods for Estimating the Corneoscleral Limbus.

    Science.gov (United States)

    Jesus, Danilo A; Iskander, D Robert

    2017-08-01

    The aim of this study was to develop computational methods for estimating limbus position based on measurements of three-dimensional (3-D) corneoscleral topography and to ascertain whether the corneoscleral limbus routinely estimated from the frontal image corresponds to that derived from topographical information. Two new computational methods for estimating the limbus position are proposed: one based on approximating the raw anterior eye height data by a series of Zernike polynomials, and one that combines the 3-D corneoscleral topography with the frontal grayscale image acquired with the digital camera built into the profilometer. The proposed methods are contrasted against a previously described image-only-based procedure and a technique of manual image annotation. The estimates of corneoscleral limbus radius were characterized by high precision. The group average (mean ± standard deviation) of the maximum difference between estimates derived from all considered methods was 0.27 ± 0.14 mm and reached up to 0.55 mm. The four estimating methods lead to statistically significant differences (nonparametric analysis of variance (ANOVA) test, p < 0.05). Precise topographical limbus demarcation is possible either from frontal digital images of the eye or from the 3-D topographical information of the corneoscleral region. However, the results demonstrated that the corneoscleral limbus estimated from the anterior eye topography does not always correspond to that obtained through image-only-based techniques. The experimental findings show that 3-D topography of the anterior eye, in the absence of a gold standard, has the potential to become a new computational methodology for estimating the corneoscleral limbus.

  20. Estimation of the domain containing all compact invariant sets of a system modelling the amplitude of a plasma instability

    International Nuclear Information System (INIS)

    Krishchenko, Alexander; Starkov, Konstantin

    2007-01-01

    In this Letter we describe localization results of all compact invariant sets of a system modelling the amplitude of a plasma instability proposed by Pikovski, Rabinovich and Trakhtengerts. We derive ellipsoidal and polytopic localization sets for a number of domains in the 4-dimensional parametrical space of this system. Other localization sets have been obtained by using paraboloids of a revolution, a circular cylinder and an elliptic paraboloid. Our approach is based on the solution of the first order extremum problem. A comparison of our method with the method of semipermeable surfaces is presented as well

  1. Estimation of the domain containing all compact invariant sets of a system modelling the amplitude of a plasma instability

    Energy Technology Data Exchange (ETDEWEB)

    Krishchenko, Alexander [Bauman Moscow State Technical University, 2nd Baumanskaya str., 5, Moscow 105005 (Russian Federation)]. E-mail: apkri@bmstu.ru; Starkov, Konstantin [CITEDI-IPN, Av. del Parque 1310, Mesa de Otay, Tijuana, BC (Mexico)]. E-mail: konst@citedi.mx

    2007-07-16

    In this Letter we describe localization results of all compact invariant sets of a system modelling the amplitude of a plasma instability proposed by Pikovski, Rabinovich and Trakhtengerts. We derive ellipsoidal and polytopic localization sets for a number of domains in the 4-dimensional parametrical space of this system. Other localization sets have been obtained by using paraboloids of a revolution, a circular cylinder and an elliptic paraboloid. Our approach is based on the solution of the first order extremum problem. A comparison of our method with the method of semipermeable surfaces is presented as well.

  2. A method for state of energy estimation of lithium-ion batteries based on neural network model

    International Nuclear Information System (INIS)

    Dong, Guangzhong; Zhang, Xu; Zhang, Chenbin; Chen, Zonghai

    2015-01-01

    The state-of-energy is an important evaluation index for energy optimization and management of power battery systems in electric vehicles. Unlike the state-of-charge, which represents the residual capacity of the battery in traditional applications, the state-of-energy is the integral of battery power, the product of current and terminal voltage. On the other hand, like the state-of-charge, the state-of-energy affects the terminal voltage. The resulting nonlinear relationship between state-of-energy and terminal voltage is hard to solve and complicates the estimation of a battery's state-of-energy. To address this issue, a method based on a wavelet-neural-network-based battery model and a particle filter estimator is presented for state-of-energy estimation. The wavelet-neural-network-based battery model is used to simulate the entire dynamic electrical characteristics of batteries. The temperature and discharge rate are also taken into account to improve model accuracy. In addition, to suppress the measurement noises of current and voltage, a particle filter estimator is applied to estimate the cell state-of-energy. Experimental results on LiFePO_4 batteries indicate that the wavelet-neural-network-based battery model simulates battery dynamics robustly with high accuracy, and the estimate from the particle filter converges to the real state-of-energy within an error of ±4%. - Highlights: • State-of-charge is replaced by state-of-energy to determine the cell's residual energy. • The battery state-space model is established based on a neural network. • Temperature and current influence are considered to improve the model accuracy. • The particle filter is used for state-of-energy estimation to improve accuracy. • The robustness of the new method is validated under dynamic experimental conditions.
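The particle-filter step can be illustrated on a deliberately simplified toy model: the state is a scalar state-of-energy, and a hypothetical linear open-circuit-voltage curve stands in for the paper's wavelet-neural-network model. All numbers (OCV curve, noise levels, particle count) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ocv(soe):
    """Hypothetical monotone open-circuit-voltage curve (V)."""
    return 3.0 + 1.2 * soe

n_p = 500
particles = rng.uniform(0.0, 1.0, n_p)           # initial SOE hypotheses

true_soe = 0.8
meas_std = 0.02                                  # V, voltage sensor noise
for _ in range(30):
    # process model: small random walk (battery at rest in this toy example)
    particles = np.clip(particles + rng.normal(0.0, 0.01, n_p), 0.0, 1.0)
    v_meas = ocv(true_soe) + rng.normal(0.0, meas_std)
    # re-weight particles by the Gaussian measurement likelihood
    weights = np.exp(-0.5 * ((v_meas - ocv(particles)) / meas_std) ** 2)
    weights /= weights.sum()
    # resample (plain multinomial) to fight weight degeneracy
    particles = particles[rng.choice(n_p, n_p, p=weights)]

soe_est = particles.mean()                       # posterior-mean SOE estimate
```

On this toy problem the estimate lands well within the ±4% band reported in the abstract.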

  3. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System.

    Science.gov (United States)

    Zhao, Anbang; Ma, Lin; Ma, Xuefei; Hui, Juan

    2017-02-20

    In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.

  4. An Improved Azimuth Angle Estimation Method with a Single Acoustic Vector Sensor Based on an Active Sonar Detection System

    Directory of Open Access Journals (Sweden)

    Anbang Zhao

    2017-02-01

    Full Text Available In this paper, an improved azimuth angle estimation method with a single acoustic vector sensor (AVS) is proposed based on matched filtering theory. The proposed method is mainly applied in an active sonar detection system. Starting from the conventional passive method based on complex acoustic intensity measurement, the mathematical and physical model of the proposed method is described in detail. Computer simulation and lake experiment results indicate that this method can realize azimuth angle estimation with high precision using only a single AVS. Compared with the conventional method, the proposed method achieves better estimation performance. Moreover, the proposed method does not require complex operations in the frequency domain and achieves a reduction in computational complexity.
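The passive intensity-based azimuth estimate that this work takes as its starting point can be sketched in a few lines: with a pressure channel p and particle-velocity channels (vx, vy), the time-averaged acoustic intensity components give the bearing. The signal, bearing, and noise levels below are invented for illustration; the paper's matched-filtering refinement is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0 = 8000.0, 500.0
t = np.arange(0, 0.5, 1 / fs)
theta = np.deg2rad(40.0)                    # true azimuth of the source

p = np.cos(2 * np.pi * f0 * t)              # pressure channel
vx = np.cos(theta) * p + 0.1 * rng.normal(size=t.size)   # velocity channels
vy = np.sin(theta) * p + 0.1 * rng.normal(size=t.size)

# time-averaged acoustic intensity components point toward the source
Ix, Iy = np.mean(p * vx), np.mean(p * vy)
az_est = np.degrees(np.arctan2(Iy, Ix))
```

Averaging over the record suppresses the additive channel noise, so the bearing estimate is accurate to a fraction of a degree here.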

  5. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    Science.gov (United States)

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded-up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The falsely matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal-to-noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with the filtered motion parameters using modified adjacent-frame compensation. The experimental results proved that the target images were stabilized even when the vibration amplitudes of the video became increasingly large.
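The motion-smoothing stage can be illustrated with a 1-D constant-velocity Kalman filter applied to per-frame global translation. The paper's filter additionally models rotation and scaling; the motion profile and noise covariances below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120                                          # frames
intended = np.linspace(0.0, 30.0, n)             # smooth camera pan (px)
jitter = rng.normal(0.0, 2.0, n)                 # hand-shake (px)
measured = intended + jitter                     # estimated per-frame global motion

# constant-velocity Kalman filter: state = [position, velocity]
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-4])                        # trust the smooth-motion model
R = np.array([[4.0]])                            # jitter variance (2 px)^2
x = np.zeros(2)
P = np.eye(2) * 10.0

smoothed = []
for z in measured:
    x = F @ x                                    # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([z]) - H @ x)          # update with measured motion
    P = (np.eye(2) - K @ H) @ P
    smoothed.append(x[0])
smoothed = np.asarray(smoothed)

correction = smoothed - measured                 # per-frame stabilizing shift
```

Warping each frame by its entry in `correction` keeps the intended pan while removing most of the jitter.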

  6. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
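The physically-based data model adopted in the abstract, the classical Ogata-Banks solution of the 1-D advection-dispersion equation, is cheap to evaluate directly. A minimal sketch (the parameter values are arbitrary, not site data):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """1-D advection-dispersion with continuous injection at x = 0:
    C/c0 = 1/2 [erfc((x - v t)/(2 sqrt(D t)))
                + exp(v x / D) erfc((x + v t)/(2 sqrt(D t)))]."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

# concentration 10 m downstream as the plume front arrives, and much later
c_front = ogata_banks(x=10.0, t=100.0, v=0.1, D=0.05)
c_late = ogata_banks(x=10.0, t=1.0e5, v=0.1, D=0.05)
```

At the advective front (x = v t) the relative concentration is slightly above one half; long after breakthrough it approaches the injected concentration c0.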

  7. A predictive estimation method for carbon dioxide transport by data-driven modeling with a physically-based data model.

    Science.gov (United States)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun

    2017-11-01

    In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A Parameter Estimation Method for Nonlinear Systems Based on Improved Boundary Chicken Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Shaolong Chen

    2016-01-01

    Full Text Available Parameter estimation is an important problem in nonlinear system modeling and control. By constructing an appropriate fitness function, parameter estimation of a system can be converted into a multidimensional parameter optimization problem. As a novel swarm intelligence algorithm, chicken swarm optimization (CSO) has attracted much attention owing to its good global convergence and robustness. In this paper, a method based on improved boundary chicken swarm optimization (IBCSO) is proposed for parameter estimation of nonlinear systems, demonstrated and tested on the Lorenz system and a coupling motor system. Furthermore, we have analyzed the influence of the time series on the estimation accuracy. Computer simulation results show that the method is feasible and achieves desirable performance for parameter estimation of nonlinear systems.

  9. Estimation of a beam centering error in the JAERI AVF cyclotron

    International Nuclear Information System (INIS)

    Fukuda, M.; Okumura, S.; Arakawa, K.; Ishibori, I.; Matsumura, A.; Nakamura, N.; Nara, T.; Agematsu, T.; Tamura, H.; Karasawa, T.

    1999-01-01

    A method for estimating a beam centering error from a beam density distribution obtained by a single radial probe has been developed. Estimation of the centering error is based on an analysis of radial beam positions in the direction of the radial probe. Radial motion of a particle is described as betatron oscillation around an accelerated equilibrium orbit. By fitting the radial beam positions of several consecutive turns to an equation of the radial motion, not only amplitude of the centering error but also frequency of the radial betatron oscillation and energy gain per turn can be evaluated simultaneously. The estimated centering error amplitude was consistent with a result of an orbit simulation. This method was exceedingly helpful for minimizing the centering error of a 10 MeV proton beam during the early stages of acceleration. A well-centered beam was obtained by correcting the magnetic field with a first harmonic produced by two pairs of harmonic coils. In order to push back an orbit center to a magnet center, currents of the harmonic coils were optimized on the basis of the estimated centering error amplitude. (authors)
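The fitting step described above can be sketched as a small least-squares problem: for a trial betatron tune the model is linear in the starting radius, the radius gain per turn, and the two quadrature components of the centering error, so a 1-D scan over the tune plus linear least squares recovers all three quantities simultaneously, mirroring the abstract. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = np.arange(12.0)                        # twelve consecutive turns at the probe
r0, gain = 100.0, 2.0                      # mm: start radius, radius gain per turn
nu, A, phi = 1.27, 1.5, 0.7                # radial tune, centering error (mm), phase
r = r0 + gain * n + A * np.sin(2 * np.pi * nu * n + phi)
r += rng.normal(0.0, 0.02, n.size)         # probe read-out noise

best = (np.inf, None, None)
for nu_try in np.arange(1.10, 1.45, 0.0005):
    # model linear in [r0, gain, A*cos(phi), A*sin(phi)] for fixed trial tune
    M = np.column_stack([np.ones_like(n), n,
                         np.sin(2 * np.pi * nu_try * n),
                         np.cos(2 * np.pi * nu_try * n)])
    coef, *_ = np.linalg.lstsq(M, r, rcond=None)
    sse = np.sum((M @ coef - r) ** 2)
    if sse < best[0]:
        best = (sse, nu_try, coef)

_, nu_est, coef = best
amp_est = np.hypot(coef[2], coef[3])       # centering-error amplitude (mm)
```

The scan returns the tune, the energy-gain-related radius increment, and the centering-error amplitude in one pass, much as described for the single radial probe.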

  10. Correlation between vibration amplitude and tool wear in turning: Numerical and experimental analysis

    Directory of Open Access Journals (Sweden)

    Balla Srinivasa Prasad

    2017-02-01

    Full Text Available In this paper, a correlation between vibration amplitude and tool wear in dry turning of AISI 4140 steel using an uncoated carbide insert DNMA 432 is analyzed via experiments and finite element simulations. 3D finite element simulation results are utilized to predict the evolution of cutting forces, vibration displacement amplitudes, and tool wear in vibration-induced turning. The primary concern of the present paper is to find the relative vibration and tool wear under variation of the process parameters; these changes lead to accelerated tool wear and even breakage. The cutting forces in the feed direction are also predicted and compared with the experimental trends. A laser Doppler vibrometer is used to detect vibration amplitudes, and the use of a Kistler 9272 dynamometer for recording the cutting forces during the cutting process is demonstrated. The influence of spindle speed, feed rate, and depth of cut on vibration amplitude and tool flank wear is investigated at different levels of workpiece hardness. Empirical models have been developed using second-order polynomial equations to correlate the interaction and higher-order influences of the process parameters. Analysis of variance (ANOVA) is carried out to identify the significant factors affecting the vibration amplitude and tool flank wear. Response surface methodology (RSM) is implemented to investigate the progression of flank wear and displacement amplitude based on experimental data. When measuring the displacement amplitude, the R-square values for the experimental and numerical methods are 98.6 and 97.8, respectively. Based on the R-square values of the ANOVA, the numerical values show good agreement with the experimental values and are helpful in estimating displacement amplitude. In the case of predicting the tool wear, R-square values were found to be 97.69 and 96.08, respectively, for numerical and experimental measures while determining the tool…

  11. Numerical discretization-based estimation methods for ordinary differential equation models via penalized spline smoothing with applications in biomedical research.

    Science.gov (United States)

    Wu, Hulin; Xue, Hongqi; Kumar, Arun

    2012-06-01

    Differential equations are extensively used for modeling the dynamics of physical processes in many scientific fields such as engineering, physics, and the biomedical sciences. Parameter estimation for differential equation models is a challenging problem because of the high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables, and the estimated state variables are then plugged into a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider discretization methods of three different orders: Euler's method, the trapezoidal rule, and the Runge-Kutta method. A higher-order numerical algorithm reduces the numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties of the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. © 2012, The International Biometric Society.
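The recommended trapezoidal variant can be sketched on a one-parameter example, x' = -θx: once the states have been smoothed, the trapezoidal discretization makes the estimating equation linear in θ. Here an exact exponential plus small noise stands in for the penalized-spline state estimate; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
theta_true, h = 0.5, 0.1
t = np.arange(0.0, 5.0, h)
# smoothed state estimates stand in for the penalized-spline output here
x = 3.0 * np.exp(-theta_true * t) + rng.normal(0.0, 1e-3, t.size)

# trapezoidal discretization of x' = -theta * x:
#   x[k+1] - x[k] = -theta * h * (x[k] + x[k+1]) / 2
dx = np.diff(x)
xm = h * (x[:-1] + x[1:]) / 2.0
theta_est = -np.sum(xm * dx) / np.sum(xm * xm)    # one-parameter least squares
```

No derivative of the noisy data is ever formed explicitly; the regression over the discretization formula absorbs it, which is the point of the approach.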

  12. Power system frequency estimation based on an orthogonal decomposition method

    Science.gov (United States)

    Lee, Chih-Hung; Tsai, Men-Shen

    2018-06-01

    In recent years, several frequency estimation techniques have been proposed by which to estimate the frequency variations in power systems. In order to properly identify power quality issues under asynchronously-sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator that is able to estimate the frequency as well as the rate of frequency changes precisely is needed. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which may maintain the required frequency characteristics of the orthogonal filters and improve the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
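The sliding-DFT idea at the core of such schemes can be sketched in a few lines: the fundamental-bin phasor of a window advances by 2π f / fs per one-sample slide, so averaging the phase increments yields the frequency even when it is off-nominal. The 50 Hz system, harmonic level, and window length below are illustrative; the paper's two-stage orthogonal-filter scheme is more elaborate.

```python
import numpy as np

fs = 1000.0                          # sampling rate, Hz
f_true = 50.2                        # off-nominal fundamental
t = np.arange(0, 0.5, 1 / fs)
v = np.sin(2 * np.pi * f_true * t) + 0.05 * np.sin(2 * np.pi * 150.0 * t)

N = int(fs / 50.0)                   # window of one nominal 50 Hz cycle
w = np.exp(-2j * np.pi * np.arange(N) / N)   # DFT weights of the fundamental bin

# fundamental phasor of each one-sample-slid window (sliding-DFT idea)
X = np.array([np.sum(v[i:i + N] * w) for i in range(200)])
# average phase advance per sample gives the frequency
dphi = np.angle(np.mean(X[1:] / X[:-1]))
f_est = dphi * fs / (2.0 * np.pi)
```

Averaging the phasor ratios over many slides cancels most of the spectral-leakage interference, so the estimate stays accurate despite the harmonic contamination.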

  13. Joint Pitch and DOA Estimation Using the ESPRIT method

    DEFF Research Database (Denmark)

    Wu, Yuntao; Amir, Leshem; Jensen, Jesper Rindom

    2015-01-01

    In this paper, the problem of joint multi-pitch and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is considered. A spatio-temporal matrix signal model for a uniform linear array is defined, and the ESPRIT method, based on subspace techniques that exploit the invariance property in the time domain, is first used to estimate the pitch frequencies of the multiple harmonic signals. Given the estimated pitch frequencies, DOA estimates based on the ESPRIT method are then obtained by using the shift-invariance structure in the spatial domain. Compared to the existing state-of-the-art algorithms, the proposed method, which avoids 2-D searching, is computationally more efficient but performs similarly. An asymptotic performance analysis of the DOA and pitch estimation of the proposed method is also presented. Finally, the effectiveness of the proposed…
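The temporal half of the approach rests on standard ESPRIT, whose core is short enough to sketch: form a covariance from overlapping snapshots, extract the signal subspace, and solve the shift-invariance relation. The toy below recovers two plain (non-harmonic) complex tones; the frequencies, snapshot length, and noise level are invented, and the paper's harmonic and spatial extensions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1000.0
t = np.arange(400) / fs
f_true = [100.0, 130.0]
x = sum(np.exp(2j * np.pi * f * t) for f in f_true)
x = x + 0.01 * (rng.normal(size=t.size) + 1j * rng.normal(size=t.size))

m = 20                                         # snapshot length
# matrix of overlapping length-m snapshots
Y = np.array([x[i:i + m] for i in range(t.size - m + 1)]).T
R = Y @ Y.conj().T / Y.shape[1]                # sample covariance

eigval, eigvec = np.linalg.eigh(R)             # eigenvalues in ascending order
Es = eigvec[:, -2:]                            # signal subspace (2 tones)

# shift invariance: Es[:-1] @ Phi ~= Es[1:], eig(Phi) = exp(j 2 pi f / fs)
Phi, *_ = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)
freqs = np.sort(np.angle(np.linalg.eigvals(Phi)) * fs / (2 * np.pi))
```

No spectral grid search is involved, which is the source of the computational advantage noted in the abstract.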

  14. Analytic computations of massive one-loop amplitudes

    International Nuclear Information System (INIS)

    Badger, Simon; Yundin, Valery; Sattler, Ralf

    2010-06-01

    We show some new applications of on-shell methods to calculate compact helicity amplitudes for t anti t production through gluon fusion. The rational and mass renormalisation contributions are extracted from two independent Feynman diagram based approaches. (orig.)

  15. MR-based water content estimation in cartilage: design and validation of a method

    DEFF Research Database (Denmark)

    Shiguetomi Medina, Juan Manuel; Kristiansen, Maja Sophie; Ringgaard, Steffen

    Purpose: Design and validation of an MR-based method that allows the calculation of the water content in cartilage tissue. Methods and Materials: Cartilage tissue T1-map-based water content MR sequences were used on a system stabilised at 37 °C. The T1-map intensity signal was analyzed on 6 cartilage samples from living animals (pig) and on 8 gelatin samples whose water content was already known. For the data analysis, a T1 intensity signal map software analyzer was used. Finally, the method was validated by measuring and comparing 3 more cartilage samples in a living animal (pig). The obtained T1-map-based water content sequences can provide information that, after being analyzed using a T1-map analysis software, can be interpreted as the water contained inside a cartilage tissue. The amount of water estimated using this method was similar to the one obtained at the freeze-dry procedure…

  16. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm, subset simulation based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computing efficiency and excellent computing accuracy compared with traditional probability analysis methods. (authors)
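The idea of expressing a small failure probability as a product of larger conditional probabilities can be sketched on a scalar toy problem where the exact answer is known (P(X > 4) ≈ 3.2e-5 for X ~ N(0,1)). The sample sizes and MCMC proposal scale are illustrative; a real passive-system analysis would wrap a thermal-hydraulic model inside the performance function g.

```python
import numpy as np

rng = np.random.default_rng(6)

def g(x):
    """Performance function; 'failure' is the rare event g(x) > g_fail."""
    return x

g_fail = 4.0        # P(N(0,1) > 4) ~ 3.2e-5: impractical for crude Monte Carlo
n, p0 = 2000, 0.1   # samples per level, conditional probability per level

samples = rng.normal(size=n)
prob = 1.0
for _ in range(10):                        # at most 10 intermediate levels
    gs = g(samples)
    b = np.quantile(gs, 1.0 - p0)          # intermediate failure threshold
    if b >= g_fail:
        prob *= np.mean(gs > g_fail)       # final level reached
        break
    prob *= p0
    seeds = samples[gs > b]                # seeds for the next conditional level
    # modified Metropolis: random-walk MCMC restricted to {g > b}
    chain = []
    steps = int(np.ceil(n / seeds.size))
    for s in seeds:
        cur = s
        for _ in range(steps):
            cand = cur + rng.normal(0.0, 1.0)
            ratio = np.exp(-0.5 * (cand ** 2 - cur ** 2))  # N(0,1) density ratio
            if rng.random() < ratio and g(cand) > b:
                cur = cand
            chain.append(cur)
    samples = np.asarray(chain)[:n]

p_fail = prob
```

With p0 = 0.1 only a handful of levels are needed, so the rare event is reached with a few thousand model evaluations instead of the millions that crude Monte Carlo would require.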

  17. Amplitude based feedback control for NTM stabilisation at ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Rapson, Christopher, E-mail: chris.rapson@ipp.mpg.de; Giannone, Louis; Maraschek, Marc; Reich, Matthias; Stober, Joerg; Treutterer, Wolfgang

    2014-05-15

    Highlights: • Two algorithms have been developed which use the NTM amplitude to control ECCD deposition and stabilise NTMs. • Both algorithms were tested and tuned in a simulation of the full feedback loop including an MRE. • Both algorithms have been successfully deployed in ASDEX Upgrade experiments. • Use of the NTM amplitude adds considerable robustness, which is necessary when trying to target ECCD to within 1 cm of the island location. • This is part of ongoing work to reliably and quickly stabilise NTMs in any plasma scenario. - Abstract: Neoclassical Tearing Modes (NTMs) degrade the confinement in tokamak plasmas at high beta, placing a major limitation on the projected fusion performance. Furthermore, NTMs can lead to disruptions with even more severe consequences. Therefore, methods to stabilise NTMs are being developed with high priority at several research institutes worldwide. The favoured method is to deposit Electron Cyclotron Current Drive (ECCD) precisely at the mode location by controlling a movable mirror in the ECCD launcher. This method requires both the mode location and the deposition location to be known with high accuracy in real time. The required accuracy is given by half of the marginal island width, or approximately 1 cm for an m/n = 3/2 NTM at ASDEX Upgrade. Despite considerable development on a range of diagnostics, it remains challenging to provide the necessary accuracy reliably and in real time. To relax the accuracy requirements and add robustness, the feedback controller can additionally consider the effect of ECCD on the NTM amplitude directly. Then the optimal deposition location is simply where the NTM amplitude is minimised. The simplest implementation sweeps the ECCD beam across the expected NTM location. After the sweep, the beam can be returned to the optimal location and held there to stabilise the NTM. Unfortunately, waiting for a full sweep takes too long. Therefore a second method assesses the NTM growth every…

  18. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.

    2016-10-20

    In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method reduces the problem to a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
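The mechanism that makes the problem linear in the unknowns is integration by parts against a modulating function that vanishes at both endpoints: derivatives are transferred from the noisy signal onto the known modulating function. A scalar ODE sketch of this mechanism, with an invented signal and a polynomial modulating function (the paper itself treats PDEs):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1.0
t = np.linspace(0.0, T, 501)
dt = t[1] - t[0]
a_true = 3.0
# noisy measurements of a signal obeying x'(t) + a*x(t) = 0
x = np.exp(-a_true * t) + 1e-3 * rng.normal(size=t.size)

# modulating function vanishing at both ends, so boundary terms drop out
phi = t ** 3 * (T - t) ** 3
dphi = 3 * t ** 2 * (T - t) ** 3 - 3 * t ** 3 * (T - t) ** 2

# multiply the ODE by phi, integrate over [0, T], integrate by parts:
#   -int(phi' x) + a int(phi x) = 0   =>   a = int(phi' x) / int(phi x)
num = np.sum(dphi * x) * dt
den = np.sum(phi * x) * dt
a_est = num / den
```

No numerical differentiation of the noisy data is needed, which is why the approach behaves well in the noisy cases mentioned in the abstract.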

  19. Optimisation of amplitude distribution of magnetic Barkhausen noise

    Science.gov (United States)

    Pal'a, Jozef; Jančárik, Vladimír

    2017-09-01

    The magnetic Barkhausen noise (MBN) measurement method is a widely used non-destructive evaluation technique for the inspection of ferromagnetic materials. Among other influences, the lift-off of the excitation yoke is a significant issue that deteriorates the measurement accuracy of this method. In this paper, the lift-off effect is analysed mainly on grain-oriented Fe-3%Si steel subjected to various heat treatment conditions. Based on an investigation of the relationship between the amplitude distribution of MBN and the lift-off, an approach to suppress the lift-off effect is proposed. The proposed approach utilises digital feedback that optimises the measurement based on the amplitude distribution of MBN. The results demonstrate that the approach can largely suppress the lift-off effect for lift-offs of up to 2 mm.

  20. Scaling of saturation amplitudes in baroclinic instability

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-01-01

    By using finite-amplitude conservation laws for pseudomomentum and pseudoenergy, rigorous upper bounds have been derived on the saturation amplitudes in baroclinic instability for layered and continuously-stratified quasi-geostrophic models. Bounds have been obtained for both the eddy energy and the eddy potential enstrophy. The bounds apply to conservative (inviscid, unforced) flow, as well as to forced-dissipative flow when the dissipation is proportional to the potential vorticity. This approach provides an efficient way of extracting an analytical estimate of the dynamical scalings of the saturation amplitudes in terms of crucial non-dimensional parameters. A possible use is in constructing eddy parameterization schemes for zonally-averaged climate models. The scaling dependences are summarized, and compared with those derived from weakly-nonlinear theory and from baroclinic-adjustment estimates

  1. Symmetrized complex amplitudes for He double photoionization from the time-dependent close coupling and exterior complex scaling methods

    International Nuclear Information System (INIS)

    Horner, D.A.; Colgan, J.; Martin, F.; McCurdy, C.W.; Pindzola, M.S.; Rescigno, T.N.

    2004-01-01

    Symmetrized complex amplitudes for the double photoionization of helium are computed by the time-dependent close-coupling and exterior complex scaling methods, and it is demonstrated that both methods are capable of the direct calculation of these amplitudes. The results are found to be in excellent agreement with each other and in very good agreement with results of other ab initio methods and experiment

  2. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
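The inexact solve of the global error problem can be sketched with symmetric Gauss-Seidel on a small SPD stand-in system; a well-conditioned tridiagonal matrix plays the role of the hierarchical-basis error problem here, and in that setting a few sweeps already reduce the error substantially. All sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 50
# well-conditioned SPD stand-in for the global error problem A e = r
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = rng.normal(size=n)
e_exact = np.linalg.solve(A, r)

def symmetric_gauss_seidel(A, b, x, sweeps):
    """Forward then backward Gauss-Seidel sweeps (updates x in place)."""
    order = list(range(len(b))) + list(range(len(b) - 1, -1, -1))
    for _ in range(sweeps):
        for i in order:
            x[i] += (b[i] - A[i] @ x) / A[i, i]   # relax one unknown
    return x

e_approx = symmetric_gauss_seidel(A, r, np.zeros(n), sweeps=5)

# energy-norm error relative to the zero starting guess
err = e_approx - e_exact
rel = (err @ A @ err) / (e_exact @ A @ e_exact)
```

Only the direction of the error vector matters for the anisotropic metric, so an approximation of this quality is more than sufficient, in line with the observation in the abstract.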

  3. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The Labor Force Survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sample sizes in small areas, tend to produce fairly large sampling variations therefore model based methods, which tend to

  4. Calculation of chiral determinants and multiloop amplitudes by cutting and sewing method

    International Nuclear Information System (INIS)

    Losev, A.

    1989-01-01

Functional integrals over fermions on open Riemann surfaces are determined up to a multiplicative constant by conservation laws. Using a cutting and sewing method these constants are found. Multiloop statsums and amplitudes are obtained as products of anomaly-free expressions in the Schottky parametrization and statsums on spheres. 5 refs

  5. Residential building energy estimation method based on the application of artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, S.; Kajl, S.

    1999-07-01

The energy requirements of a residential building five to twenty-five storeys high can be estimated using a newly proposed analytical method based on artificial intelligence. The method is fast and provides a wide range of results such as total energy consumption values, power surges, and heating or cooling consumption values. A series of databases was created to take into account the particularities which influence the energy consumption of a building. In this study, the DOE-2 software was used with 8 apartment models. A total of 27 neural networks were used: 3 for the estimation of energy consumption in the corridors, and 24 for inside the apartments. Three user interfaces were created to facilitate the estimation of energy consumption. These were named the Energy Estimation Assistance System (EEAS) interfaces and are only accessible using MATLAB software. The input parameters for the EEAS are: climatic region, exterior wall resistance, roofing resistance, type of windows, infiltration, number of storeys, and corridor ventilation system operating schedule. By changing the parameters, the EEAS can determine annual heating, cooling and basic energy consumption levels for apartments and corridors. 2 tabs., 2 figs.

  6. Large Amplitude Oscillatory Extension of Soft Polymeric Networks

    DEFF Research Database (Denmark)

    Bejenariu, Anca Gabriela; Rasmussen, Henrik K.; Skov, Anne Ladegaard

    2010-01-01

Using a filament stretching rheometer (FSR) surrounded by a thermostatic chamber and equipped with a micrometric laser, it is possible to measure large amplitude oscillatory elongation (LAOE) on elastomer-based networks with no base flow, as in the LAOE method for polymer melts. Poly(dimethylsilox...

  7. The application of particle filters in single trial event-related potential estimation

    International Nuclear Information System (INIS)

    Mohseni, Hamid R; Nazarpour, Kianoush; Sanei, Saeid; Wilding, Edward L

    2009-01-01

In this paper, an approach for the estimation of single trial event-related potentials (ST-ERPs) using particle filters (PFs) is presented. The method is based on recursive Bayesian mean square estimation of ERP wavelet coefficients using their previous estimates as prior information. To enable a performance evaluation of the approach in Gaussian and non-Gaussian distributed noise conditions, we added Gaussian white noise (GWN) and real electroencephalogram (EEG) signals recorded during rest to the simulated ERPs. The results were compared to those of the Kalman filtering (KF) approach, demonstrating the robustness of the PF over the KF to the added GWN. The proposed method also outperforms the KF when the assumption about the Gaussianity of the noise is violated. We also applied this technique to real EEG potentials recorded in an odd-ball paradigm and investigated the correlation between the amplitude and the latency of the estimated ERP components. Unlike the KF method, for the PF there was a statistically significant negative correlation between the amplitude and latency of the estimated ERPs, matching previous neurophysiological findings.
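The recursive Bayesian estimation idea underlying the PF approach can be illustrated with a generic bootstrap particle filter tracking a slowly drifting amplitude across trials. This is a schematic stand-in with invented noise levels, not the authors' wavelet-coefficient ERP model.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated "ERP amplitude" drifting slowly across trials, observed in noise
n_trials, n_particles = 200, 500
true_amp = 5.0 + 0.01 * np.arange(n_trials)
obs = true_amp + rng.normal(0.0, 1.0, n_trials)

# bootstrap particle filter: random-walk state model, Gaussian likelihood
particles = rng.normal(5.0, 2.0, n_particles)
estimates = np.empty(n_trials)
for t in range(n_trials):
    particles = particles + rng.normal(0.0, 0.05, n_particles)  # propagate state
    w = np.exp(-0.5 * (obs[t] - particles) ** 2)                # likelihood weights
    w /= w.sum()
    estimates[t] = w @ particles                                # posterior mean
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
```

The posterior from each trial acts as the prior for the next, the same recursion the record describes for the wavelet coefficients.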

  8. State of charge estimation of lithium-ion batteries based on an improved parameter identification method

    International Nuclear Information System (INIS)

    Xia, Bizhong; Chen, Chaoren; Tian, Yong; Wang, Mingwang; Sun, Wei; Xu, Zhihui

    2015-01-01

    The SOC (state of charge) is the most important index of the battery management systems. However, it cannot be measured directly with sensors and must be estimated with mathematical techniques. An accurate battery model is crucial to exactly estimate the SOC. In order to improve the model accuracy, this paper presents an improved parameter identification method. Firstly, the concept of polarization depth is proposed based on the analysis of polarization characteristics of the lithium-ion batteries. Then, the nonlinear least square technique is applied to determine the model parameters according to data collected from pulsed discharge experiments. The results show that the proposed method can reduce the model error as compared with the conventional approach. Furthermore, a nonlinear observer presented in the previous work is utilized to verify the validity of the proposed parameter identification method in SOC estimation. Finally, experiments with different levels of discharge current are carried out to investigate the influence of polarization depth on SOC estimation. Experimental results show that the proposed method can improve the SOC estimation accuracy as compared with the conventional approach, especially under the conditions of large discharge current. - Highlights: • The polarization characteristics of lithium-ion batteries are analyzed. • The concept of polarization depth is proposed to improve model accuracy. • A nonlinear least square technique is applied to determine the model parameters. • A nonlinear observer is used as the SOC estimation algorithm. • The validity of the proposed method is verified by experimental results.
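The pulse-discharge parameter identification step can be sketched with a simplified single-RC relaxation model fitted by least squares; since the voltage model is linear in two of its parameters, the sketch scans the nonlinear time constant and solves the rest exactly. All values are hypothetical, not the paper's battery data.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic relaxation voltage after a discharge pulse (hypothetical values):
# v(t) = v_inf + a * exp(-t / tau)
t = np.linspace(0.0, 100.0, 201)
v = 3.6 - 0.05 * np.exp(-t / 20.0) + rng.normal(0.0, 1e-4, t.size)

# separable least squares: scan the nonlinear time constant tau and solve
# the linear part (v_inf, a) exactly for each candidate
best = None
for tau in np.linspace(1.0, 60.0, 600):
    X = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
    coef = np.linalg.lstsq(X, v, rcond=None)[0]
    sse = np.sum((v - X @ coef) ** 2)
    if best is None or sse < best[0]:
        best = (sse, tau, coef)
_, tau_hat, (v_inf_hat, a_hat) = best
```

A real equivalent-circuit model would add further RC branches and the polarization-depth dependence the paper introduces, but the fitting principle is the same.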

  9. Comparing Three Approaches of Evapotranspiration Estimation in Mixed Urban Vegetation: Field-Based, Remote Sensing-Based and Observational-Based Methods

    Directory of Open Access Journals (Sweden)

    Hamideh Nouri

    2016-06-01

Despite being the driest inhabited continent, Australia has one of the highest per capita water consumptions in the world. In addition, instead of having fit-for-purpose water supplies (using different qualities of water for different applications), highly treated drinking water is used for nearly all of Australia's urban water supply needs, including landscape irrigation. The water requirement of urban landscapes, particularly urban parklands, is of growing concern. The estimation of evapotranspiration (ET), and subsequently plant water requirements, in urban vegetation needs to consider the heterogeneity of plants, soils, water, and climate characteristics. This research contributes to a broader effort to establish sustainable irrigation practices within the Adelaide Parklands in Adelaide, South Australia. In this paper, two practical ET estimation approaches are compared to a detailed Soil Water Balance (SWB) analysis over a one year period. One approach is the Water Use Classification of Landscape Plants (WUCOLS) method, which is based on expert opinion on the water needs of different classes of landscape plants. The other is a remote sensing approach based on the Enhanced Vegetation Index (EVI) from Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra satellite. Both methods require knowledge of reference ET calculated from meteorological data. The SWB determined that plants consumed 1084 mm·yr−1 of water in ET with an additional 16% lost to drainage past the root zone, an amount sufficient to keep salts from accumulating in the root zone. ET by MODIS EVI was 1088 mm·yr−1, very close to the SWB estimate, while WUCOLS estimated the total water requirement at only 802 mm·yr−1, 26% lower than the SWB estimate and 37% lower than the amount actually added including the drainage fraction. Individual monthly ET by MODIS was not accurate, but these errors were cancelled out to give good agreement on an annual time step. We

  10. Amplitude control of the track-induced self-excited vibration for a maglev system.

    Science.gov (United States)

    Zhou, Danfeng; Li, Jie; Zhang, Kun

    2014-09-01

The Electromagnetic Suspension (EMS) maglev train uses controlled electromagnetic forces to achieve suspension, and self-excited vibration may occur due to the flexibility of the track. In this article, the harmonic balance method is applied to investigate the amplitude of the self-excited vibration, and it is found that the amplitude of the vibration depends on the voltage of the power supply. Based on this observation, a vibration amplitude control method, which controls the amplitude of the vibration by adjusting the voltage of the power supply, is proposed to attenuate the vibration. A PI controller is designed to control the amplitude of the vibration at a given level. The effectiveness of this method shows a good prospect for its application to commercial maglev systems. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
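The voltage-adjustment loop can be caricatured with a discrete PI controller driving a toy plant in which the measured amplitude is a monotonic function of supply voltage. The plant map and gains below are invented for illustration, not the maglev dynamics.

```python
# toy plant (assumption): vibration amplitude grows monotonically with voltage
def plant_amplitude(voltage):
    return 0.8 * voltage + 0.5

def run_pi(setpoint, kp=0.5, ki=0.3, steps=500, dt=0.1):
    """Discrete PI loop adjusting voltage until the amplitude reaches setpoint."""
    voltage, integral = 0.0, 0.0
    amp = plant_amplitude(voltage)
    for _ in range(steps):
        err = setpoint - amp
        integral += err * dt
        voltage = kp * err + ki * integral      # PI control law
        amp = plant_amplitude(voltage)
    return amp

amp_final = run_pi(setpoint=2.0)
```

The integral term removes the steady-state error, so the amplitude settles at the commanded level, which is the role the record assigns to the PI controller.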

  11. [A method to estimate the short-term fractal dimension of heart rate variability based on wavelet transform].

    Science.gov (United States)

    Zhonggang, Liang; Hong, Yan

    2006-10-01

A new method of calculating the fractal dimension of short-term heart rate variability (HRV) signals is presented. The method is based on wavelet transform and filter banks. The implementation of the method is as follows: First, the fractal component is extracted from the HRV signal using the wavelet transform. Next, the power spectrum distribution of the fractal component is estimated using an auto-regressive model, and the spectral exponent γ is estimated using the least squares method. Finally, the fractal dimension of the HRV signal is estimated according to the formula D = 2 − (γ − 1)/2. To validate the stability and reliability of the proposed method, fractional Brownian motion was used to simulate 24 fractal signals with a fractal dimension of 1.6; the results show that the method is stable and reliable.
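The spectral-exponent route to the fractal dimension can be sketched on synthetic 1/f^γ noise. Here the signal is generated by direct spectral shaping and γ is estimated from the slope of the log-log power spectrum, rather than through the paper's wavelet and auto-regressive pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthesize 1/f^gamma noise by spectral shaping (gamma = 1.8, so D = 1.6)
n, gamma_true = 4096, 1.8
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = np.zeros(n // 2 + 1, dtype=complex)
phases = rng.uniform(0.0, 2.0 * np.pi, n // 2 - 1)
spectrum[1:n // 2] = freqs[1:n // 2] ** (-gamma_true / 2) * np.exp(1j * phases)
x = np.fft.irfft(spectrum, n)

# estimate gamma as the negative slope of the log-log power spectrum,
# then apply D = 2 - (gamma - 1) / 2
pxx = np.abs(np.fft.rfft(x)) ** 2
band = slice(1, n // 8)                     # low-frequency scaling band
slope = np.polyfit(np.log(freqs[band]), np.log(pxx[band]), 1)[0]
gamma_hat = -slope
fractal_dim = 2.0 - (gamma_hat - 1.0) / 2.0
```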

  12. Impact of a new respiratory amplitude-based gating technique in evaluation of upper abdominal PET lesions

    Energy Technology Data Exchange (ETDEWEB)

    Van Der Gucht, Axel, E-mail: axel.vandergucht@gmail.com [Department of Nuclear Medicine, Centre Hospitalier Princesse Grace, Monaco (Monaco); Serrano, Benjamin [Department of Medical Physics, Centre Hospitalier Princesse Grace, Monaco (Monaco); Hugonnet, Florent; Paulmier, Benoît [Department of Nuclear Medicine, Centre Hospitalier Princesse Grace, Monaco (Monaco); Garnier, Nicolas [Department of Medical Physics, Centre Hospitalier Princesse Grace, Monaco (Monaco); Faraggi, Marc [Department of Nuclear Medicine, Centre Hospitalier Princesse Grace, Monaco (Monaco)

    2014-03-15

PET acquisition requires several minutes, which can lead to respiratory motion blurring, increased partial volume effect and SUV under-estimation. To avoid these artifacts, conventional 10-min phase-based respiratory gating (PBRG) can be performed, but it is time-consuming and difficult with a non-compliant patient. We evaluated an automatic amplitude-based gating method (AABG) which keeps 35% of the counts at the end of expiration to minimize respiratory motion. We estimated the impact of AABG on upper abdominal lesion detectability, quantification and patient management. Methods: We consecutively included 31 patients (82 hepatic and 25 perihepatic known lesions). Each patient underwent 3 acquisitions on a Siemens Biograph mCT (4 rings and time-of-flight): a standard free-breathing whole-body (SWB, 5–7 steps/2.5 min per step, 3.3 ± 0.4 MBq/kg of 18F-FDG), a 10-min PBRG with six bins and a 5-min AABG method. All gated acquisitions were performed with an ANZAI respiratory gating system. SUV{sub max} and target to background ratio (TBR, defined as the maximum SUV of the lesion divided by the mean SUV of a region of interest drawn in healthy liver) were compared. Results: All 94 lesions in SWB images were detected in the gated images. The 10-min PBRG and 5-min AABG acquisitions respectively revealed 9 and 13 new lesions and relocated 7 and 8 lesions. Four lesions revealed by 5-min AABG were missed by 10-min PBRG in 3 non-compliant patients. Both gated methods failed to relocate 2 lesions seen on the SWB acquisition. Compared to SWB, TBR increased significantly with 10-min PBRG and with 5-min AABG (respectively 41 ± 59%, p = 4 × 10−3 and 66 ± 75%, p = 6 × 10−5) whereas SUV{sub max} did not (respectively 14 ± 43%, p = 0.29 with 10-min PBRG, and 24 ± 46%, p = 0.11 with 5-min AABG). Conclusion: The AABG is a fast and user-friendly respiratory gating method to increase detectability and quantification of upper abdominal lesions compared to the conventional PBRG procedure and

  13. NASA Software Cost Estimation Model: An Analogy Based Estimation Model

    Science.gov (United States)

    Hihn, Jairus; Juster, Leora; Menzies, Tim; Mathew, George; Johnson, James

    2015-01-01

The cost estimation of software development activities is increasingly critical for large scale integrated projects such as those at DOD and NASA, especially as the software systems become larger and more complex. As an example, MSL (Mars Science Laboratory), developed at the Jet Propulsion Laboratory, launched with over 2 million lines of code, making it the largest robotic spacecraft ever flown (based on the size of its software). Software development activities are also notorious for their cost growth, with NASA flight software averaging over 50% cost growth. All across the agency, estimators and analysts are increasingly being tasked to develop reliable cost estimates in support of program planning and execution. While there has been extensive work on improving parametric methods, there is very little focus on the use of models based on analogy and clustering algorithms. In this paper we summarize our findings on effort/cost model estimation and model development based on ten years of software effort estimation research using data mining and machine learning methods to develop estimation models based on analogy and clustering. The NASA Software Cost Model performance is evaluated by comparing it to COCOMO II, linear regression, and K-nearest neighbor prediction model performance on the same data set.
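The analogy-based (k-nearest-neighbor) estimation idea can be illustrated with a toy effort estimator over a hypothetical project database; the features, efforts, and query values below are invented, not NASA or COCOMO data.

```python
import numpy as np

# hypothetical project database: [size_ksloc, team_experience, reuse_pct] -> effort
X = np.array([[10.0, 3.0, 20.0],
              [50.0, 4.0, 10.0],
              [120.0, 2.0, 5.0],
              [30.0, 5.0, 40.0],
              [80.0, 3.0, 15.0],
              [15.0, 4.0, 50.0]])
y = np.array([40.0, 180.0, 600.0, 90.0, 330.0, 45.0])   # person-months

def knn_effort(query, X, y, k=2):
    """Estimate effort as the mean of the k most similar past projects."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    Xn = (X - lo) / (hi - lo)                 # min-max normalize features
    qn = (np.asarray(query, float) - lo) / (hi - lo)
    d = np.linalg.norm(Xn - qn, axis=1)       # distance to each analogue
    return y[np.argsort(d)[:k]].mean()

est = knn_effort([45.0, 4.0, 12.0], X, y, k=2)
```

The normalization step matters: without it, the size feature would dominate the distance and the "analogues" would be chosen on size alone.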

  14. DETERMINISTIC COMPONENTS IN THE LIGHT CURVE AMPLITUDE OF Y OPH

    International Nuclear Information System (INIS)

    Pop, Alexandru; Turcu, Vlad; Vamos, Calin

    2010-01-01

About two decades after the discovery of the amplitude decline of the light curve of the classical Cepheid Y Oph, its study is resumed using an increased amount of homogenized data and an extended time base. In our approach, the investigation of different time series concerning the light curve amplitude of Y Oph is not only the reason for the present study, but also a stimulus for developing a coherent methodology for studying long- and short-term variability phenomena in variable stars, taking into account the details of concrete observing conditions: amount of data, data sampling, time base, and individual errors of observational data. The statistical significance of this decreasing trend was estimated by assuming its linearity. We approached the decision-making process by formulating adequate null and alternative hypotheses, and testing the value of the regression line slope for different data sets via Monte Carlo simulations. A variability analysis, through various methods, of the original data and of the residuals obtained after removing the linear trend was performed. We also proposed a new statistical test, based on amplitude spectrum analysis and Monte Carlo simulations, intended to evaluate how detectable a given (linear) trend is under well-defined observing conditions: the trend detection probability. The main conclusion of our study on Y Oph is that, even if the false alarm probability is low enough to consider the decreasing trend to be statistically significant, the available data do not allow us to obtain a reasonably powerful test. We are able to confirm the light curve amplitude decline, and the order of magnitude of its slope, with a better statistical substantiation. According to the obtained values of the trend detection probability, the trend we are dealing with is marked by a low detectability. Our attempt to find signs of possible variability phenomena at shorter timescales ended by emphasizing the relative constancy of our data
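A trend detection probability of the kind proposed in the record can be sketched for the simple case of a linear trend in white noise: simulate many datasets with a prescribed slope, fit the slope by least squares, and count how often it tests as significant. The sampling, noise level, and slopes below are invented, not the Y Oph data.

```python
import numpy as np

rng = np.random.default_rng(3)

def trend_detection_probability(slope, sigma, t, n_sim=2000):
    """Fraction of simulated datasets whose fitted slope tests as significant."""
    n = len(t)
    tc = t - t.mean()
    sxx = np.sum(tc ** 2)
    detections = 0
    for _ in range(n_sim):
        y = slope * t + rng.normal(0.0, sigma, n)
        b = np.sum(tc * y) / sxx                        # least-squares slope
        resid = y - y.mean() - b * tc
        se = np.sqrt(np.sum(resid ** 2) / (n - 2) / sxx)
        if abs(b / se) > 1.96:                          # approx. 5% two-sided test
            detections += 1
    return detections / n_sim

t = np.linspace(0.0, 30.0, 60)                  # 30 "years" of observations
p_strong = trend_detection_probability(slope=0.05, sigma=0.3, t=t)
p_weak = trend_detection_probability(slope=0.005, sigma=0.3, t=t)
```

With these settings the strong trend is detected almost always while the weak one is detected only in a minority of simulations, which is precisely the "low detectability" situation the record describes.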

  15. An estimate of the terrestrial carbon budget of Russia using inventory-based, eddy covariance and inversion methods

    Directory of Open Access Journals (Sweden)

    A. J. Dolman

    2012-12-01

We determine the net land to atmosphere flux of carbon in Russia, including Ukraine, Belarus and Kazakhstan, using inventory-based, eddy covariance, and inversion methods. Our high boundary estimate is −342 Tg C yr−1 from the eddy covariance method, and this is close to the upper bounds of the inventory-based Land Ecosystem Assessment and inverse model estimates. A lower boundary estimate is provided at −1350 Tg C yr−1 from the inversion models. The average of the three methods is −613.5 Tg C yr−1. The methane emission is estimated separately at 41.4 Tg C yr−1.

These three methods agree well within their respective error bounds. There is thus good consistency between bottom-up and top-down methods. The forests of Russia primarily cause the net atmosphere to land flux (−692 Tg C yr−1 from the LEA). It remains however remarkable that the three methods provide such close estimates (−615, −662, −554 Tg C yr−1) for net biome production (NBP), given the inherent uncertainties in all of the approaches. The lack of recent forest inventories, the few eddy covariance sites and the associated uncertainty with upscaling, and undersampling of concentrations for the inversions are among the prime causes of the uncertainty. The dynamic global vegetation models (DGVMs) suggest a much lower uptake at −91 Tg C yr−1, and we argue that this is caused by a high estimate of heterotrophic respiration compared to other methods.

  16. Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing.

    Science.gov (United States)

    Sun, Qi; Fu, Shujun

    2017-09-20

    Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns that demonstrate that the RSVD produces the best estimation results at a cost of relatively less time.
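As background, the gradient-field family of methods that RSVD refines can be sketched with the classical structure-tensor estimator on a synthetic straight-fringe pattern of known orientation; this is a generic baseline, not the paper's RSVD algorithm.

```python
import numpy as np

# synthetic straight-fringe pattern at a known orientation (30 degrees)
theta_true = np.deg2rad(30.0)
yy, xx = np.mgrid[0:64, 0:64]
pattern = np.cos(2.0 * np.pi * 0.05
                 * (xx * np.cos(theta_true) + yy * np.sin(theta_true)))

# gradient field and field-averaged structure tensor
gy, gx = np.gradient(pattern)
jxx, jxy, jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()

# orientation of the dominant eigenvector via the double-angle formula
theta_hat = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
```

A real fringe pattern would use local windows instead of a field average; noisy, singular, and degenerate regions are exactly where refinements such as RSVD differ from this baseline.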

  17. Fatigue Crack Propagation Under Variable Amplitude Loading Analyses Based on Plastic Energy Approach

    Directory of Open Access Journals (Sweden)

    Sofiane Maachou

    2014-04-01

Plasticity effects at the crack tip have been recognized as the "motor" of crack propagation: the growth of cracks is related to the existence of a crack tip plastic zone, whose formation and intensification is accompanied by energy dissipation. In the current state of knowledge, fatigue crack propagation is modeled using the crack closure concept. The fatigue crack growth behavior under constant amplitude and variable amplitude loading of the aluminum alloy 2024 T351 is analyzed in terms of energy parameters. In the case of VAL (variable amplitude loading) tests, the evolution of the hysteretic energy dissipated per block is shown to be similar to that observed under constant amplitude loading. A linear relationship between the crack growth rate and the hysteretic energy dissipated per block is obtained at high growth rates. For lower growth rate values, the relationship between crack growth rate and hysteretic energy dissipated per block can be represented by a power law. In this paper, an analysis of fatigue crack propagation under variable amplitude loading based on an energetic approach is proposed.
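The power-law regime described above is straightforward to fit: in log-log coordinates da/dN = C·Q^m becomes linear, so C and m follow from ordinary least squares. The constants below are invented, not measured 2024 T351 values.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic crack growth rate per block vs hysteretic energy per block,
# generated from da/dN = C * Q**m with hypothetical C and m
C_true, m_true = 1e-7, 2.0
Q = np.logspace(0, 2, 30)                                  # energy per block
dadN = C_true * Q ** m_true * np.exp(rng.normal(0.0, 0.05, Q.size))

# a power law is linear in log-log coordinates
m_hat, logC_hat = np.polyfit(np.log(Q), np.log(dadN), 1)
C_hat = np.exp(logC_hat)
```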

  18. Real topological string amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Narain, K.S. [The Abdus Salam International Centre for Theoretical Physics (ICTP),Strada Costiera 11, Trieste, 34151 (Italy); Piazzalunga, N. [Simons Center for Geometry and Physics, State University of New York,Stony Brook, NY, 11794-3636 (United States); International School for Advanced Studies (SISSA) and INFN, Sez. di Trieste,via Bonomea 265, Trieste, 34136 (Italy); Tanzini, A. [International School for Advanced Studies (SISSA) and INFN, Sez. di Trieste,via Bonomea 265, Trieste, 34136 (Italy)

    2017-03-15

We discuss the physical superstring correlation functions in type I theory (or equivalently type II with orientifold) that compute real topological string amplitudes. We consider the correlator corresponding to the holomorphic derivative of the real topological amplitude G{sub χ}, at fixed worldsheet Euler characteristic χ. This corresponds in the low-energy effective action to the N=2 Weyl multiplet, appropriately reduced to the orientifold invariant part, and raised to the power g{sup ′}=−χ+1. We show that the physical string correlator gives precisely the holomorphic derivative of the topological amplitude. Finally, we apply this method to the standard closed oriented case as well, and prove a similar statement for the topological amplitude F{sub g}.

  19. Application of the multigrid amplitude function method for time-dependent transport equation using MOC

    International Nuclear Information System (INIS)

    Tsujita, K.; Endo, T.; Yamamoto, A.

    2013-01-01

An efficient numerical method for the time-dependent transport equation, the multigrid amplitude function (MAF) method, is proposed. The method of characteristics (MOC) is widely used for reactor analysis thanks to advances in numerical algorithms and computer hardware. However, an efficient kinetic calculation method for MOC is still desirable since it requires significant computation time. Various efficient numerical methods for solving the space-dependent kinetic equation, e.g., the improved quasi-static (IQS) and the frequency transform methods, have been developed so far, mainly for diffusion calculations. These methods are known to be effective and offer a way to faster computation. However, to the authors' knowledge, they have not been applied to kinetic calculations using MOC. Thus, the MAF method is applied to the kinetic calculation using MOC, aiming to reduce computation time. The MAF method is a unified numerical framework of conventional kinetic calculation methods, e.g., the IQS, the frequency transform, and the theta methods. Although the MAF method was originally developed for space-dependent kinetic calculations based on diffusion theory, it is extended to transport theory in the present study. The accuracy and computation time are evaluated through the TWIGL benchmark problem. The calculation results show the effectiveness of the MAF method. (authors)

  20. Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Li Lei

    2015-04-01

Based on coherent accumulation matrix reconstruction, a novel Direction Of Arrival (DOA) estimation decorrelation method for coherent signals is proposed using a small sample. First, the Signal to Noise Ratio (SNR) is improved by performing a coherent accumulation operation on the array of observed data. Then, according to the structural characteristics of the accumulated snapshot vector, an equivalent covariance matrix, whose rank is the same as the number of array elements, is constructed. The rank of this matrix is proved to be determined only by the number of incident signals, which realizes the decorrelation of coherent signals. Compared with the spatial smoothing method, the proposed method performs better by effectively avoiding aperture loss, with high-resolution characteristics and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.

  1. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects based on very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach for obtaining a software reliability value is proposed in this paper.
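For orientation, the NHPP assumption behind such SRGMs is commonly written with a mean value function such as the Goel-Okumoto form m(t) = a(1 − e^(−bt)). The parameter values below are hypothetical, and the paper's Bayesian inference with test-case covariates is not reproduced.

```python
import numpy as np

# Goel-Okumoto NHPP mean value function: expected cumulative defects by time t
a, b = 40.0, 0.05        # a: eventual defect count, b: detection rate (hypothetical)

def expected_defects(t):
    return a * (1.0 - np.exp(-b * t))

found_by_100 = expected_defects(100.0)   # expected defects found by t = 100
remaining = a - found_by_100             # expected residual defects
```

The "remaining number of software defects" the record refers to is the gap between the asymptote a and the defects found so far.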

  2. Comparison of 3 estimation methods of mycophenolic acid AUC based on a limited sampling strategy in renal transplant patients.

    Science.gov (United States)

    Hulin, Anne; Blanchet, Benoît; Audard, Vincent; Barau, Caroline; Furlan, Valérie; Durrbach, Antoine; Taïeb, Fabrice; Lang, Philippe; Grimbert, Philippe; Tod, Michel

    2009-04-01

    A significant relationship between mycophenolic acid (MPA) area under the plasma concentration-time curve (AUC) and the risk for rejection has been reported. Based on 3 concentration measurements, 3 approaches have been proposed for the estimation of MPA AUC, involving either a multilinear regression approach model (MLRA) or a Bayesian estimation using either gamma absorption or zero-order absorption population models. The aim of the study was to compare the 3 approaches for the estimation of MPA AUC in 150 renal transplant patients treated with mycophenolate mofetil and tacrolimus. The population parameters were determined in 77 patients (learning study). The AUC estimation methods were compared in the learning population and in 73 patients from another center (validation study). In the latter study, the reference AUCs were estimated by the trapezoidal rule on 8 measurements. MPA concentrations were measured by liquid chromatography. The gamma absorption model gave the best fit. In the learning study, the AUCs estimated by both Bayesian methods were very similar, whereas the multilinear approach was highly correlated but yielded estimates about 20% lower than Bayesian methods. This resulted in dosing recommendations differing by 250 mg/12 h or more in 27% of cases. In the validation study, AUC estimates based on the Bayesian method with gamma absorption model and multilinear regression approach model were, respectively, 12% higher and 7% lower than the reference values. To conclude, the bicompartmental model with gamma absorption rate gave the best fit. The 3 AUC estimation methods are highly correlated but not concordant. For a given patient, the same estimation method should always be used.
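The reference computation mentioned above, the trapezoidal rule over measured concentrations, is simple to sketch; the concentration-time values below are invented, only loosely MPA-like.

```python
import numpy as np

# hypothetical concentration-time profile over a 12 h dosing interval (mg/L)
t = np.array([0.0, 0.33, 0.66, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0])
c = np.array([1.5, 8.0, 12.0, 9.0, 5.0, 3.0, 2.2, 1.8, 1.6])

# linear trapezoidal rule: sum of interval areas
auc_0_12 = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))
```

Limited sampling strategies such as those compared in the record replace this full 8-point profile with a model fitted to only 3 measurements.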

  3. Model-based dynamic multi-parameter method for peak power estimation of lithium-ion batteries

    NARCIS (Netherlands)

    Sun, F.; Xiong, R.; He, H.; Li, W.; Aussems, J.E.E.

    2012-01-01

    A model-based dynamic multi-parameter method for peak power estimation is proposed for batteries and battery management systems (BMSs) used in hybrid electric vehicles (HEVs). The available power must be accurately calculated in order to not damage the battery by over charging or over discharging or

  4. Integration of sampling based battery state of health estimation method in electric vehicles

    International Nuclear Information System (INIS)

    Ozkurt, Celil; Camci, Fatih; Atamuradov, Vepa; Odorry, Christopher

    2016-01-01

Highlights: • Presentation of a prototype system with full charge-discharge cycling capability. • Presentation of SoH estimation results for systems degraded in the lab. • Discussion of integration alternatives for the presented method in EVs. • Simulation model based on the presented SoH estimation for a real EV battery system. • Optimization of the number of battery cells to be selected for the SoH test. - Abstract: Battery cost is one of the crucial parameters negatively affecting high deployment of Electric Vehicles (EVs). Accurate State of Health (SoH) estimation plays an important role in reducing the total ownership cost and in improving the availability and safety of the battery, avoiding early disposal of the batteries and decreasing unexpected failures. A circuit design for SoH estimation in a battery system that is based on selected battery cells, and its integration into EVs, are presented in this paper. A prototype microcontroller has been developed and used for accelerated aging tests on a battery system. The data collected in the lab tests have been utilized to simulate a real EV battery system. Results of accelerated aging tests and simulation are presented in the paper. The paper also discusses identification of the best number of battery cells to be selected for the SoH estimation test. In addition, different application options of the presented approach for EV batteries are discussed in the paper.

  5. Differences in characteristics of raters who use the visual estimation method in hospitals based on their training experiences.

    Science.gov (United States)

    Kawasaki, Yui; Tamaura, Yuki; Akamatsu, Rie; Sakai, Masashi; Fujiwara, Keiko

    2018-02-07

Despite a clinical need, only a few studies have provided information concerning visual estimation training for raters to improve the validity of their evaluations. This study aims to describe the differences in the characteristics of raters who evaluated patients' dietary intake in hospitals using the visual estimation method, based on their training experiences. We collected data from three hospitals in Tokyo from August to September 2016. The participants were 199 nursing staff members, and they completed a self-administered questionnaire on demographic data; working career; training in the visual estimation method; knowledge, attitude, and practice associated with nutritional care; and self-evaluation of the method validity of and their skills in visual estimation. We classified participants into two groups, experienced and inexperienced, based on whether they had received training. The chi-square test, Mann-Whitney U test, and univariable and multivariable logistic regression analyses were used to describe the differences between these two groups in terms of their characteristics; knowledge, attitude, and practice associated with nutritional care; and self-evaluation of method validity and tips used in the visual estimation method. Of the 158 staff members (79.4%) (118 nurses and 40 nursing assistants) who agreed to participate in the analysis, thirty-three participants (20.9%) were trained in the visual estimation method. Participants who had received training had better knowledge (2.70 ± 0.81, score range 1-5) than those who had not (2.34 ± 0.74, p = 0.03). The self-evaluation score for the validity of the visual estimation method was higher in the experienced group (3.78 ± 0.61, score range 1-5) than in the inexperienced group (3.40 ± 0.66, p …). Those who were trained had adequate knowledge (OR: 2.78, 95% CI: 1.05-7.35) and frequently used tips in visual estimation (OR: 1.85, 95% CI: 1.26-2.73). Trained participants had more required knowledge and

  6. A comparison of analysis methods to estimate contingency strength.

    Science.gov (United States)

    Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T

    2018-05-09

    To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
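    The event-based analyses compared above reduce, in their simplest form, to a conditional-probability contrast between reinforcement given a response and reinforcement given no response. As a hedged illustration (the exact algorithms compared in the study are not reproduced here), the classic operant contingency index ΔP can be sketched as:

```python
# Hedged sketch: a simple event-based contingency strength estimate,
#     delta_p = P(reinforcer | response) - P(reinforcer | no response),
# computed over discrete observation windows. This is an illustrative
# stand-in, not the exhaustive/nonexhaustive algorithms of the paper.

def delta_p(windows):
    """windows: list of (response_occurred, reinforcer_occurred) booleans."""
    with_r = [sr for b, sr in windows if b]
    without_r = [sr for b, sr in windows if not b]
    p_given_b = sum(with_r) / len(with_r) if with_r else 0.0
    p_given_not_b = sum(without_r) / len(without_r) if without_r else 0.0
    return p_given_b - p_given_not_b

# Perfectly response-dependent reinforcement vs. response-independent
# ("free") reinforcement delivered in every window:
perfect = [(True, True), (False, False)] * 10
independent = [(True, True), (False, True)] * 10
```

A response-dependent schedule yields ΔP near 1, while response-independent delivery drives ΔP toward 0, which is the sensitivity the study probes parametrically.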

  7. An Empirical Study of Atmospheric Correction Procedures for Regional Infrasound Amplitudes with Ground Truth.

    Science.gov (United States)

    Howard, J. E.

    2014-12-01

    This study focuses on improving methods of accounting for atmospheric effects on infrasound amplitudes observed on arrays at regional distances in the southwestern United States. Recordings at ranges of 150 to nearly 300 km from a repeating ground-truth source of small HE explosions are used. The explosions range in actual weight from approximately 2000 to 4000 lbs. and are detonated year-round, which provides signals for a wide range of atmospheric conditions. Three methods of correcting the observed amplitudes for atmospheric effects are investigated with the data set. The first corrects amplitudes for upper stratospheric wind as developed by Mutschlecner and Whitaker (1999) and uses the average wind speed between 45-55 km altitude in the direction of propagation to derive an empirical correction formula. This approach was developed using large chemical and nuclear explosions and is tested here with the smaller explosions, for which shorter wavelengths cause the energy to be scattered by the smaller-scale structure of the atmosphere. The second approach is a semi-empirical method using ray tracing to determine wind speed at ray turning heights, where the wind estimates replace the wind values in the existing formula. Finally, parabolic equation (PE) modeling is used to predict the amplitudes at the arrays at 1 Hz. The PE amplitudes are compared to the observed amplitudes with a narrow band filter centered at 1 Hz. An analysis is performed of the conditions under which the empirical and semi-empirical methods fail and full wave methods must be used.
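    The first method amounts to a log-linear adjustment of amplitude by the along-path stratospheric wind. A minimal sketch follows; the coefficient K below is an illustrative placeholder, not the calibrated Mutschlecner-Whitaker value:

```python
# Hedged sketch of a stratospheric-wind amplitude correction of the
# Mutschlecner-Whitaker type: the observed log10-amplitude is adjusted in
# proportion to the mean 45-55 km wind speed component toward the receiver.
# K is an ASSUMED illustrative coefficient, not the value from the paper.

K = 0.018  # assumed coefficient, log10-amplitude units per (m/s)

def wind_corrected_amplitude(a_obs, v_wind_along_path):
    """Remove the stratospheric wind effect from an observed amplitude.

    a_obs              -- observed infrasound amplitude (e.g., Pa)
    v_wind_along_path  -- mean 45-55 km wind speed toward the receiver (m/s)
    """
    return a_obs * 10.0 ** (-K * v_wind_along_path)
```

With zero along-path wind the amplitude is unchanged; a downwind path has its amplitude reduced toward the no-wind reference, which is what makes amplitudes from different seasons comparable.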

  8. Ground motion estimation for the elevated bridges of the Kyushu Shinkansen derailment caused by the foreshock of the 2016 Kumamoto earthquake based on the site-effect substitution method

    Science.gov (United States)

    Hata, Yoshiya; Yabe, Masaaki; Kasai, Akira; Matsuzaki, Hiroshi; Takahashi, Yoshikazu; Akiyama, Mitsuyoshi

    2016-12-01

    An earthquake of JMA magnitude 6.5 (first event) hit Kumamoto Prefecture, Japan, at 21:26 JST, April 14, 2016. Subsequently, an earthquake of JMA magnitude 7.3 (second event) hit Kumamoto and Oita Prefectures at 01:46 JST, April 16, 2016. An out-of-service Kyushu Shinkansen train carrying no passengers traveling on elevated bridges was derailed by the first event. This was the third derailment caused by an earthquake in the history of the Japanese Shinkansen, after one caused by the 2004 Mid-Niigata Prefecture Earthquake and another triggered by the 2011 Tohoku Earthquake. To analyze the mechanism of this third derailment, it is crucial to evaluate the strong ground motion at the derailment site with high accuracy. For this study, temporary earthquake observations were first carried out at a location near the bridge site; these observations were conducted because although the JMA Kumamoto Station site and the derailment site are closely located, the ground response characteristics at these sites differ. Next, empirical site amplification and phase effects were evaluated based on the obtained observation records. Finally, seismic waveforms during the first event at the bridge site of interest were estimated based on the site-effect substitution method. The resulting estimated acceleration and velocity waveforms for the derailment site include much larger amplitudes than the waveforms recorded at the JMA Kumamoto and MLIT Kumamoto station sites. The reliability of these estimates is confirmed by the finding that the same methods reproduce strong ground motions at the MLIT Kumamoto Station site accurately. These estimated ground motions will be useful for reasonable safety assessment of anti-derailment devices on elevated railway bridges.
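    The core idea of substituting site effects can be sketched as a frequency-domain transfer: a record observed at a reference station is corrected by removing the reference site's empirical amplification and applying the target site's amplification. The sketch below is a hedged, minimal illustration that ignores the empirical phase effects the study also treats:

```python
import numpy as np

# Hedged sketch of the site-effect substitution idea (amplitude only):
#   U_target(f) = U_ref(f) * S_target(f) / S_ref(f)
# where S_ref and S_target are empirical spectral amplification factors.
# Phase corrections, which the paper evaluates empirically, are omitted.

def substitute_site_effect(u_ref, amp_ref, amp_target):
    """u_ref: reference-site waveform (length n); amp_ref/amp_target:
    amplification arrays sampled on the np.fft.rfft frequency grid."""
    U = np.fft.rfft(u_ref)
    U_target = U * (amp_target / amp_ref)
    return np.fft.irfft(U_target, n=len(u_ref))

# Sanity data: identical site responses must leave the waveform unchanged.
wave = np.sin(np.linspace(0.0, 2 * np.pi, 64))
flat = np.ones(33)  # unit amplification at the 33 rfft bins of 64 samples
```

In practice the two amplification curves would come from the temporary observations near the bridge site and from the reference station records.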

  9. A note on probabilistic computation of earthquake response spectrum amplitudes

    International Nuclear Information System (INIS)

    Anderson, J.G.; Trifunac, M.D.

    1979-01-01

    This paper analyzes a method for computation of Pseudo Relative Velocity (PSV) spectrum and Absolute Acceleration (SA) spectrum so that the amplitudes and the shapes of these spectra reflect the geometrical characteristics of the seismic environment of the site. The estimated spectra also incorporate the geologic characteristics at the site, direction of ground motion and the probability of exceeding these motions. An example of applying this method in a realistic setting is presented and the uncertainties of the results are discussed. (Auth.)

  10. On statistical properties of wave amplitudes in stormy sea. Effect of short-crestedness; Daihakoji no haro no tokeiteki seishitsu ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Yoshimoto, H. [Ship Research Inst., Tokyo (Japan)

    1996-12-31

    Since ocean waves encountered by ocean vessels or offshore structures in actual sea areas exhibit extremely irregular variations, a stochastic method is necessary to estimate their statistical properties. This paper first presents a calculation method for the probability density function of water level variation that rigorously incorporates a second-order nonlinear effect including directional spreading, by modeling ocean waves as short-crested irregular waves. The paper then elucidates the effects of the directional spreading of ocean waves on the amplitude statistics, deriving those statistics from the probability density function of the water level variation and using numerical simulation. The paper finally takes up wave data observed in stormy seas in an experiment in an actual sea area, compares the results with theoretical calculations, and evaluates the validity of this method. With this estimation method, individual second-order components, or the difference and sum components, may be influenced by directional spreading, but on the whole they do not differ much from the case of long-crested irregular waves. 21 refs., 11 figs., 2 tabs.
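    A common way to fold a second-order (weakly nonlinear) correction into the water-level density is a Gram-Charlier expansion of the Gaussian by the skewness; whether the paper uses exactly this expansion is an assumption of this illustration:

```python
import numpy as np

# Hedged sketch: Gram-Charlier (Edgeworth-type) water-level PDF with a
# third-Hermite skewness correction. This is a standard second-order
# approximation, shown for illustration; it is not taken from the paper.

def gram_charlier_pdf(x, skewness):
    """Standardized water-level PDF with skewness correction lambda3."""
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    h3 = x**3 - 3 * x  # Hermite polynomial He3(x)
    return phi * (1.0 + (skewness / 6.0) * h3)

# The correction term integrates to zero, so total probability stays 1.
x = np.linspace(-10.0, 10.0, 20001)
pdf = gram_charlier_pdf(x, 0.2)
```

Positive skewness raises the probability of high crests relative to the linear (Gaussian) model, which is the effect the amplitude statistics in the paper quantify.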

  11. Secure communication based on multi-input multi-output chaotic system with large message amplitude

    International Nuclear Information System (INIS)

    Zheng, G.; Boutat, D.; Floquet, T.; Barbot, J.P.

    2009-01-01

    This paper deals with the problem of secure communication based on multi-input multi-output (MIMO) chaotic systems. Single-input secure communication based on chaos can easily be extended to multiple inputs by combination techniques; however, all the combined inputs carry the same risk of being broken. In order to reduce this risk, a new secure communication scheme based on chaos with MIMO is discussed in this paper. Moreover, since the amplitude of messages in traditional schemes is limited because it would affect the quality of synchronization, the proposed scheme is also improved into an amplitude-independent one.

  12. Determination of the scattering amplitude

    International Nuclear Information System (INIS)

    Gangal, A.D.; Kupsch, J.

    1984-01-01

    The problem of determining the elastic scattering amplitude from the differential cross-section by the unitarity equation is reexamined. We prove that the solution is unique and can be determined by a convergent iteration if the parameter λ = sin μ of Newton and Martin is bounded by λ < λ₂ ≈ 0.86. The method is based on a fixed point theorem for holomorphic mappings in a complex Banach space. (orig.)

  13. Probabilistic multiobjective wind-thermal economic emission dispatch based on point estimated method

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Niknam, Taher; Roosta, Alireza; Malekpour, Ahmad Reza; Zare, Mohsen

    2012-01-01

    In this paper, wind power generators are incorporated in the multiobjective economic emission dispatch problem, which simultaneously minimizes wind-thermal electrical energy cost and the emissions produced by fossil-fueled power plants. Large integration of wind energy sources necessitates an efficient model to cope with the uncertainty arising from random wind variation. Hence, a multiobjective stochastic search algorithm based on the 2m point estimate method is implemented to analyze the probabilistic wind-thermal economic emission dispatch problem considering both overestimation and underestimation of available wind power. The 2m point estimate method handles the system uncertainties and renders the probability density function of the desired variables efficiently. Moreover, a new population-based optimization algorithm called the modified teaching-learning-based optimization (MTLBO) algorithm is proposed to determine the set of non-dominated optimal solutions. During the simulation, the set of non-dominated solutions is kept in an external memory (repository). Also, a fuzzy-based clustering technique is implemented to control the size of the repository. In order to select the best compromise solution from the repository, a niching mechanism is utilized such that the population will move toward a smaller search space in the Pareto-optimal front. In order to show the efficiency and feasibility of the proposed framework, three different test systems are presented as case studies. -- Highlights: ► WPGs are being incorporated in the multiobjective economic emission dispatch problem. ► 2m PEM handles the system uncertainties. ► A MTLBO is proposed to determine the set of non-dominated (Pareto) optimal solutions. ► A fuzzy-based clustering technique is implemented to control the size of the repository.
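    The 2m point estimate method evaluates the deterministic dispatch function at two carefully chosen points per uncertain input instead of running a full Monte Carlo simulation. A hedged sketch of Hong's 2m scheme (the function `f` and its inputs here are illustrative, not the dispatch model of the paper):

```python
import math

# Hedged sketch of Hong's 2m point estimate method (PEM): for each of the
# m uncertain inputs, two evaluation points and weights are built from the
# input's mean, standard deviation and skewness; f is evaluated 2m times
# and the first two output moments are accumulated.

def pem_2m(f, means, stds, skews):
    m = len(means)
    e_y, e_y2 = 0.0, 0.0
    for k in range(m):
        lam3 = skews[k]
        root = math.sqrt(m + (lam3 / 2) ** 2)
        xi1, xi2 = lam3 / 2 + root, lam3 / 2 - root
        w1 = -xi2 / (m * (xi1 - xi2))
        w2 = xi1 / (m * (xi1 - xi2))
        for xi, w in ((xi1, w1), (xi2, w2)):
            x = list(means)
            x[k] = means[k] + xi * stds[k]  # perturb one input at a time
            y = f(x)
            e_y += w * y
            e_y2 += w * y * y
    return e_y, e_y2 - e_y ** 2  # mean and variance of the output

# For a linear output the 2m PEM moments are exact: var = 4*1 + 9*4 = 40.
mean, var = pem_2m(lambda x: 2 * x[0] + 3 * x[1],
                   [0.0, 0.0], [1.0, 2.0], [0.0, 0.0])
```

With zero skewness the scheme reduces to evaluations at μ ± √m·σ with weights 1/(2m), which makes the linear test case above exactly recoverable.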

  14. Simple method for direct crown base height estimation of individual conifer trees using airborne LiDAR data.

    Science.gov (United States)

    Luo, Laiping; Zhai, Qiuping; Su, Yanjun; Ma, Qin; Kelly, Maggi; Guo, Qinghua

    2018-05-14

    Crown base height (CBH) is an essential tree biophysical parameter for many applications in forest management, forest fuel treatment, wildfire modeling, ecosystem modeling and global climate change studies. Accurate and automatic estimation of CBH for individual trees is still a challenging task. Airborne light detection and ranging (LiDAR) provides reliable and promising data for estimating CBH. Various methods have been developed to calculate CBH indirectly using regression-based means from airborne LiDAR data and field measurements. However, little attention has been paid to directly calculate CBH at the individual tree scale in mixed-species forests without field measurements. In this study, we propose a new method for directly estimating individual-tree CBH from airborne LiDAR data. Our method involves two main strategies: 1) removing noise and understory vegetation for each tree; and 2) estimating CBH by generating percentile ranking profile for each tree and using a spline curve to identify its inflection points. These two strategies lend our method the advantages of no requirement of field measurements and being efficient and effective in mixed-species forests. The proposed method was applied to a mixed conifer forest in the Sierra Nevada, California and was validated by field measurements. The results showed that our method can directly estimate CBH at individual tree level with a root-mean-squared error of 1.62 m, a coefficient of determination of 0.88 and a relative bias of 3.36%. Furthermore, we systematically analyzed the accuracies among different height groups and tree species by comparing with field measurements. Our results implied that taller trees had relatively higher uncertainties than shorter trees. Our findings also show that the accuracy for CBH estimation was the highest for black oak trees, with an RMSE of 0.52 m. The conifer species results were also good with uniformly high R² ranging from 0.82 to 0.93. In general, our method has
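    The percentile-profile step can be sketched in a much-simplified form. The paper fits a spline and finds its inflection points; here a discrete second difference stands in for the spline, and the single synthetic "tree" is an assumption of this illustration:

```python
import numpy as np

# Hedged, simplified sketch of the paper's idea: build a percentile ranking
# profile of the LiDAR return heights for one (already noise-filtered) tree
# and take the crown base height at the sharpest bend of that profile.
# A discrete second difference replaces the paper's spline inflection search.

def crown_base_height(heights):
    qs = np.arange(1, 100)
    profile = np.percentile(heights, qs)
    # Sharpest slope decrease ~ transition from sparse stem hits to dense crown.
    bend = np.argmin(np.diff(profile, n=2)) + 1
    return profile[bend]

# Synthetic single conifer: a few stem returns below a dense 5-10 m crown,
# so the true crown base sits near 5 m.
stem = np.linspace(0.5, 4.5, 9)
crown = np.linspace(5.0, 10.0, 200)
tree = np.concatenate([stem, crown])
```

Because stem returns are sparse, the low percentiles climb steeply and then flatten once the dense crown begins; the bend of the profile is what the spline inflection detects in the full method.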

  15. Optimal difference-based estimation for partially linear models

    KAUST Repository

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
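    The paper's residual variance estimator is built by regressing several difference-based estimators; only the basic building block, the classic first-order (Rice) difference estimator, is sketched below as background, not as the paper's estimator:

```python
import numpy as np

# Hedged sketch of the classic first-order difference-based residual
# variance estimator for y_i = f(x_i) + eps_i. Differencing adjacent
# observations removes the smooth trend f, leaving eps_{i+1} - eps_i,
# whose variance is 2*sigma^2.

def rice_variance(y):
    d = np.diff(y)
    return np.sum(d**2) / (2 * (len(y) - 1))

# Smooth trend plus Gaussian noise with sigma^2 = 0.25.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.5, x.size)
```

Higher-order difference sequences trade bias against variance; the optimal choice of sequence is exactly the selection problem the paper addresses.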


  17. Relative amplitude preservation processing utilizing surface consistent amplitude correction. Part 3; Surface consistent amplitude correction wo mochiita sotai shinpuku hozon shori. 3

    Energy Technology Data Exchange (ETDEWEB)

    Saeki, T [Japan National Oil Corporation, Tokyo (Japan). Technology Research Center

    1996-10-01

    For the seismic reflection method conducted on the ground surface, the source and geophones are set on the surface, so the observed waveforms are affected by the ground surface and the surface layer. The influence of the surface layer must therefore be removed before the physical properties of the deep subsurface can be discussed. In the surface consistent amplitude correction, the properties of the source and geophone are removed by assuming that the observed waveforms can be expressed as convolutions; this correction method yields records unaffected by surface conditions. For the analysis and correction of the waveforms, the wavelet transform was examined. Using the amplitude patterns after correction, the significant-signal region, the noise-dominant region, and the surface-wave-dominant region can be separated from one another. Since the corrected amplitude values in the significant-signal region show only small variation, a representative value can be assigned, which can be used for analyzing the surface consistent amplitude correction. The efficiency of the process can be enhanced by considering the change of frequency. 3 refs., 5 figs.
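    The convolutional assumption makes the decomposition linear in the log domain. A hedged sketch with only source and receiver terms (the full method also carries offset and CMP terms, dropped here for brevity):

```python
import numpy as np

# Hedged sketch of a surface-consistent amplitude decomposition: each trace
# amplitude is modeled as the product of a source term and a receiver term,
#     A_ij = s_i * r_j   =>   log A_ij = log s_i + log r_j,
# a linear system solved by least squares. The absolute split between the
# terms has a scale ambiguity; only the products are identifiable.

def surface_consistent_terms(amps, src_idx, rcv_idx, n_src, n_rcv):
    """amps[i]: amplitude of trace i from source src_idx[i] to receiver rcv_idx[i]."""
    A = np.zeros((len(amps), n_src + n_rcv))
    for i, (s, r) in enumerate(zip(src_idx, rcv_idx)):
        A[i, s] = 1.0
        A[i, n_src + r] = 1.0
    log_terms, *_ = np.linalg.lstsq(A, np.log(amps), rcond=None)
    return np.exp(log_terms[:n_src]), np.exp(log_terms[n_src:])

# Synthetic check: amplitudes built from known source/receiver factors.
src_true = np.array([1.0, 2.0])
rcv_true = np.array([0.5, 1.0, 4.0])
si = [0, 0, 0, 1, 1, 1]
ri = [0, 1, 2, 0, 1, 2]
amps = src_true[si] * rcv_true[ri]
est_src, est_rcv = surface_consistent_terms(amps, si, ri, 2, 3)
```

Dividing each trace by its estimated source and receiver factors is what leaves the amplitude variation attributable to the subsurface alone.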

  18. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses the method of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator, as explained by Takeshi Amemiya [1]. In the present paper, a modified Wald test statistic due to Engle [6] is proposed to test nonlinear hypotheses using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using an iterative NLLS estimator based on nonlinear studentized residuals is also proposed, and an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator established by Jennrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors and studied the problem of heteroscedasticity with reference to nonlinear regression models with a suitable illustration. William Greene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
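    The Wald statistic that the paper modifies can be sketched numerically. With estimate θ̂, covariance V and restriction g(θ) = 0, the statistic is W = g'[G V Gᵀ]⁻¹g with G = ∂g/∂θ at θ̂, asymptotically χ² with dim(g) degrees of freedom. The numbers below are illustrative, not from the paper:

```python
import numpy as np

# Hedged numeric sketch of the (unmodified) Wald test of a nonlinear
# restriction g(theta) = 0, the ingredient the paper's proposed statistics
# build on. All values below are illustrative assumptions.

def wald_statistic(g_val, G, V):
    g_val = np.atleast_1d(g_val)
    G = np.atleast_2d(G)
    middle = np.linalg.inv(G @ V @ G.T)
    return float(g_val @ middle @ g_val)

# Illustrative example: test H0: theta1 * theta2 = 1.
theta = np.array([2.0, 0.6])
V = np.diag([0.01, 0.04])          # assumed estimator covariance
g = theta[0] * theta[1] - 1.0      # restriction value, here 0.2
G = np.array([theta[1], theta[0]]) # gradient of g at theta
```

Here W = 0.04 / 0.1636 ≈ 0.245, well below the 5% χ²(1) critical value of 3.84, so the illustrative restriction would not be rejected.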

  19. Quantitative analysis by X-ray fractography of fatigue fractured surface under variable amplitude loading

    International Nuclear Information System (INIS)

    Akita, Koichi; Kodama, Shotaro; Misawa, Hiroshi

    1994-01-01

    X-ray fractography is a method of analysing the causes of accidental fracture of machine components or structures. Almost all of the previous research on this problem has been carried out using constant amplitude fatigue tests. However, the actual loads on components and structures are usually of variable amplitudes. In this study, X-ray fractography was applied to fatigue fractured surfaces produced by variable amplitude loading. Fatigue tests were carried out on Ni-Cr-Mo steel CT specimens under the conditions of repeated, two-step and multiple-step loading. Residual stresses were measured on the fatigue fractured surface by an X-ray diffraction method. The relationships between residual stress and stress intensity factor or crack propagation rate were studied. They were discussed in terms of the quantitative expressions under constant amplitude loading, proposed by the authors in previous papers. The main results obtained were as follows : (1) It was possible to estimate the crack propagation rate of the fatigue fractured surface under variable amplitude loading by using the relationship between residual stress and stress intensity factor under constant amplitude loading. (2) The compressive residual stress components on the fatigue fractured surface correspond with cyclic softening of the material rather than with compressive plastic deformation at the crack tip. (author)

  20. Comparison of catchment grouping methods for flow duration curve estimation at ungauged sites in France

    Directory of Open Access Journals (Sweden)

    E. Sauquet

    2011-08-01

    The study aims at estimating flow duration curves (FDC) at ungauged sites in France and quantifying the associated uncertainties using a large dataset of 1080 FDCs. The interpolation procedure focuses here on 15 percentiles standardised by the mean annual flow, which is assumed to be known at each site. In particular, this paper discusses the impact of different catchment grouping procedures on the estimation of percentiles by regional regression models.

    In a first step, five parsimonious FDC parametric models are tested to approximate FDCs at gauged sites. The results show that the model based on the expansion of Empirical Orthogonal Functions (EOF) outperforms the other tested models. In the EOF model, each FDC is interpreted as a linear combination of regional amplitude functions with spatially variable weighting factors corresponding to the parameters of the model. In this approach, only one amplitude function is required to obtain a satisfactory fit with most of the observed curves. Thus, the considered model requires only two parameters to be applicable at ungauged locations.
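    The EOF expansion can be sketched as a truncated singular value decomposition of the site-by-percentile matrix of standardized FDCs; the rank-1 form below is an assumed minimal illustration of the two-parameter model described above:

```python
import numpy as np

# Hedged sketch of the EOF idea: decompose a matrix of standardized FDCs
# (rows = sites, columns = percentiles) by SVD and approximate each curve
# as a scalar weight times the first regional amplitude function.

def eof_rank1(fdc_matrix):
    U, s, Vt = np.linalg.svd(fdc_matrix, full_matrices=False)
    amplitude_function = Vt[0]   # regional shape shared by all sites
    weights = U[:, 0] * s[0]     # one spatially variable parameter per site
    return weights, amplitude_function

# Synthetic check: curves that truly share one shape are reproduced exactly.
shape = np.exp(-np.linspace(0.0, 3.0, 15))   # 15 standardized percentiles
sites = np.outer([0.5, 1.0, 2.0], shape)     # 3 sites, same shape, new scale
w, a = eof_rank1(sites)
recon = np.outer(w, a)
```

At an ungauged site only the scalar weight must be regionalized (the second parameter being the mean annual flow used for standardization), which is what makes the model so parsimonious.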

    Secondly, homogeneous regions are derived according to hydrological response on the one hand, and geological, climatic and topographic characteristics on the other hand. Hydrological similarity is assessed through two simple indicators: the concavity index (IC), representing the shape of the dimensionless FDC, and the seasonality ratio (SR), which is the ratio of summer and winter median flows. These variables are used as homogeneity criteria in three different methods for grouping catchments: (i) according to an a priori classification of French Hydro-EcoRegions (HERs), (ii) by applying regression tree clustering and (iii) by using neighbourhoods obtained by canonical correlation analysis.

    Finally, considering all the data, and subsequently for each group obtained through the tested grouping techniques, we derive regression models between

  1. Generalized unitarity for N=4 super-amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Drummond, J.M.; Henn, J. [LAPTH, Université de Savoie, CNRS B.P. 110, F-74941 Annecy-le-Vieux Cedex (France); Korchemsky, G.P., E-mail: Gregory.Korchemsky@cea.fr [Institut de Physique Théorique, CEA Saclay, 91191 Gif-sur-Yvette Cedex (France); Sokatchev, E. [LAPTH, Université de Savoie, CNRS B.P. 110, F-74941 Annecy-le-Vieux Cedex (France)

    2013-04-21

    We develop a manifestly supersymmetric version of the generalized unitarity cut method for calculating scattering amplitudes in N=4 SYM theory. We illustrate the power of this method by computing the one-loop n-point NMHV super-amplitudes. The result confirms two conjectures which we made in Drummond, et al., [1]. Firstly, we derive the compact, manifestly dual superconformally covariant form of the NMHV tree amplitudes for arbitrary number and types of external particles. Secondly, we show that the ratio of the one-loop NMHV to the MHV amplitude is dual conformal invariant.

  2. Fundamental Frequency Estimation using Polynomial Rooting of a Subspace-Based Method

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2010-01-01

    improvements compared to HMUSIC. First, by using the proposed method we can obtain an estimate of the fundamental frequency without doing a grid search as in HMUSIC. This is because the fundamental frequency is estimated as the argument of the root lying closest to the unit circle. Second, we obtain a higher spectral resolution compared to HMUSIC, which is a property of polynomial rooting methods. Our simulation results show that the proposed method is applicable to real-life signals, and that we in most cases obtain a higher spectral resolution than HMUSIC.

  3. Scattering amplitudes in gauge theories

    CERN Document Server

    Henn, Johannes M

    2014-01-01

    At the fundamental level, the interactions of elementary particles are described by quantum gauge field theory. The quantitative implications of these interactions are captured by scattering amplitudes, traditionally computed using Feynman diagrams. In the past decade tremendous progress has been made in our understanding of and computational abilities with regard to scattering amplitudes in gauge theories, going beyond the traditional textbook approach. These advances build upon on-shell methods that focus on the analytic structure of the amplitudes, as well as on their recently discovered hidden symmetries. In fact, when expressed in suitable variables the amplitudes are much simpler than anticipated and hidden patterns emerge.   These modern methods are of increasing importance in phenomenological applications arising from the need for high-precision predictions for the experiments carried out at the Large Hadron Collider, as well as in foundational mathematical physics studies on the S-matrix in quantum ...

  4. Experimental Study on Variable-Amplitude Fatigue of Welded Cross Plate-Hollow Sphere Joints in Grid Structures

    Directory of Open Access Journals (Sweden)

    Jin-Feng Jiao

    2018-01-01

    The fatigue stress amplitude of the welded cross plate-hollow sphere joint (WCPHSJ) in a grid structure varies due to the random loading produced by suspended cranes. A total of 14 specimens covering three different types of WCPHSJs were prepared and tested using a specially designed test rig. Four typical loading conditions, "low-high," "high-low," "low-high-low," and "high-low-high," were considered in the tests to investigate the fatigue behavior under variable load amplitudes, followed by metallographic analyses. The experimental and metallographic results provide a fundamental understanding of the fatigue fracture form and fatigue mechanism of WCPHSJs. Based on the available data from constant-amplitude fatigue tests, the variable-amplitude fatigue life of the three types of WCPHSJs was estimated using the Miner rule and the Corten-Dolan theory. Since both cumulative damage theories yield virtually the same damage estimates, the Miner rule is suggested for estimating the fatigue life of WCPHSJs.
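    The Miner (Palmgren-Miner) rule referenced above sums damage fractions n_i/N_i over the stress blocks, with failure predicted at D = 1. A hedged sketch; the S-N curve constants below are illustrative, not the WCPHSJ values:

```python
# Hedged sketch of the Miner rule for variable-amplitude fatigue life.
# The S-N curve N = C / S**m uses ASSUMED illustrative constants, not the
# constant-amplitude test results for WCPHSJs.

C, m = 1.0e12, 3  # assumed S-N curve constants

def cycles_to_failure(stress_range):
    return C / stress_range**m

def miner_damage(spectrum):
    """spectrum: list of (applied_cycles, stress_range); failure when D >= 1."""
    return sum(n / cycles_to_failure(s) for n, s in spectrum)

# A "high-low" block loading example:
#   5e4 cycles at S=200 (N=1.25e5) -> 0.4, 5e5 cycles at S=100 (N=1e6) -> 0.5
damage = miner_damage([(5.0e4, 200.0), (5.0e5, 100.0)])
```

Because the Miner rule ignores load-sequence effects, the Corten-Dolan comparison in the paper is the natural cross-check before recommending it.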

  5. Frequency-Dependent Amplitude Panning for the Stereophonic Image Enhancement of Audio Recorded Using Two Closely Spaced Microphones

    Directory of Open Access Journals (Sweden)

    Chan Jun Chun

    2016-02-01

    In this paper, we propose a new frequency-dependent amplitude panning method for stereophonic image enhancement, applied to a sound source recorded using two closely spaced omni-directional microphones. The ability to detect the direction of such a sound source is limited due to weak spatial information, such as the inter-channel time difference (ICTD) and inter-channel level difference (ICLD). Moreover, when sound sources are recorded in a convolutive or a real room environment, the detection of sources is affected by reverberation. Thus, the proposed method first estimates the source direction in each frequency band using azimuth-frequency analysis. Then, a frequency-dependent amplitude panning technique is proposed to enhance the stereophonic image by modifying the stereophonic law of sines. To demonstrate the effectiveness of the proposed method, we compare its performance with that of a conventional method based on the beamforming technique in terms of directivity pattern, perceived direction, and quality degradation under three different recording conditions (anechoic, convolutive, and real reverberant). The comparison shows that the proposed method yields better stereophonic images in stereo loudspeaker reproduction than the conventional method, without any annoying artifacts.
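    The stereophonic law of sines that the method modifies per frequency band relates the channel gains to the perceived source angle. A hedged sketch of the unmodified law (the paper's frequency-dependent modification is not reproduced here):

```python
import math

# Hedged sketch of the stereophonic law of sines: for loudspeakers at
# +/- theta0, channel gains g_l, g_r placing a phantom source at angle
# theta satisfy
#     sin(theta) / sin(theta0) = (g_l - g_r) / (g_l + g_r).

def panning_gains(theta, theta0):
    r = math.sin(theta) / math.sin(theta0)
    g_l, g_r = 1.0 + r, 1.0 - r        # any pair with the required ratio
    norm = math.hypot(g_l, g_r)        # constant-power normalization
    return g_l / norm, g_r / norm

# A centered source (theta = 0) gets equal gains of 1/sqrt(2).
g_l, g_r = panning_gains(0.0, math.radians(30.0))
```

Applying such a gain pair independently in each analysis band, steered by the per-band azimuth estimate, is the essence of frequency-dependent amplitude panning.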

  6. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.
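    The sequential update at the heart of such a scheme can be sketched as a Kalman-style regularized weighted least-squares recursion, with artificial zero-observations used to suppress negative estimates. Everything below (the dose-rate kernel `h`, noise levels, two-nuclide setup) is synthetic and illustrative, not the JRodos configuration:

```python
import numpy as np

# Hedged sketch of a sequential weighted least-squares (Kalman-style) update
# of an emission-rate vector x from gamma dose rates y = H x + noise, with
# artificial zero pseudo-observations to suppress negative estimates.

def sequential_update(x, P, H_row, y, r):
    """One observation update; P is the background error covariance (BEC)."""
    H_row = H_row.reshape(1, -1)
    S = H_row @ P @ H_row.T + r            # innovation variance
    K = (P @ H_row.T) / S                  # gain
    x = x + (K * (y - H_row @ x)).ravel()  # state update
    P = P - K @ H_row @ P                  # BEC update
    return x, P

rng = np.random.default_rng(1)
x_true = np.array([5.0, 2.0])              # "true" two-nuclide emission rates
x, P = np.zeros(2), np.eye(2) * 100.0      # vague a priori state and BEC
for _ in range(200):
    h = rng.uniform(0.1, 1.0, 2)           # synthetic dose-rate sensitivity
    y = h @ x_true + rng.normal(0.0, 0.01)
    x, P = sequential_update(x, P, h, y, 0.01**2)
for k in range(2):                         # negativity suppression step
    if x[k] < 0:
        e = np.zeros(2); e[k] = 1.0
        x, P = sequential_update(x, P, e, 0.0, 1e-6)
```

Incorporating a priori nuclide ratios, as the paper does, would amount to correlating the off-diagonal entries of the initial `P` so that one informative measurement constrains both rates.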


  8. Subspace Based Blind Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Hayashi, Kazunori; Matsushima, Hiroki; Sakai, Hideaki

    2012-01-01

    The paper proposes a subspace based blind sparse channel estimation method using ℓ1–ℓ2 optimization by replacing the ℓ2-norm minimization in the conventional subspace based method with an ℓ1-norm minimization problem. Numerical results confirm that the proposed method can significantly improve...

  9. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present, there are no accurate, quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, yielding the areas of the four cavities over the image sequence. The area change curves of the four cavities are then extracted, providing the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to guide the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly from the area changes. The synchronization of the four cavities of the heart is estimated based on the area and volume changes.
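    The area-curve step described above — counting segmented pixels per cavity in each frame — can be sketched as follows. This is a minimal illustration on synthetic boolean masks; the array shapes and the helper name `cavity_area_curves` are assumptions, not the paper's code:

    ```python
    import numpy as np

    def cavity_area_curves(masks):
        """Count segmented pixels per cavity per frame to get area-change curves.

        masks: boolean array of shape (n_frames, n_cavities, H, W).
        Returns an (n_cavities, n_frames) array of pixel counts.
        """
        return masks.sum(axis=(2, 3)).T

    # Toy sequence: two frames, one cavity grows while the other shrinks.
    masks = np.zeros((2, 2, 4, 4), dtype=bool)
    masks[0, 0, :2, :2] = True   # cavity 0, frame 0: 4 px
    masks[1, 0, :3, :3] = True   # cavity 0, frame 1: 9 px
    masks[0, 1, :3, :2] = True   # cavity 1, frame 0: 6 px
    masks[1, 1, :1, :2] = True   # cavity 1, frame 1: 2 px
    curves = cavity_area_curves(masks)
    ```

    The rows of `curves` are the per-cavity area curves whose phase relationships indicate synchrony.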

  10. Hybrid islanding detection method by using grid impedance estimation in parallel-inverters-based microgrid

    DEFF Research Database (Denmark)

    Ghzaiel, Walid; Jebali-Ben Ghorbal, Manel; Slama-Belkhodja, Ilhem

    2014-01-01

    This paper presents a hybrid islanding detection algorithm integrated into the distributed generation unit closest to the point of common coupling of a microgrid based on parallel inverters, one of which is responsible for controlling the system. The method is based on resonance excitation under...... parameters, both resistive and inductive parts, from the injected resonance frequency determination. Finally, the inverter will disconnect the microgrid from the faulty grid and reconnect the parallel inverter system to the controllable distributed system in order to ensure high power quality. This paper...... shows that grid impedance variation estimation can be an efficient method for islanding detection in microgrid systems. Theoretical analysis and simulation results are presented to validate the proposed method....

  11. Methods for estimating the semivariogram

    DEFF Research Database (Denmark)

    Lophaven, Søren Nymand; Carstensen, Niels Jacob; Rootzen, Helle

    2002-01-01

    . In the existing literature various methods for modelling the semivariogram have been proposed, while only a few studies have been made on comparing different approaches. In this paper we compare eight approaches for modelling the semivariogram, i.e. six approaches based on least squares estimation...... maximum likelihood performed better than the least squares approaches. We also applied maximum likelihood and least squares estimation to a real dataset, containing measurements of salinity at 71 sampling stations in the Kattegat basin. This showed that the calculation of spatial predictions...

  12. Estimation of geological formation thermal conductivity by using stochastic approximation method based on well-log temperature data

    International Nuclear Information System (INIS)

    Cheng, Wen-Long; Huang, Yong-Hua; Liu, Na; Ma, Ran

    2012-01-01

    Thermal conductivity is a key parameter for evaluating wellbore heat losses, which play an important role in determining the efficiency of steam injection processes. In this study, an unsteady formation heat-transfer model was established and a cost-effective in situ method using a stochastic approximation method based on well-log temperature data was presented. The proposed method was able to estimate the thermal conductivity and the volumetric heat capacity of the geological formation simultaneously under in situ conditions. The feasibility of the present method was assessed by a sample test, the results of which showed that the thermal conductivity and the volumetric heat capacity could be obtained with relative errors of −0.21% and −0.32%, respectively. In addition, three field tests were conducted based on the easily obtainable well-log temperature data from the steam injection wells. It was found that the relative errors of thermal conductivity for the three field tests were within ±0.6%, demonstrating the excellent performance of the proposed method for calculating thermal conductivity. The relative errors of volumetric heat capacity ranged from −6.1% to −14.2% for the three field tests. Sensitivity analysis indicated that this was due to the low correlation between the volumetric heat capacity and the wellbore temperature, which was used to generate the judgment criterion. -- Highlights: ► A cost-effective in situ method for estimating thermal properties of formation was presented. ► Thermal conductivity and volumetric heat capacity can be estimated simultaneously by the proposed method. ► The relative error of thermal conductivity estimated was within ±0.6%. ► Sensitivity analysis was conducted to study the estimated results of thermal properties.
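    The stochastic approximation idea can be illustrated with a toy Robbins–Monro iteration: a parameter is nudged toward the value that matches noisy observations, with a gain that decays over the iterations. The linear forward model and all constants below are hypothetical stand-ins, not the paper's wellbore heat-transfer model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical forward model: observed temperature depends linearly on
    # the unknown conductivity-like parameter k.
    k_true = 2.5

    def observe():
        return 10.0 + 3.0 * k_true + rng.normal(0.0, 0.1)  # noisy measurement

    def predict(k):
        return 10.0 + 3.0 * k

    # Robbins-Monro: k <- k + a_n * residual, with gains a_n -> 0 so that the
    # iterate settles down while still averaging out the measurement noise.
    k = 1.0
    for n in range(1, 2001):
        a_n = 0.5 / n
        k += a_n * (observe() - predict(k))  # move k to reduce the misfit
    ```

    The decaying gain is what lets the iteration converge despite never seeing a noise-free measurement.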

  13. Iteration of ultrasound aberration correction methods

    Science.gov (United States)

    Maasoey, Svein-Erik; Angelsen, Bjoern; Varslot, Trond

    2004-05-01

    Aberration in ultrasound medical imaging is usually modeled by time-delay and amplitude variations concentrated on the transmitting/receiving array. This filter process is here denoted a TDA filter. The TDA filter is an approximation to the physical aberration process, which occurs over an extended part of the human body wall. Estimation of the TDA filter, and performing correction on transmit and receive, has proven difficult. It has yet to be shown that this method works adequately for severe aberration. Estimation of the TDA filter can be iterated by retransmitting a corrected signal and re-estimating until a convergence criterion is fulfilled (adaptive imaging). Two methods for estimating time-delay and amplitude variations in receive signals from random scatterers have been developed. One method correlates each element signal with a reference signal. The other method uses eigenvalue decomposition of the receive cross-spectrum matrix, based upon a receive energy-maximizing criterion. Simulations of iterating aberration correction with a TDA filter have been investigated to study its convergence properties. Aberration was generated by weak and strong human-body wall models, both emulating the human abdominal wall. Results after iteration improve aberration correction substantially, and both estimation methods converge, even for the case of strong aberration.
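    The correlation-with-reference estimator can be sketched as follows: each element signal's time delay is taken as the lag that maximizes its cross-correlation with the reference. This is a minimal integer-lag version on synthetic signals; the function name and setup are illustrative assumptions:

    ```python
    import numpy as np

    def estimate_delay(x, ref):
        """Estimate the integer sample delay of x relative to ref
        as the argmax of their full cross-correlation."""
        corr = np.correlate(x, ref, mode="full")
        return int(np.argmax(corr)) - (len(ref) - 1)

    rng = np.random.default_rng(1)
    ref = rng.normal(size=256)     # reference (beamsum-like) signal
    delayed = np.roll(ref, 5)      # element signal arriving 5 samples late
    d = estimate_delay(delayed, ref)
    ```

    In an iterative (adaptive imaging) scheme, the estimated delays would be applied as corrections before the next transmit, and the estimation repeated until the delays stop changing.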

  14. Human factors estimation methods in nuclear power plant

    International Nuclear Information System (INIS)

    Takano, Kenichi; Yoshino, Kenji; Nagasaka, Akihiko; Ishii, Keichiro; Nakasa, Hiroyasu

    1985-01-01

    To improve the reliability of operation and maintenance work, workers must maintain their performance at a high level, which reduces misjudgements and operating errors. This paper describes the development and evaluation of a ''Multi-Purpose Physiological Information Measurement System'' to estimate human performance and conditions quantitatively. The following items are addressed: (1) The physiological signals most suitable for measuring worker performance in a nuclear power plant are selected, allowing non-disturbing, ambulatory, continuous, multi-channel measurement. (2) The relatively important physiological signals (electrocardiogram, respirometric functions, and EMG (electromyogram) pulse rate) are measured with real-time monitoring functions. (3) The measurement conditions and analysis methods are optimized by means of a noise-cut function and a D.C. drift cutting method. (4) As an example, when different weights are loaded on the arm during stretch-bend motion, the EMG signal measured and analysed by this system shows that the EMG pulse rate and maximum amplitude are related to the loaded weight. (author)

  15. Nonlinear Decoupling of Torque and Field Amplitude in an Induction Motor

    DEFF Research Database (Denmark)

    Rasmussen, Henrik; Vadstrup, P.; Børsting, H.

    1997-01-01

    A novel approach to control of induction motors, based on nonlinear state feedback, is presented. The resulting scheme gives a linearized input-output decoupling of the torque and the amplitude of the field. The proposed approach is used to design controllers for the field amplitude and the motor...... torque. The method is tested both by simulation and by experiments on a motor drive....

  16. Counting loop diagrams: computational complexity of higher-order amplitude evaluation

    International Nuclear Information System (INIS)

    Eijk, E. van; Kleiss, R.; Lazopoulos, A.

    2004-01-01

    We discuss the computational complexity of the perturbative evaluation of scattering amplitudes, both by the Caravaglios-Moretti algorithm and by direct evaluation of the individual diagrams. For a self-interacting scalar theory, we determine the complexity as a function of the number of external legs. We describe a method for obtaining the number of topologically inequivalent Feynman graphs containing closed loops, and apply this to 1- and 2-loop amplitudes. We also compute the number of graphs weighted by their symmetry factors, thus arriving at exact and asymptotic estimates for the average symmetry factor of diagrams. We present results for the asymptotic number of diagrams up to 10 loops, and prove that the average symmetry factor approaches unity as the number of external legs becomes large. (orig.)

  17. A new lithium-ion battery internal temperature on-line estimate method based on electrochemical impedance spectroscopy measurement

    Science.gov (United States)

    Zhu, J. G.; Sun, Z. C.; Wei, X. Z.; Dai, H. F.

    2015-01-01

    The power battery thermal management problem in EVs (electric vehicles) and HEVs (hybrid electric vehicles) has been widely discussed, and EIS (electrochemical impedance spectroscopy) is an effective experimental method for testing and estimating battery status. Firstly, an electrochemical-based impedance matrix analysis for lithium-ion batteries is developed to describe the impedance response of electrochemical impedance spectroscopy. Then a method based on electrochemical impedance spectroscopy measurement is proposed to estimate the internal temperature of a power lithium-ion battery by analyzing the phase shift and magnitude of impedance at different ambient temperatures. In the experimental study, SoC (state of charge) and temperature affect the impedance characteristics of the battery in different frequency ranges. The impedance spectrum as affected by SoH (state of health) is also discussed preliminarily. Therefore, the excitation frequency selected to estimate the inner temperature lies in the frequency range that is significantly influenced by temperature but not by SoC or SoH. The intrinsic relationship between the phase shift and temperature is established at the chosen excitation frequency, and the temperature dependence of the impedance magnitude is also studied. In practical applications, the inner temperature can then be estimated from the measured phase shift and impedance magnitude. Verification experiments are conducted to validate the estimation method. Finally, an estimation strategy and an on-line estimation system implementation scheme utilizing the battery management system are presented to describe the engineering value.
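    The phase-to-temperature inversion might look like the following sketch: a calibration curve of impedance phase shift versus known cell temperature at the chosen excitation frequency, inverted by interpolation at run time. All calibration numbers below are invented for illustration and are not from the paper:

    ```python
    import numpy as np

    # Hypothetical calibration: impedance phase shift (degrees) measured at a
    # temperature-sensitive excitation frequency, at known cell temperatures (C).
    cal_temp  = np.array([-10.0, 0.0, 10.0, 20.0, 30.0, 40.0])
    cal_phase = np.array([-35.0, -28.0, -22.0, -17.0, -13.0, -10.0])  # monotonic

    def internal_temperature(phase_deg):
        """Invert the calibration curve by linear interpolation (phase -> T).
        np.interp requires the x-coordinates (cal_phase) to be increasing."""
        return float(np.interp(phase_deg, cal_phase, cal_temp))

    t = internal_temperature(-19.5)  # phase midway between the 10 and 20 C points
    ```

    Because the phase-temperature relation is monotonic in the chosen band, a single on-line phase measurement suffices for the estimate.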

  18. Channel Estimation in DCT-Based OFDM

    Science.gov (United States)

    Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing

    2014-01-01

    This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least square (LS) and minimum mean square error (MMSE) estimators are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of the wireless channel into account. Simulation results show that the CS based channel estimation outperforms LS, while MMSE achieves optimal performance because of prior knowledge of the channel statistics. PMID:24757439
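    A minimal pilot-aided LS estimate divides the received pilot observations by the known transmitted pilots, one per subcarrier. This is a generic sketch, not specific to the DCT-OFDM formulation of the paper, and the channel/noise model below is assumed for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n_pilots = 64
    h_true = 1.0 + 0.5 * rng.normal(size=n_pilots)   # per-subcarrier channel gains
    x_pilot = np.ones(n_pilots)                      # known pilot symbols
    noise = 0.01 * rng.normal(size=n_pilots)
    y = h_true * x_pilot + noise                     # received pilots

    # Least-squares estimate: H_hat = Y / X at each pilot position.
    h_ls = y / x_pilot
    mse = float(np.mean((h_ls - h_true) ** 2))
    ```

    LS ignores the noise statistics entirely, which is why MMSE (which weights by the channel and noise covariances) can do better when those statistics are known.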

  19. View Estimation Based on Value System

    Science.gov (United States)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

    Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating behavior observed from the caregiver, he/she updates a model of his/her own estimated view by minimizing the estimation error of the reward during the behavior. From this perspective, this paper presents a method for acquiring such a capability based on a value system in which values are obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and the developmental process, parallel to young children's estimation of their own view during imitation of the observed behavior of the caregiver, is discussed.
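    The TD-error update that drives both the value learning and the view-parameter learning can be made concrete with a plain TD(0) step on a toy two-state chain. The chain, reward, and constants below are illustrative assumptions, not the paper's robot setup:

    ```python
    import numpy as np

    # Two-state chain: s0 -> s1 (reward 0), s1 -> terminal (reward 1).
    gamma, alpha = 0.9, 0.1
    V = np.zeros(2)

    for _ in range(200):
        # transition s0 -> s1, reward 0
        td_error = 0.0 + gamma * V[1] - V[0]   # TD error at s0
        V[0] += alpha * td_error
        # transition s1 -> terminal, reward 1 (terminal value is 0)
        td_error = 1.0 + gamma * 0.0 - V[1]    # TD error at s1
        V[1] += alpha * td_error
    ```

    The same scalar `td_error` that corrects `V` is the signal the paper uses, analogously, to correct the parameters of the view estimator.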

  20. Bias correction for the estimation of sensitivity indices based on random balance designs

    International Nuclear Information System (INIS)

    Tissot, Jean-Yves; Prieur, Clémentine

    2012-01-01

    This paper deals with the random balance design method (RBD) and its hybrid approach, RBD-FAST. Both of these global sensitivity analysis methods originate from the Fourier amplitude sensitivity test (FAST) and consequently face the main problems inherent to discrete harmonic analysis. We present here a general way to correct a bias which occurs when estimating sensitivity indices (SIs) of any order – except the total SI of a single factor or group of factors – by the random balance design method (RBD) and its hybrid version, RBD-FAST. In the RBD case, this positive bias has been recently identified in a paper by Xu and Gertner [1]. Following their work, we propose a bias correction method for first-order SI estimates in RBD. We then extend the correction method to SIs of any order in RBD-FAST. Finally, we suggest an efficient strategy to estimate all the first- and second-order SIs using RBD-FAST. - Highlights: ► We provide a bias correction method for the global sensitivity analysis methods RBD and RBD-FAST. ► In RBD, first-order sensitivity estimates are corrected. ► In RBD-FAST, sensitivity indices of any order and closed sensitivity indices are corrected. ► We propose an efficient strategy to estimate all the first- and second-order indices of a model.

  1. Phase-amplitude reduction of transient dynamics far from attractors for limit-cycling systems

    Science.gov (United States)

    Shirasaka, Sho; Kurebayashi, Wataru; Nakao, Hiroya

    2017-02-01

    The phase reduction framework for limit-cycling systems based on isochrons has been used as a powerful tool for analyzing rhythmic phenomena. Recently, the notion of isostables, which complements the isochrons by characterizing amplitudes of the system state, i.e., deviations from the limit-cycle attractor, has been introduced to describe the transient dynamics around the limit cycle [Wilson and Moehlis, Phys. Rev. E 94, 052213 (2016)]. In this study, we introduce a framework for a reduced phase-amplitude description of transient dynamics of stable limit-cycling systems. In contrast to the preceding study, the isostables are treated in a fully consistent way with the Koopman operator analysis, which enables us to avoid discontinuities of the isostables and to apply the framework to system states far from the limit cycle. We also propose a new, convenient bi-orthogonalization method to obtain the response functions of the amplitudes, which can be interpreted as an extension of the adjoint covariant Lyapunov vector to transient dynamics in limit-cycling systems. We illustrate the utility of the proposed reduction framework by estimating the optimal injection timing of an external input that efficiently suppresses deviations of the system state from the limit cycle in a model of a biochemical oscillator.

  2. Parameter-free bearing fault detection based on maximum likelihood estimation and differentiation

    International Nuclear Information System (INIS)

    Bozchalooi, I Soltani; Liang, Ming

    2009-01-01

    Bearing faults can lead to malfunction and ultimately complete stall of many machines. The conventional high-frequency resonance (HFR) method has been commonly used for bearing fault detection. However, it is often very difficult to obtain and calibrate bandpass filter parameters, i.e. the center frequency and bandwidth, the key to the success of the HFR method. This inevitably undermines the usefulness of the conventional HFR technique. To avoid such difficulties, we propose parameter-free, versatile yet straightforward techniques to detect bearing faults. We focus on two types of measured signals frequently encountered in practice: (1) a mixture of impulsive faulty bearing vibrations and intrinsic background noise and (2) impulsive faulty bearing vibrations blended with intrinsic background noise and vibration interferences. To design a proper signal processing technique for each case, we analyze the effects of intrinsic background noise and vibration interferences on amplitude demodulation. For the first case, a maximum likelihood-based fault detection method is proposed to accommodate the Rician distribution of the amplitude-demodulated signal mixture. For the second case, we first illustrate that the high-amplitude low-frequency vibration interferences can make the amplitude demodulation ineffective. Then we propose a differentiation method to enhance the fault detectability. It is shown that the iterative application of a differentiation step can boost the relative strength of the impulsive faulty bearing signal component with respect to the vibration interferences. This preserves the effectiveness of amplitude demodulation and hence leads to more accurate fault detection. The proposed approaches are evaluated on simulated signals and experimental data acquired from faulty bearings
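    The differentiation idea — each derivative strongly attenuates low-frequency vibration interference while preserving the sharp edges of bearing impulses — can be demonstrated on synthetic data. Kurtosis is used below as a rough impulsiveness measure; this is an illustrative assumption, not the paper's detector, and all signal parameters are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fs, n = 10_000, 10_000
    t = np.arange(n) / fs

    # Low-frequency vibration interference plus sparse bearing-type impulses.
    interference = 5.0 * np.sin(2 * np.pi * 30 * t)
    impulses = np.zeros(n)
    impulses[::500] = 1.0                 # small repetitive impacts
    signal = interference + impulses

    def kurtosis(x):
        x = x - x.mean()
        return float(np.mean(x**4) / np.mean(x**2) ** 2)

    k0 = kurtosis(signal)                 # impulses buried in interference
    k1 = kurtosis(np.diff(signal))        # one differentiation step
    k2 = kurtosis(np.diff(signal, n=2))   # two steps
    ```

    Each `np.diff` scales the 30 Hz interference by roughly 2*pi*30/fs while the impulse edges keep unit height, so the impulsive component (and hence the kurtosis) grows relative to the interference with each step — which is why amplitude demodulation stays effective after differentiation.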

  3. Time-amplitude converter; Convertisseur temps-amplitude

    Energy Technology Data Exchange (ETDEWEB)

    Banner, M [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1961-07-01

    It is normal in high energy physics to measure the time of flight of a particle in order to determine its mass. This can be done by the method which consists in transforming the time measurement into an analysis of amplitude, which is easier; a time-amplitude converter has therefore been built for this purpose. The apparatus described here uses a double grid control tube 6 BN 6 whose resolution time, as measured with a pulse generator, is 5 x 10{sup -11} s. The analysis of the response of a particle counter, made up of a scintillator and a photomultiplier, indicates that a resolution time of 5 x 10{sup -10} s can be obtained. A time of this order of magnitude is obtained experimentally with the converter. This converter has been used in the study of the time of flight of particles in a secondary beam of the accelerator Saturne. It has thus been possible to measure the energy spectrum of {pi}-mesons, protons, and deuterons emitted from a polyethylene target bombarded by 1.4 and 2 GeV protons. (author)

  4. Joint Spatio-Temporal Filtering Methods for DOA and Fundamental Frequency Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Benesty, Jacob

    2015-01-01

    some attention in the community and is quite promising for several applications. The proposed methods are based on optimal, adaptive filters that leave the desired signal, having a certain DOA and fundamental frequency, undistorted and suppress everything else. The filtering methods simultaneously...... operate in space and time, whereby it is possible to resolve cases that are otherwise problematic for pitch estimators or DOA estimators based on beamforming. Several special cases and improvements are considered, including a method for estimating the covariance matrix based on the recently proposed...

  5. Complex amplitude reconstruction by iterative amplitude-phase retrieval algorithm with reference

    Science.gov (United States)

    Shen, Cheng; Guo, Cheng; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-06-01

    Multi-image iterative phase retrieval methods have been successfully applied in many research fields due to their simple but efficient implementation. However, there is a mismatch between the measurement of the first, long imaging distance and the subsequent intervals. In this paper, an amplitude-phase retrieval algorithm with reference is put forward without additional measurements or a priori knowledge; it eliminates the need to measure the first imaging distance. With a designed update formula, it significantly raises the convergence speed and the reconstruction fidelity, especially in phase retrieval. Its superiority over the original amplitude-phase retrieval (APR) method is validated by numerical analysis and experiments. Furthermore, it provides a conceptual design of a compact holographic image sensor, which can achieve numerical refocusing easily.

  6. Cumulant-Based Coherent Signal Subspace Method for Bearing and Range Estimation

    Directory of Open Access Journals (Sweden)

    Bourennane Salah

    2007-01-01

    A new method for simultaneous range and bearing estimation of buried objects in the presence of unknown Gaussian noise is proposed. This method uses the MUSIC algorithm with the noise subspace estimated from the slice fourth-order cumulant matrix of the received data. The higher-order statistics aim at the removal of the additive unknown Gaussian noise. The bilinear focusing operator is used to decorrelate the received signals and to estimate the coherent signal subspace. A new source steering vector is proposed, including the acoustic scattering model at each sensor. The range and bearing of the objects at each sensor are expressed as a function of those at the first sensor. This improves object localization anywhere in the near-field or far-field zone of the sensor array. Finally, the performance of the proposed method is validated on data recorded during experiments in a water tank.

  7. Calculating the Mean Amplitude of Glycemic Excursions from Continuous Glucose Data Using an Open-Code Programmable Algorithm Based on the Integer Nonlinear Method.

    Science.gov (United States)

    Yu, Xuefei; Lin, Liangzhuo; Shen, Jie; Chen, Zhi; Jian, Jun; Li, Bin; Xin, Sherman Xuegang

    2018-01-01

    The mean amplitude of glycemic excursions (MAGE) is an essential index for glycemic variability assessment and a key reference for blood glucose control in the clinic. However, the traditional "ruler and pencil" manual method for calculating MAGE is time-consuming and prone to error due to the huge data size, making the development of a robust computer-aided program an urgent requirement. Although several software products are available instead of manual calculation, poor agreement among them has been reported. Therefore, more studies are required in this field. In this paper, we developed a mathematical algorithm based on integer nonlinear programming. Following the proposed mathematical method, an open-code computer program named MAGECAA v1.0 was developed and validated. The results of the statistical analysis indicated that the developed program was robust compared to the manual method. The agreement between the developed program and currently available popular software is satisfactory, indicating that concern about disagreement among different software products is unnecessary. The open-code programmable algorithm is an extra resource for peers interested in related methodological studies in the future.
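    The MAGE definition itself — the mean of peak-to-nadir glucose swings that exceed one standard deviation of the trace — can be sketched with a plain turning-point approach. This is a simplified illustration, not the paper's integer nonlinear programming formulation, and it averages all qualifying swings rather than only those in one direction:

    ```python
    import numpy as np

    def mage(glucose):
        """Simplified MAGE: mean of turning-point-to-turning-point swings
        whose amplitude exceeds 1 SD of the whole trace."""
        g = np.asarray(glucose, dtype=float)
        sd = g.std()
        d = np.sign(np.diff(g))
        # turning points: where the sign of the first difference changes
        turns = [0] + [i + 1 for i in range(len(d) - 1)
                       if d[i] != d[i + 1] and d[i + 1] != 0] + [len(g) - 1]
        swings = np.abs(np.diff(g[turns]))
        valid = swings[swings > sd]
        return float(valid.mean()) if valid.size else 0.0

    m_val = mage([5, 10, 4, 8])  # swings 5, 6, 4 all exceed 1 SD (~2.38)
    ```

    The papers' disagreement largely comes from how such turning points and qualifying excursions are defined, which is what the integer-programming formulation pins down.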

  8. Nonlinear decoupling of torque and field amplitude in an induction motor

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, H. [Aalborg University, Aalborg (Denmark); Vadstrup, P.; Boersting, H. [Grundfos A/S, Bjerringbro (Denmark)

    1997-12-31

    A novel approach to control of induction motors, based on nonlinear state feedback, is presented. The resulting scheme gives a linearized input-output decoupling of the torque and the amplitude of the field. The proposed approach is used to design controllers for the field amplitude and the motor torque. The method is tested both by simulation and by experiments on a motor drive. (orig.) 12 refs.

  9. Induction machine bearing faults detection based on a multi-dimensional MUSIC algorithm and maximum likelihood estimation.

    Science.gov (United States)

    Elbouchikhi, Elhoussin; Choqueuse, Vincent; Benbouzid, Mohamed

    2016-07-01

    Condition monitoring of electric drives is of paramount importance since it contributes to enhancing system reliability and availability. Moreover, knowledge about the fault mode behavior is extremely important in order to improve system protection and fault-tolerant control. Fault detection and diagnosis in squirrel cage induction machines based on motor current signature analysis (MCSA) has been widely investigated. Several high resolution spectral estimation techniques have been developed and used to detect induction machine abnormal operating conditions. This paper focuses on the application of MCSA for the detection of abnormal mechanical conditions that may lead to induction machine failure. In fact, this paper is devoted to the detection of single-point defects in bearings based on parametric spectral estimation. A multi-dimensional MUSIC (MD MUSIC) algorithm has been developed for bearing fault detection based on the bearing fault characteristic frequencies. This method has been used to estimate the fundamental frequency and the fault-related frequency. Then, an amplitude estimator of the fault characteristic frequencies has been proposed and a fault indicator has been derived for fault severity measurement. The proposed bearing fault detection approach is assessed using simulated stator current data, issued from a coupled electromagnetic circuits approach, for air-gap eccentricity emulating bearing faults. Then, experimental data are used for validation purposes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
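    The subspace step behind MUSIC can be illustrated with a minimal 1-D spectral MUSIC frequency estimate on a single synthetic tone: eigendecompose a sample covariance, keep the smallest-eigenvalue (noise) subspace, and locate the frequency whose steering vector is most orthogonal to it. The multi-dimensional and amplitude-estimation parts of the paper are not reproduced here, and all parameters are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Noisy real sinusoid; estimate its normalized frequency with MUSIC.
    n, f_true = 512, 0.12
    x = np.cos(2 * np.pi * f_true * np.arange(n)) + 0.1 * rng.normal(size=n)

    m = 20  # covariance (subarray) dimension
    # sample covariance from overlapping length-m snapshots
    snaps = np.lib.stride_tricks.sliding_window_view(x, m)
    R = snaps.T @ snaps / snaps.shape[0]

    w, v = np.linalg.eigh(R)      # eigenvalues in ascending order
    En = v[:, :-2]                # noise subspace (a real tone spans 2 dims)

    freqs = np.linspace(0.01, 0.49, 481)
    steer = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs))
    # MUSIC pseudospectrum: large where the steering vector is near-orthogonal
    # to the noise subspace
    p_music = 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2
    f_hat = float(freqs[np.argmax(p_music)])
    ```

    Once the frequency is located this way, a separate amplitude estimator (as in the paper) can be evaluated at that frequency to quantify fault severity.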

  10. Joint channel/frequency offset estimation and correction for coherent optical FBMC/OQAM system

    Science.gov (United States)

    Wang, Daobin; Yuan, Lihua; Lei, Jingli; wu, Gang; Li, Suoping; Ding, Runqi; Wang, Dongye

    2017-12-01

    In this paper, we analyze preamble-based joint estimation of the channel and laser-frequency offset (LFO) in coherent optical filter bank multicarrier systems with offset quadrature amplitude modulation (CO-FBMC/OQAM). In order to reduce the impact of noise on the estimation accuracy, we propose an estimation method based on inter-frame averaging, which averages the cross-correlation function of real-valued pilots over multiple FBMC frames. The laser-frequency offset is estimated from the phase of this average. After correcting the LFO, the final channel response is acquired by averaging channel estimation results over multiple frames. The principle of the proposed method is analyzed theoretically, and the preamble structure is thoroughly designed and optimized to suppress the impact of inherent imaginary interference (IMI). The effectiveness of our method is demonstrated numerically for different fiber and LFO values. The obtained results show that the proposed method can improve transmission performance significantly.
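    The phase-based offset estimate can be sketched for a single repeated-pilot preamble: correlate the two identical pilot halves and read the frequency offset from the phase of the summed correlation. The paper additionally averages this correlation across multiple frames to suppress noise, which this toy single-frame sketch omits; all signal parameters are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n, d = 128, 128                 # pilot length and repetition spacing (samples)
    f_off = 0.001                   # true frequency offset (cycles/sample)
    pilot = np.exp(1j * 2 * np.pi * rng.random(n))   # unit-modulus pilot
    tx = np.concatenate([pilot, pilot])              # repeated preamble
    rx = tx * np.exp(2j * np.pi * f_off * np.arange(2 * n))
    rx += 0.01 * (rng.normal(size=2 * n) + 1j * rng.normal(size=2 * n))

    # Lag-d cross-correlation between the two halves; the offset rotates the
    # second half by exp(j*2*pi*f_off*d) relative to the first.
    corr = np.sum(np.conj(rx[:n]) * rx[n:])
    f_hat = float(np.angle(corr) / (2 * np.pi * d))
    ```

    The estimate is unambiguous as long as the accumulated phase 2*pi*f_off*d stays within (-pi, pi), which bounds the acquisition range of this estimator.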

  11. A citizen science based survey method for estimating the density of urban carnivores

    Science.gov (United States)

    Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W.; Mill, Aileen C.; Smith, Graham C.; Tolhurst, Bryony A.

    2018-01-01

    Globally there are many examples of synanthropic carnivores exploiting the growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring the development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS to determine the relative density of two urban carnivores in England, Great Britain. We determined the density of red fox (Vulpes vulpes) social groups in 14 suburban areas of approximately 1 km2 in 8 different towns and cities, and of Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km-2, which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating the unreliability of the national data for determining actual densities or extrapolating a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km-2). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density; however, publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on

  12. Fatigue life prediction of rotor blade composites: Validation of constant amplitude formulations with variable amplitude experiments

    International Nuclear Information System (INIS)

    Westphal, T; Nijssen, R P L

    2014-01-01

    The effect of Constant Life Diagram (CLD) formulation on the fatigue life prediction under variable amplitude (VA) loading was investigated based on variable amplitude tests using three different load spectra representative for wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results the prediction accuracy of 4 CLD formulations is investigated. In the study a piecewise linear CLD based on the S-N curves for 9 load ratios compares favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study Boerstra's Multislope model provides a good alternative at reduced test effort.

  13. Fatigue life prediction of rotor blade composites: Validation of constant amplitude formulations with variable amplitude experiments

    Science.gov (United States)

    Westphal, T.; Nijssen, R. P. L.

    2014-12-01

    The effect of Constant Life Diagram (CLD) formulation on the fatigue life prediction under variable amplitude (VA) loading was investigated based on variable amplitude tests using three different load spectra representative for wind turbine loading. Next to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results the prediction accuracy of 4 CLD formulations is investigated. In the study a piecewise linear CLD based on the S-N curves for 9 load ratios compares favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study Boerstra's Multislope model provides a good alternative at reduced test effort.

  14. Estimating Surface Downward Shortwave Radiation over China Based on the Gradient Boosting Decision Tree Method

    Directory of Open Access Journals (Sweden)

    Lu Yang

    2018-01-01

    Full Text Available Downward shortwave radiation (DSR is an essential parameter in the terrestrial radiation budget and a necessary input for models of land-surface processes. Although several radiation products using satellite observations have been released, coarse spatial resolution and low accuracy limited their application. It is important to develop robust and accurate retrieval methods with higher spatial resolution. Machine learning methods may be powerful candidates for estimating the DSR from remotely sensed data because of their ability to perform adaptive, nonlinear data fitting. In this study, the gradient boosting regression tree (GBRT was employed to retrieve DSR measurements with the ground observation data in China collected from the China Meteorological Administration (CMA Meteorological Information Center and the satellite observations from the Advanced Very High Resolution Radiometer (AVHRR at a spatial resolution of 5 km. The validation results of the DSR estimates based on the GBRT method in China at a daily time scale for clear sky conditions show an R2 value of 0.82 and a root mean square error (RMSE value of 27.71 W·m−2 (38.38%. These values are 0.64 and 42.97 W·m−2 (34.57%, respectively, for cloudy sky conditions. The monthly DSR estimates were also evaluated using ground measurements. The monthly DSR estimates have an overall R2 value of 0.92 and an RMSE of 15.40 W·m−2 (12.93%. Comparison of the DSR estimates with the reanalyzed and retrieved DSR measurements from satellite observations showed that the estimated DSR is reasonably accurate but has a higher spatial resolution. Moreover, the proposed GBRT method has good scalability and is easy to apply to other parameter inversion problems by changing the parameters and training data.
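
    The boosting principle behind the GBRT approach can be illustrated with a minimal sketch: each weak learner (here a one-split regression stump) is fitted to the residuals of the current ensemble and added with a small learning rate. This is a toy illustration of gradient boosting for squared loss, not the configuration or the AVHRR/CMA predictors used in the study:

```python
import numpy as np

def fit_stump(X, r):
    """Best least-squares stump: (feature, threshold, left mean, right mean)."""
    best_sse, best = np.inf, (0, 0.0, r.mean(), r.mean())
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            lm, rm = r[left].mean(), r[~left].mean()
            sse = ((r[left] - lm) ** 2).sum() + ((r[~left] - rm) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, t, lm, rm)
    return best

def gbrt_fit(X, y, n_trees=100, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the current residuals."""
    f0 = y.mean()
    pred = np.full(len(y), f0)
    stumps = []
    for _ in range(n_trees):
        j, t, lm, rm = fit_stump(X, y - pred)
        pred = pred + lr * np.where(X[:, j] <= t, lm, rm)
        stumps.append((j, t, lm, rm))
    return f0, lr, stumps

def gbrt_predict(model, X):
    f0, lr, stumps = model
    pred = np.full(len(X), f0)
    for j, t, lm, rm in stumps:
        pred = pred + lr * np.where(X[:, j] <= t, lm, rm)
    return pred
```

    A production retrieval would use deeper trees, shrinkage tuning and the actual satellite predictors.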

  15. Nonparametric methods for volatility density estimation

    NARCIS (Netherlands)

    Es, van Bert; Spreij, P.J.C.; Zanten, van J.H.

    2009-01-01

    Stochastic volatility modelling of financial processes has become increasingly popular. The proposed models usually contain a stationary volatility process. We will motivate and review several nonparametric methods for estimation of the density of the volatility process. Both models based on

  16. Evaluating the impact of spatio-temporal smoothness constraints on the BOLD hemodynamic response function estimation: an analysis based on Tikhonov regularization

    International Nuclear Information System (INIS)

    Casanova, R; Yang, L; Hairston, W D; Laurienti, P J; Maldjian, J A

    2009-01-01

    Recently we have proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates by using second derivative information while the regularization parameter was selected based on the generalized cross-validation function (GCV). Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), being at the same time fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not need Gaussian smoothing as a pre-processing step as usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus our attention on quantifying the influence of the Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data. (note)
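
    The core of the approach, Tikhonov regularization with a second-derivative smoothness penalty and GCV-based selection of the regularization parameter, can be sketched in one dimension as follows. The design matrix, HRF length and lambda grid below are illustrative, not those of the fMRI study:

```python
import numpy as np

def second_diff(n):
    """Second-order finite-difference operator used as the smoothness penalty."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def tikhonov_gcv(X, y, lams):
    """Solve min ||Xh - y||^2 + lam ||D2 h||^2, choosing lam by the GCV function."""
    n, p = X.shape
    DtD = second_diff(p).T @ second_diff(p)
    best = None
    for lam in lams:
        H = np.linalg.solve(X.T @ X + lam * DtD, X.T)   # regularized pseudo-inverse
        A = X @ H                                        # influence ("hat") matrix
        resid = y - A @ y
        gcv = n * (resid @ resid) / (n - np.trace(A)) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, H @ y)
    return best[2], best[1]
```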

  17. Bootstrap-Based Inference for Cube Root Consistent Estimators

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Jansson, Michael; Nagasawa, Kenichi

    This note proposes a consistent bootstrap-based distributional approximation for cube root consistent estimators such as the maximum score estimator of Manski (1975) and the isotonic density estimator of Grenander (1956). In both cases, the standard nonparametric bootstrap is known to be inconsistent. Our method restores consistency of the nonparametric bootstrap by altering the shape of the criterion function defining the estimator whose distribution we seek to approximate. This modification leads to a generic and easy-to-implement resampling method for inference that is conceptually distinct from other available distributional approximations based on some form of modified bootstrap. We offer simulation evidence showcasing the performance of our inference method in finite samples. An extension of our methodology to general M-estimation problems is also discussed.

  18. Advances in estimation methods of vegetation water content based on optical remote sensing techniques

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Quantitative estimation of vegetation water content (VWC) using optical remote sensing techniques is helpful in forest fire assessment, agricultural drought monitoring and crop yield estimation. This paper reviews the research advances of VWC retrieval using spectral reflectance, spectral water index and radiative transfer model (RTM) methods. It also evaluates the reliability of VWC estimation using spectral water indices from observation data and the RTM. Focusing on the two main definitions of VWC, the fuel moisture content (FMC) and the equivalent water thickness (EWT), the retrieval accuracies of FMC and EWT using vegetation water indices are analyzed. Moreover, the measured information and the dataset are used to estimate VWC; the results show there are significant correlations among the vegetation water indices considered (i.e., WSI, NDII, NDWI1640, WI/NDVI) and the canopy FMC of winter wheat (n=45). Finally, future development directions of VWC detection based on optical remote sensing techniques are also summarized.
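
    As a minimal illustration of the spectral water index approach, the sketch below computes an NDWI-type index from near-infrared and shortwave-infrared reflectances and fits an empirical linear retrieval against it. The band choices (858 nm and 1640 nm) follow the common NDWI1640 definition; the regression coefficients are purely illustrative and in practice must be calibrated per site and species:

```python
import numpy as np

def ndwi1640(r_nir, r_swir):
    """NDWI1640 = (R_858 - R_1640) / (R_858 + R_1640)."""
    r_nir = np.asarray(r_nir, float)
    r_swir = np.asarray(r_swir, float)
    return (r_nir - r_swir) / (r_nir + r_swir)

def fit_index_to_fmc(index, fmc):
    """Empirical linear retrieval: FMC ~ a * index + b (coefficients are site-specific)."""
    a, b = np.polyfit(index, fmc, 1)
    return a, b
```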

  19. High accuracy amplitude and phase measurements based on a double heterodyne architecture

    International Nuclear Information System (INIS)

    Zhao Danyang; Wang Guangwei; Pan Weimin

    2015-01-01

    In the digital low level RF (LLRF) system of a circular (particle) accelerator, the RF field signal is usually down-converted to a fixed intermediate frequency (IF). The ratio of the IF to the sampling frequency determines the processing required, and differs among LLRF systems. It is generally desirable to design a universally compatible architecture that accommodates different IFs with no change to the sampling frequency and algorithm. A new RF detection method based on a double heterodyne architecture for a wide IF range has been developed, which meets the high accuracy requirements of modern LLRF. In this paper, the relation between IF and phase error is systematically analyzed for the first time and verified by experiments. The effects of temperature drift over 16 h of IF detection are suppressed by amplitude and phase calibration. (authors)
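
    The IF amplitude/phase detection step common to such LLRF systems can be sketched with a plain digital IQ demodulator (a generic sketch, not the double heterodyne architecture itself): mix the sampled IF signal with a complex reference and average over an integer number of IF periods, which rejects the 2*f_IF component and leaves the field amplitude and phase.

```python
import numpy as np

def detect_amp_phase(x, f_if, fs):
    """Digital IQ detection of a sampled IF tone x = A*cos(2*pi*f_if*t + phi)."""
    n = np.arange(len(x))
    ref = np.exp(-2j * np.pi * f_if * n / fs)
    iq = np.mean(x * ref)            # complex baseband value (A/2) * exp(j*phi)
    return 2.0 * np.abs(iq), np.angle(iq)
```

    The averaging window must span an integer number of IF periods for the image term to cancel exactly.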

  20. A Fast Soft Bit Error Rate Estimation Method

    Directory of Open Access Journals (Sweden)

    Ait-Idir Tarik

    2010-01-01

    Full Text Available We have suggested in a previous publication a method to estimate the Bit Error Rate (BER) of a digital communications system instead of using the well-known Monte Carlo (MC) simulation. This method was based on the estimation of the probability density function (pdf) of soft observed samples, using the kernel method for the pdf estimation. In this paper, we suggest using a Gaussian Mixture (GM) model instead. The Expectation-Maximisation algorithm is used to estimate the parameters of this mixture, and the optimal number of Gaussians is computed using mutual information theory. The analytical expression of the BER is then simply given in terms of the estimated parameters of the Gaussian Mixture. Simulation results are presented to compare the three mentioned methods: Monte Carlo, kernel and Gaussian Mixture. We analyze the performance of the proposed BER estimator in the framework of a multiuser code division multiple access system and show that attractive performance is achieved compared with conventional MC or kernel aided techniques. The results show that the GM method can drastically reduce the number of samples needed to estimate the BER, and hence the required simulation run-time, even at very low BER.
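
    A minimal sketch of the GM idea, assuming bits are transmitted as +1 and decided by the sign of the soft sample: fit a one-dimensional Gaussian mixture to the soft samples with EM, then evaluate the BER analytically from the fitted parameters. The number of components is fixed here for simplicity, rather than chosen by mutual information as in the paper:

```python
import numpy as np
from math import erfc, sqrt

def em_gmm_1d(x, K=2, iters=200, seed=0):
    """EM for a one-dimensional Gaussian mixture: returns weights, means, stds."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=K, replace=False)
    sig = np.full(K, x.std())
    w = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means and (floored) standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.maximum(np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk), 1e-3)
    return w, mu, sig

def ber_from_gmm(w, mu, sig):
    """Analytic BER for bits sent as +1: P(soft sample < 0) under the fitted mixture."""
    return sum(wk * 0.5 * erfc(mk / (sk * sqrt(2))) for wk, mk, sk in zip(w, mu, sig))
```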

  1. CONTROL BASED ON NUMERICAL METHODS AND RECURSIVE BAYESIAN ESTIMATION IN A CONTINUOUS ALCOHOLIC FERMENTATION PROCESS

    Directory of Open Access Journals (Sweden)

    Olga L. Quintero

    Full Text Available Biotechnological processes represent a challenge in the control field due to their high nonlinearity. In particular, continuous alcoholic fermentation from Zymomonas mobilis (Z.m) presents a significant challenge. This bioprocess has high ethanol performance, but it exhibits oscillatory behavior in the process variables, due to the influence of inhibition dynamics (the effect of ethanol concentration on the biomass, substrate, and product concentrations). In this work a new solution for the control of biotechnological variables in the fermentation process is proposed, based on numerical methods and linear algebra. In addition, an improvement to a previously reported state estimator, based on particle filtering techniques, is used in the control loop. The feasibility of the estimator and its performance are demonstrated in the proposed control loop. This methodology makes it possible to develop a controller design through the use of dynamic analysis with a tested biomass estimator for Z.m and without the use of complex calculations.
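
    The particle filtering component can be illustrated with a generic bootstrap particle filter for a scalar state-space model. This is a textbook sketch under assumed Gaussian process and measurement noise, not the reported biomass estimator or the Z.m fermentation dynamics:

```python
import numpy as np

def particle_filter(ys, f, h, q, r, n_part=1000, x0=0.0, seed=1):
    """Bootstrap particle filter: propagate, weight by likelihood, resample."""
    rng = np.random.default_rng(seed)
    parts = x0 + q * rng.standard_normal(n_part)
    est = []
    for y in ys:
        parts = f(parts) + q * rng.standard_normal(n_part)   # predict
        w = np.exp(-0.5 * ((y - h(parts)) / r) ** 2)         # measurement likelihood
        w /= w.sum()
        est.append(np.dot(w, parts))                         # posterior mean estimate
        parts = parts[rng.choice(n_part, n_part, p=w)]       # multinomial resampling
    return np.array(est)
```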

  2. Pipeline heating method based on optimal control and state estimation

    Energy Technology Data Exchange (ETDEWEB)

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    In production of oil and gas wells in deep waters the flowing of hydrocarbon through pipeline is a challenging problem. This environment presents high hydrostatic pressures and low sea bed temperatures, which can favor the formation of solid deposits that in critical operating conditions, as unplanned shutdown conditions, may result in a pipeline blockage and consequently incur in large financial losses. There are different methods to protect the system, but nowadays thermal insulation and chemical injection are the standard solutions normally used. An alternative method of flow assurance is to heat the pipeline. This concept, which is known as active heating system, aims at heating the produced fluid temperature above a safe reference level in order to avoid the formation of solid deposits. The objective of this paper is to introduce a Bayesian statistical approach for the state estimation problem, in which the state variables are considered as the transient temperatures within a pipeline cross-section, and to use the optimal control theory as a design tool for a typical heating system during a simulated shutdown condition. An application example is presented to illustrate how Bayesian filters can be used to reconstruct the temperature field from temperature measurements supposedly available on the external surface of the pipeline. The temperatures predicted with the Bayesian filter are then utilized in a control approach for a heating system used to maintain the temperature within the pipeline above the critical temperature of formation of solid deposits. The physical problem consists of a pipeline cross section represented by a circular domain with four points over the pipe wall representing heating cables. The fluid is considered stagnant, homogeneous, isotropic and with constant thermo-physical properties. The mathematical formulation governing the direct problem was solved with the finite volume method and for the solution of the state estimation problem

  3. Integral-equation based methods for parameter estimation in output pulses of radiation detectors: Application in nuclear medicine and spectroscopy

    Science.gov (United States)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-04-01

    Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
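
    For reference, the bi-exponential pulse model v(t) = A (exp(-t/tau_d) - exp(-t/tau_r)) can also be fitted by a simple baseline estimator: grid-search the two time constants and solve the amplitude linearly at each candidate pair. This is not the paper's three-sample integral-equation estimator, just a least-squares sketch of the same pulse model:

```python
import numpy as np

def fit_biexp(t, y, tau_grid):
    """Grid search over (tau_rise, tau_decay); amplitude solved linearly per pair."""
    best = (np.inf, None)
    for tr in tau_grid:
        for td in tau_grid:
            if td <= tr:                       # decay must be slower than rise
                continue
            g = np.exp(-t / td) - np.exp(-t / tr)
            A = np.dot(g, y) / np.dot(g, g)    # closed-form least-squares amplitude
            sse = np.sum((y - A * g) ** 2)
            if sse < best[0]:
                best = (sse, (A, tr, td))
    return best[1]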

  4. Spline-based high-accuracy piecewise-polynomial phase-to-sinusoid amplitude converters.

    Science.gov (United States)

    Petrinović, Davor; Brezović, Marko

    2011-04-01

    We propose a method for direct digital frequency synthesis (DDS) using a cubic spline piecewise-polynomial model for a phase-to-sinusoid amplitude converter (PSAC). This method offers maximum smoothness of the output signal. Closed-form expressions for the cubic polynomial coefficients are derived in the spectral domain and the performance analysis of the model is given in the time and frequency domains. We derive the closed-form performance bounds of such DDS using conventional metrics: rms and maximum absolute errors (MAE) and maximum spurious free dynamic range (SFDR) measured in the discrete time domain. The main advantages of the proposed PSAC are its simplicity, analytical tractability, and inherent numerical stability for high table resolutions. Detailed guidelines for a fixed-point implementation are given, based on the algebraic analysis of all quantization effects. The results are verified on 81 PSAC configurations with the output resolutions from 5 to 41 bits by using a bit-exact simulation. The VHDL implementation of a high-accuracy DDS based on the proposed PSAC with 28-bit input phase word and 32-bit output value achieves SFDR of its digital output signal between 180 and 207 dB, with a signal-to-noise ratio of 192 dB. Its implementation requires only one 18 kB block RAM and three 18-bit embedded multipliers in a typical field-programmable gate array (FPGA) device. © 2011 IEEE
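
    The idea of a piecewise-cubic phase-to-sinusoid table can be sketched numerically: fit a least-squares cubic on each segment of a quarter sine period and evaluate by segment lookup. The paper derives closed-form spectral-domain coefficients and a fixed-point design; this floating-point sketch only illustrates the accuracy attainable with modest table sizes:

```python
import numpy as np

def build_psac(segments=64, samples_per_seg=64):
    """Least-squares cubic per segment over one quarter period of sin."""
    coeffs = []
    for s in range(segments):
        u = np.linspace(0.0, 1.0, samples_per_seg)       # local coordinate in segment
        phase = (s + u) / segments * (np.pi / 2)
        coeffs.append(np.polyfit(u, np.sin(phase), 3))
    return coeffs

def psac_eval(coeffs, phase):
    """Evaluate the piecewise-cubic table for a phase in [0, pi/2)."""
    segments = len(coeffs)
    x = phase / (np.pi / 2) * segments
    s = min(int(x), segments - 1)
    return np.polyval(coeffs[s], x - s)
```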

  5. Direct diffusion tensor estimation using a model-based method with spatial and parametric constraints.

    Science.gov (United States)

    Zhu, Yanjie; Peng, Xi; Wu, Yin; Wu, Ed X; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2017-02-01

    The aim was to develop a new model-based method with spatial and parametric constraints (MB-SPC) to accelerate diffusion tensor imaging (DTI) by directly estimating the diffusion tensor from highly undersampled k-space data. The MB-SPC method effectively incorporates the prior information on the joint sparsity of different diffusion-weighted images using an L1-L2 norm and the smoothness of the diffusion tensor using a total variation seminorm. The undersampled k-space datasets were obtained from fully sampled DTI datasets of a simulated phantom and an ex-vivo experimental rat heart with acceleration factors ranging from 2 to 4. The diffusion tensor was directly reconstructed by solving a minimization problem with a nonlinear conjugate gradient descent algorithm. The reconstruction performance was quantitatively assessed using the normalized root mean square error (nRMSE) of the DTI indices. The MB-SPC method achieves acceptable DTI measures at an acceleration factor up to 4. Experimental results demonstrate that the proposed method can estimate the diffusion tensor more accurately than most existing methods operating at higher net acceleration factors. The proposed method can significantly reduce artifacts, particularly at higher acceleration factors or lower SNRs. This method can easily be adapted to MR relaxometry parameter mapping and is thus useful in the characterization of biological tissue such as nerves, muscle, and heart tissue. © 2016 American Association of Physicists in Medicine.

  6. Calculation of the real part of the nuclear amplitude at high s and small t from the Coulomb amplitude

    Energy Technology Data Exchange (ETDEWEB)

    Gauron, P.; Nicolescu, B. [Universite Pierre et Marie Curie, Theory Group, Lab. de Physique Nucleaire et des Hautes Energies (LPNHE), CNRS 75 - Paris (France)

    2005-07-01

    A new method for the determination of the real part of the elastic scattering amplitude is examined for high-energy proton-proton scattering at small momentum transfer. This method allows us to decrease the number of model assumptions, to obtain the real part in a narrow region of momentum transfer and to test different models. A possible non-exponential behavior of the real part was found on the basis of an analysis of the ISR experimental data. (authors)

  7. Rapid Estimation Method for State of Charge of Lithium-Ion Battery Based on Fractional Continual Variable Order Model

    Directory of Open Access Journals (Sweden)

    Xin Lu

    2018-03-01

    Full Text Available In recent years, fractional order models have been employed for state of charge (SOC) estimation. The non-integer differentiation order is expressed as a function of recursive factors defining the fractality of the charge distribution on porous electrodes. The battery SOC affects the fractal dimension of the charge distribution, so the order of the fractional order model varies with the SOC under the same conditions. This paper proposes a new method to estimate the SOC. A fractional continuous variable order model is used to characterize the fractal morphology of the charge distribution. The order identification results showed that there is a stable monotonic relationship between the fractional order and the SOC once the internal electrochemical reactions of the battery reach equilibrium. This feature makes the proposed model particularly suitable for SOC estimation when the battery is in the resting state. Moreover, a fast iterative method based on the proposed model is introduced for SOC estimation. The experimental results showed that the proposed iterative method can quickly estimate the SOC in a few iterations while maintaining high estimation accuracy.
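
    The fractional differentiation underlying such models can be illustrated with the Grünwald-Letnikov approximation, whose binomial weights follow a simple recurrence. This is a generic numerical sketch of fractional-order calculus, unrelated to the specific battery model or its order identification:

```python
import numpy as np

def gl_fracdiff(y, alpha, h):
    """Grünwald-Letnikov fractional derivative of order alpha for samples y, step h."""
    n = len(y)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)   # signed binomial coefficients
    # D^alpha y(t_k) ~ h^-alpha * sum_j c_j * y(t_k - j*h)
    out = np.array([np.dot(c[:k + 1], y[k::-1]) for k in range(n)])
    return out / h ** alpha
```

    As a sanity check, the half derivative of y(t) = t is analytically 2*sqrt(t/pi).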

  8. Estimating the spatial distribution of soil moisture based on Bayesian maximum entropy method with auxiliary data from remote sensing

    Science.gov (United States)

    Gao, Shengguo; Zhu, Zhongli; Liu, Shaomin; Jin, Rui; Yang, Guangchao; Tan, Lei

    2014-10-01

    Soil moisture (SM) plays a fundamental role in the land-atmosphere exchange process. Spatial estimation based on multiple in situ (network) observations is a critical way to understand the spatial structure and variation of land surface soil moisture. Theoretically, integrating densely sampled auxiliary data that are spatially correlated with soil moisture into the procedure of spatial estimation can improve its accuracy. In this study, we present a novel approach to estimate the spatial pattern of soil moisture by using the Bayesian maximum entropy (BME) method based on wireless sensor network data and auxiliary information from ASTER (Terra) land surface temperature (LST) measurements. For comparison, three traditional geostatistical methods were also applied: ordinary kriging (OK), which used the wireless sensor network data only, and regression kriging (RK) and ordinary co-kriging (Co-OK), which both integrated the ASTER land surface temperature as a covariate. In Co-OK, the LST was included linearly in the estimator; in RK, the estimator is expressed as the sum of the regression estimate and the kriged estimate of the spatially correlated residual; in BME, the ASTER land surface temperature was first converted to soil moisture based on the linear regression, and then the t-distributed prediction interval (PI) of soil moisture was estimated and used as soft data in probability form. The results indicate that all methods provide reasonable estimations. Co-OK, RK and BME can provide more accurate spatial estimations than OK by integrating the auxiliary information. RK and BME show more obvious improvement compared to Co-OK, and BME can even perform slightly better than RK. The inherent issue of spatial estimation (overestimation in the range of low values and underestimation in the range of high values) can also be further reduced in both RK and BME. We can conclude that integrating auxiliary data into spatial estimation can indeed improve the accuracy, and BME and RK take better advantage of the auxiliary
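
    The RK construction, a regression estimate plus an interpolated estimate of the spatially correlated residual, can be sketched as follows. For simplicity, inverse-distance weighting stands in for the kriging of residuals, and all station data are synthetic:

```python
import numpy as np

def regression_residual_interp(xy, sm, lst, xy_new, lst_new, power=2.0):
    """RK-style estimate: linear regression on LST plus interpolated residuals.
    Inverse-distance weighting stands in here for kriging of the residuals."""
    a, b = np.polyfit(lst, sm, 1)          # regression component
    resid = sm - (a * lst + b)             # spatially correlated residual
    out = []
    for p, l in zip(xy_new, lst_new):
        d = np.linalg.norm(xy - p, axis=1)
        if d.min() < 1e-12:                # exact hit on a station
            r = resid[d.argmin()]
        else:
            w = 1.0 / d ** power
            r = np.dot(w, resid) / w.sum()
        out.append(a * l + b + r)
    return np.array(out)
```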

  9. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    Science.gov (United States)

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
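
    The reweighting idea can be illustrated in a simplified scalar-l1 setting. The paper's penalty is block-separable with a Frobenius norm per block and is solved by block coordinate descent; this sketch drops the block structure and uses plain ISTA for each weighted convex surrogate. Each round increases the penalty on small coefficients, which sparsifies the solution and reduces amplitude bias on the support:

```python
import numpy as np

def ista(A, y, lam, w, iters=1000):
    """Proximal gradient (ISTA) for the weighted lasso: min 0.5||Ax-y||^2 + lam*sum(w|x|)."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L             # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)   # soft threshold
    return x

def irl1(A, y, lam, reweights=5, eps=1e-2):
    """Iterative reweighting: each round solves a weighted convex surrogate."""
    w = np.ones(A.shape[1])
    for _ in range(reweights):
        x = ista(A, y, lam, w)
        w = 1.0 / (np.abs(x) + eps)               # small coefficients penalized harder
    return x
```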

  10. A Computationally Efficient Method for Polyphonic Pitch Estimation

    Directory of Open Access Journals (Sweden)

    Ruohua Zhou

    2009-01-01

    Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimate is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
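
    The two-stage idea, harmonic grouping into a pitch salience spectrum followed by peak picking and pruning, can be sketched as follows. This is an FFT-based toy, not the RTFI front end, and the fundamental-energy check is only a simple stand-in for the paper's spectral-irregularity pruning:

```python
import numpy as np

def polyphonic_pitches(x, fs, f0_range=(150, 500), n_harm=4, n_pitch=2):
    """Pitch salience by harmonic grouping, then simple peak picking.
    Candidates with no energy at the fundamental itself are rejected to
    suppress subharmonic (octave-below) errors."""
    mag = np.abs(np.fft.rfft(x))
    df = fs / len(x)
    floor = 0.1 * mag.max()
    cands = []
    for f0 in range(f0_range[0], f0_range[1] + 1):
        if mag[int(round(f0 / df))] < floor:
            continue
        sal = sum(mag[int(round(m * f0 / df))] for m in range(1, n_harm + 1)
                  if int(round(m * f0 / df)) < len(mag))
        cands.append((sal, f0))
    cands.sort(reverse=True)
    picked = []
    for sal, f0 in cands:
        if all(abs(f0 - p) > 20 for p in picked):   # suppress neighbors of a peak
            picked.append(f0)
        if len(picked) == n_pitch:
            break
    return sorted(picked)
```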

  11. Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function

    Science.gov (United States)

    Gao, Fei; Chang, Lei; Liu, Yu-xin

    2017-07-01

    We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function as well as the corresponding light-front parton distribution amplitude (PDA) can be well determined. We confirm prior work on PDA computations, which was based on different methods.

  12. New relations for graviton-matter amplitudes

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I report on recent progress in finding compact expressions for scattering amplitudes involving gravitons and gluons as well as massive scalar and fermionic matter particles. At tree level, the single-graviton emission amplitudes may be expressed as linear combinations of purely non-gravitational ones. At the one-loop level, I report recent results, obtained using unitarity methods, on all four-point Einstein-Yang-Mills amplitudes with at most one opposite-helicity state.

  13. Estimate of the upper limit of amplitude of Solar Cycle No. 23

    Energy Technology Data Exchange (ETDEWEB)

    Silbergleit, V. M; Larocca, P. A [Departamento de Fisica, UBA (Argentina)

    2001-07-01

    AA* indices with values greater than 60 10{sup -9} Tesla are considered in order to characterize geomagnetic storms, since the available series of these indices comprises the years from 1868 to 1998 (the longest existing record of geomagnetic activity). By applying the precursor technique we have performed an analysis of the storm periods and the solar activity, obtaining a good correlation between the number of storms ({alpha}, characterized by the AA* indices) and the amplitudes of the current solar cycle ({zeta}) and the next one ({mu}). Using the multiple regression method applied to {alpha}=A+B{zeta}+C{mu}, the constants are calculated and the values found are: A=-33{+-}18, B=0.74{+-}0.13 and C=0.56{+-}0.13. The present statistical method indicates that the current solar cycle (number 23) would have an upper limit of 202{+-}57 monthly mean sunspots. This value indicates that the solar activity would be high, causing important effects on the Earth's environment.
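
    The multiple regression step, alpha = A + B*zeta + C*mu, is an ordinary least-squares fit and can be reproduced as follows. The inputs below are synthetic; the published coefficients come from the actual AA* storm counts and sunspot amplitudes:

```python
import numpy as np

def fit_precursor(zeta, mu, alpha):
    """Least-squares fit of alpha = A + B*zeta + C*mu."""
    M = np.column_stack([np.ones_like(zeta), zeta, mu])
    coef, *_ = np.linalg.lstsq(M, alpha, rcond=None)
    return coef   # A, B, C
```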

  14. New relations for gauge-theory amplitudes

    International Nuclear Information System (INIS)

    Bern, Z.; Carrasco, J. J. M.; Johansson, H.

    2008-01-01

    We present an identity satisfied by the kinematic factors of diagrams describing the tree amplitudes of massless gauge theories. This identity is a kinematic analog of the Jacobi identity for color factors. Using this we find new relations between color-ordered partial amplitudes. We discuss applications to multiloop calculations via the unitarity method. In particular, we illustrate the relations between different contributions to a two-loop four-point QCD amplitude. We also use this identity to reorganize gravity tree amplitudes diagram by diagram, offering new insight into the structure of the Kawai-Lewellen-Tye (KLT) relations between gauge and gravity tree amplitudes. This insight leads to similar but novel relations. We expect this to be helpful in higher-loop studies of the ultraviolet properties of gravity theories.

  15. An Estimation Method for number of carrier frequency

    Directory of Open Access Journals (Sweden)

    Xiong Peng

    2015-01-01

    Full Text Available This paper proposes a method that utilizes AR-model power spectrum estimation based on the Burg algorithm to estimate the number of carrier frequencies in a single pulse. In modern electronic and information warfare, the pulse signal forms of radar are complex and changeable, among which the single pulse with multiple carrier frequencies is the most typical one, such as the frequency shift keying (FSK) signal, the frequency shift keying with linear frequency modulation (FSK-LFM) hybrid modulation signal and the frequency shift keying with bi-phase shift keying (FSK-BPSK) hybrid modulation signal. For this kind of single pulse with multiple carrier frequencies, this paper adopts a method that fits an AR model to the complex signal and then computes the power spectrum based on the Burg algorithm. Experimental results show that the estimation method can still determine the number of carrier frequencies accurately even when the signal-to-noise ratio (SNR) is very low.
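
    A sketch of the approach, Burg AR estimation followed by peak counting in the model power spectrum, assuming a real-valued test signal (the paper works with complex pulse data) and an illustrative model order:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: reflection-coefficient recursion minimizing fwd+bwd error."""
    x = np.asarray(x, float)
    a = np.zeros(order + 1)
    a[0] = 1.0
    e = np.dot(x, x) / len(x)
    f, b = x[1:].copy(), x[:-1].copy()        # forward / backward prediction errors
    for m in range(1, order + 1):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a_prev = a.copy()
        for i in range(1, m + 1):             # Levinson-style coefficient update
            a[i] = a_prev[i] + k * a_prev[m - i]
        e *= 1.0 - k * k
        if m < order:                         # lattice update of the error sequences
            f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, e

def count_carriers(x, order=12, nfft=4096):
    """AR power spectrum; spectral peaks above a threshold count the carriers."""
    a, e = burg_ar(x, order)
    psd = e / np.abs(np.fft.rfft(a, nfft)) ** 2
    peaks = [i for i in range(1, len(psd) - 1)
             if psd[i] > psd[i - 1] and psd[i] > psd[i + 1]
             and psd[i] > 0.05 * psd.max()]
    return len(peaks)
```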

  16. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    Science.gov (United States)

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by linear regression with the reflectance and transmittance of the leaf as inputs. Performance of the proposed method was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was realized for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that used SPAD as a reference device. The accuracy reached 91% for crops such as Azadirachta indica, where the reference chlorophyll value was obtained with the spectrophotometer. Additionally, the chlorophyll content of a leaf could be estimated every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method improves accuracy by using an optical arrangement that yields both reflectance and transmittance information, while the required hardware remains cheap.
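
The regression step is simple enough to sketch. The data below are synthetic and the "true" coefficients are hypothetical placeholders, since the paper's calibration values are not given in the record:

```python
import numpy as np

# Hypothetical calibration set: reflectance R, transmittance T, and a
# reference chlorophyll reading (e.g. from a spectrophotometer).
rng = np.random.default_rng(1)
R = rng.uniform(0.05, 0.30, 50)
T = rng.uniform(0.02, 0.20, 50)
chl_true = 40.0 - 55.0 * R - 30.0 * T            # assumed linear relation
chl_meas = chl_true + rng.normal(0.0, 0.2, 50)   # measurement noise

# Least-squares fit of chl ~ b0 + b1*R + b2*T
X = np.column_stack([np.ones_like(R), R, T])
beta, *_ = np.linalg.lstsq(X, chl_meas, rcond=None)

def estimate_chlorophyll(refl, trans):
    """Apply the fitted linear model to a new (R, T) measurement."""
    return beta[0] + beta[1] * refl + beta[2] * trans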

  18. Estimating the Capacity of Urban Transportation Networks with an Improved Sensitivity Based Method

    Directory of Open Access Journals (Sweden)

    Muqing Du

    2015-01-01

    Full Text Available The throughput of a given transportation network is always of interest to the traffic administration, in order to evaluate the benefit of a construction or expansion project before its implementation. The transportation network capacity model, formulated as a mathematical program with equilibrium constraints (MPEC), well defines this problem. For practical applications, a modified sensitivity-analysis-based (SAB) method is developed to estimate the solution of this bilevel model. The highly efficient origin-based (OB) algorithm is extended for the precise solution of the combined model integrated in the network capacity model. The sensitivity analysis approach is also modified to simplify the inversion of the Jacobian matrix in large-scale problems. The solution produced in every iteration of SAB is constrained to be feasible to guarantee the success of the heuristic search. The numerical experiments show that the accuracy of the derivatives used in the linear approximation significantly affects the convergence of the SAB method, and that the proposed method obtains good suboptimal solutions from different starting points in the test examples.

  19. Monte Carlo-based tail exponent estimator

    Science.gov (United States)

    Barunik, Jozef; Vacha, Lukas

    2010-11-01

    In this paper we propose a new approach to estimating the tail exponent in financial stock markets. We begin the study with the finite-sample behavior of the Hill estimator under α-stable distributions. Using large Monte Carlo simulations, we show that the Hill estimator overestimates the true tail exponent and can hardly be used on small samples. Building on these results, we introduce a Monte Carlo-based method of estimation for the tail exponent. The proposed method is not sensitive to the choice of tail size and works well even on small data samples; it also gives unbiased results with symmetrical confidence intervals. Finally, we demonstrate the power of our estimator on international stock market indices over the two separate periods 2002-2005 and 2006-2009.
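
The plain Hill estimator the record criticizes is easy to state. The sketch below applies it to exact Pareto data, where the tail exponent is known and the estimator behaves well; the paper's point is precisely that it becomes biased under α-stable laws and small samples:

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill's tail-exponent estimate from the k largest order statistics."""
    xs = np.sort(np.asarray(sample, float))
    tail = np.log(xs[-k:]) - np.log(xs[-k - 1])  # log-exceedances over X_(n-k)
    return 1.0 / tail.mean()

# Monte Carlo check on classical Pareto data (numpy's pareto draws a
# Lomax variate, so adding 1 gives a Pareto with minimum 1).
rng = np.random.default_rng(2)
alpha_true, n, k, reps = 1.5, 2000, 100, 200
estimates = [hill_estimator(rng.pareto(alpha_true, n) + 1.0, k)
             for _ in range(reps)]
mean_est = float(np.mean(estimates))
```

On exact Pareto tails the Monte Carlo mean sits close to the true exponent; repeating the experiment with α-stable samples reproduces the upward bias described above.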

  20. Robust Diagnosis Method Based on Parameter Estimation for an Interturn Short-Circuit Fault in Multipole PMSM under High-Speed Operation.

    Science.gov (United States)

    Lee, Jewon; Moon, Seokbae; Jeong, Hyeyun; Kim, Sang Woo

    2015-11-20

    This paper proposes a diagnosis method for a multipole permanent magnet synchronous motor (PMSM) under an interturn short-circuit fault. Previous works in this area have suffered from uncertainties in the PMSM parameters, which can lead to misdiagnosis. The proposed method estimates the q-axis inductance (Lq) of the faulty PMSM to solve this problem. It also estimates the faulty phase and the value of G, which serves as an index of the severity of the fault. The q-axis current is used to estimate the faulty phase and the values of G and Lq. To this end, two open-loop observers and a particle swarm optimization method are implemented. The q-axis current of a healthy PMSM is estimated by the open-loop observer with the parameters of a healthy PMSM. The Lq estimation significantly compensates for the estimation errors in high-speed operation. The experimental results demonstrate that the proposed method can estimate the faulty phase, G, and Lq while remaining robust against parameter uncertainties.

  1. Training Methods for Image Noise Level Estimation on Wavelet Components

    Directory of Open Access Journals (Sweden)

    A. De Stefano

    2004-12-01

    Full Text Available The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The widely used method is based on the median absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images respectively, are fully disjoint. The third method assumes specific statistical distributions for the image and noise components. Results showed that the training-based methods outperformed the alternatives for the images and the range of noise levels considered.
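
The MAD baseline the authors compare against can be sketched in a few lines. The Haar-based version below is a standard rendering of that rule (estimate sigma from the diagonal detail subband), not the paper's code:

```python
import numpy as np

def noise_std_mad(img):
    """Estimate additive-noise sigma from the diagonal (HH) Haar detail.

    For i.i.d. Gaussian noise the HH coefficients are ~N(0, sigma), so
    sigma ~= median(|HH|) / 0.6745 (the classic MAD rule).
    """
    a = np.asarray(img, float)
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # even dimensions
    hh = (a[0::2, 0::2] - a[1::2, 0::2] - a[0::2, 1::2] + a[1::2, 1::2]) / 2.0
    return np.median(np.abs(hh)) / 0.6745

# Smooth synthetic image (a plane has zero HH energy) plus known noise.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:128, 0:128]
clean = 0.5 * xx + 0.3 * yy
noisy = clean + rng.normal(0.0, 10.0, clean.shape)
sigma_hat = noise_std_mad(noisy)
```

Because the clean content here contributes nothing to the HH subband, the estimate lands near the true sigma of 10; the paper's training-based alternatives target images whose detail subbands are not so well behaved.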

  2. Pulse-amplitude multipliers using logarithmic amplitude-to-time conversion

    Energy Technology Data Exchange (ETDEWEB)

    Konrad, M [Institut Rudjer Boskovic, Zagreb, Yugoslavia (Croatia)

    1962-04-15

    The accuracy and limitations of multipliers based on logarithmic amplitude-to-time conversion using RC pulse stretchers are discussed with respect to their application for determining whether the amplitude product of two coincident pulses has a given value. Some possible circuits are given. (author)

  3. Specialized Finite Set Statistics (FISST)-Based Estimation Methods to Enhance Space Situational Awareness in Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO)

    Science.gov (United States)

    2016-08-17

    Specialized Finite Set Statistics (FISST)-based estimation methods to enhance Space Situational Awareness in Medium Earth Orbit (MEO) and Geostationary Earth Orbit (GEO). The work formulates estimators in terms of specialized Geostationary Earth Orbit (GEO) elements to estimate the state of resident space objects in the geostationary regime. (Report AFRL-RV-PS-TR-2016-0114)

  4. A new method for the determination of the real part of the hadron elastic scattering amplitude at small angles and high energies

    Energy Technology Data Exchange (ETDEWEB)

    Gauron, P. [Theory Group, Laboratoire de Physique Nucleaire et des Hautes Energies (LPNHE), CNRS, and Universite Pierre et Marie Curie, Paris (France)]. E-mail: gauron@in2p3.fr; Nicolescu, B. [Theory Group, Laboratoire de Physique Nucleaire et des Hautes Energies (LPNHE), CNRS, and Universite Pierre et Marie Curie, Paris (France)]. E-mail: nicolesc@lpnhep.in2p3.fr; Selyugin, O.V. [BLTP, JINR, Dubna, Moscow region (Russian Federation)]. E-mail: selugin@thsun1.jinr.ru

    2005-11-24

    A new method for the determination of the real part of the elastic scattering amplitude is examined for high-energy proton-proton scattering at small momentum transfer. This method allows us to decrease the number of model assumptions, to obtain the real part in a narrow region of momentum transfer, and to test different models. The real part is computed at a given point t_min near t=0 from the known Coulomb amplitude. Hence one obtains an important constraint on the real part of the forward scattering amplitude and therefore on the ρ-parameter (the ratio of the real to imaginary part of the scattering amplitude at t=0), which can be tested at the LHC.

  5. Method and apparatus for digitally based high speed x-ray spectrometer

    International Nuclear Information System (INIS)

    Warburton, W.K.; Hubbard, B.

    1997-01-01

    A high-speed, digitally based signal processing system which accepts input data from a detector-preamplifier and produces a spectral analysis of the x-rays illuminating the detector. The system achieves high throughput at low cost by dividing the required digital processing steps between a ''hardwired'' processor implemented in combinatorial digital logic, which detects the presence of the x-ray signals in the digitized data stream and extracts filtered estimates of their amplitudes, and a programmable digital signal processing computer, which refines the filtered amplitude estimates and bins them to produce the desired spectral analysis. One set of algorithms allows this hybrid system to match the resolution of analog systems while operating at much higher data rates. A second set of algorithms implemented in the processor allows the system to be self-calibrating as well. The same processor also handles the interface to an external control computer. 19 figs

  6. A novel sampling method for multiple multiscale targets from scattering amplitudes at a fixed frequency

    Science.gov (United States)

    Liu, Xiaodong

    2017-08-01

    A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very easy and simple to implement. With the help of the factorization of the far-field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers; for sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can handle multiple multiscale scatterers, even when the different components are close to each other.

  7. Rapid estimation of earthquake magnitude from the arrival time of the peak high‐frequency amplitude

    Science.gov (United States)

    Noda, Shunta; Yamamoto, Shunroku; Ellsworth, William L.

    2016-01-01

    We propose a simple approach to measure earthquake magnitude M using the time difference (Top) between the body-wave onset and the arrival time of the peak high-frequency amplitude in an accelerogram. Measured in this manner, we find that Mw is proportional to 2logTop for earthquakes 5≤Mw≤7, which is the theoretical proportionality if Top is proportional to source dimension and stress drop is scale invariant. Using high-frequency (>2 Hz) data, the root mean square (rms) residual between Mw and MTop (the magnitude estimated from Top) is approximately 0.5 magnitude units. The rms residuals of the high-frequency data in passbands between 2 and 16 Hz are uniformly smaller than those obtained from lower-frequency data. Top depends only weakly on epicentral distance, and this dependence can be ignored at the distances considered. Applied to a great earthquake, the approach produces a final magnitude estimate of M 9.0 at 120 s after the origin time. We conclude that Top of high-frequency (>2 Hz) accelerograms has value in the context of earthquake early warning for extremely large events.
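
The measurement itself is a one-liner once the onset is picked. A sketch under assumed values follows; the calibration constant `c` is hypothetical, not the paper's fitted value, and the input is assumed to be already high-pass filtered:

```python
import numpy as np

def magnitude_from_top(accel, dt, onset_index, c=5.0):
    """M ~= 2*log10(Top) + c, with Top the onset-to-peak time in seconds.

    `accel` should already be high-pass filtered (>2 Hz); `c` is a
    hypothetical calibration constant for illustration only.
    """
    peak_index = onset_index + int(np.argmax(np.abs(accel[onset_index:])))
    top = (peak_index - onset_index) * dt
    return 2.0 * np.log10(top) + c

# Toy accelerogram envelope: zero before the onset at 5 s, then an
# envelope whose maximum arrives 10 s after the onset.
dt = 0.01
t = np.arange(0.0, 60.0, dt)
onset = int(5.0 / dt)
env = np.zeros_like(t)
after = t[onset:] - 5.0
env[onset:] = after * np.exp(1.0 - after / 10.0)  # maximum at after == 10 s
m_est = magnitude_from_top(env, dt, onset)
```

With Top = 10 s and the assumed c = 5.0 this returns M = 7.0; the scaling 2logTop is the relation the record derives.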

  8. Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group

    2004-07-01

    Due to the occurrence of large earthquakes like the Kobe Earthquake in 1995, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Using nonlinear analysis to evaluate the safety of dams is problematic because the assumed material properties strongly influence the results, and because the results differ greatly depending on the damage estimation models or analysis programs used. This paper reports evaluation indices based on a linear dynamic analysis method and the characteristics of crack progression in concrete gravity dams of different shapes using a nonlinear dynamic analysis method. The study concludes that if a simple linear dynamic analysis is appropriately conducted to estimate tensile stress at potential crack initiation locations, the damage due to cracks can be predicted roughly. 4 refs., 1 tab., 13 figs.

  9. Yield Estimation of Sugar Beet Based on Plant Canopy Using Machine Vision Methods

    Directory of Open Access Journals (Sweden)

    S Latifaltojar

    2014-09-01

    Full Text Available Crop yield estimation is one of the most important parameters for information and resource management in precision agriculture, since this information is used to optimize field inputs for successive cultivations. In the present study, the feasibility of sugar beet yield estimation by machine vision was studied. For the field experiments, images were taken during the growth season at one-month intervals, and an image of the horizontal view of the plant canopy was prepared at the end of each month. At the end of the growth season, beet roots were harvested and the correlation between the sugar beet canopy in each month of the growth period and the corresponding root weight was investigated. Results showed a strong correlation between beet yield and the green surface area of autumn-cultivated sugar beets; the highest coefficient of determination was 0.85 at three months before harvest. To assess the accuracy of the final model, a second year of study was performed with the same methodology. The results showed a strong relationship between the actual and estimated beet weights, with R2=0.94, and the model estimated beet yield with about 9 percent relative error. It is concluded that this method has appropriate potential for estimating sugar beet yield from band imaging prior to harvest.

  10. Bayesian extraction of the parton distribution amplitude from the Bethe–Salpeter wave function

    Directory of Open Access Journals (Sweden)

    Fei Gao

    2017-07-01

    Full Text Available We propose a new numerical method to compute the parton distribution amplitude (PDA from the Euclidean Bethe–Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe–Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM. The Nakanishi weight function as well as the corresponding light-front parton distribution amplitude (PDA can be well determined. We confirm prior work on PDA computations, which was based on different methods.

  11. Biologically Inspired Stochastic Optimization Technique (PSO) for DOA and Amplitude Estimation in Antenna Array Signal Processing for RADAR Communication Systems

    Directory of Open Access Journals (Sweden)

    Khurram Hammed

    2016-01-01

    Full Text Available This paper presents a stochastic global optimization technique known as Particle Swarm Optimization (PSO) for the joint estimation of amplitude and direction of arrival (DOA) of targets in a RADAR communication system. The proposed scheme is an effective optimization methodology and a promising approach for solving DOA problems in communication systems. Moreover, PSO is well suited to real-time scenarios and easy to implement in hardware. In this study a uniform linear array is used, and the targets are assumed to be in the far field of the array. The fitness function is formulated as a mean square error and requires only a single snapshot to obtain the best possible solution. To check the accuracy of the algorithm, results are obtained for varying numbers of antenna elements and targets. Finally, these results are compared with existing heuristic techniques to show the accuracy of PSO.
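
A minimal global-best PSO for the single-source case can be sketched as follows. The swarm parameters, array geometry, and search bounds are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def steering(theta_deg, n_elem):
    """Steering vector of a half-wavelength-spaced uniform linear array."""
    n = np.arange(n_elem)
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

def fitness(p, y, n_elem):
    """Mean squared error between the snapshot and the modeled signal."""
    theta, amp = p
    r = y - amp * steering(theta, n_elem)
    return float(np.mean(np.abs(r) ** 2))

# Single far-field source: amplitude 1.2 at DOA 30 degrees, 8-element ULA.
n_elem = 8
y = 1.2 * steering(30.0, n_elem)

# Global-best PSO over (theta, amplitude) with standard inertia/cognition.
rng = np.random.default_rng(4)
lo, hi = np.array([-90.0, 0.0]), np.array([90.0, 3.0])
pos = rng.uniform(lo, hi, (60, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pcost = np.array([fitness(p, y, n_elem) for p in pos])
gbest = pbest[np.argmin(pcost)].copy()
for _ in range(150):
    r1, r2 = rng.random((60, 2)), rng.random((60, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([fitness(p, y, n_elem) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[np.argmin(pcost)].copy()

theta_hat, amp_hat = gbest
```

On this noiseless single-snapshot problem the swarm settles on the true DOA and amplitude; the paper extends the same fitness formulation to multiple targets and compares against other heuristics.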

  12. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices. Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available.

  13. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    Science.gov (United States)

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method, which is applicable in situ: it requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  14. Substructure based modeling of nickel single crystals cycled at low plastic strain amplitudes

    Science.gov (United States)

    Zhou, Dong

    In this dissertation a meso-scale, substructure-based, composite single crystal model is developed in stages, from a simple uniaxial model to a 3-D finite element method (FEM) model with explicit substructures and substructure evolution parameters, to simulate the completely reversed, strain-controlled, low plastic strain amplitude cyclic deformation of nickel single crystals. Rate-dependent viscoplasticity and Armstrong-Frederick type kinematic hardening rules are applied to substructures on slip systems to describe the kinematic hardening behavior of the crystals. Three explicit substructure components are assumed in the composite single crystal model: "loop patches" and "channels" aligned in parallel in a "vein matrix," and persistent slip bands (PSBs) connected in series with the vein matrix. A magnetic domain rotation model is presented to describe the reverse magnetostriction of single crystal nickel. Kinematic hardening parameters are obtained by fitting responses to experimental data in the uniaxial model, and the validity of the uniaxial assumption is verified in the 3-D FEM model with explicit substructures. Using information gathered from experiments, all control parameters in the model, including the hardening parameters, the volume fractions of loop patches and PSBs, and the variation of Young's modulus, are correlated to cumulative plastic strain and/or plastic strain amplitude, and the whole cyclic deformation history of single crystal nickel at low plastic strain amplitudes is simulated in the uniaxial model. These parameters are then implanted in the 3-D FEM model to simulate the formation of PSB bands: a resolved shear stress criterion triggers the formation of PSBs, and the stress perturbation in the specimen is obtained from several elements assigned PSB material properties a priori.

  15. Amplitude-aware permutation entropy: Illustration in spike detection and signal segmentation.

    Science.gov (United States)

    Azami, Hamed; Escudero, Javier

    2016-05-01

    Signal segmentation and spike detection are two important biomedical signal processing applications. Often, non-stationary signals must be segmented into piece-wise stationary epochs, or spikes need to be found among a background of noise, before further analysis. Permutation entropy (PE) has been proposed to evaluate the irregularity of a time series. PE is conceptually simple, structurally robust to artifacts, and computationally fast. It has been extensively used in many applications, but it has two key shortcomings. First, when a signal is symbolized using the Bandt-Pompe procedure, only the order of the amplitude values is considered and information regarding the amplitudes themselves is discarded. Second, PE does not address the effect of equal amplitude values within an embedded vector. To address these issues, we propose a new entropy measure based on PE: the amplitude-aware permutation entropy (AAPE). AAPE is sensitive to changes in the amplitude, in addition to the frequency, of the signals because it is more flexible than the classical PE in the quantification of the signal motifs. To demonstrate how the AAPE method can enhance the quality of signal segmentation and spike detection, a set of synthetic and realistic synthetic neuronal signals, electroencephalograms, and neuronal data are processed. We compare the performance of AAPE in these problems against state-of-the-art approaches and evaluate the significance of the differences with a repeated ANOVA with post hoc Tukey's test. In signal segmentation, the accuracy of the AAPE-based method is higher than that of conventional segmentation methods, and AAPE also gives more robust results in the presence of noise. The spike detection results show that AAPE can detect spikes well, even single-sample spikes, unlike PE; for multi-sample spikes, the changes in AAPE are larger than in PE. In summary, we introduce a new entropy metric, AAPE, that enables amplitude information to be taken into account in the entropy estimation.
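
A compact sketch of the amplitude-aware weighting, as we read it from the record's description: each ordinal pattern is credited with a mix of the mean absolute value and the mean absolute difference of its embedded vector instead of a plain count. The mixing parameter A=0.5 is an assumption of this sketch:

```python
import numpy as np

def aape(x, m=3, tau=1, A=0.5):
    """Amplitude-aware permutation entropy (sketch of the AAPE idea).

    Instead of counting each ordinal pattern once, its contribution is
    weighted by (A/m)*sum|x_k| + ((1-A)/(m-1))*sum|x_k - x_{k-1}| over
    the embedded vector, so amplitude changes affect the entropy.
    """
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * tau
    weights = {}
    for i in range(n):
        v = x[i : i + m * tau : tau]
        pattern = tuple(np.argsort(v, kind="stable"))
        w = (A / m) * np.sum(np.abs(v)) \
            + ((1.0 - A) / (m - 1)) * np.sum(np.abs(np.diff(v)))
        weights[pattern] = weights.get(pattern, 0.0) + w
    total = sum(weights.values())
    p = np.array([w / total for w in weights.values()])
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(5)
h_noise = aape(rng.uniform(-1.0, 1.0, 5000))  # irregular: near log(3!)
h_ramp = aape(np.arange(100.0))               # single pattern: zero entropy
```

White noise spreads the weight almost uniformly over all 3! patterns, giving an entropy near log(6), while a monotone ramp produces a single pattern and zero entropy.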

  16. Influence of dynamic dislocation drag on amplitude dependences of damping decrement and modulus defect in lead

    International Nuclear Information System (INIS)

    Soifer, Y.M.; Golosovskii, M.A.; Kobelev, N.P.

    1981-01-01

    A study was made of the amplitude dependences of the damping decrement and the modulus defect in lead at low temperatures at frequencies of 100 kHz and 5 MHz. It was shown that in pure lead at high frequencies a change in the amplitude dependences of the damping decrement and the modulus defect under the superconducting transition is due mainly to the change in the losses caused by the dynamic drag of dislocations whereas in measurements at low frequencies the influence of the superconducting transition is due to the change in the conditions of dislocation unpinning from point defects. The influence of the dynamic dislocation drag on the amplitude dependences of the damping decrement and the modulus defect is calculated and a method is presented for experimental estimation of the contribution of dynamic effects to the amplitude-dependent internal friction

  17. Assessing methane emission estimation methods based on atmospheric measurements from oil and gas production using LES simulations

    Science.gov (United States)

    Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.

    2017-12-01

    There is a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production from air composition and meteorology observations in conjunction with dispersion models. Although these methodologies have seen some verification using controlled releases and concurrent atmospheric measurements, it is difficult to assess their accuracy for more realistic scenarios involving factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, an approach also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations with the Weather Research & Forecasting (WRF) model at 10 m horizontal grid spacing, covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are set up in the domain following a realistic distribution, and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition, and realistic operational conditions. The system allows assessments under different scenarios, such as normal operations, liquids unloading events, or other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods, including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model, as well as source estimation results.

  18. Eikonal representation of N-body Coulomb scattering amplitudes

    International Nuclear Information System (INIS)

    Fried, H.M.; Kang, K.; McKellar, B.H.J.

    1983-01-01

    A new technique for the construction of N-body Coulomb scattering amplitudes is proposed, suggested by the simplest case of N = 2: calculate the scattering amplitude in eikonal approximation, discard the infinite phase factors which appear upon taking the limit of a Coulomb potential, and treat the remainder as an amplitude whose absolute value squared produces the exact Coulomb differential cross section. The method easily generalizes to the N-body Coulomb problem for elastic scattering, and for inelastic rearrangement scattering of Coulomb bound states. We give explicit results for N = 3 and 4; in the N = 3 case we extract amplitudes for the processes (12)+3 → 1+2+3 (breakup), (12)+3 → 1+(23) (rearrangement), and (12)+3 → (12)'+3 (inelastic scattering) as residues at the appropriate poles in the free-free amplitude. The method produces scattering amplitudes f_N given in terms of explicit quadratures over (N-2)^2 distinct integrands.

  19. Age estimation in forensic anthropology: quantification of observer error in phase versus component-based methods.

    Science.gov (United States)

    Shirley, Natalie R; Ramirez Montes, Paula Andrea

    2015-01-01

    The purpose of this study was to assess observer error in phase versus component-based scoring systems used to develop age estimation methods in forensic anthropology. A method preferred by forensic anthropologists in the AAFS was selected for this evaluation (the Suchey-Brooks method for the pubic symphysis). The Suchey-Brooks descriptions were used to develop a corresponding component-based scoring system for comparison. Several commonly used reliability statistics (kappa, weighted kappa, and the intraclass correlation coefficient) were calculated to assess agreement between two observers and to evaluate the efficacy of each statistic for this study. The linear weighted kappa was determined to be the most suitable measure of observer agreement. The results show that a component-based system offers the possibility of more objective scoring than a phase system, as long as the coding possibilities for each trait do not exceed three states of expression, each with as little overlap as possible. © 2014 American Academy of Forensic Sciences.
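
The linearly weighted kappa preferred in this study can be computed directly from a confusion matrix of the two observers' scores. A small self-contained sketch (the score vectors are made up for illustration):

```python
import numpy as np

def linear_weighted_kappa(r1, r2, n_cat):
    """Linearly weighted kappa for two raters' ordinal scores in 0..n_cat-1.

    Disagreements are penalized in proportion to |i - j|, so adjacent
    phases count as milder disagreement than distant ones.
    """
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
    w = np.abs(np.subtract.outer(np.arange(n_cat), np.arange(n_cat)))
    return 1.0 - (w * obs).sum() / (w * exp).sum()

scores_a = [0, 1, 2, 3, 4, 5, 2, 3, 1, 0]   # hypothetical observer 1
scores_b = [0, 1, 2, 3, 4, 5, 2, 3, 1, 0]   # identical scores
kappa_perfect = linear_weighted_kappa(scores_a, scores_b, 6)

scores_c = [0, 1, 2, 3, 4, 5, 3, 3, 1, 1]   # partial agreement
kappa_partial = linear_weighted_kappa(scores_a, scores_c, 6)
```

Identical scores give kappa = 1, and any disagreement pulls the value below 1, with the linear weights distinguishing near-misses from gross disagreements.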

  20. New strings for old Veneziano amplitudes. II. Group-theoretic treatment

    Science.gov (United States)

    Kholodenko, A. L.

    2006-09-01

    In this part of our four-part work we use the theory of polynomial invariants of finite pseudo-reflection groups in order to reconstruct both the Veneziano and Veneziano-like (tachyon-free) amplitudes and the generating function reproducing these amplitudes. We demonstrate that such a generating function and the amplitudes associated with it can be recovered with the help of a finite-dimensional exactly solvable N=2 supersymmetric quantum mechanical model known earlier from works of Witten, Stone and others. Using the Lefschetz isomorphism theorem we replace traditional supersymmetric calculations by group-theoretic ones, thus solving the Veneziano model exactly using standard methods of representation theory. Mathematical correctness of our arguments relies on important theorems by Shephard and Todd, Serre and Solomon, proven respectively in the early 50s and 60s and documented in the monograph by Bourbaki. Based on these theorems, we explain why the developed formalism leaves all known results of conformal field theories unchanged. We also explain why these theorems impose stringent requirements connecting analytical properties of scattering amplitudes with symmetries of the space-time in which such amplitudes act.

  1. Reverse survival method of fertility estimation: An evaluation

    Directory of Open Access Journals (Sweden)

    Thomas Spoorenberg

    2014-07-01

    Full Text Available Background: For the most part, demographers have relied on the ever-growing body of sample surveys collecting full birth history to derive total fertility estimates in less statistically developed countries. Yet alternative methods of fertility estimation can return very consistent total fertility estimates by using only basic demographic information. Objective: This paper evaluates the consistency and sensitivity of the reverse survival method -- a fertility estimation method based on population data by age and sex collected in one census or a single-round survey. Methods: A simulated population was first projected over 15 years using a set of fertility and mortality age and sex patterns. The projected population was then reverse survived using the Excel template FE_reverse_4.xlsx, provided with Timæus and Moultrie (2012). Reverse survival fertility estimates were then compared for consistency to the total fertility rates used to project the population. The sensitivity was assessed by introducing a series of distortions in the projection of the population and comparing the difference implied in the resulting fertility estimates. Results: The reverse survival method produces total fertility estimates that are very consistent and hardly affected by erroneous assumptions on the age distribution of fertility or by the use of incorrect mortality levels, trends, and age patterns. The quality of the age and sex population data that is 'reverse survived' determines the consistency of the estimates. The contribution of the method for the estimation of past and present trends in total fertility is illustrated through its application to the population data of five countries characterized by distinct fertility levels and data quality issues. Conclusions: Notwithstanding its simplicity, the reverse survival method of fertility estimation has seldom been applied. The method can be applied to a large body of existing and easily available population data.
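
In its simplest form, the reverse survival idea reduces to dividing the enumerated children of each age by a survivorship ratio to recover the births of the corresponding year. The sketch below uses invented population counts and survival probabilities purely for illustration; the actual method, as implemented in FE_reverse_4.xlsx, involves considerably more demographic detail (reverse-surviving women of reproductive age, distributing births by age of mother, etc.).

```python
# Reverse survival: children aged x at the census were born x years earlier;
# dividing their number by the probability of surviving from birth to age x
# recovers the births of that year, from which fertility rates follow.

def reverse_survive(pop_by_age, survival_to_age):
    """Return estimated births for each year before the census.

    pop_by_age[x]      -- enumerated population aged x at the census
    survival_to_age[x] -- probability of surviving from birth to age x
    """
    return [pop_by_age[x] / survival_to_age[x] for x in range(len(pop_by_age))]

# Hypothetical inputs: counts of children aged 0-4 and illustrative l(x)/l(0) values
pop = [95_000, 93_500, 92_800, 92_200, 91_700]
l_x = [0.955, 0.948, 0.944, 0.941, 0.939]   # made-up survivorship ratios
births = reverse_survive(pop, l_x)          # births 0..4 years before the census
```

Because the survivorship ratios are close to one in low-mortality settings, the estimates are robust to moderate errors in the assumed mortality level, which is consistent with the sensitivity results reported in the record.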

  2. The design of multi-channel pulse amplitude analyzer based on ARM micro controller

    International Nuclear Information System (INIS)

    Li Hai; Li Xiang; Liu Caixue

    2010-01-01

    It introduces the design of a multi-channel pulse amplitude analyzer based on an embedded ARM micro-controller. The embedded real-time operating system μC/OS-II improves the real-time performance and stability of the system and raises its level of integration. (authors)
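
The core of any multi-channel pulse amplitude analyzer is the binning of digitized pulse heights into a pulse-height spectrum. The sketch below is a software analogue of that step only, not the ARM/μC/OS-II implementation described in the record; the channel count, voltage range, and simulated pulse data are all assumptions.

```python
import numpy as np

def accumulate_spectrum(pulse_heights, n_channels=1024, full_scale=5.0):
    """Bin pulse amplitudes (in volts) into an n_channels pulse-height spectrum."""
    heights = np.asarray(pulse_heights, dtype=float)
    # Channel number is the amplitude scaled to the ADC range, clipped to bounds
    ch = np.clip((heights / full_scale * n_channels).astype(int), 0, n_channels - 1)
    return np.bincount(ch, minlength=n_channels)

rng = np.random.default_rng(0)
# Simulated detector pulses: a Gaussian photopeak at 3.0 V over an exponential tail
pulses = np.concatenate([rng.normal(3.0, 0.05, 5000), rng.exponential(1.0, 5000)])
spectrum = accumulate_spectrum(pulses)
peak_channel = int(np.argmax(spectrum))
```

In the embedded device, the same accumulation loop runs per ADC interrupt; the real-time OS matters because no pulse may be missed while the host interface reads out the spectrum.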

  3. An extended numerical calibration method for an electrochemical probe in thin wavy flow with large amplitude waves

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Ki Yong; No, Hee Cheon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1999-12-31

    The calibrating method for an electrochemical probe, neglecting the effect of the normal velocity on the mass transport, can cause large errors when applied to the measurement of wall shear rates in thin wavy flow with large amplitude waves. An extended calibrating method is developed to consider the contributions of the normal velocity. The inclusion of the turbulence-induced normal velocity term is found to have a negligible effect on the mass transfer coefficient. The contribution of the wave-induced normal velocity can be classified in terms of the dimensionless parameter V. If V is above a critical value V{sub crit}, the effects of the wave-induced normal velocity become larger with an increase in V, while the effects are negligible below V{sub crit}. The present inverse method can predict the unknown shear rate more accurately in thin wavy flow with large amplitude waves than the previous method. 18 refs., 8 figs. (Author)

  5. Estimating primary production from oxygen time series: A novel approach in the frequency domain

    NARCIS (Netherlands)

    Cox, T.J.S.; Maris, T.; Soetaert, K.; Kromkamp, J.C.; Meire, P.; Meysman, F.J.R.

    2015-01-01

    Based on an analysis in the frequency domain of the governing equation of oxygen dynamics in aquatic systems, we derive a new method for estimating gross primary production (GPP) from oxygen time series. The central result of this article is a relation between time-averaged GPP and the amplitude of the oxygen signal at the diel frequency.
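
The approach rests on isolating the oxygen signal's harmonic at the diel (24 h) frequency. The sketch below shows only that amplitude-extraction step on a synthetic hourly series; the sampling interval, record length, and signal parameters are assumptions, and the proportionality constant linking this amplitude to GPP is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 30 * 24, 1.0)                    # 30 days of hourly samples (hours)
true_amp = 0.8                                    # diel amplitude, mg O2 per litre
o2 = (9.0 + true_amp * np.sin(2 * np.pi * t / 24.0)
      + 0.001 * t + rng.normal(0.0, 0.05, t.size))  # mean + diel cycle + trend + noise

# Amplitude of the harmonic at the diel frequency (1 cycle per 24 h)
freqs = np.fft.rfftfreq(t.size, d=1.0)            # cycles per hour
spec = np.fft.rfft(o2 - o2.mean())
k = int(np.argmin(np.abs(freqs - 1.0 / 24.0)))    # index of the diel bin
diel_amp = 2.0 * np.abs(spec[k]) / t.size         # single-sided amplitude
```

With a record length that is a whole number of days, the diel component falls exactly on one frequency bin, so the slow trend and noise leak only weakly into the estimate.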

  6. Phase and amplitude inversion of crosswell radar data

    Science.gov (United States)

    Ellefsen, Karl J.; Mazzella, Aldo T.; Horton, Robert J.; McKenna, Jason R.

    2011-01-01

    Phase and amplitude inversion of crosswell radar data estimates the logarithm of complex slowness for a 2.5D heterogeneous model. The inversion is formulated in the frequency domain using the vector Helmholtz equation. The objective function is minimized using a back-propagation method that is suitable for a 2.5D model and that accounts for the near-, intermediate-, and far-field regions of the antennas. The inversion is tested with crosswell radar data collected in a laboratory tank. The model anomalies are consistent with the known heterogeneity in the tank; the model’s relative dielectric permittivity, which is calculated from the real part of the estimated complex slowness, is consistent with independent laboratory measurements. The methodologies developed for this inversion can be adapted readily to inversions of seismic data (e.g., crosswell seismic and vertical seismic profiling data).

  7. Markov chain-based mass estimation method for loose part monitoring system and its performance

    Directory of Open Access Journals (Sweden)

    Sung-Hwan Shin

    2017-10-01

    Full Text Available A loose part monitoring system is used to identify unexpected loose parts in a nuclear reactor vessel or steam generator. Mass estimation of loose parts, one function of a loose part monitoring system, still requires a new method because of the high estimation error of conventional methods such as Hertz's impact theory and the frequency ratio method. The purpose of this study is to propose a mass estimation method using a Markov decision process and compare its performance with a method using an artificial neural network model proposed in a previous study. First, the extraction of feature vectors using the discrete cosine transform is explained. Second, Markov chains are designed with codebooks obtained from the feature vectors. A 1/8-scaled mockup of the reactor vessel for OPR1000 was employed, and all signals used were obtained by impacting its surface with several solid spherical masses. Next, the performance of mass estimation by the proposed Markov model was compared with that of the artificial neural network model. Finally, the proposed Markov model was found to have a matching error below 20% in mass estimation, a performance similar to that of the artificial neural network model and a considerable improvement over the conventional methods.
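
The feature-extraction step described in the record (a discrete cosine transform of the impact signal, keeping the leading coefficients) can be sketched as follows; the window, coefficient count, and synthetic impact burst are assumptions, and the codebook/Markov-chain stages are omitted.

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II, computed directly from its definition."""
    x = np.asarray(x, dtype=float)
    N = x.size
    n = np.arange(N)
    basis = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)  # basis[k, n]
    coeffs = basis @ x
    coeffs[0] *= np.sqrt(1.0 / N)
    coeffs[1:] *= np.sqrt(2.0 / N)
    return coeffs

def impact_features(signal, n_keep=16):
    """Feature vector: first n_keep DCT coefficients of a windowed impact burst."""
    windowed = signal * np.hanning(signal.size)
    return dct2(windowed)[:n_keep]

# Synthetic impact: decaying sinusoid, as from an accelerometer near the impact point
t = np.linspace(0, 0.01, 256)
burst = np.exp(-t / 0.002) * np.sin(2 * np.pi * 2000 * t)
fv = impact_features(burst)
```

The leading DCT coefficients compactly summarize the burst's energy distribution; in the paper's pipeline these vectors are quantized into a codebook whose symbol sequences drive mass-specific Markov chains.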

  8. A comprehensive estimation of the economic effects of meteorological services based on the input-output method.

    Science.gov (United States)

    Wu, Xianhua; Wei, Guo; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian

    2014-01-01

    Concentrating on the consuming coefficient, the partition coefficient, and the Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effect. Subsequently, quantitative estimates are obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that economic losses are noticeably reduced by the preventive strategies developed from both the meteorological information and the internal relevance (interdependency) in the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services is about 1:108.27-1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30-1:51). In particular, the economic effects of meteorological services are higher for nontraditional users (manufacturing, wholesale and retail trades, the services sector, tourism, culture and art) and lower for traditional users (agriculture, forestry, livestock, fishery, and construction industries).
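
In input-output analysis, the complete (direct plus indirect) effect of a final-demand change is obtained from the Leontief inverse. A minimal sketch with a hypothetical three-sector transactions table (the numbers are invented, not Jiangxi data):

```python
import numpy as np

# Hypothetical 3-sector transactions matrix Z (inter-industry flows) and outputs x
Z = np.array([[20.0, 30.0, 10.0],
              [40.0, 10.0, 25.0],
              [15.0, 20.0,  5.0]])
x = np.array([200.0, 250.0, 150.0])

A = Z / x                            # consuming coefficients a_ij = z_ij / x_j
L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^(-1)

# Complete output required across all sectors to meet a final-demand change d
d = np.array([0.0, 10.0, 0.0])
total_effect = L @ d
```

The diagonal entries of `L` exceed one, which is exactly why the complete effect of a service exceeds its direct effect: each unit of final demand pulls intermediate inputs through the whole industrial system.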

  10. Lithium-Ion Battery Capacity Estimation: A Method Based on Visual Cognition

    Directory of Open Access Journals (Sweden)

    Yujie Cheng

    2017-01-01

    Full Text Available This study introduces visual cognition into lithium-ion battery capacity estimation. The proposed method consists of four steps. First, the acquired charging current or discharge voltage data in each cycle are arranged to form a two-dimensional image. Second, the generated image is decomposed into multiple spatial-frequency channels with a set of orientation subbands by using the non-subsampled contourlet transform (NSCT). NSCT imitates the multichannel characteristic of the human visual system (HVS), which provides multiresolution, localization, directionality, and shift invariance. Third, several time-domain indicators of the NSCT coefficients are extracted to form an initial high-dimensional feature vector. Similarly, inspired by the manifold-sensing characteristic of the HVS, the Laplacian eigenmap manifold learning method, which is considered to reveal the evolutionary law of battery performance degradation within a low-dimensional intrinsic manifold, is used to further obtain a low-dimensional feature vector. Finally, battery capacity degradation is estimated using the geodesic distance on the manifold between the initial and the most recent features. Verification experiments were conducted using data obtained under different operating and aging conditions. Results suggest that the proposed visual cognition approach provides a highly accurate means of estimating battery capacity and thus offers a promising method derived from the emerging field of cognitive computing.

  11. A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays.

    Science.gov (United States)

    Sun, Fenggang; Gao, Bin; Chen, Lizhen; Lan, Peng

    2016-08-25

    The problem of direction-of-arrival (DOA) estimation is investigated for a co-prime array consisting of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, true DOAs are mapped into several equivalent angles impinging on a traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and the candidate DOAs are recovered according to the relationship between equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff than other existing methods.
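
The rotational-invariance step at the heart of ESPRIT can be sketched for a single uniform linear array with half-wavelength spacing; the co-prime mapping and ambiguity resolution described in the record are omitted, and the array size, source angles, and noise level below are assumptions.

```python
import numpy as np

def esprit_doa(X, n_sources):
    """Estimate DOAs (degrees) from snapshots X (sensors x snapshots)
    for a uniform linear array with half-wavelength element spacing."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    Es = eigvecs[:, -n_sources:]               # signal subspace (largest eigenvalues)
    # Rotational invariance between the two overlapping subarrays
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Phi))  # = pi * sin(theta) per source
    return np.degrees(np.arcsin(phases / np.pi))

# Two narrowband sources at -10 and 25 degrees on an 8-element ULA
rng = np.random.default_rng(2)
m, snapshots = 8, 400
angles = np.radians([-10.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))  # steering matrix
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
X = A @ S + 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
doas = np.sort(esprit_doa(X, 2))
```

With the extended spacing of a co-prime subarray, the same `arcsin` step returns several candidate angles per source; the paper's contribution is recovering the true DOAs by intersecting the candidate sets of the two subarrays.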

  12. The amplitude of quantum field theory

    International Nuclear Information System (INIS)

    Medvedev, B.V.; Pavlov, V.P.; Polivanov, M.K.; Sukhanov, A.D.

    1989-01-01

    General properties of the transition amplitude in axiomatic quantum field theory are discussed. Bogolyubov's axiomatic method is chosen as the variant of the theory. The axioms of this method are analyzed. In particular, the significance of the off-shell extension and of the various forms of the causality condition is examined. A complete proof is given of the existence of a single analytic function whose boundary values are the amplitudes of all channels of a process with given particle number.

  13. Low-cost extrapolation method for maximal lte radio base station exposure estimation: Test and validation

    International Nuclear Information System (INIS)

    Verloock, L.; Joseph, W.; Gati, A.; Varsier, N.; Flach, B.; Wiart, J.; Martens, L.

    2013-01-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It requires only a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. (authors)

  14. Adaptive Methods for Permeability Estimation and Smart Well Management

    Energy Technology Data Exchange (ETDEWEB)

    Lien, Martha Oekland

    2005-04-01

    The main focus of this thesis is on adaptive regularization methods. We consider two different applications: the inverse problem of absolute permeability estimation and the optimal control problem of smart well management. Reliable estimates of absolute permeability are crucial in order to develop a mathematical description of an oil reservoir. Due to the nature of most oil reservoirs, mainly indirect measurements are available. In this work, dynamic production data from wells are considered. More specifically, we have investigated the resolution power of pressure data for permeability estimation. The inversion of production data into permeability estimates constitutes a severely ill-posed problem. Hence, regularization techniques are required. In this work, deterministic regularization based on adaptive zonation is considered, i.e. a solution approach with adaptive multiscale estimation in conjunction with level set estimation is developed for coarse-scale permeability estimation. A good mathematical reservoir model is a valuable tool for future production planning. Recent developments within well technology have given us smart wells, which yield increased flexibility in reservoir management. In this work, we investigate the problem of finding the optimal smart well management by means of hierarchical regularization techniques based on multiscale parameterization and refinement indicators. The thesis is divided into two main parts: Part I gives a theoretical background for a collection of research papers written by the candidate in collaboration with others; these constitute the most important part of the thesis and are presented in Part II. Numerical aspects concerning calculations of derivatives are also discussed, as are methods for multiscale zonation, i.e. adaptive multiscale estimation and refinement indicators, building on the introduction to regularization given in Chapter 2.

  15. Comparison of methods for estimating premorbid intelligence

    OpenAIRE

    Bright, Peter; van der Linde, Ian

    2018-01-01

    To evaluate impact of neurological injury on cognitive performance it is typically necessary to derive a baseline (or ‘premorbid’) estimate of a patient’s general cognitive ability prior to the onset of impairment. In this paper, we consider a range of common methods for producing this estimate, including those based on current best performance, embedded ‘hold/no hold’ tests, demographic information, and word reading ability. Ninety-two neurologically healthy adult participants were assessed ...

  16. A Comparative Study of Potential Evapotranspiration Estimation by Eight Methods with FAO Penman–Monteith Method in Southwestern China

    Directory of Open Access Journals (Sweden)

    Dengxiao Lang

    2017-09-01

    Full Text Available Potential evapotranspiration (PET) is crucial for water resources assessment. In this regard, the FAO (Food and Agriculture Organization) Penman–Monteith method (PM) is commonly recognized as a standard method for PET estimation. However, due to its requirement for detailed meteorological data, the application of PM is often constrained in many regions. Under such circumstances, an alternative method with similar efficiency to that of PM needs to be identified. In this study, three radiation-based methods, Makkink (Mak), Abtew (Abt), and Priestley–Taylor (PT), and five temperature-based methods, Hargreaves–Samani (HS), Thornthwaite (Tho), Hamon (Ham), Linacre (Lin), and Blaney–Criddle (BC), were compared with PM at yearly and seasonal scale, using long-term (50 years) data from 90 meteorology stations in southwest China. The indicators Nash–Sutcliffe efficiency (NSE), relative error (Re), normalized root mean squared error (NRMSE), and coefficient of determination (R2) were used to evaluate the performance of PET estimations by the above-mentioned eight methods. The results showed that the performance of the methods in PET estimation varied among regions; HS, PT, and Abt overestimated PET, while the others underestimated it. In the Sichuan basin, Mak, Abt and HS yielded estimations similar to those of PM, while, in the Yun-Gui plateau, Abt, Mak, HS, and PT showed better performances. Mak performed the best in the east Tibetan Plateau at yearly and seasonal scale, while HS showed a good performance in summer and autumn. In the arid river valley, HS, Mak, and Abt performed better than the others. On the other hand, Tho, Ham, Lin, and BC could not be used to estimate PET in some regions. In general, radiation-based methods for PET estimation performed better than temperature-based methods among the selected methods in the study area. Among the radiation-based methods, Mak performed the best, while HS showed the best performance among the temperature-based methods.
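
One of the compared temperature-based methods, Hargreaves-Samani, together with the Nash-Sutcliffe efficiency used as an indicator, can be sketched as follows. The daily inputs and the PM reference values are invented for illustration, and extraterrestrial radiation is supplied directly in evaporation-equivalent units (mm/day) rather than computed from latitude and day of year.

```python
import math

def hargreaves_samani(tmax, tmin, ra_mm):
    """Daily PET (mm/day) by the Hargreaves-Samani formula; ra_mm is
    extraterrestrial radiation expressed in mm/day of evaporation equivalent."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra_mm * (tmean + 17.8) * math.sqrt(tmax - tmin)

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Hypothetical week of data: PM estimates as reference, HS as the candidate
pm = [3.9, 4.1, 4.4, 4.0, 3.6, 3.8, 4.2]
hs = [hargreaves_samani(tx, tn, ra) for tx, tn, ra in
      [(28, 16, 13.5), (29, 17, 13.6), (30, 18, 13.7), (28, 17, 13.5),
       (26, 15, 13.4), (27, 16, 13.4), (29, 17, 13.6)]]
nse = nash_sutcliffe(pm, hs)
```

With these made-up numbers HS sits consistently above the PM reference, which mirrors the overestimation reported for HS in the study.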

  17. Hedgehog bases for A{sub n} cluster polylogarithms and an application to six-point amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Daniel E.; Scherlis, Adam; Spradlin, Marcus; Volovich, Anastasia [Department of Physics, Brown University, Providence RI 02912 (United States)

    2015-11-20

    Multi-loop scattering amplitudes in N=4 Yang-Mills theory possess cluster algebra structure. In order to develop a computational framework which exploits this connection, we show how to construct bases of Goncharov polylogarithm functions, at any weight, whose symbol alphabet consists of cluster coordinates on the A{sub n} cluster algebra. Using such a basis we present a new expression for the 2-loop 6-particle NMHV amplitude which makes some of its cluster structure manifest.

  18. Intent-Estimation- and Motion-Model-Based Collision Avoidance Method for Autonomous Vehicles in Urban Environments

    Directory of Open Access Journals (Sweden)

    Rulin Huang

    2017-04-01

    Full Text Available Existing collision avoidance methods for autonomous vehicles, which ignore the driving intent of detected vehicles, cannot satisfy the requirements of autonomous driving in urban environments because of their high false detection rate for collisions with vehicles on winding roads and their high missed detection rate for collisions with maneuvering vehicles. This study introduces an intent-estimation- and motion-model-based (IEMMB) method to address these disadvantages. First, a state vector is constructed by combining the road structure and the moving state of detected vehicles. A Gaussian mixture model is used to learn the maneuvering patterns of vehicles from collected data, and the patterns are used to estimate the driving intent of the detected vehicles. Then, a desirable long-term trajectory is obtained by weighting time and comfort. The long-term trajectory and the short-term trajectory, which is predicted using a constant yaw rate motion model, are fused to achieve an accurate trajectory. Finally, considering the moving state of the autonomous vehicle, collisions can be detected and avoided. Experiments have shown that the intent estimation method performed well, achieving an accuracy of 91.7% on straight roads and 90.5% on winding roads, much higher than that achieved by a method that ignores the road structure. The average collision detection distance is increased by more than 8 m. In addition, the maximum yaw rate and acceleration during an evasive maneuver are decreased, indicating an improvement in driving comfort.
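
The constant yaw rate motion model used for the short-term trajectory has a simple closed form per time step. The sketch below shows that prediction step only; the state variables, time step, and horizon are assumptions, and the intent estimation and trajectory fusion of the full method are omitted.

```python
import math

def predict_ctrv(x, y, heading, speed, yaw_rate, dt, steps):
    """Short-term trajectory under a constant-speed, constant-yaw-rate model."""
    traj = []
    for _ in range(steps):
        if abs(yaw_rate) > 1e-6:
            # Closed-form circular-arc motion over one step
            x += speed / yaw_rate * (math.sin(heading + yaw_rate * dt) - math.sin(heading))
            y += speed / yaw_rate * (math.cos(heading) - math.cos(heading + yaw_rate * dt))
        else:
            # Degenerate straight-line case
            x += speed * dt * math.cos(heading)
            y += speed * dt * math.sin(heading)
        heading += yaw_rate * dt
        traj.append((x, y, heading))
    return traj

# A vehicle at the origin heading east at 10 m/s, turning left at 0.1 rad/s,
# predicted 3 s ahead at 10 Hz
path = predict_ctrv(0.0, 0.0, 0.0, 10.0, 0.1, 0.1, 30)
```

Because each step uses the exact arc solution, the stepwise prediction composes to the true circle of radius v/ω, so the horizon length does not degrade the geometric accuracy of the model itself.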

  19. Multiple-Parameter Estimation Method Based on Spatio-Temporal 2-D Processing for Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Shouguo Yang

    2015-12-01

    Full Text Available A novel spatio-temporal 2-dimensional (2-D) processing method that can jointly estimate the transmitting-receiving azimuth and Doppler frequency for bistatic multiple-input multiple-output (MIMO) radar in the presence of spatial colored noise and an unknown number of targets is proposed. In the temporal domain, the cross-correlation of the matched filters' outputs for different time-delay sampling is used to eliminate the spatial colored noise. In the spatial domain, the proposed method uses a diagonal loading method and subspace theory to estimate the direction of departure (DOD) and direction of arrival (DOA), and the Doppler frequency can then be accurately estimated through the estimation of the DOD and DOA. By skipping target number estimation and the eigenvalue decomposition (EVD) of the data covariance matrix and requiring only a one-dimensional search, the proposed method achieves low computational complexity. Furthermore, the proposed method is suitable for bistatic MIMO radar with an arbitrary transmitting and receiving geometrical configuration. The correctness and efficiency of the proposed method are verified by computer simulation results.

  20. A comparison study of size-specific dose estimate calculation methods

    Energy Technology Data Exchange (ETDEWEB)

    Parikh, Roshni A. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Michigan Health System, Department of Radiology, Ann Arbor, MI (United States); Wien, Michael A.; Jordan, David W.; Ciancibello, Leslie; Berlin, Sheila C. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Novak, Ronald D. [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); Rebecca D. Considine Research Institute, Children' s Hospital Medical Center of Akron, Center for Mitochondrial Medicine Research, Akron, OH (United States); Klahr, Paul [CT Clinical Science, Philips Healthcare, Highland Heights, OH (United States); Soriano, Stephanie [Rainbow Babies and Children' s Hospital, University Hospitals Cleveland Medical Center, Case Western Reserve University School of Medicine, Department of Radiology, Cleveland, OH (United States); University of Washington, Department of Radiology, Seattle, WA (United States)

    2018-01-15

    The size-specific dose estimate (SSDE) has emerged as an improved metric for use by medical physicists and radiologists for estimating individual patient dose. Several methods of calculating SSDE have been described, ranging from patient thickness or attenuation-based (automated and manual) measurements to weight-based techniques. To compare the accuracy of thickness vs. weight measurement of body size to allow for the calculation of the size-specific dose estimate (SSDE) in pediatric body CT. We retrospectively identified 109 pediatric body CT examinations for SSDE calculation. We examined two automated methods measuring a series of level-specific diameters of the patient's body: method A used the effective diameter and method B used the water-equivalent diameter. Two manual methods measured patient diameter at two predetermined levels: the superior endplate of L2, where body width is typically most thin, and the superior femoral head or iliac crest (for scans that did not include the pelvis), where body width is typically most thick; method C averaged lateral measurements at these two levels from the CT projection scan, and method D averaged lateral and anteroposterior measurements at the same two levels from the axial CT images. Finally, we used body weight to characterize patient size, method E, and compared this with the various other measurement methods. Methods were compared across the entire population as well as by subgroup based on body width. Concordance correlation (ρ{sub c}) between each of the SSDE calculation methods (methods A-E) was greater than 0.92 across the entire population, although the range was wider when analyzed by subgroup (0.42-0.99). When we compared each SSDE measurement method with CTDI{sub vol,} there was poor correlation, ρ{sub c}<0.77, with percentage differences between 20.8% and 51.0%. Automated computer algorithms are accurate and efficient in the calculation of SSDE. Manual methods based on patient thickness provide
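
The concordance correlation coefficient ρ{sub c} used above to compare SSDE calculation methods can be computed from the means, variances, and covariance of the two series. A minimal sketch with hypothetical SSDE values (not data from the study):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population (1/n) variances
    sxy = ((x - mx) * (y - my)).mean()     # covariance on the same 1/n convention
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

# Hypothetical SSDE values (mGy) from two calculation methods on six patients
method_a = [4.1, 5.3, 6.8, 7.9, 9.2, 11.0]
method_b = [4.0, 5.5, 6.6, 8.1, 9.5, 10.8]
rho_c = concordance_ccc(method_a, method_b)
```

Unlike Pearson correlation, ρ{sub c} penalizes both location and scale shifts between the two methods, which is why it suits agreement studies like this one.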

  1. Research on an estimation method of DOA for wireless location based on TD-SCDMA

    Science.gov (United States)

    Zhang, Yi; Luo, Yuan; Cheng, Shi-xin

    2004-03-01

    To meet the urgent need of personal communication and hign-speed data services,the standardization and products development for International Mobile Telecommunication-2000 (IMT-2000) have become a hot point in wordwide. The wireless location for mobile terminals has been an important research project. Unlike GPS which is located by 24 artificial satellities, it is based on the base-station of wireless cell network, and the research and development of it are correlative with IMT-2000. While the standard for the third generation mobile telecommunication (3G)-TD-SCDMA, which is proposed by China and the intellective property right of which is possessed by Chinese, is adopted by ITU-T at the first time, the research for wireless location based on TD-SCDMA has theoretic meaning, applied value and marketable foreground. First,the basic principle and method for wireless location, i.e. Direction of Angle(DOA), Time of Arrival(TOA) or Time Difference of Arrival(TDOA), hybridized location(TOA/DOA,TDOA/DOA,TDOA/DOA),etc. is introduced in the paper. So the research of DOA is very important in wireless location. Next, Main estimation methods of DOA for wireless location, i.e. ESPRIT, MUSIC, WSF, Min-norm, etc. are researched in the paper. In the end, the performances of DOA estimation for wireless location based on mobile telecommunication network are analyzed by the research of theory and simulation experiment and the contrast algorithms between and Cramer-Rao Bound. Its research results aren't only propitious to the choice of algorithms for wireless location, but also to the realization of new service of wireless location .

  2. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    Science.gov (United States)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article.
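
The Lomb-Scargle periodogram at the core of this framework, and the approximate proportionality between squared amplitude and periodogram, can be illustrated on a synthetic irregularly sampled series. The WOSA averaging, trend operators, CARMA noise model, and significance testing are omitted; the 4/N constant below applies to the classical unnormalized periodogram and the data are invented.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(3)
# Irregularly sampled record: a 0.1 Hz sinusoid observed at 300 random times
t = np.sort(rng.uniform(0.0, 200.0, 300))
amp_true = 1.5
y = amp_true * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0.0, 0.2, t.size)

freqs_hz = np.linspace(0.01, 0.5, 2000)
# lombscargle expects angular frequencies; centre the data first
pgram = lombscargle(t, y - y.mean(), 2 * np.pi * freqs_hz)

f_peak = float(freqs_hz[np.argmax(pgram)])
# Squared amplitude is approximately proportional to the periodogram;
# for the classical unnormalized form the constant is 4/N
amp_est = float(np.sqrt(4.0 * pgram.max() / t.size))
```

The peak frequency and the amplitude recovered this way match the generating sinusoid closely despite the irregular sampling, which is the property the article's weighted WOSA periodogram builds on.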

  3. Application of Modal Parameter Estimation Methods for Continuous Wavelet Transform-Based Damage Detection for Beam-Like Structures

    Directory of Open Access Journals (Sweden)

    Zhi Qiu

    2015-02-01

    Full Text Available This paper presents a hybrid damage detection method based on continuous wavelet transform (CWT and modal parameter identification techniques for beam-like structures. First, two kinds of mode shape estimation methods, herein referred to as the quadrature peaks picking (QPP and rational fraction polynomial (RFP methods, are used to identify the first four mode shapes of an intact beam-like structure based on the hammer/accelerometer modal experiment. The results are compared and validated using a numerical simulation with ABAQUS software. In order to determine the damage detection effectiveness between the QPP-based method and the RFP-based method when applying the CWT technique, the first two mode shapes calculated by the QPP and RFP methods are analyzed using CWT. The experiment, performed on different damage scenarios involving beam-like structures, shows that, due to the outstanding advantage of the denoising characteristic of the RFP-based (RFP-CWT technique, the RFP-CWT method gives a clearer indication of the damage location than the conventionally used QPP-based (QPP-CWT method. Finally, an overall evaluation of the damage detection is outlined, as the identification results suggest that the newly proposed RFP-CWT method is accurate and reliable in terms of detection of damage locations on beam-like structures.

  4. Stochastic LMP (Locational marginal price) calculation method in distribution systems to minimize loss and emission based on Shapley value and two-point estimate method

    International Nuclear Information System (INIS)

    Azad-Farsani, Ehsan; Agah, S.M.M.; Askarian-Abyaneh, Hossein; Abedi, Mehrdad; Hosseinian, S.H.

    2016-01-01

    LMP (Locational marginal price) calculation is a serious impediment in distribution operation when private DG (distributed generation) units are connected to the network. A novel policy is developed in this study to guide the distribution company (DISCO) in exerting control over the private units while power loss and greenhouse gas emissions are minimized. LMP at each DG bus is calculated according to the contribution of the DG to the reduction in loss and emission. An iterative algorithm based on the Shapley value method is proposed to allocate loss and emission reduction. The proposed algorithm will provide a robust state estimation tool for DISCOs in the next step of operation. The state estimation tool provides the decision maker with the ability to exert control over private DG units when loss and emission are minimized. Also, a stochastic approach based on the PEM (point estimate method) is employed to capture uncertainty in the market price and load demand. The proposed methodology is applied to a realistic distribution network, and the efficiency and accuracy of the method are verified. - Highlights: • Reduction of the loss and emission at the same time. • Fair allocation of loss and emission reduction. • Estimation of the system state using an iterative algorithm. • Ability of DISCOs to control DG units via the proposed policy. • Modeling the uncertainties to calculate the stochastic LMP.

  5. A METHOD TO ESTIMATE TEMPORAL INTERACTION IN A CONDITIONAL RANDOM FIELD BASED APPROACH FOR CROP RECOGNITION

    Directory of Open Access Journals (Sweden)

    P. M. A. Diaz

    2016-06-01

    Full Text Available This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of temporal interaction parameters is considered as an optimization problem, whose goal is to find the transition matrix that maximizes the CRF performance, upon a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stages. To validate the proposed approach, experiments were carried out upon a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method was able to substantially outperform estimates related to joint or conditional class transition probabilities, which rely on training samples.

  6. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has long been challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
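The DerSimonian-Laird moment estimator that this review benchmarks against can be written in a few lines. A sketch with made-up effect sizes and variances; the Paule-Mandel and REML estimators recommended above require iterative solvers and are not shown.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Method-of-moments (DerSimonian-Laird) estimate of the
    between-study variance tau^2, truncated at zero."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                        # inverse-variance weights
    ybar = np.sum(w * y) / np.sum(w)   # fixed-effect pooled mean
    Q = np.sum(w * (y - ybar) ** 2)    # Cochran's Q statistic
    k = y.size
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (k - 1)) / c)

# Five hypothetical study effect sizes with within-study variances.
y = [0.30, 0.10, 0.55, -0.05, 0.40]
v = [0.04, 0.09, 0.05, 0.11, 0.06]
tau2 = dersimonian_laird_tau2(y, v)
```

The truncation at zero is exactly the behaviour the review discusses: when Q falls below its degrees of freedom, the estimator collapses to zero between-study variance.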

  7. Graphene based plasmonic terahertz amplitude modulator operating above 100 MHz

    Energy Technology Data Exchange (ETDEWEB)

    Jessop, D. S., E-mail: dsj23@cam.ac.uk, E-mail: rd448@cam.ac.uk; Kindness, S. J.; Ren, Y.; Beere, H. E.; Ritchie, D. A.; Degl' Innocenti, R., E-mail: dsj23@cam.ac.uk, E-mail: rd448@cam.ac.uk [Cavendish Laboratory, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Xiao, L.; Braeuninger-Weimer, P.; Hofmann, S. [Department of Engineering, University of Cambridge, 9 J J Thomson Avenue, Cambridge CB3 0FA (United Kingdom); Lin, H.; Zeitler, J. A. [Department of Chemical Engineering & Biotechnology, University of Cambridge, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Ren, C. X. [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom)

    2016-04-25

    The terahertz (THz) region of the electromagnetic spectrum holds great potential in many fields of study, from spectroscopy to biomedical imaging, remote gas sensing, and high speed communication. To fully exploit this potential, fast optoelectronic devices such as amplitude and phase modulators must be developed. In this work, we present a room temperature external THz amplitude modulator based on plasmonic bow-tie antenna arrays with graphene. By applying a modulating bias to a back gate electrode, the conductivity of graphene is changed, which modifies the reflection characteristics of the incoming THz radiation. The broadband response of the device was characterized by using THz time-domain spectroscopy, and the modulation characteristics such as the modulation depth and cut-off frequency were investigated with a 2.0 THz single frequency emission quantum cascade laser. An optical modulation cut-off frequency of 105 ± 15 MHz is reported. The results agree well with a lumped element circuit model developed to describe the device.

  8. Graphene based plasmonic terahertz amplitude modulator operating above 100 MHz

    International Nuclear Information System (INIS)

    Jessop, D. S.; Kindness, S. J.; Ren, Y.; Beere, H. E.; Ritchie, D. A.; Degl'Innocenti, R.; Xiao, L.; Braeuninger-Weimer, P.; Hofmann, S.; Lin, H.; Zeitler, J. A.; Ren, C. X.

    2016-01-01

    The terahertz (THz) region of the electromagnetic spectrum holds great potential in many fields of study, from spectroscopy to biomedical imaging, remote gas sensing, and high speed communication. To fully exploit this potential, fast optoelectronic devices such as amplitude and phase modulators must be developed. In this work, we present a room temperature external THz amplitude modulator based on plasmonic bow-tie antenna arrays with graphene. By applying a modulating bias to a back gate electrode, the conductivity of graphene is changed, which modifies the reflection characteristics of the incoming THz radiation. The broadband response of the device was characterized by using THz time-domain spectroscopy, and the modulation characteristics such as the modulation depth and cut-off frequency were investigated with a 2.0 THz single frequency emission quantum cascade laser. An optical modulation cut-off frequency of 105 ± 15 MHz is reported. The results agree well with a lumped element circuit model developed to describe the device.

  9. Sum rules for the real parts of nonforward current-particle scattering amplitudes

    International Nuclear Information System (INIS)

    Abdel-Rahman, A.M.M.

    1976-01-01

    Extending previous work, using Taha's refined infinite-momentum method, new sum rules for the real parts of nonforward current-particle scattering amplitudes are derived. The sum rules are based on covariance, causality, scaling, equal-time algebra and unsubtracted dispersion relations for the amplitudes. A comparison with the corresponding light-cone approach is made, and it is shown that the light-cone sum rules would also follow from the assumptions underlying the present work.

  10. Acoustic analog computing based on a reflective metasurface with decoupled modulation of phase and amplitude

    Science.gov (United States)

    Zuo, Shu-Yu; Tian, Ye; Wei, Qi; Cheng, Ying; Liu, Xiao-Jun

    2018-03-01

    The use of metasurfaces has allowed the provision of a variety of functionalities by ultrathin structures, paving the way toward novel highly compact analog computing devices. Here, we conceptually realize analog computing using an acoustic reflective computational metasurface (RCM) that can independently manipulate the reflection phase and amplitude of an incident acoustic signal. This RCM is composed of coating unit cells and perforated panels, where the first can tune the transmission phase within the full range of 2π and the second can adjust the reflection amplitude in the range of 0-1. We show that this RCM can achieve arbitrary reflection phase and amplitude and can be used to realize a unique linear spatially invariant transfer function. Using the spatial Fourier transform (FT), an acoustic analog computing (AAC) system is proposed based on the RCM together with a focusing lens. Based on numerical simulations, we demonstrate that this AAC system can perform mathematical operations such as spatial differentiation, integration, and convolution on an incident acoustic signal. The proposed system has low complexity and reduced size because the RCM is able to individually adjust the reflection phase and amplitude and because only one block is involved in performing the spatial FT. Our work may offer a practical, efficient, and flexible approach to the design of compact devices for acoustic computing applications, signal processing, equation solving, and acoustic wave manipulations.
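The spatial differentiation that this metasurface performs in the analog domain has a simple numerical analogue: multiplying the spatial spectrum by the ik transfer function of a first-order derivative. A minimal sketch that mimics the operation, not the acoustic device itself.

```python
import numpy as np

# Differentiate f(x) = sin(x) spectrally: F{df/dx} = i k F{f}.
n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
f = np.sin(x)

# Angular wavenumbers matching numpy's FFT ordering.
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi
df = np.fft.ifft(1j * k * np.fft.fft(f)).real  # recovers cos(x)
```

The AAC system above realizes the same chain, spatial Fourier transform, transfer-function mask, inverse transform, with a focusing lens and the reflective metasurface instead of FFT calls.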

  11. Developing an objective evaluation method to estimate diabetes risk in community-based settings.

    Science.gov (United States)

    Kenya, Sonjia; He, Qing; Fullilove, Robert; Kotler, Donald P

    2011-05-01

    Exercise interventions often aim to affect abdominal obesity and glucose tolerance, two significant risk factors for type 2 diabetes. Because of limited financial and clinical resources in community and university-based environments, intervention effects are often measured with interviews or questionnaires and correlated with weight loss or body fat indicated by body bioimpedance analysis (BIA). However, self-reported assessments are subject to high levels of bias and low levels of reliability. Because obesity and body fat are correlated with diabetes at different levels in various ethnic groups, data reflecting changes in weight or fat do not necessarily indicate changes in diabetes risk. To determine how exercise interventions affect diabetes risk in community and university-based settings, improved evaluation methods are warranted. We compared a noninvasive, objective measurement technique--regional BIA--with whole-body BIA for its ability to assess abdominal obesity and predict glucose tolerance in 39 women. To determine regional BIA's utility in predicting glucose, we tested the association between the regional BIA method and blood glucose levels. Regional BIA estimates of abdominal fat area were significantly correlated (r = 0.554, P < 0.003) with fasting glucose. When waist circumference and family history of diabetes were added to abdominal fat in multiple regression models, the association with glucose increased further (r = 0.701, P < 0.001). Regional BIA estimates of abdominal fat may predict fasting glucose better than whole-body BIA as well as provide an objective assessment of changes in diabetes risk achieved through physical activity interventions in community settings.

  12. Helicity amplitudes for matter-coupled gravity

    International Nuclear Information System (INIS)

    Aldrovandi, R.; Novaes, S.F.; Spehler, D.

    1992-07-01

    The Weyl-van der Waerden spinor formalism is applied to the evaluation of helicity invariant amplitudes in the framework of linearized gravitation. The graviton couplings to spin-0, 1/2, 1, and 3/2 particles are given, and, to exhibit the reach of this method, the helicity amplitudes for the process electron + positron → photon + graviton are obtained. (author)

  13. Constructing QCD one-loop amplitudes

    International Nuclear Information System (INIS)

    Forde, D

    2008-01-01

    In the context of constructing one-loop amplitudes using a unitarity bootstrap approach we discuss a general systematic procedure for obtaining the coefficients of the scalar bubble and triangle integral functions of one-loop amplitudes. Coefficients are extracted after examining the behavior of the cut integrand as the unconstrained parameters of a specifically chosen parameterization of the cut loop momentum approach infinity. Measurements of new physics at the forthcoming experimental program at CERN's Large Hadron Collider (LHC) will require a precise understanding of processes at next-to-leading order (NLO). This places increased demands on the computation of new one-loop amplitudes. This in turn has spurred recent developments towards improved calculational techniques. Direct calculations using Feynman diagrams are in general inefficient. Developments of more efficient techniques have usually centered around unitarity techniques [1], where tree amplitudes are effectively 'glued' together to form loops. The most straightforward application of this method, in which the cut loop momentum is in D = 4, allows for the computation of 'cut-constructible' terms only, i.e., terms containing (poly)logarithms and any related constants. QCD amplitudes contain, in addition to such terms, rational pieces which cannot be derived using such cuts. These 'missing' rational parts can be extracted using cut loop momenta in D = 4 - 2ε. The greater difficulty of such calculations has restricted the application of this approach, although recent developments [3, 4] have provided new promise for this technique. Recently the application of on-shell recursion relations [5] to obtaining the 'missing' rational parts of one-loop processes [6] has provided an alternative very promising solution to this problem. In combination with unitarity methods an 'on-shell bootstrap' approach provides an efficient technique for computing complete one-loop QCD amplitudes [7].

  14. Application of modified homotopy perturbation method and amplitude frequency formulation to strongly nonlinear oscillators

    Directory of Open Access Journals (Sweden)

    Seyd Ghasem Enayati

    2017-01-01

    Full Text Available In this paper, two powerful analytical methods, the modified homotopy perturbation method and the amplitude frequency formulation, called MHPM and AFF respectively, are introduced to derive approximate solutions of a system of ordinary differential equations appearing in mechanical applications. These methods convert a difficult problem into a simple one, which can be easily handled. The obtained solutions are compared with the numerical fourth-order Runge-Kutta method to show the applicability and accuracy of both MHPM and AFF in solving this sample problem. The results attained in this paper confirm the idea that MHPM and AFF are powerful mathematical tools that can be applied to linear and nonlinear problems.
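    As a worked illustration of the amplitude-frequency idea (the abstract does not specify the oscillator system, so the classical Duffing equation is assumed here as a hypothetical example), consider

```latex
\ddot{u} + u + \varepsilon u^{3} = 0, \qquad u(0) = A, \quad \dot{u}(0) = 0 .
```

    Substituting the trial solution $u = A\cos\omega t$, using $\cos^{3}\omega t = \tfrac{3}{4}\cos\omega t + \tfrac{1}{4}\cos 3\omega t$, and balancing the fundamental harmonic gives

```latex
\left(-\omega^{2} + 1\right) A + \tfrac{3}{4}\,\varepsilon A^{3} = 0
\quad\Longrightarrow\quad
\omega(A) = \sqrt{1 + \tfrac{3}{4}\,\varepsilon A^{2}} ,
```

    an amplitude-dependent frequency, correct to first order in $\varepsilon$, that can be checked directly against a fourth-order Runge-Kutta reference of the kind used in the paper.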

  15. The probability estimate of the defects of the asynchronous motors based on the complex method of diagnostics

    Science.gov (United States)

    Zhukovskiy, Yu L.; Korolev, N. A.; Babanova, I. S.; Boikov, A. V.

    2017-10-01

    This article is devoted to the development of a method for estimating the probability of failure of an asynchronous motor as part of an electric drive with a frequency converter. The proposed method is based on a comprehensive diagnostic method using vibration and electrical characteristics that takes into account the quality of the supply network and the operating conditions. The developed diagnostic system increases the accuracy and quality of diagnoses by determining the probability of failure-free operation of the electromechanical equipment when parameters deviate from the norm. This system uses artificial neural networks (ANNs). The outputs of the system for estimating the technical condition are probability diagrams of the technical state and a quantitative evaluation of the defects of the asynchronous motor and its components.

  16. Algebraic evaluation of rational polynomials in one-loop amplitudes

    International Nuclear Information System (INIS)

    Binoth, Thomas; Guillet, Jean-Philippe; Heinrich, Gudrun

    2007-01-01

    One-loop amplitudes are to a large extent determined by their unitarity cuts in four dimensions. We show that the remaining rational terms can be obtained from the ultraviolet behaviour of the amplitude, and determine universal form factors for these rational parts by applying reduction techniques to the Feynman diagrammatic representation of the amplitude. The method is valid for massless and massive internal particles. We illustrate this method by evaluating the rational terms of the one-loop amplitudes for gg→H, γγ→γγ, gg→gg,γγ→ggg and γγ→γγγγ

  17. UD-DKF-based Parameters on-line Identification Method and AEKF-Based SOC Estimation Strategy of Lithium-ion Battery

    Directory of Open Access Journals (Sweden)

    Xuanju Dang

    2014-09-01

    Full Text Available State of charge (SOC is a significant parameter for the Battery Management System (BMS. Accurate estimation of the SOC can not only keep the SOC within a reasonable working range, but also prevent the battery from being over-charged or deeply discharged, extending the lifespan of the battery. In this paper, the third-order RC equivalent circuit model is adopted to describe cell characteristics and the dual Kalman filter (DKF is used to identify the model parameters of the battery online. In order to avoid rounding errors in the calculation causing the estimation error covariance matrix to lose its non-negative definiteness, which results in filter divergence, the UD decomposition method is applied to both the time and state updates to enhance the stability of the algorithm, reduce the computational complexity and achieve high identification accuracy. Based on the obtained model parameters, an Adaptive Extended Kalman Filter (AEKF is introduced to estimate the SOC of the battery online. The simulation and experimental results demonstrate that the established third-order RC equivalent circuit model is effective, and the SOC estimation has a higher precision.
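The UD decomposition step can be illustrated on its own: factoring a covariance as P = U D U^T with U unit upper triangular and D diagonal, which is what lets a filter propagate U and D instead of P and so preserve non-negative definiteness under rounding. A sketch only; the function name is ours, not the paper's.

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as P = U @ diag(d) @ U.T,
    with U unit upper triangular (Bierman-style UD factorization)."""
    P = np.array(P, dtype=float)   # work on a copy
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, 0, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Deflate the remaining leading submatrix.
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    d[0] = P[0, 0]
    return U, d

P = np.array([[4.0, 2.0, 0.6],
              [2.0, 3.0, 0.4],
              [0.6, 0.4, 1.0]])
U, d = ud_factorize(P)
```

Reconstructing U @ diag(d) @ U.T recovers P exactly, and all the filter arithmetic can then be carried out on the factors.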

  18. Jump phenomena. [large amplitude responses of nonlinear systems

    Science.gov (United States)

    Reiss, E. L.

    1980-01-01

    The paper considers jump phenomena composed of large amplitude responses of nonlinear systems caused by small amplitude disturbances. Physical problems where large jumps in the solution amplitude are important features of the response are described, including snap buckling of elastic shells, chemical reactions leading to combustion and explosion, and long-term climatic changes of the earth's atmosphere. A new method of rational functions was then developed which consists of representing the solutions of the jump problems as rational functions of the small disturbance parameter; this method can solve jump problems explicitly.

  19. A New Approach to Eliminate High Amplitude Artifacts in EEG Signals

    Directory of Open Access Journals (Sweden)

    Ana Rita Teixeira

    2016-09-01

    Full Text Available High amplitude artifacts represent a problem during EEG recordings in neuroscience research. Taking this into account, this paper proposes a method to identify high amplitude artifacts with no requirement for visual inspection, an electrooculogram (EOG reference channel or user-assigned parameters. A potential solution to high amplitude artifact (HAA elimination is presented based on blind source separation methods. The assumption underlying the selection of components is that HAA are independent of the EEG signal and different HAA can be generated during the EEG recordings. Therefore, the number of components related to HAA is variable and depends on the processed signal, which means that the method is adaptable to the input signal. The results show that, when the HAA are removed, the delta band is distorted but all the other frequency bands are preserved. A case study with EEG signals recorded while participants performed the Halstead Category Test (HCT is presented. After HAA removal, data analysis revealed, as expected, an error-related frontal ERP wave: the feedback-related negativity (FRN in response to feedback stimuli.
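The abstract does not name its blind source separation algorithm, so the sketch below substitutes FastICA from scikit-learn on synthetic two-channel data; the automatic flagging of HAA components by kurtosis is likewise our stand-in for the paper's unspecified selection criterion.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
brain = np.sin(2 * np.pi * 10 * t)        # ongoing EEG-like rhythm
artifact = np.zeros_like(t)
artifact[500:550] = 40.0                  # high-amplitude burst (e.g. movement)

# Two hypothetical channels mixing the two sources.
X = np.column_stack([brain + 0.8 * artifact, 0.5 * brain + artifact])

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                  # estimated independent components

# Flag HAA components by their excess kurtosis (bursty sources are strongly
# super-Gaussian), zero them out, and back-project -- no EOG channel needed.
kurt = np.mean(S ** 4, axis=0) / np.mean(S ** 2, axis=0) ** 2
S[:, kurt > 5.0] = 0.0
X_clean = ica.inverse_transform(S)
```

After back-projection the burst is suppressed while the oscillatory component survives, mirroring the adaptive, parameter-free behaviour claimed above.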

  20. Benchmarking Foot Trajectory Estimation Methods for Mobile Gait Analysis

    Directory of Open Access Journals (Sweden)

    Julius Hannink

    2017-08-01

    Full Text Available Mobile gait analysis systems based on inertial sensing on the shoe are applied in a wide range of applications. Especially for medical applications, they can give new insights into motor impairment in, e.g., neurodegenerative disease and help objectify patient assessment. One key component in these systems is the reconstruction of the foot trajectories from inertial data. In literature, various methods for this task have been proposed. However, performance is evaluated on a variety of datasets due to the lack of large, generally accepted benchmark datasets. This hinders a fair comparison of methods. In this work, we implement three orientation estimation and three double integration schemes for use in a foot trajectory estimation pipeline. All methods are drawn from literature and evaluated against a marker-based motion capture reference. We provide a fair comparison on the same dataset consisting of 735 strides from 16 healthy subjects. As a result, the implemented methods are ranked and we identify the most suitable processing pipeline for foot trajectory estimation in the context of mobile gait analysis.
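A common double-integration scheme in the pipelines benchmarked here combines dead reckoning with a zero-velocity update (ZUPT) at the end of each stride, when the foot is flat. A simplified one-dimensional sketch under stated assumptions: gravity is already removed and orientation estimation is omitted.

```python
import numpy as np

def integrate_stride(acc, fs):
    """Double-integrate stride acceleration (m/s^2) with linear dedrifting:
    velocity is forced back to zero at the stride end (ZUPT assumption)."""
    dt = 1.0 / fs
    vel = np.cumsum(acc) * dt                      # forward Euler integration
    vel -= np.linspace(0.0, vel[-1], vel.size)     # remove linear drift
    pos = np.cumsum(vel) * dt
    return vel, pos

fs = 100.0
t = np.arange(0.0, 1.0, 1.0 / fs)
# Synthetic forward acceleration: accelerate, then decelerate symmetrically.
acc = np.where(t < 0.5, 2.0, -2.0)
vel, pos = integrate_stride(acc, fs)               # pos[-1] ~ stride length
```

The linear dedrifting step is one of several drift-correction choices compared in the benchmark; the point here is only the structure of the pipeline, not a specific published variant.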

  1. A New Method to Estimate Changes in Glacier Surface Elevation Based on Polynomial Fitting of Sparse ICESat—GLAS Footprints

    Directory of Open Access Journals (Sweden)

    Tianjin Huang

    2017-08-01

    Full Text Available We present in this paper a polynomial fitting method applicable to segments of footprints measured by the Geoscience Laser Altimeter System (GLAS to estimate glacier thickness change. Our modification makes the method applicable to complex topography, such as a large mountain glacier. After a full analysis of the planar fitting method to characterize errors of estimates due to complex topography, we developed an improved fitting method by adjusting a binary polynomial surface to local topography. The improved method and the planar fitting method were tested on the accumulation areas of the Naimona’nyi glacier and Yanong glacier on along-track facets with lengths of 1000 m, 1500 m, 2000 m, and 2500 m, respectively. The results show that the improved method gives more reliable estimates of changes in elevation than planar fitting. The improved method was also tested on Guliya glacier with a large and relatively flat area and the Chasku Muba glacier with very complex topography. The results in these test sites demonstrate that the improved method can give estimates of glacier thickness change on glaciers with a large area and a complex topography. Additionally, the improved method based on GLAS Data and Shuttle Radar Topography Mission-Digital Elevation Model (SRTM-DEM can give estimates of glacier thickness change from 2000 to 2008/2009, since it takes the 2000 SRTM-DEM as a reference, which is a longer period than 2004 to 2008/2009, when using the GLAS data only and the planar fitting method.
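The binary (quadratic) polynomial surface fit to footprint elevations described above reduces to an ordinary least-squares problem. A sketch on synthetic footprints; coordinates, coefficients, and noise level are invented for illustration.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef, A @ coef

rng = np.random.default_rng(0)
x = rng.uniform(-500, 500, 200)            # along-track coordinates (m)
y = rng.uniform(-500, 500, 200)
true = 5000 + 0.02 * x - 0.05 * y + 1e-5 * x**2   # synthetic local topography
z = true + rng.normal(0.0, 0.2, x.size)    # footprint elevations with noise

coef, z_fit = fit_quadratic_surface(x, y, z)
residual = z - z_fit   # elevation-change signal relative to the fitted surface
```

Fitting the local surface rather than a plane is exactly what lets the improved method absorb curvature in complex topography, so that residuals reflect thickness change rather than terrain shape.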

  2. Research on the method of information system risk state estimation based on clustering particle filter

    Directory of Open Access Journals (Sweden)

    Cui Jia

    2017-05-01

    Full Text Available With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as the core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influencing weights of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter. By clustering all particles and operating on the centroids as representatives, the computational load is reduced. Empirical results indicate that the method can reasonably capture the relations of mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
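The k-means reduction step can be sketched as follows: cluster the particle cloud, let each centroid inherit its members' summed weight, and run subsequent update steps on the centroids. Shown here with SciPy's kmeans2 on a synthetic two-dimensional risk state, as a minimal stand-in for the paper's full filter.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
# 1000 weighted particles approximating a 2-D risk-state posterior.
particles = rng.normal([0.4, 0.6], 0.1, size=(1000, 2))
weights = rng.uniform(size=1000)
weights /= weights.sum()

# Reduce to k centroids; each centroid carries its members' summed weight.
k = 20
centroids, label = kmeans2(particles, k, seed=0, minit="++")
cluster_w = np.bincount(label, weights=weights, minlength=k)

# Subsequent filter steps operate on k representatives instead of 1000.
est = (centroids * cluster_w[:, None]).sum(axis=0)  # weighted state estimate
```

The weighted estimate from 20 centroids closely matches the full-cloud estimate, which is the trade-off the clustering step exploits to cut the computational load.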

  3. Research on the method of information system risk state estimation based on clustering particle filter

    Science.gov (United States)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as the core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influencing weights of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into the particle filter. By clustering all particles and operating on the centroids as representatives, the computational load is reduced. Empirical results indicate that the method can reasonably capture the relations of mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.

  4. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    Science.gov (United States)

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
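The water-balance residual is the simplest ET estimator in this review, computed directly from precipitation and discharge depths with net storage change assumed negligible at annual scale. A trivial sketch (values are hypothetical):

```python
def basin_et_mm(precip_mm, discharge_mm, storage_change_mm=0.0):
    """Annual basin-scale ET as the water-balance residual: ET = P - Q - dS.
    At annual time scales, dS is often assumed negligible."""
    return precip_mm - discharge_mm - storage_change_mm

# Hypothetical basin: 900 mm precipitation, 260 mm runoff depth.
et = basin_et_mm(900.0, 260.0)  # 640.0 mm of ET
```

ET estimated this way is what the review describes as the validation target for the remote-sensing and soil-moisture-balance approaches.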

  5. Practical state of health estimation of power batteries based on Delphi method and grey relational grade analysis

    Science.gov (United States)

    Sun, Bingxiang; Jiang, Jiuchun; Zheng, Fangdan; Zhao, Wei; Liaw, Bor Yann; Ruan, Haijun; Han, Zhiqiang; Zhang, Weige

    2015-05-01

    The state of health (SOH) estimation is very critical to the battery management system to ensure the safety and reliability of EV battery operation. Here, we used a unique hybrid approach to enable complex SOH estimations. The approach hybridizes the Delphi method, known for its simplicity and effectiveness in applying weighting factors for complicated decision-making, and the grey relational grade analysis (GRGA) for multi-factor optimization. Six critical factors were used in the consideration for SOH estimation: peak power at 30% state-of-charge (SOC), capacity, the voltage drop at 30% SOC with a C/3 pulse, the temperature rises at the end of discharge and charge at 1C, respectively, and the open circuit voltage at the end of charge after 1-h rest. The weighting of these factors for SOH estimation was scored by the 'experts' in the Delphi method, indicating the influencing power of each factor on SOH. The parameters for these factors expressing the battery state variations are optimized by GRGA. Eight battery cells were used to illustrate the principle and methodology to estimate the SOH by this hybrid approach, and the results were compared with those based on capacity and power capability. The contrast among different SOH estimations is discussed.
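The grey relational grade computation can be sketched for a single cell: per-factor grey relational coefficients with distinguishing coefficient zeta, averaged using the Delphi weights. Here uniform weights and invented normalised factor scores are used, since the paper's values are not given.

```python
import numpy as np

def grey_relational_grade(ref, seq, weights=None, zeta=0.5):
    """Grey relational grade of factor sequence `seq` against reference `ref`
    (both assumed pre-normalised); zeta is the distinguishing coefficient."""
    ref, seq = np.asarray(ref, float), np.asarray(seq, float)
    delta = np.abs(ref - seq)
    dmax = delta.max()
    if dmax == 0.0:
        return 1.0                      # identical sequences: perfect grade
    gamma = (delta.min() + zeta * dmax) / (delta + zeta * dmax)
    w = (np.full(ref.size, 1.0 / ref.size)
         if weights is None else np.asarray(weights, float))
    return float(np.sum(w * gamma))

# Hypothetical normalised scores for six SOH factors of one aged cell,
# compared against a fresh-cell reference of all ones.
fresh = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
aged = [0.82, 0.75, 0.90, 0.70, 0.85, 0.88]
grade = grey_relational_grade(fresh, aged)  # closer to 1 means healthier
```

In the hybrid approach above, the uniform weights would be replaced by the expert-scored Delphi weights, so the grade reflects each factor's judged influence on SOH.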

  6. Cosmological constraints on the amplitude of relic gravitational waves

    International Nuclear Information System (INIS)

    Novosyadlij, B.; Apunevich, S.

    2005-01-01

The evolution of the amplitude of relic gravitational waves (RGW) generated in the early Universe has been analyzed. An analytical approximation is presented for the angular power spectrum of cosmic microwave background anisotropies caused by gravitational waves through the Sachs-Wolfe effect. The estimate of the most probable value of this amplitude was obtained on the basis of observational data on cosmic microwave background anisotropies from the COBE, WMAP and BOOMERanG experiments, along with large-scale structure observations.

  7. The development of special equipment amplitude detection instrument based on DSP

    International Nuclear Information System (INIS)

    Dai Sidan; Chen Ligang; Lan Peng; Wang Huiting; Zhang Liangxu; Wang Lin

    2014-01-01

The development and industrial application of special equipment play an important role in the development of nuclear energy. The equipment development process requires many tests; amplitude detection is a key one, as it can characterize a device's electromechanical and physical properties. In industrial application, amplitude detection can effectively reflect the current operational status of the equipment and also supports a degree of fault diagnosis, identifying problems in a timely manner. The main development target of this article is amplitude detection for special equipment. The article describes the development of a special equipment amplitude detection instrument. The instrument uses a digital signal processor (DSP) as the central processing unit and uses DSP + CPLD + high-speed AD technology to build a complete high-precision signal acquisition, analysis, and processing system, with a rechargeable lithium battery as the power supply. It can perform online monitoring of the special equipment's amplitude and speed parameters by acquiring and analysing the tachometer signal, with local display on an LCD screen. (authors)

  8. Direct phase derivative estimation using difference equation modeling in holographic interferometry

    International Nuclear Information System (INIS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2014-01-01

    A new method is proposed for the direct phase derivative estimation from a single spatial frequency modulated carrier fringe pattern in holographic interferometry. The fringe intensity in a given row/column is modeled as a difference equation of intensity with spatially varying coefficients. These coefficients carry the information on the phase derivative. Consequently, the accurate estimation of the coefficients is obtained by approximating the coefficients as a linear combination of the predefined linearly independent basis functions. Unlike Fourier transform based fringe analysis, the method does not call for performing the filtering of the Fourier spectrum of fringe intensity. Moreover, the estimation of the carrier frequency is performed by applying the proposed method to a reference interferogram. The performance of the proposed method is insensitive to the fringe amplitude modulation and is validated with the simulation results. (paper)

  9. Analytic continuation of dual Feynman amplitudes

    International Nuclear Information System (INIS)

    Bleher, P.M.

    1981-01-01

A notion of dual Feynman amplitude is introduced and a theorem on the existence of an analytic continuation of this amplitude from the convergence domain to the whole complex plane is proved. The case under consideration corresponds to massless power propagators, and the analytic continuation is constructed in the propagator powers. The poles of the analytic continuation and the singular set of external momenta are found explicitly. The proof of the theorem on the existence of the analytic continuation is based on the introduction of the α-representation for dual Feynman amplitudes. In the proof, the so-called ''trees formula'' and ''trees-with-cycles formula'' are established, which are dual in formulation to the trees and 2-trees formulae for usual Feynman amplitudes. (Auth.)

  11. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  12. Improving cluster-based missing value estimation of DNA microarray data.

    Science.gov (United States)

    Brás, Lígia P; Menezes, José C

    2007-06-01

    We present a modification of the weighted K-nearest neighbours imputation method (KNNimpute) for missing values (MVs) estimation in microarray data based on the reuse of estimated data. The method was called iterative KNN imputation (IKNNimpute) as the estimation is performed iteratively using the recently estimated values. The estimation efficiency of IKNNimpute was assessed under different conditions (data type, fraction and structure of missing data) by the normalized root mean squared error (NRMSE) and the correlation coefficients between estimated and true values, and compared with that of other cluster-based estimation methods (KNNimpute and sequential KNN). We further investigated the influence of imputation on the detection of differentially expressed genes using SAM by examining the differentially expressed genes that are lost after MV estimation. The performance measures give consistent results, indicating that the iterative procedure of IKNNimpute can enhance the prediction ability of cluster-based methods in the presence of high missing rates, in non-time series experiments and in data sets comprising both time series and non-time series data, because the information of the genes having MVs is used more efficiently and the iterative procedure allows refining the MV estimates. More importantly, IKNN has a smaller detrimental effect on the detection of differentially expressed genes.
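The iterative reuse of estimated values described above can be sketched as follows; K, the iteration count, the toy matrix, and the unweighted neighbour average are illustrative assumptions, not the authors' exact weighting scheme:

```python
import numpy as np

# Iterative KNN imputation in the spirit of IKNNimpute: start from
# column-mean fills, then repeatedly re-estimate each missing entry from its
# K nearest rows using the current (already imputed) matrix.
def iknn_impute(x, k=2, n_iter=3):
    x = x.astype(float)
    missing = np.isnan(x)
    filled = x.copy()
    col_means = np.nanmean(x, axis=0)
    for j in range(x.shape[1]):
        filled[missing[:, j], j] = col_means[j]       # initial crude fill
    for _ in range(n_iter):
        for i in np.where(missing.any(axis=1))[0]:
            dists = np.linalg.norm(filled - filled[i], axis=1)
            dists[i] = np.inf                          # exclude the row itself
            neighbours = np.argsort(dists)[:k]
            for j in np.where(missing[i])[0]:
                filled[i, j] = filled[neighbours, j].mean()
    return filled

data = np.array([[1.0, 2.0, 3.0],
                 [1.1, np.nan, 3.1],
                 [5.0, 6.0, 7.0],
                 [5.2, 6.1, np.nan]])
imputed = iknn_impute(data)
print(imputed)
```

Observed entries are never touched; only the missing positions are refined on each pass.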

  13. Scattering amplitudes in open superstring theory

    Energy Technology Data Exchange (ETDEWEB)

    Schlotterer, Oliver

    2011-07-15

The present thesis deals with scattering amplitudes in theories of open superstrings. In particular, two different formalisms for the treatment of superstrings are introduced and applied to the calculation of tree-level amplitudes: the Ramond-Neveu-Schwarz (RNS) and the Pure Spinor (PS) formalism. The RNS approach proves flexible in describing compactifications of the initially ten flat space-time dimensions down to four. We solve the technical problems that arise from the underlying interacting world-sheet theory with conformal symmetry. This is used to calculate phenomenologically relevant scattering amplitudes of gluons and quarks as well as production rates of massive harmonic vibrations, which were already identified as virtual exchange particles at the massless level. In the case of a low string mass scale in the range of a few TeV, the string-specific signatures in parton collisions could be observed in the near future at the LHC experiment at CERN and interpreted as the first experimental evidence for string theory. Those string effects occur universally for a wide class of string ground states and internal geometries and represent an elegant way to avoid the so-called landscape problem of string theory. A further topic of this thesis is based on the PS formalism, which allows a manifestly supersymmetric treatment of scattering amplitudes in ten space-time dimensions with sixteen supercharges. We introduce a family of superfields which occur in massless amplitudes of the open string and can naturally be identified with diagrams of trivalent vertices. Thereby we obtain not only a compact superspace representation of the n-point field-theory amplitude but can also write the complete superstring n-point amplitude as a minimal linear combination of field-theory partial amplitudes and hypergeometric functions. The latter carry the string effects and are analyzed from different perspectives, above all

  15. Fast Estimation Method of Space-Time Two-Dimensional Positioning Parameters Based on Hadamard Product

    Directory of Open Access Journals (Sweden)

    Haiwen Li

    2018-01-01

Full Text Available The estimation speed of positioning parameters determines the effectiveness of the positioning system. The time of arrival (TOA) and direction of arrival (DOA) parameters can be estimated by the space-time two-dimensional multiple signal classification (2D-MUSIC) algorithm for an array antenna. However, this algorithm requires considerable time to complete the two-dimensional pseudo-spectral peak search, which makes it difficult to apply in practice. To address this problem, a fast estimation method of space-time two-dimensional positioning parameters based on the Hadamard product is proposed for orthogonal frequency division multiplexing (OFDM) systems, and the Cramér-Rao bound (CRB) is also presented. Firstly, according to the channel frequency domain response vector of each array, the channel frequency domain estimation vector is constructed using the Hadamard product form containing location information. Then, the autocorrelation matrix of the channel response vector for the extended array element in the frequency domain and the noise subspace are calculated successively. Finally, by combining the closed-form solution and parameter pairing, the fast joint estimation of time delay and arrival direction is accomplished. The theoretical analysis and simulation results show that the proposed algorithm significantly reduces the computational complexity while achieving estimation accuracy that is not only better than the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm and the 2D matrix pencil (MP) algorithm but also close to that of the 2D-MUSIC algorithm. Moreover, the proposed algorithm has a certain adaptability to multipath environments and effectively improves the ability to acquire location parameters quickly.
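As a hedged illustration of the noise-subspace peak search that 2D-MUSIC generalizes (not the authors' Hadamard-product algorithm), the sketch below runs a one-dimensional MUSIC direction search on a simulated uniform linear array; the array geometry, SNR, and single-source assumption are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
m, snapshots, true_deg = 8, 200, 20.0   # sensors, snapshots, true DOA

def steering(theta_deg):
    # Half-wavelength-spaced uniform linear array response.
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((m, snapshots))
               + 1j * rng.standard_normal((m, snapshots)))
x = np.outer(steering(true_deg), s) + noise      # sensor snapshots
r = x @ x.conj().T / snapshots                   # sample covariance
eigvals, eigvecs = np.linalg.eigh(r)             # ascending eigenvalues
en = eigvecs[:, :-1]                             # noise subspace (one source)
grid = np.arange(-90.0, 90.0, 0.5)
spectrum = [1.0 / np.linalg.norm(en.conj().T @ steering(t)) ** 2 for t in grid]
peak = grid[int(np.argmax(spectrum))]
print(peak)                                      # peak near 20 degrees
```

The 2D variant searches the same kind of pseudo-spectrum jointly over delay and angle, which is exactly the search the paper replaces with a closed-form solution.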

  16. Inventory-based estimates of forest biomass carbon stocks in China: A comparison of three methods

    Science.gov (United States)

    Zhaodi Guo; Jingyun Fang; Yude Pan; Richard. Birdsey

    2010-01-01

    Several studies have reported different estimates for forest biomass carbon (C) stocks in China. The discrepancy among these estimates may be largely attributed to the methods used. In this study, we used three methods [mean biomass density method (MBM), mean ratio method (MRM), and continuous biomass expansion factor (BEF) method (abbreviated as CBM)] applied to...

  17. A multi-timescale estimator for battery state of charge and capacity dual estimation based on an online identified model

    International Nuclear Information System (INIS)

    Wei, Zhongbao; Zhao, Jiyun; Ji, Dongxu; Tseng, King Jet

    2017-01-01

    Highlights: •SOC and capacity are dually estimated with online adapted battery model. •Model identification and state dual estimate are fully decoupled. •Multiple timescales are used to improve estimation accuracy and stability. •The proposed method is verified with lab-scale experiments. •The proposed method is applicable to different battery chemistries. -- Abstract: Reliable online estimation of state of charge (SOC) and capacity is critically important for the battery management system (BMS). This paper presents a multi-timescale method for dual estimation of SOC and capacity with an online identified battery model. The model parameter estimator and the dual estimator are fully decoupled and executed with different timescales to improve the model accuracy and stability. Specifically, the model parameters are online adapted with the vector-type recursive least squares (VRLS) to address the different variation rates of them. Based on the online adapted battery model, the Kalman filter (KF)-based SOC estimator and RLS-based capacity estimator are formulated and integrated in the form of dual estimation. Experimental results suggest that the proposed method estimates the model parameters, SOC, and capacity in real time with fast convergence and high accuracy. Experiments on both lithium-ion battery and vanadium redox flow battery (VRB) verify the generality of the proposed method on multiple battery chemistries. The proposed method is also compared with other existing methods on the computational cost to reveal its superiority for practical application.
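A minimal sketch of the recursive least squares (RLS) recursion of the kind used here for online parameter adaptation, applied to a generic linear model with invented data; the paper's battery model, forgetting structure, and timescale separation are not reproduced:

```python
import numpy as np

def rls_update(theta, p, phi, y, lam=0.99):
    """One recursive least squares step with forgetting factor lam."""
    k = p @ phi / (lam + phi @ p @ phi)      # gain vector
    theta = theta + k * (y - phi @ theta)    # correct estimate with residual
    p = (p - np.outer(k, phi @ p)) / lam     # update inverse-correlation matrix
    return theta, p

true_theta = np.array([2.0, -0.5])           # parameters to identify
theta, p = np.zeros(2), np.eye(2) * 1000.0   # vague initial guess
rng = np.random.default_rng(1)
for _ in range(200):
    phi = rng.standard_normal(2)             # regressor sample
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, p = rls_update(theta, p, phi, y)
print(np.round(theta, 2))                    # converges close to [2.0, -0.5]
```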

  18. A Comprehensive Estimation of the Economic Effects of Meteorological Services Based on the Input-Output Method

    Directory of Open Access Journals (Sweden)

    Xianhua Wu

    2014-01-01

Full Text Available Concentrating on the consuming coefficient, the partition coefficient, and the Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect) and complete economic effects. Subsequently, quantitative estimates are obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that noticeable economic losses are averted by the preventive strategies developed from both the meteorological information and the internal relevance (interdependency) of the industrial economic system. Another finding is that the ratio of input to the complete economic effect of meteorological services is about 1:108.27 to 1:183.06, remarkably different from a previous estimate based on the Delphi method (1:30 to 1:51). In particular, the economic effects of meteorological services are higher for nontraditional users in manufacturing, wholesale and retail trade, the services sector, tourism, culture and art, and lower for traditional users in agriculture, forestry, livestock, fishery, and construction.
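The Leontief inverse machinery named above can be sketched as follows; the 3-sector coefficient matrix and final-demand vector are invented for illustration, not taken from the Jiangxi data:

```python
import numpy as np

# Input-output accounting: with technical-coefficient matrix A, the Leontief
# inverse (I - A)^-1 converts a final-demand vector into the total
# (direct + indirect) sectoral output required to satisfy it.
a = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.05, 0.20]])            # made-up consuming coefficients
final_demand = np.array([100.0, 80.0, 50.0])  # made-up final demand
leontief_inverse = np.linalg.inv(np.eye(3) - a)
total_output = leontief_inverse @ final_demand
print(np.round(total_output, 1))
```

The gap between total output and final demand is the indirect effect propagated through inter-industry linkages, which is what the complete-effect estimate captures.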

  19. Tip radius preservation for high resolution imaging in amplitude modulation atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Jorge R., E-mail: jorge.rr@cea.cu [Instituto de Ciencia de Materiales de Madrid, Sor Juana Inés de la Cruz 3, Canto Blanco, 28049 Madrid, España (Spain)

    2014-07-28

The acquisition of high resolution images in atomic force microscopy (AFM) is correlated with the cantilever's tip shape, size, and imaging conditions. In this work, relative tip wear is quantified based on the evolution of a direct experimental observable in amplitude modulation atomic force microscopy, i.e., the critical amplitude. We further show that the scanning parameters required to guarantee a maximum compressive stress lower than the yield/fracture stress of the tip can be estimated via experimental observables. On both counts, the optimized parameters to acquire AFM images while preserving the tip are discussed. The results are validated experimentally by employing IgG antibodies as a model system.

  20. TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method

    International Nuclear Information System (INIS)

    Dubi, A.

    1985-01-01

1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then to rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r^2 singularity in the un-collided flux estimator (next event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and one energy group only is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering

  1. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

This paper discusses the use of point based interpolation to estimate the value of temperature at unallocated meteorological stations in Peninsular Malaysia using data for the year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable to be used as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable to estimate the temperature for the rest of the months.
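Of the two methods compared, IDW is the simpler; a hedged sketch follows, with station coordinates, temperatures, and the power parameter invented for illustration:

```python
import numpy as np

# Inverse distance weighting: estimate the value at a target point as a
# weighted mean of station values, with weights 1/d^power.
def idw(stations, temps, target, power=2.0):
    d = np.linalg.norm(stations - target, axis=1)
    if np.any(d == 0):                    # target coincides with a station
        return float(temps[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * temps) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # made-up coords
temps = np.array([26.0, 28.0, 27.0])                       # made-up temps, degC
print(round(idw(stations, temps, np.array([0.5, 0.5])), 2))  # 27.0
```

Here the target is equidistant from all three stations, so the estimate reduces to their plain mean; RBF methods instead fit a smooth basis-function surface through the station values.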

  2. The five-gluon amplitude and one-loop integrals

    International Nuclear Information System (INIS)

    Bern, Z.; Dixon, L.; Kosower, D.A.

    1992-12-01

We review the conventional field theory description of the string-motivated technique. This technique is applied to the one-loop five-gluon amplitude. To evaluate the amplitude, a general method for computing dimensionally regulated one-loop integrals is outlined, including results for one-loop integrals required for the pentagon diagram and beyond. Finally, two five-gluon helicity amplitudes are given.

  3. Methods to estimate the genetic risk

    International Nuclear Information System (INIS)

    Ehling, U.H.

    1989-01-01

The estimation of the radiation-induced genetic risk to human populations is based on the extrapolation of results from animal experiments. Radiation-induced mutations are stochastic events. The probability of the event depends on the dose; the degree of the damage does not. There are two main approaches to making genetic risk estimates. One of these, termed the direct method, expresses risk in terms of expected frequencies of genetic changes induced per unit dose. The other, referred to as the doubling dose method or the indirect method, expresses risk in relation to the observed incidence of genetic disorders now present in man. The advantage of the indirect method is that not only Mendelian mutations but also other types of genetic disorders can be quantified. The disadvantages of the method are the uncertainties in determining the current incidence of genetic disorders in humans and, in addition, in estimating the genetic component of congenital anomalies, anomalies expressed later, and constitutional and degenerative diseases. Using the direct method we estimated that 20-50 dominant radiation-induced mutations would be expected in 19 000 offspring born to parents exposed in Hiroshima and Nagasaki, but only a small proportion of these mutants would have been detected with the techniques used for the population study. These methods were used to predict the genetic damage from the fallout of the Chernobyl reactor accident over Southern Germany. The lack of knowledge of the interaction of chemicals with ionizing radiation and the discrepancy between the high safety standards for radiation protection and the low level of knowledge for the toxicological evaluation of chemical mutagens will be emphasized. (author)
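The doubling dose (indirect) method described above amounts to a simple scaling, sketched here with purely illustrative numbers; the 1 Gy doubling dose and the baseline incidence are assumptions for the example, not figures from this abstract:

```python
# Doubling-dose scaling: the expected radiation-induced increment in a class
# of genetic disorders is the baseline incidence scaled by dose / doubling dose.
def doubling_dose_risk(baseline_per_million, dose_gy, doubling_dose_gy=1.0):
    """Expected extra cases per million births for a given parental dose."""
    return baseline_per_million * (dose_gy / doubling_dose_gy)

# e.g. an assumed 10,000 per million baseline incidence and a 0.01 Gy dose:
print(doubling_dose_risk(10_000, 0.01))  # 100.0 extra cases per million
```

The direct method skips the baseline entirely and multiplies a per-unit-dose mutation rate from animal data by the dose, which is why the two approaches carry different uncertainties.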

  4. A method for estimating age of medieval sub-adults from infancy to adulthood based on long bone length

    DEFF Research Database (Denmark)

    Primeau, Charlotte; Friis, Laila Saidane; Sejrsen, Birgitte

    2016-01-01

OBJECTIVES: To develop a series of regression equations for estimating age from length of long bones for archaeological sub-adults when aging from dental development cannot be performed, and to compare derived ages when using these regression equations and two other methods. MATERIAL AND METHODS: A total of 183 skeletal sub-adults from the Danish medieval period were aged from radiographic images. Linear regression formulae were then produced for individual bones. Age was then estimated from the femur length using three different methods: equations developed in this study, data based ... as later than the medieval period, although this would require further testing. The quadratic equations are suggested to yield more accurate ages than simple linear regression equations. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.

  5. Climate reconstruction analysis using coexistence likelihood estimation (CRACLE): a method for the estimation of climate using vegetation.

    Science.gov (United States)

    Harbert, Robert S; Nixon, Kevin C

    2015-08-01

    • Plant distributions have long been understood to be correlated with the environmental conditions to which species are adapted. Climate is one of the major components driving species distributions. Therefore, it is expected that the plants coexisting in a community are reflective of the local environment, particularly climate.• Presented here is a method for the estimation of climate from local plant species coexistence data. The method, Climate Reconstruction Analysis using Coexistence Likelihood Estimation (CRACLE), is a likelihood-based method that employs specimen collection data at a global scale for the inference of species climate tolerance. CRACLE calculates the maximum joint likelihood of coexistence given individual species climate tolerance characterization to estimate the expected climate.• Plant distribution data for more than 4000 species were used to show that this method accurately infers expected climate profiles for 165 sites with diverse climatic conditions. Estimates differ from the WorldClim global climate model by less than 1.5°C on average for mean annual temperature and less than ∼250 mm for mean annual precipitation. This is a significant improvement upon other plant-based climate-proxy methods.• CRACLE validates long hypothesized interactions between climate and local associations of plant species. Furthermore, CRACLE successfully estimates climate that is consistent with the widely used WorldClim model and therefore may be applied to the quantitative estimation of paleoclimate in future studies. © 2015 Botanical Society of America, Inc.
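A hedged, much-simplified sketch of the coexistence-likelihood idea: Gaussian tolerance profiles per species are an assumption made here for illustration, whereas CRACLE characterizes tolerances from specimen collection data. The estimate is the climate value maximizing the joint log-likelihood over all coexisting species:

```python
import numpy as np

# Invented (mean, sd) tolerance profiles for three coexisting species,
# for mean annual temperature in degC.
species_tolerance = [(12.0, 3.0), (15.0, 2.0), (14.0, 4.0)]

def joint_loglik(t):
    # Sum of Gaussian log-likelihoods (constants dropped).
    return sum(-0.5 * ((t - mu) / sd) ** 2 - np.log(sd)
               for mu, sd in species_tolerance)

grid = np.arange(0.0, 30.0, 0.01)
best = grid[np.argmax([joint_loglik(t) for t in grid])]
print(round(best, 2))  # precision-weighted consensus, about 14.07 degC
```

With Gaussian profiles the maximizer is the precision-weighted mean of the species optima, i.e. narrow-tolerance species pull the estimate hardest, which matches the intuition behind the method.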

  6. A Method to Estimate the Size and Characteristics of HIV-positive Populations Using an Individual-based Stochastic Simulation Model

    DEFF Research Database (Denmark)

    Nakagawa, Fumiyo; van Sighem, Ard; Thiebaut, Rodolphe

    2016-01-01

It is important not only to collect epidemiologic data on HIV but to also fully utilize such information to understand the epidemic over time and to help inform and monitor the impact of policies and interventions. We describe and apply a novel method to estimate the size and characteristics of HIV-positive populations. The method was applied to data on men who have sex with men living in the UK and to a pseudo dataset to assess performance for different data availability. The individual-based simulation model was calibrated using an approximate Bayesian computation-based approach. In 2013, 48,310 (90% plausibility range: 39,900-45,560) men who have sex with men were estimated to be living with HIV in the UK, of whom 10,400 (6,160-17,350) were undiagnosed. There were an estimated 3,210 (1,730-5,350) infections per year on average between 2010 and 2013. Sixty-two percent of the total HIV-positive population...

  7. Estimation of debonded area in bearing babbitt metal by C-Scan method

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Gye-jo; Park, Sang-ki [Korea Electric Power Research Inst., Taejeon (Korea); Cha, Seok-ju [Korea South Eastern Power Corp., Seoul (Korea). GEN Sector; Park, Young-woo [Chungnam National Univ., Taejeon (Korea). Mechatronics

    2006-07-01

The debonding area, which had a complex boundary, was imaged with an immersion technique, and the acoustic image was compared with the actual area. The amplitude information from a focused transducer can discriminate between a defective boundary area and a sound interface of dissimilar metals. The irregular boundary shape and area were processed by histogram equalization; clustering and labelling then made the defect area clearer. Each pixel carries an ultrasonic intensity value and position data. The error in estimating the debonding area was within 4% using the image processing technique. The validity of this immersion method and image equalizing technique was demonstrated in the inspection of power plant turbine thrust bearings. (orig.)

  8. Estimation of the neural drive to the muscle from surface electromyograms

    Science.gov (United States)

    Hofmann, David

Muscle force is highly correlated with the standard deviation of the surface electromyogram (sEMG) produced by the active muscle. Correctly estimating this quantity for non-stationary sEMG and understanding its relation to neural drive and muscle force is of paramount importance. The single constituents of the sEMG are called motor unit action potentials, whose biphasic amplitude can interfere (named amplitude cancellation), potentially affecting the standard deviation (Keenan et al. 2005). However, when certain conditions are met the Campbell-Hardy theorem suggests that amplitude cancellation does not affect the standard deviation. By simulation of the sEMG, we verify the applicability of this theorem to myoelectric signals and investigate deviations from its conditions to obtain a more realistic setting. We find no difference in estimated standard deviation with and without interference, standing in stark contrast to previous results (Keenan et al. 2008, Farina et al. 2010). Furthermore, since the theorem provides us with the functional relationship between standard deviation and neural drive we conclude that complex methods based on high density electrode arrays and blind source separation might not bear substantial advantages for neural drive estimation (Farina and Holobar 2016). Funded by NIH Grant Number 1 R01 EB022872 and NSF Grant Number 1208126.

  9. Differential estimates of southern flying squirrel (Glaucomys volans) population structure based on capture method

    Science.gov (United States)

    Kevin S. Laves; Susan C. Loeb

    2005-01-01

It is commonly assumed that population estimates derived from trapping small mammals are accurate and unbiased or that estimates derived from different capture methods are comparable. We captured southern flying squirrels (Glaucomys volans) using two methods to study their effect on red-cockaded woodpecker (Picoides borealis) reproductive success. Southern flying...

  10. Radiation Pattern Reconstruction from the Near-Field Amplitude Measurement on Two Planes Using PSO

    Directory of Open Access Journals (Sweden)

    Z. Novacek

    2005-12-01

Full Text Available The paper presents a new approach to radiation pattern reconstruction from near-field amplitude-only measurements over two planar scanning surfaces. This new method for antenna pattern reconstruction is based on global optimization by PSO (Particle Swarm Optimization). The paper presents the appropriate phaseless measurement requirements and the phase retrieval algorithm, together with a brief description of the particle swarm optimization method. In order to examine the methodologies developed in this paper, phaseless measurement results for two different antennas are presented and compared to results obtained by a complex (amplitude and phase) measurement.
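A hedged sketch of the PSO loop underlying such a method; the cost function here is a toy quadratic, whereas the paper minimizes a near-field amplitude mismatch over unknown phases, and all hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize cost(x) over R^dim with a basic particle swarm."""
    x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_cost = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[np.argmin(pbest_cost)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        c = np.apply_along_axis(cost, 1, x)
        improved = c < pbest_cost                   # update personal bests
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pbest_cost)]
    return gbest

sol = pso(lambda p: np.sum((p - np.array([1.0, -2.0])) ** 2), dim=2)
print(np.round(sol, 2))  # near [1.0, -2.0]
```

In the phase-retrieval setting each particle would encode a candidate phase distribution, with the cost being the misfit between measured and reconstructed near-field amplitudes on the two planes.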

  11. Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation

    Directory of Open Access Journals (Sweden)

    Sharad Damodar Gore

    2009-10-01

    Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
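
The OLS-to-ORR relationship the abstract builds on, adding a ridge constant k to X'X, can be illustrated on collinear data. The design, k, and coefficients below are arbitrary; MUR and URR themselves are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Nearly collinear design (multicollinearity), known true coefficients
n, p = 50, 3
z = rng.normal(size=n)
X = np.column_stack([z + 0.01 * rng.normal(size=n) for _ in range(p)])
beta = np.array([1.0, 2.0, 3.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

# OLS: (X'X)^-1 X'y ;  ORR: (X'X + kI)^-1 X'y
XtX, Xty = X.T @ X, X.T @ y
b_ols = np.linalg.solve(XtX, Xty)
k = 1.0
b_orr = np.linalg.solve(XtX + k * np.eye(p), Xty)

print(b_ols, b_orr)   # ridge shrinks the coefficient vector toward zero
```

Under collinearity the OLS coefficients explode in magnitude; ridge trades a small bias for a much smaller coefficient norm, which is the motivation for the MUR construction.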

  12. Evaluation of non cyanide methods for hemoglobin estimation

    Directory of Open Access Journals (Sweden)

    Vinaya B Shah

    2011-01-01

    Full Text Available Background: The hemoglobincyanide (HiCN) method for measuring hemoglobin is used extensively worldwide; its advantages are the ready availability of a stable and internationally accepted reference standard calibrator. However, its use may create a problem, as the waste disposal of large volumes of reagent containing cyanide constitutes a potential toxic hazard. Aims and Objective: As an alternative to Drabkin's method of Hb estimation, we attempted to estimate hemoglobin by two non-cyanide methods: alkaline hematin detergent (AHD-575) using Triton X-100 as lyser, and the alkaline-borax method using quaternary ammonium detergents as lyser. Materials and Methods: The hemoglobin (Hb) results on 200 samples of varying Hb concentrations obtained by these two cyanide-free methods were compared with the cyanmethemoglobin method on a light-emitting-diode (LED) based colorimeter. Hemoglobin was also estimated in one hundred blood donors and 25 blood samples of infants and compared by these methods. Statistical analysis used was Pearson's correlation coefficient. Results: The response of the non-cyanide methods is linear for serially diluted blood samples over the Hb concentration range from 3 g/dl to 20 g/dl. The non-cyanide methods have a precision of ±0.25 g/dl (coefficient of variation = 2.34%) and are suitable for use with fixed-wavelength colorimeters at wavelengths of 530 nm and 580 nm. Correlation of these two methods with HiCN was excellent (r = 0.98). The evaluation has shown them to be as reliable and reproducible as HiCN for measuring hemoglobin at all concentrations. The reagents used in the non-cyanide methods are non-biohazardous, did not affect the reliability of data determination, and cost less than the HiCN method. Conclusions: Thus, non-cyanide methods of Hb estimation offer the possibility of safe, quality Hb estimation and should prove useful for routine laboratory use. Non-cyanide methods are easily incorporated in hemoglobinometers.

  13. Unrecorded Alcohol Consumption: Quantitative Methods of Estimation

    OpenAIRE

    Razvodovsky, Y. E.

    2010-01-01

    unrecorded alcohol; methods of estimation In this paper we focus on methods of estimating the level of unrecorded alcohol consumption. Present methods allow only approximate estimation of the unrecorded alcohol consumption level. Taking into consideration the extreme importance of such data, further investigation is necessary to improve the reliability of methods of estimating unrecorded alcohol consumption.

  14. Light Diffraction by Large Amplitude Ultrasonic Waves in Liquids

    Science.gov (United States)

    Adler, Laszlo; Cantrell, John H.; Yost, William T.

    2016-01-01

    Light diffraction from ultrasound, which can be used to investigate nonlinear acoustic phenomena in liquids, is reported for wave amplitudes larger than those typically reported in the literature. Large amplitude waves result in waveform distortion, due to the nonlinearity of the medium, that generates harmonics and produces asymmetries in the light diffraction pattern. For standing waves with amplitudes above a threshold value, subharmonics are generated in addition to the harmonics and produce additional diffraction orders of the incident light. With increasing drive amplitude above the threshold, a cascade of period-doubling subharmonics is generated, terminating in a region characterized by a random, incoherent (chaotic) diffraction pattern. To explain the experimental results a toy model is introduced, which is derived from traveling wave solutions of the nonlinear wave equation corresponding to the fundamental and second harmonic standing waves. The toy model reduces the nonlinear partial differential equation to a mathematically more tractable nonlinear ordinary differential equation. The model predicts the experimentally observed cascade of period-doubling subharmonics terminating in chaos that occurs with increasing drive amplitudes above the threshold value. The calculated threshold amplitude is consistent with the value estimated from the experimental data.

  15. Multiple-image authentication with a cascaded multilevel architecture based on amplitude field random sampling and phase information multiplexing.

    Science.gov (United States)

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2015-04-10

    A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.

  16. A comparison of different discrimination parameters for the DFT-based PSD method in fast scintillators

    International Nuclear Information System (INIS)

    Liu, G.; Yang, J.; Luo, X.L.; Lin, C.B.; Peng, J.X.; Yang, Y.

    2013-01-01

    Although the discrete Fourier transform (DFT) based pulse shape discrimination (PSD) method, realized by transforming the digitized scintillation pulses into frequency coefficients using the DFT, has been proven to effectively discriminate neutrons and γ rays, its discrimination performance depends strongly on the selection of the discrimination parameter obtained by combining these frequency coefficients. In order to thoroughly understand and apply DFT-based PSD in organic scintillation detectors, a comparison of three different discrimination parameters, i.e. the amplitude of the zero-frequency component, the difference between the amplitudes of the zero-frequency and base-frequency components, and the ratio of the amplitude of the base-frequency component to that of the zero-frequency component, is described in this paper. An experimental setup consisting of an Americium-Beryllium (Am-Be) source, a BC501A liquid scintillator detector, and a 5 GSample/s 8-bit oscilloscope was built to assess the performance of the DFT-based PSD with each of these discrimination parameters in terms of the figure-of-merit (based on the separation of the event distributions). The third technique, which uses the ratio of the amplitude of the base-frequency component to that of the zero-frequency component as the discrimination parameter, is observed to provide the best discrimination performance in this research. - Highlights: • The spectral difference between neutron pulses and γ-ray pulses was investigated. • The DFT-based PSD with different parameter definitions was assessed. • Using the ratio of magnitude spectrum components provides the best performance. • The performance differences are explained by noise suppression features
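
The third discrimination parameter, the ratio of the base-frequency to the zero-frequency DFT amplitude, can be sketched on synthetic pulses. The double-exponential pulse shapes and decay constants below are illustrative stand-ins for real BC501A waveforms, chosen only to reflect that neutron pulses carry a larger slow scintillation component than γ-ray pulses:

```python
import numpy as np

fs = 5e9                                   # 5 GSample/s digitizer, as in the paper
t = np.arange(0, 200e-9, 1 / fs)           # 200 ns record -> 1000 samples

def pulse(slow_fraction, tau_fast=5e-9, tau_slow=50e-9):
    """Illustrative scintillation pulse: fast + slow decay components."""
    return ((1 - slow_fraction) * np.exp(-t / tau_fast)
            + slow_fraction * np.exp(-t / tau_slow))

def dft_ratio(x):
    """Discrimination parameter: |X[1]| / |X[0]| from the pulse's DFT."""
    X = np.abs(np.fft.rfft(x))
    return X[1] / X[0]               # base-frequency over zero-frequency amplitude

r_gamma = dft_ratio(pulse(slow_fraction=0.05))    # gamma-like: small slow tail
r_neutron = dft_ratio(pulse(slow_fraction=0.30))  # neutron-like: larger slow tail
print(r_gamma, r_neutron)
```

The slow tail shifts energy toward the pulse integral (zero-frequency bin) relative to the base-frequency bin, so the ratio separates the two pulse classes.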

  17. Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals. The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared...
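
Once a noise covariance estimate R is available (however obtained), the LCMV weights follow in closed form, w = R⁻¹C(Cᴴ R⁻¹ C)⁻¹ f, and meet the constraints exactly. A sketch with a stand-in covariance; the IAA estimation step itself, and the paper's harmonic decomposition, are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)

M = 8                                   # channels / filter taps
f0 = 0.1                                # normalized frequency of the harmonic signal
# Constraint matrix: pass the first two harmonics of the desired signal
n = np.arange(M)
C = np.column_stack([np.exp(2j * np.pi * k * f0 * n) for k in (1, 2)])
f = np.ones(2)                          # unit gain on each constrained harmonic

# Stand-in Hermitian positive-definite noise covariance estimate
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
R = A @ A.conj().T + M * np.eye(M)

# LCMV weights: w = R^-1 C (C^H R^-1 C)^-1 f
RinvC = np.linalg.solve(R, C)
w = RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)

print(C.conj().T @ w)                   # constraints are satisfied exactly
```

The quality of the noise covariance estimate is what the filter's noise reduction hinges on, which is why the paper focuses on estimating R from short IAA segments.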

  18. Permanent Magnet Flux Online Estimation Based on Zero-Voltage Vector Injection Method

    DEFF Research Database (Denmark)

    Xie, Ge; Lu, Kaiyuan; Kumar, Dwivedi Sanjeet

    2015-01-01

    In this paper, a simple signal injection method is proposed for sensorless control of a PMSM at low speed, which ideally requires only one voltage vector for position estimation. The proposed method is easy to implement, resulting in a low computation burden. No filters are needed for extracting...

  19. Three applications of a bonus relation for gravity amplitudes

    International Nuclear Information System (INIS)

    Spradlin, Marcus; Volovich, Anastasia; Wen, Congkao

    2009-01-01

    Arkani-Hamed et al. have recently shown that all tree-level scattering amplitudes in maximal supergravity exhibit exceptionally soft behavior when two supermomenta are taken to infinity in a particular complex direction, and that this behavior implies new non-trivial relations amongst amplitudes in addition to the well-known on-shell recursion relations. We consider the application of these new 'bonus relations' to MHV amplitudes, showing that they can be used quite generally to relate (n-2)!-term formulas typically obtained from recursion relations to (n-3)!-term formulas related to the original BGK conjecture. Specifically we provide (1) a direct proof of a formula presented by Elvang and Freedman, (2) a new formula based on one due to Bedford et al., and (3) an alternate proof of a formula recently obtained by Mason and Skinner. Our results also provide the first direct proof that the conjectured BGK formula, only very recently proven via completely different methods, satisfies the on-shell recursion.

  20. A New Method for Estimation of Velocity Vectors

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Munk, Peter

    1998-01-01

    The paper describes a new method for determining the velocity vector of a remotely sensed object using either sound or electromagnetic radiation. The movement of the object is determined from a field with spatial oscillations in both the axial direction of the transducer and in one or two directions transverse to the axial direction. By using a number of pulse emissions, the inter-pulse movement can be estimated and the velocity found from the estimated movement and the time between pulses. The method is based on the principle of using transverse spatial modulation for making the received...

  1. Dynamic response function and large-amplitude dissipative collective motion

    International Nuclear Information System (INIS)

    Wu Xizhen; Zhuo Yizhong; Li Zhuxia; Sakata, Fumihiko.

    1993-05-01

    Aiming at exploring the microscopic dynamics responsible for dissipative large-amplitude collective motion, the dynamic response and correlation functions are introduced within the general theory of nuclear coupled-master equations. The theory is based on the microscopic theory of nuclear collective dynamics, which has been developed within the time-dependent Hartree-Fock (TDHF) theory for disclosing the complex structure of the TDHF manifold. A systematic numerical method for calculating the dynamic response and correlation functions is proposed. By performing a numerical calculation for a simple model Hamiltonian, it is pointed out that the dynamic response function gives important information for understanding large-amplitude dissipative collective motion, which is described by an ensemble of trajectories within the TDHF manifold. (author)

  2. Damage Detection of Structures for Ambient Loading Based on Cross Correlation Function Amplitude and SVM

    Directory of Open Access Journals (Sweden)

    Lin-sheng Huo

    2016-01-01

    Full Text Available An effective method for the damage detection of skeletal structures, which combines the cross correlation function amplitude (CCFA) with the support vector machine (SVM), is presented in this paper. The proposed method consists of two stages. Firstly, the data features are extracted from the CCFA, which, calculated from dynamic responses and as a representation of the modal shapes of the structure, changes when damage occurs on the structure. The data features are then input into the SVM with the one-against-one (OAO) algorithm to classify the damage status of the structure. The simulation data of the IASC-ASCE benchmark model and a vibration experiment on a truss structure are adopted to verify the feasibility of the proposed method. The results show that the proposed method is suitable for the damage identification of skeletal structures with limited sensors subjected to ambient excitation. As the CCFA-based data features are sensitive to damage, the proposed method demonstrates its reliability in the diagnosis of structures with damage, especially those with minor damage. In addition, the proposed method shows better noise robustness and is more suitable for noisy environments.
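
The first stage described above, extracting damage-sensitive features from cross correlation function amplitudes of sensor responses, can be sketched as follows. The sensor model, reference choice, and normalization are illustrative assumptions, and the SVM classification stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

def ccfa_features(responses, ref=0, max_lag=50):
    """Peak cross-correlation amplitude between each sensor and a reference
    sensor; the vector of peaks forms the damage-sensitive feature."""
    x_ref = responses[ref]
    feats = []
    for x in responses:
        cc = [np.dot(x_ref[max_lag:-max_lag],
                     np.roll(x, k)[max_lag:-max_lag])
              for k in range(-max_lag, max_lag + 1)]
        feats.append(np.max(np.abs(cc)))
    feats = np.asarray(feats)
    return feats / np.linalg.norm(feats)   # normalize out excitation level

# Two sensors measuring a shared ambient response plus noise (illustrative)
t = np.arange(0, 10, 0.01)
common = np.sin(2 * np.pi * 1.5 * t)
responses = np.vstack([common + 0.1 * rng.normal(size=t.size),
                       0.5 * common + 0.1 * rng.normal(size=t.size)])
feats = ccfa_features(responses)
print(feats)
```

In the paper these feature vectors, computed for intact and damaged states, are what the one-against-one SVM is trained to separate.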

  3. Application of the Total Least Square ESPRIT Method to Estimation of Angular Coordinates of Moving Objects

    Directory of Open Access Journals (Sweden)

    Wojciech Rosloniec

    2010-01-01

    Full Text Available The TLS ESPRIT method is investigated in application to the estimation of angular coordinates (angles of arrival) of two moving objects in the presence of an external, relatively strong uncorrelated signal. As the radar antenna system, a 32-element uniform linear array (ULA) is used. Various computer simulations have been carried out in order to demonstrate the good accuracy and high spatial resolution of the TLS ESPRIT method in the scenario outlined above. It is also shown that accuracy and angle resolution can be significantly increased by using the proposed preprocessing (beamforming). Most of the simulation results, presented in graphical form, are compared to the corresponding equivalent results obtained by using the ESPRIT method and the conventional amplitude monopulse method aided by coherent Doppler filtration.
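
The ESPRIT idea behind the paper's method can be sketched in its simpler least-squares form (the paper uses the TLS variant): the rotational invariance between two overlapping subarrays of a 32-element ULA yields the angles of arrival directly, with no spectral search. Source angles, powers, and noise level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

M, N = 32, 500                    # ULA elements, snapshots
d = 0.5                           # element spacing in wavelengths
angles_true = np.deg2rad([-10.0, 15.0])

# Array data: X = A S + noise, with Vandermonde steering matrix A
m = np.arange(M)[:, None]
A = np.exp(2j * np.pi * d * m * np.sin(angles_true))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

# Signal subspace: eigenvectors of the sample covariance's 2 largest eigenvalues
R = X @ X.conj().T / N
eigval, eigvec = np.linalg.eigh(R)
Es = eigvec[:, -2:]

# LS ESPRIT: rotational invariance between the two shifted subarrays,
# Es[1:] = Es[:-1] @ Phi; eigenvalues of Phi encode the arrival angles
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
est = np.sort(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi * d)))
print(np.rad2deg(est))
```

The TLS variant replaces the least-squares solve for Phi with a total-least-squares solution, which is more robust when both subspace blocks are noisy.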

  4. Amplitude and Ascoli analysis

    International Nuclear Information System (INIS)

    Hansen, J.D.

    1976-01-01

    This article discusses the partial wave analysis of two-, three- and four-meson systems. The difference between the two approaches, referred to as amplitude analysis and Ascoli analysis, is discussed. Some of the results obtained with these methods are shown. (B.R.H.)

  5. The amplitude and phase precision of 40 Hz auditory steady-state response depend on the level of arousal

    DEFF Research Database (Denmark)

    Griskova, Inga; Mørup, Morten; Parnas, Josef

    2007-01-01

    The aim of this study was to investigate, in healthy subjects, the modulation of amplitude and phase precision of the auditory steady-state response (ASSR) to 40 Hz stimulation in two resting conditions varying in the level of arousal. Previously, ASSR measures have been shown to be affected ... non-negative multi-way factorization (NMWF) (Mørup et al. in J Neurosci Methods 161:361-368, 2007). The estimates of these measures were subjected to statistical analysis. The amplitude and phase precision of the ASSR were significantly larger during the low arousal state compared to the high arousal condition...

  6. HIGH-PRECISION ATTITUDE ESTIMATION METHOD OF STAR SENSORS AND GYRO BASED ON COMPLEMENTARY FILTER AND UNSCENTED KALMAN FILTER

    Directory of Open Access Journals (Sweden)

    C. Guo

    2017-07-01

    Full Text Available Determining the attitude of a satellite at the time of imaging and then establishing the mathematical relationship between image points and ground points is essential in high-resolution remote sensing image mapping. A star tracker is insensitive to high frequency attitude variation due to measurement noise and satellite jitter, but the low frequency attitude motion can be determined with high accuracy. A gyro, as a short-term reference for the satellite's attitude, is sensitive to high frequency attitude change, but due to the existence of gyro drift and integration error, the attitude determination error increases with time. Based on the opposite noise frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star sensors and gyro based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired based on the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering and low-pass filtering to the gyro and star tracker, respectively. Second, the attitude fusion data based on the CF are introduced as the observed values of the UKF system in the process of measurement updating. The accuracy and effectiveness of the method are validated based on simulated sensor attitude data. The obtained results indicate that the proposed method can suppress the gyro drift and measurement noise of the attitude sensors, improving the accuracy of the attitude determination significantly, compared with the simulated on-orbit attitude and the attitude estimation results of a UKF defined by the same simulation parameters.
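
The complementary-filter stage described above can be illustrated in one dimension: the gyro path is effectively high-pass filtered (its slow drift is suppressed) while the star-tracker path is low-pass filtered (its high-frequency noise is attenuated). All sensor parameters below are illustrative, and the quaternion kinematics and UKF stage are omitted:

```python
import numpy as np

rng = np.random.default_rng(6)
dt, n = 0.1, 2000
t = np.arange(n) * dt

# True single-axis attitude angle (rad) and simple sensor models
truth = 0.1 * np.sin(2 * np.pi * 0.01 * t)
gyro_rate = np.gradient(truth, dt) + 0.002 + 0.001 * rng.normal(size=n)  # bias + noise
star = truth + 0.02 * rng.normal(size=n)          # unbiased but noisy angle

# Complementary filter: integrate the gyro, correct slowly toward the star tracker
alpha = 0.98                                       # gyro trust per step
fused = np.empty(n)
fused[0] = star[0]
for k in range(1, n):
    fused[k] = alpha * (fused[k - 1] + gyro_rate[k] * dt) + (1 - alpha) * star[k]

# Pure gyro integration drifts with the bias; the fused estimate stays bounded
gyro_only = np.cumsum(gyro_rate) * dt
print(np.abs(gyro_only - truth).max(), np.abs(fused - truth).max())
```

The paper then feeds the fused attitude to a UKF as the measurement, rather than using the CF output directly.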

  7. Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis

    Science.gov (United States)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-05-01

    A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements in a simultaneous manner from a single recording of the exponential phase signal encoding multiple phases. Each phase within a small window around each pixel is approximated with a first-order polynomial function of the spatial coordinates. The problem of accurate estimation of the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized in order to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of the ratio of signal amplitudes. The pixelwise phase estimation approach of the proposed method allows it to handle fringe patterns that may contain invalid regions.

  8. Fibre optical measuring network based on quasi-distributed amplitude sensors for detecting deformation loads

    International Nuclear Information System (INIS)

    Kul'chin, Yurii N; Kolchinskiy, V A; Kamenev, O T; Petrov, Yu S

    2013-01-01

    A new design of a sensitive element for a fibre optical sensor of deformation loads is proposed. A distributed fibre optical measuring network, aimed at determining both the load application point and the load mass, has been developed based on these elements. It is shown that neural network methods of data processing make it possible to combine quasi-distributed amplitude sensors of different types into a unified network. The results of an experimental study of a breadboard of the fibre optical measuring network are reported, which demonstrate successful reconstruction of the trajectory of a moving object (load) with a spatial resolution of 8 cm, as well as of the load mass in the range of 1–10 kg with a sensitivity of 0.043 kg⁻¹. (laser optics 2012)

  9. A new method to estimate genetic gain in annual crops

    Directory of Open Access Journals (Sweden)

    Flávio Breseghello

    1998-12-01

    Full Text Available The genetic gain obtained by breeding programs to improve quantitative traits may be estimated using data from regional trials. A new statistical method for this estimate is proposed, comprising four steps: (a) joint analysis of the regional trial data using a generalized linear model to obtain adjusted genotype means and the covariance matrix of these means for the whole studied period; (b) calculation of the arithmetic mean of the adjusted genotype means, exclusively for the group of genotypes evaluated each year; (c) direct year-to-year comparison of the arithmetic means; and (d) estimation of the mean genetic gain by regression. Using the generalized least squares method, a weighted estimate of the mean genetic gain during the period is calculated. This method permits better cancellation of genotype x year and genotype x trial/year interactions, thus resulting in more precise estimates. It can be applied to unbalanced data, allowing the estimation of genetic gain in series of multilocational trials.
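
Step (d) above, estimating the mean genetic gain by weighted regression of yearly means on year, can be sketched with a diagonal weight matrix. The full generalized least squares step would use the covariance matrix from the joint analysis; all numbers below are hypothetical:

```python
import numpy as np

# Yearly means of adjusted genotype means (hypothetical values, t/ha),
# with variances taken from the joint analysis (also hypothetical)
years = np.array([1990, 1991, 1992, 1993, 1994, 1995], dtype=float)
means = np.array([3.10, 3.18, 3.22, 3.35, 3.38, 3.50])
var = np.array([0.004, 0.003, 0.005, 0.004, 0.003, 0.004])

# Weighted least squares: minimize sum_i w_i (y_i - a - b x_i)^2, w_i = 1/var_i
w = 1.0 / var
X = np.column_stack([np.ones_like(years), years - years[0]])
W = np.diag(w)
coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ means)
gain_per_year = coef[1]          # slope = mean genetic gain per year
print(gain_per_year)
```

Weighting by the inverse variances down-weights years whose adjusted means are poorly estimated, which is the point of using GLS rather than an ordinary regression.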

  10. Statistical Methods for Estimating the Uncertainty in the Best Basis Inventories

    International Nuclear Information System (INIS)

    WILMARTH, S.R.

    2000-01-01

    This document describes the statistical methods used to determine sample-based uncertainty estimates for the Best Basis Inventory (BBI). For each waste phase, the equation for the inventory of an analyte in a tank is Inventory (kg or Ci) = Concentration x Density x Waste Volume. The total inventory is the sum of the inventories in the different waste phases. Using tank sample data, statistical methods are used to obtain estimates of the mean concentration of an analyte and the density of the waste, and their standard deviations. The volumes of waste in the different phases, and their standard deviations, are estimated based on other types of data. The three estimates are multiplied to obtain the inventory estimate. The standard deviations are combined to obtain a standard deviation of the inventory. The uncertainty estimate for the Best Basis Inventory (BBI) is the approximate 95% confidence interval on the inventory.
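
The multiplication of the three estimates and the combination of their standard deviations can be sketched with first-order error propagation for a product of independent quantities; all numbers are hypothetical, not BBI values, and units are illustrative:

```python
import math

# Hypothetical estimates for one waste phase (units illustrative)
conc, sd_conc = 2.5, 0.3        # analyte concentration
dens, sd_dens = 1.6, 0.1        # waste density
vol,  sd_vol  = 400e3, 20e3     # waste volume

# Inventory = Concentration x Density x Waste Volume
inv = conc * dens * vol

# First-order propagation: relative variances of independent factors add
rel_sd = math.sqrt((sd_conc / conc) ** 2 + (sd_dens / dens) ** 2
                   + (sd_vol / vol) ** 2)
sd_inv = inv * rel_sd

# Approximate 95% confidence interval on the inventory
lo, hi = inv - 1.96 * sd_inv, inv + 1.96 * sd_inv
print(inv, sd_inv, (lo, hi))
```

Phase inventories computed this way are then summed, and their standard deviations combined, to give the total-inventory uncertainty.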

  11. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    Science.gov (United States)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch-curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not need the assumption of constant Z throughout the time, but the Z values in n continuous years are assumed constant, and then the Z values in different n continuous years are estimated using the age-based CPUE data within these years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z value and the trend of Z. The most appropriate value of n can be different given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are also two factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
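
The catch-curve idea the method builds on can be sketched for a single cohort: under constant Z, CPUE at age declines as exp(-Z·a), so log(CPUE) is linear in age and the regression slope estimates -Z. The values below are simulated, not the cod data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated age-based CPUE for one cohort: CPUE_a = q * R * exp(-Z * a)
Z_true, q_R = 0.6, 1000.0
ages = np.arange(1, 9)
cpue = q_R * np.exp(-Z_true * ages) * np.exp(0.05 * rng.normal(size=ages.size))

# Catch-curve estimate: slope of log(CPUE) on age gives -Z
slope, intercept = np.polyfit(ages, np.log(cpue), 1)
Z_hat = -slope
print(Z_hat)
```

The paper's extension replaces the single constant-Z fit with piecewise fits over sliding windows of n years across multiple cohorts, yielding a time series of Z.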

  12. A Method to Estimate Energy Demand in Existing Buildings Based on the Danish Building and Dwellings Register (BBR)

    DEFF Research Database (Denmark)

    Nielsen, Anker; Bertelsen, Niels Haldor; Wittchen, Kim Bjarne

    2013-01-01

    The Energy Performance Directive requires energy certifications for buildings. This is implemented in Denmark so that houses that are sold must have an energy performance label based on an evaluation from a visit to the building. The result is that only a small part of the existing houses has an energy label. The Danish Building Research Institute has described a method that can be used to estimate the energy demand in buildings, especially dwellings. This is based on the information in the Danish Building and Dwelling Register (BBR) and information on the building regulations at the construction year for the house. The result is an estimate of the energy demand of each building, with a variation. This makes it possible to make an automatic classification of all buildings. Then it is possible to find houses in need of thermal improvements. This method is tested for single-family houses and flats. The paper...

  13. Top quark amplitudes with an anomalous magnetic moment

    International Nuclear Information System (INIS)

    Larkoski, Andrew J.; Peskin, Michael E.

    2011-01-01

    The anomalous magnetic moment of the top quark may be measured during the first run of the LHC at 7 TeV. For these measurements, it will be useful to have available tree amplitudes with a tt̄ pair and arbitrarily many photons and gluons, including both QED and color anomalous magnetic moments. In this paper, we present a method for computing these amplitudes using the Britto-Cachazo-Feng-Witten recursion formula. Because we deal with an effective theory with higher-dimension couplings, there are roadblocks to a direct computation with the Britto-Cachazo-Feng-Witten method. We evade these by using an auxiliary scalar theory to compute a subset of the amplitudes.

  14. Scattering amplitudes in gauge theories

    Energy Technology Data Exchange (ETDEWEB)

    Henn, Johannes M. [Institute for Advanced Study, Princeton, NJ (United States). School of Natural Sciences; Plefka, Jan C. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik

    2014-03-01

    First monographical text on this fundamental topic. Course-tested, pedagogical and self-contained exposition. Includes exercises and solutions. At the fundamental level, the interactions of elementary particles are described by quantum gauge field theory. The quantitative implications of these interactions are captured by scattering amplitudes, traditionally computed using Feynman diagrams. In the past decade tremendous progress has been made in our understanding of and computational abilities with regard to scattering amplitudes in gauge theories, going beyond the traditional textbook approach. These advances build upon on-shell methods that focus on the analytic structure of the amplitudes, as well as on their recently discovered hidden symmetries. In fact, when expressed in suitable variables the amplitudes are much simpler than anticipated and hidden patterns emerge. These modern methods are of increasing importance in phenomenological applications arising from the need for high-precision predictions for the experiments carried out at the Large Hadron Collider, as well as in foundational mathematical physics studies on the S-matrix in quantum field theory. Bridging the gap between introductory courses on quantum field theory and state-of-the-art research, these concise yet self-contained and course-tested lecture notes are well-suited for a one-semester graduate level course or as a self-study guide for anyone interested in fundamental aspects of quantum field theory and its applications. The numerous exercises and solutions included will help readers to embrace and apply the material presented in the main text.

  15. Scattering amplitudes in gauge theories

    International Nuclear Information System (INIS)

    Henn, Johannes M.; Plefka, Jan C.

    2014-01-01

    First monographical text on this fundamental topic. Course-tested, pedagogical and self-contained exposition. Includes exercises and solutions. At the fundamental level, the interactions of elementary particles are described by quantum gauge field theory. The quantitative implications of these interactions are captured by scattering amplitudes, traditionally computed using Feynman diagrams. In the past decade tremendous progress has been made in our understanding of and computational abilities with regard to scattering amplitudes in gauge theories, going beyond the traditional textbook approach. These advances build upon on-shell methods that focus on the analytic structure of the amplitudes, as well as on their recently discovered hidden symmetries. In fact, when expressed in suitable variables the amplitudes are much simpler than anticipated and hidden patterns emerge. These modern methods are of increasing importance in phenomenological applications arising from the need for high-precision predictions for the experiments carried out at the Large Hadron Collider, as well as in foundational mathematical physics studies on the S-matrix in quantum field theory. Bridging the gap between introductory courses on quantum field theory and state-of-the-art research, these concise yet self-contained and course-tested lecture notes are well-suited for a one-semester graduate level course or as a self-study guide for anyone interested in fundamental aspects of quantum field theory and its applications. The numerous exercises and solutions included will help readers to embrace and apply the material presented in the main text.

  16. Estimation of Optimum Stimulus Amplitude for Balance Training using Electrical Stimulation of the Vestibular System

    Science.gov (United States)

    Goel, R.; Rosenberg, M. J.; De Dios, Y. E.; Cohen, H. S.; Bloomberg, J. J.; Mulavara, A. P.

    2016-01-01

    Sensorimotor changes such as posture and gait instabilities can affect the functional performance of astronauts after gravitational transitions. Sensorimotor Adaptability (SA) training can help alleviate decrements on exposure to novel sensorimotor environments, based on the concept of 'learning to learn' through exposure to varying sensory challenges during posture and locomotion tasks (Bloomberg 2015). Supra-threshold Stochastic Vestibular Stimulation (SVS) can provide one of many such challenges by disrupting vestibular inputs. In this scenario, the central nervous system can be trained to utilize veridical information from other sensory inputs, such as vision and somatosensory inputs, for posture and locomotion control. The minimum amplitude of SVS needed to simulate the effect of deteriorated vestibular inputs, whether for preflight training or for evaluating the vestibular contribution in functional tests generally, has not yet been identified. A few studies (MacDougall 2006; Dilda 2014) have used arbitrary but fixed maximum current amplitudes of 3 to 5 mA in the medio-lateral (ML) direction to disrupt balance function in healthy adults. Applying such high current amplitudes to all individuals risks side effects such as nausea and discomfort. The goal of this study was to determine the minimum SVS level that yields an equivalently degraded balance performance. Thirteen subjects stood on a compliant foam surface with their eyes closed and were instructed to maintain a stable upright stance. Measures of stability of the head, trunk, and whole body were quantified in the ML direction. The duration of time subjects could stand on the foam surface was also measured. The minimum SVS dosage was defined as the level that significantly degraded balance performance such that any further increase in stimulation level did not lead to further balance degradation. The minimum SVS level was determined by performing linear fits on the performance variable

  17. Model-based estimation of finite population total in stratified sampling

    African Journals Online (AJOL)

    The work presented in this paper concerns the estimation of the finite population total under a model-based framework. A nonparametric regression approach as a method of estimating the finite population total is explored. The asymptotic properties of the estimators based on nonparametric regression are also developed under ...

  18. Comparing writing style feature-based classification methods for estimating user reputations in social media.

    Science.gov (United States)

    Suh, Jong Hwan

    2016-01-01

    In recent years, the anonymous nature of the Internet has made it difficult to detect manipulated user reputations in social media, as well as to ensure the quality of users and their posts. To deal with this, this study designs and examines an automatic approach that adopts writing style features to estimate user reputations in social media. Under varying ways of defining the Good and Bad classes of user reputations based on the collected data, it evaluates the classification performance of state-of-the-art methods: four writing style feature sets, i.e. lexical, syntactic, structural, and content-specific, and eight classification techniques, i.e. four base learners (C4.5, Neural Network (NN), Support Vector Machine (SVM), and Naïve Bayes (NB)) and four Random Subspace (RS) ensemble methods based on the four base learners. When South Korea's Web forum, Daum Agora, was selected as a test bed, the experimental results showed that the configuration of the full feature set containing content-specific features and RS-SVM, combining RS and SVM, gives the best classification accuracy when the test bed poster reputations are segmented strictly into Good and Bad classes by the portfolio approach. Pairwise t tests on accuracy confirm two expectations from the literature review: first, the feature set including content-specific features outperforms the others; second, ensemble learning methods are more viable than base learners. Moreover, among the four ways of defining the classes of user reputations, i.e. like, dislike, sum, and portfolio, the results show that the portfolio approach gives the highest accuracy.

  19. When is respiratory management necessary for partial breast intensity modulated radiotherapy: A respiratory amplitude escalation treatment planning study

    International Nuclear Information System (INIS)

    Quirk, Sarah; Conroy, Leigh; Smith, Wendy L.

    2014-01-01

    Purpose: The impact of typical respiratory motion amplitudes (∼2 mm) on partial breast irradiation (PBI) is minimal; however, some patients have larger respiratory amplitudes that may negatively affect dose homogeneity. Here we determine at what amplitude respiratory management may be required to maintain plan quality. Methods and Materials: Ten patients were planned with PBI IMRT. Respiratory motion (2–20 mm amplitude) probability density functions were convolved with static plan fluence to estimate the delivered dose. Evaluation metrics included target coverage, ipsilateral breast hotspot, homogeneity, and uniformity indices. Results: Degradation of dose homogeneity was the limiting factor in reduction of plan quality due to respiratory motion, not loss of coverage. Hotspot increases were observed even at typical motion amplitudes. At 2 and 5 mm, 2/10 plans had a hotspot greater than 107% and at 10 mm this increased to 5/10 plans. Target coverage was only compromised at larger amplitudes: 5/10 plans did not meet coverage criteria at 15 mm amplitude and no plans met minimum coverage at 20 mm. Conclusions: We recommend that if respiratory amplitude is greater than 10 mm, respiratory management or alternative radiotherapy should be considered due to an increase in the hotspot in the ipsilateral breast and a decrease in dose homogeneity
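The dose-blurring step described above, convolving the static plan fluence with a motion probability density function, can be sketched in a simplified 1-D form. The arcsine PDF of sinusoidal motion, the flat dose profile, and the amplitudes below are illustrative assumptions, not the paper's clinical plans:

```python
import math

def sinusoid_motion_pdf(amplitude, n_bins=41):
    """Discretized PDF of sinusoidal motion x(t) = A*sin(wt): the arcsine
    distribution, peaked at the extremes of the excursion."""
    if amplitude == 0:
        return [(0.0, 1.0)]
    edges = [-amplitude + 2.0 * amplitude * i / n_bins for i in range(n_bins + 1)]
    def cdf(x):
        return math.asin(max(-1.0, min(1.0, x / amplitude))) / math.pi + 0.5
    return [((a + b) / 2.0, cdf(b) - cdf(a)) for a, b in zip(edges[:-1], edges[1:])]

def blur_profile(dose, pdf, spacing=1.0):
    """Estimate the delivered 1-D dose by convolving the static profile
    (samples every `spacing` mm) with the motion PDF."""
    blurred = [0.0] * len(dose)
    for i in range(len(dose)):
        for shift, w in pdf:
            j = i + int(round(shift / spacing))
            if 0 <= j < len(dose):
                blurred[i] += w * dose[j]
    return blurred

# Flat 100% target flanked by zero dose (sharp penumbra)
static = [0.0] * 10 + [100.0] * 30 + [0.0] * 10
for amp in (2.0, 10.0, 20.0):
    d = blur_profile(static, sinusoid_motion_pdf(amp))
    print(f"amplitude {amp:4.1f} mm -> centre dose {d[len(d) // 2]:.1f}%")
```

In this toy setting the centre of the target holds its dose at small amplitudes and degrades once the motion excursion exceeds the plateau margin; the hotspot effects the paper reports arise from the interplay with modulated IMRT fluence, which this flat profile does not capture.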

  20. Mirror symmetry, toric branes and topological string amplitudes as polynomials

    Energy Technology Data Exchange (ETDEWEB)

    Alim, Murad

    2009-07-13

    The central theme of this thesis is the extension and application of mirror symmetry in topological string theory. The contribution of this work on the mathematical side is the interpretation of the calculated partition functions as generating functions for mathematical invariants, which are extracted in various examples. Furthermore, the extension of the variation of the vacuum bundle to include D-branes on compact geometries is studied. Based on previous work for non-compact geometries, a system of differential equations is derived which allows the mirror map to be extended to the deformation spaces of the D-branes. Furthermore, these equations allow the computation of the full quantum-corrected superpotentials induced by the D-branes. Based on the holomorphic anomaly equation, which describes the background dependence of topological string theory by recursively relating loop amplitudes, this work generalizes a polynomial construction of the loop amplitudes, previously found for manifolds with a one-dimensional space of deformations, to arbitrary target manifolds with arbitrary dimension of the deformation space. The polynomial generators are determined and it is proven that the higher loop amplitudes are polynomials of a certain degree in the generators. Furthermore, the polynomial construction is generalized to solve the extension of the holomorphic anomaly equation to D-branes without a deformation space. This method is applied to calculate higher loop amplitudes in numerous examples and the mathematical invariants are extracted. (orig.)

  1. Mirror symmetry, toric branes and topological string amplitudes as polynomials

    International Nuclear Information System (INIS)

    Alim, Murad

    2009-01-01

    The central theme of this thesis is the extension and application of mirror symmetry in topological string theory. The contribution of this work on the mathematical side is the interpretation of the calculated partition functions as generating functions for mathematical invariants, which are extracted in various examples. Furthermore, the extension of the variation of the vacuum bundle to include D-branes on compact geometries is studied. Based on previous work for non-compact geometries, a system of differential equations is derived which allows the mirror map to be extended to the deformation spaces of the D-branes. Furthermore, these equations allow the computation of the full quantum-corrected superpotentials induced by the D-branes. Based on the holomorphic anomaly equation, which describes the background dependence of topological string theory by recursively relating loop amplitudes, this work generalizes a polynomial construction of the loop amplitudes, previously found for manifolds with a one-dimensional space of deformations, to arbitrary target manifolds with arbitrary dimension of the deformation space. The polynomial generators are determined and it is proven that the higher loop amplitudes are polynomials of a certain degree in the generators. Furthermore, the polynomial construction is generalized to solve the extension of the holomorphic anomaly equation to D-branes without a deformation space. This method is applied to calculate higher loop amplitudes in numerous examples and the mathematical invariants are extracted. (orig.)

  2. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.
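The Full Search baseline surveyed above can be sketched as exhaustive minimization of the sum of absolute differences (SAD) over a ±R search window; the frames, block size, and window below are toy values:

```python
def sad(ref, cur, bx, by, dx, dy, B):
    """Sum of absolute differences between the BxB current-frame block at
    (bx, by) and the reference-frame block displaced by (dx, dy)."""
    total = 0
    for y in range(B):
        for x in range(B):
            total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
    return total

def full_search(ref, cur, bx, by, B=8, R=4):
    """Full Search motion estimation: exhaustively test every displacement
    in the +/-R window and keep the one with minimum SAD."""
    H, W = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            if 0 <= by + dy and by + dy + B <= H and 0 <= bx + dx and bx + dx + B <= W:
                cost = sad(ref, cur, bx, by, dx, dy, B)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best, best_cost

# Toy frames: a bright square moves 3 px right and 1 px down between frames
W = H = 24
ref = [[0] * W for _ in range(H)]
cur = [[0] * W for _ in range(H)]
for y in range(8, 16):
    for x in range(8, 16):
        ref[y][x] = 200
for y in range(9, 17):
    for x in range(11, 19):
        cur[y][x] = 200
mv, cost = full_search(ref, cur, bx=10, by=9)
print("motion vector:", mv, "SAD:", cost)
```

The returned vector points from the current block back to its best match in the reference frame; fast algorithms like Hierarchical Search approximate this same minimization while testing far fewer candidates.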

  3. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    D V A N Ravi Kumar

    2017-07-04

    In this paper, two novel methods based on the Estimate Merge Technique ... mentioned advantages of the proposed novel methods is shown by carrying out Monte Carlo simulation in ... equations are converted to sequential equations to make ... estimation error and low convergence time) at feasibly high.

  4. A Group Contribution Method for Estimating Cetane and Octane Numbers

    Energy Technology Data Exchange (ETDEWEB)

    Kubic, William Louis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Process Modeling and Analysis Group

    2016-07-28

    Much of the research on advanced biofuels is devoted to the study of novel chemical pathways for converting nonfood biomass into liquid fuels that can be blended with existing transportation fuels. Many compounds under consideration are not found in the existing fuel supplies. Often, the physical properties needed to assess the viability of a potential biofuel are not available. The only reliable information available may be the molecular structure. Group contribution methods for estimating physical properties from molecular structure have been used for more than 60 years. The most common application is estimation of thermodynamic properties. More recently, group contribution methods have been developed for estimating rate-dependent properties including cetane and octane numbers. Often, published group contribution methods are limited in terms of the types of functional groups and range of applicability. In this study, a new, broadly-applicable group contribution method based on an artificial neural network was developed to estimate cetane number, research octane number, and motor octane number of hydrocarbons and oxygenated hydrocarbons. The new method is more accurate over a greater range of molecular weights and structural complexity than existing group contribution methods for estimating cetane and octane numbers.
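At its simplest, a group contribution estimate is a weighted count of structural fragments. The contribution values below are invented purely for illustration; the paper's method replaces this linear sum with an artificial neural network trained on fitted data:

```python
# Hypothetical group contributions to a fuel property (illustrative values only)
GROUP_CONTRIB = {"CH3": 12.0, "CH2": 5.5, "CH": -3.0, "OH": -8.0}

def group_contribution_estimate(groups, base=0.0):
    """Linear group-contribution estimate:
    property = base + sum(count * contribution) over the molecule's groups."""
    return base + sum(GROUP_CONTRIB[g] * n for g, n in groups.items())

# An n-heptane-like skeleton: 2 terminal CH3 groups and 5 CH2 groups
print(group_contribution_estimate({"CH3": 2, "CH2": 5}))
```

A neural-network variant keeps the same inputs (group counts) but learns a nonlinear mapping, which is what lets it cover a wider range of molecular weights and structures.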

  5. Comparison of different methods in estimating potential evapotranspiration at Muda Irrigation Scheme of Malaysia

    Directory of Open Access Journals (Sweden)

    Sobri Harun

    2012-04-01

    Evapotranspiration (ET) is a complex process in the hydrological cycle that influences the quantity of runoff and thus the irrigation water requirements. Numerous methods have been developed to estimate potential evapotranspiration (PET). Unfortunately, most of the reliable PET methods are parameter-rich models and are therefore not feasible for application in data-scarce regions. On the other hand, the accuracy and reliability of simple PET models vary widely according to regional climate conditions. The objective of the present study was to evaluate the performance of three temperature-based and three radiation-based simple ET methods in estimating historical ET and projecting future ET at the Muda Irrigation Scheme at Kedah, Malaysia. The performance was measured by comparing those methods with the parameter-intensive Penman-Monteith method. It was found that radiation-based methods gave better performance than temperature-based methods in estimating ET in the study area. Future ET simulated from projected climate data obtained through a statistical downscaling technique also showed that radiation-based methods project ET values closer to those of the Penman-Monteith method. It is expected that the study will guide the selection of suitable methods for estimating and projecting ET according to the availability of meteorological data.
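As an example of the kind of temperature-based simple PET method compared in such studies, the Hargreaves-Samani equation needs only air temperatures and extraterrestrial radiation. The abstract does not name the three methods actually used, and the input values below are illustrative, not Muda-scheme measurements:

```python
def hargreaves_pet(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani (1985) reference evapotranspiration in mm/day.
    ra: extraterrestrial radiation expressed as equivalent evaporation (mm/day)."""
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# Illustrative tropical-lowland day (not measured data)
pet = hargreaves_pet(t_mean=27.5, t_max=33.0, t_min=24.0, ra=15.0)
print(f"Hargreaves PET ~ {pet:.2f} mm/day")
```

The contrast with Penman-Monteith is exactly the paper's point: the latter additionally requires humidity, wind speed, and net radiation, which are often unavailable in data-scarce regions.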

  6. One-loop helicity amplitudes for t anti t production at hadron colliders

    International Nuclear Information System (INIS)

    Badger, Simon; Yundin, Valery

    2011-01-01

    We present compact analytic expressions for all one-loop helicity amplitudes contributing to t anti t production at hadron colliders. Using recently developed generalised unitarity methods and a traditional Feynman based approach we produce a fast and flexible implementation. (ORIG.)

  7. One-loop helicity amplitudes for t anti t production at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Badger, Simon [The Niels Bohr International Academy and Discovery Center, Copenhagen (Denmark). Niels Bohr Inst.; Sattler, Ralf [Humboldt Univ. Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Yundin, Valery [Silesia Univ., Katowice (Poland). Inst. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2011-01-15

    We present compact analytic expressions for all one-loop helicity amplitudes contributing to t anti t production at hadron colliders. Using recently developed generalised unitarity methods and a traditional Feynman based approach we produce a fast and flexible implementation. (ORIG.)

  8. Novel Method for 5G Systems NLOS Channels Parameter Estimation

    Directory of Open Access Journals (Sweden)

    Vladeta Milenkovic

    2017-01-01

    For the development of new 5G systems to operate in mm-wave bands, there is a need for accurate radio propagation modelling at these bands. In this paper a novel approach for NLOS channel parameter estimation is presented. Estimation is performed based on the level crossing rate (LCR) performance measure, which enables propagation parameters to be estimated in real time and avoids the weaknesses of ML and moment-method estimation approaches.

  9. Unambiguous amplitude analysis of NN → ΔN transition from asymmetry measurements

    Energy Technology Data Exchange (ETDEWEB)

    Auger, J.P. [Université d'Orléans (France). Lab. de Physique Théorique; Lazard, C. [Paris-11 Univ., 91 - Orsay (France). Div. de Physique Théorique

    1997-12-31

    For particular Δ-production angles, an unambiguous determination of the NN → ΔN transition amplitudes is performed, from NN → (Nπ)N experiments, in which the polarization states are measured in the entrance channel only. A three-step method is developed, which determines, firstly, the magnitudes of the amplitudes, secondly, independent relative phases, and thirdly, some dependent relative phases for resolving the remaining discrete ambiguities. A rule of ambiguity elimination is applied, which is based on the closure of a chain of consecutive independent relative phases by means of the ad-hoc dependent one. A generalization of this rule is given for the case of a non-diagonal matrix connecting observables and bilinear combinations of amplitudes. (author) 18 refs.

  10. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
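The EnKF analysis step at the core of this design loop can be sketched for a single scalar parameter. The linear forward model, prior, and noise levels below are toy assumptions, and the paper's optimal-design layer (information metrics over candidate sampling strategies) is omitted:

```python
import random

def enkf_update(ensemble, obs, h, obs_var):
    """One perturbed-observation EnKF analysis step for a scalar parameter.
    h maps a parameter sample to a predicted measurement."""
    n = len(ensemble)
    preds = [h(x) for x in ensemble]
    mx = sum(ensemble) / n
    mp = sum(preds) / n
    # sample cross- and auto-covariances between parameter and prediction
    c_xy = sum((x - mx) * (p - mp) for x, p in zip(ensemble, preds)) / (n - 1)
    c_yy = sum((p - mp) ** 2 for p in preds) / (n - 1)
    gain = c_xy / (c_yy + obs_var)
    return [x + gain * (obs + random.gauss(0.0, obs_var ** 0.5) - p)
            for x, p in zip(ensemble, preds)]

random.seed(1)
true_k = 2.5                               # hidden parameter
forward = lambda k: 3.0 * k                # toy linear "flow model"
ens = [random.gauss(1.0, 1.0) for _ in range(200)]   # prior ensemble
for _ in range(5):                         # assimilate repeated noisy measurements
    y = forward(true_k) + random.gauss(0.0, 0.1)
    ens = enkf_update(ens, y, forward, obs_var=0.01)
print("posterior mean:", sum(ens) / len(ens))
```

In the paper's setting the design question is *which* measurements to take so that updates like this one are maximally informative; here every measurement observes the same quantity.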

  11. Top Quark Amplitudes with an Anomalous Magnetic Moment

    International Nuclear Information System (INIS)

    Larkoski, Andrew

    2011-01-01

    The anomalous magnetic moment of the top quark may be measured during the first run of the LHC at 7 TeV. For these measurements, it will be useful to have available tree amplitudes with t anti-t and arbitrarily many photons and gluons, including both QED and color anomalous magnetic moments. In this paper, we present a method for computing these amplitudes using the Britto-Cachazo-Feng-Witten recursion formula. Because we deal with an effective theory with higher-dimension couplings, there are roadblocks to a direct computation with the Britto-Cachazo-Feng-Witten method. We evade these by using an auxiliary scalar theory to compute a subset of the amplitudes.

  12. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions ... into the operation of the grid-connected power converters. This paper describes a quasi-passive method for estimating the line impedance of the distribution electricity network. The method uses the model-based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...

  13. Improved pion pion scattering amplitude from dispersion relation formalism

    International Nuclear Information System (INIS)

    Cavalcante, I.P.; Coutinho, Y.A.; Borges, J. Sa

    2005-01-01

    The pion-pion scattering amplitude is obtained from Chiral Perturbation Theory at the one- and two-loop approximations. The dispersion relation formalism provides a more economical method, which was proved to reproduce the analytical structure of that amplitude at both approximation levels. This work extends the use of the formalism to compute further unitarity corrections to partial waves, including the D-wave amplitude. (author)

  14. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method

    Science.gov (United States)

    Huang, Hening

    2018-01-01

    This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I has quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an ‘exact’ answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question to uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or if the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
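The numerical gap between the t-interval and a z-based interval at small n, which motivates the argument above, is easy to reproduce. The five measurements below are illustrative, and the paper's proposed mean-/median-unbiased uncertainty estimator is not implemented here, only the two interval half-widths it contrasts:

```python
import math
import statistics
from statistics import NormalDist

# n = 5 repeated measurements (illustrative values)
x = [10.1, 9.8, 10.3, 10.0, 9.9]
n = len(x)
s = statistics.stdev(x)        # sample standard deviation
sem = s / math.sqrt(n)         # standard error of the mean

t_crit = 2.776                 # tabulated t(0.975, df = 4)
z_crit = NormalDist().inv_cdf(0.975)

print(f"t-interval half-width: {t_crit * sem:.3f}")
print(f"z-interval half-width: {z_crit * sem:.3f}")
```

At n = 5 the t-based half-width is roughly 40% wider than the z-based one; this inflation at small sample sizes is precisely what the paper argues is a misuse of the t-interval for uncertainty estimation.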

  15. Examining the time dependence of DAMA's modulation amplitude

    Science.gov (United States)

    Kelso, Chris; Savage, Christopher; Sandick, Pearl; Freese, Katherine; Gondolo, Paolo

    2018-03-01

    If dark matter is composed of weakly interacting particles, Earth's orbital motion may induce a small annual variation in the rate at which these particles interact in a terrestrial detector. The DAMA collaboration has identified, at a 9.3σ confidence level, such an annual modulation in their event rate over two detector iterations, DAMA/NaI and DAMA/LIBRA, each with ~7 years of observations. These data are well fit by a constant modulation amplitude for the two iterations of the experiment. We statistically examine the time dependence of the modulation amplitudes, which "by eye" appear to be decreasing with time in certain energy ranges. We perform a chi-squared goodness-of-fit test of the average modulation amplitudes measured by the two detector iterations, which rejects the hypothesis of a consistent modulation amplitude at greater than 80%, 96%, and 99.6% for the 2-4, 2-5 and 2-6 keVee energy ranges, respectively. We also find that among the 14 annual cycles there are three ≳3σ departures from the average in our estimated data in the 5-6 keVee energy range. In addition, we examined several phenomenological models for the time dependence of the modulation amplitude. Using a maximum likelihood test, we find that descriptions of the modulation amplitude as decreasing with time are preferred over a constant modulation amplitude at anywhere between 1σ and 3σ, depending on the phenomenological model for the time dependence and the signal energy range considered. A time-dependent modulation amplitude is not expected for a dark matter signal, at least for dark matter halo morphologies consistent with the DAMA signal. New data from DAMA/LIBRA-phase2 will certainly aid in determining whether any apparent time dependence is a real effect or a statistical fluctuation.
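A consistency test of this kind can be sketched as a chi-squared statistic of measured amplitudes about their error-weighted mean. The two amplitudes and uncertainties below are invented stand-ins, not the DAMA measurements:

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean."""
    w = [1.0 / s ** 2 for s in sigmas]
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)

def chi2_consistency(values, sigmas):
    """Chi-squared of the points about their error-weighted mean (df = n - 1)."""
    m = weighted_mean(values, sigmas)
    return sum(((v - m) / s) ** 2 for v, s in zip(values, sigmas))

# Invented stand-in amplitudes for two detector iterations (NOT the DAMA values)
amps = [0.020, 0.011]      # modulation amplitude, arbitrary units
sigs = [0.003, 0.002]      # 1-sigma uncertainties
chi2 = chi2_consistency(amps, sigs)
crit_95 = 3.841            # tabulated chi2(0.95, df = 1)
verdict = "inconsistent" if chi2 > crit_95 else "consistent"
print(f"chi2 = {chi2:.2f} -> {verdict} at 95%")
```

For two points this reduces to (v1 - v2)² / (s1² + s2²), so the test is simply asking whether the two iterations' amplitudes differ by more than their combined uncertainty allows.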

  16. A robust method for estimating motorbike count based on visual information learning

    Science.gov (United States)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, the problem of separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem has gradually shifted toward drawing statistical inferences about target object density from shape [4], local features [5], etc. Those studies indicate a correlation between local features and the number of target objects. However, they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations, and that can achieve high accuracy in the presence of occlusions. Firstly, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from the local features above. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy than existing approaches.

  17. Sparse Multi-Pitch and Panning Estimation of Stereophonic Signals

    DEFF Research Database (Denmark)

    Kronvall, Ted; Jakobsson, Andreas; Hansen, Martin Weiss

    2016-01-01

    In this paper, we propose a novel multi-pitch estimator for stereophonic mixtures, allowing for pitch estimation on multi-channel audio even if the amplitude and delay panning parameters are unknown. The presented method does not require prior knowledge of the number of sources present in the mix...

  18. Forward Behavioral Modeling of a Three-Way Amplitude Modulator-Based Transmitter Using an Augmented Memory Polynomial

    Directory of Open Access Journals (Sweden)

    Jatin Chatrath

    2018-03-01

    Reconfigurable and multi-standard RF front-ends for wireless communication and sensor networks have gained importance as building blocks for the Internet of Things. Simpler and highly-efficient transmitter architectures, which can transmit better quality signals with reduced impairments, are an important step in this direction. In this regard, a mixer-less transmitter architecture, namely the three-way amplitude modulator-based transmitter, avoids the use of imperfect mixers and frequency up-converters, and their resulting distortions, leading to improved signal quality. In this work, an augmented memory polynomial-based model for the behavioral modeling of such a mixer-less transmitter architecture is proposed. Extensive simulations and measurements have been carried out to validate the accuracy of the proposed modeling strategy. The performance of the proposed model is evaluated using the normalized mean square error (NMSE) for long-term evolution (LTE) signals. The NMSE for an LTE signal of 1.4 MHz bandwidth with 100,000 samples is recorded as −36.41 dB for digital combining and −36.9 dB for analog combining. Similarly, for a 5 MHz signal the proposed model achieves −31.93 dB and −32.08 dB NMSE using digital and analog combining, respectively. For further validation of the proposed model, the amplitude-to-amplitude (AM-AM) and amplitude-to-phase (AM-PM) responses and the spectral response of the modeled and measured data are plotted, reasonably meeting the desired modeling criteria.
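A minimal (non-augmented) memory polynomial and the NMSE figure of merit can be sketched as follows; the coefficients and the complex baseband signal are synthetic, not the paper's transmitter data:

```python
import math
import random

def memory_polynomial(x, coeffs, M, K):
    """y[n] = sum_{m=0..M} sum_{k=1..K} coeffs[m][k-1] * x[n-m] * |x[n-m]|^(k-1),
    a standard memory polynomial on a complex baseband sequence x."""
    y = []
    for n in range(len(x)):
        acc = 0j
        for m in range(M + 1):
            if n - m < 0:
                continue
            xm = x[n - m]
            for k in range(1, K + 1):
                acc += coeffs[m][k - 1] * xm * abs(xm) ** (k - 1)
        y.append(acc)
    return y

def nmse_db(ref, est):
    """Normalized mean square error in dB between two complex sequences."""
    num = sum(abs(r - e) ** 2 for r, e in zip(ref, est))
    den = sum(abs(r) ** 2 for r in ref)
    return 10.0 * math.log10(num / den)

random.seed(7)
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
true_c = [[1.0, -0.05], [0.10, 0.0]]     # toy "transmitter" (M=1, K=2)
model_c = [[1.0, -0.05], [0.09, 0.0]]    # slightly mismatched behavioral model
y = memory_polynomial(x, true_c, M=1, K=2)
y_hat = memory_polynomial(x, model_c, M=1, K=2)
print(f"NMSE = {nmse_db(y, y_hat):.2f} dB")
```

The paper's augmented variant adds extra basis terms suited to the three-way modulator architecture; the NMSE metric used to score it is the one computed here.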

  19. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2013-01-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates

  20. An iterative stochastic ensemble method for parameter estimation of subsurface flow models

    KAUST Repository

    Elsheikh, Ahmed H.

    2013-06-01

    Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a black box. The proposed method employs directional derivatives within a Gauss-Newton iteration. The update equation in ISEM resembles the update step in the ensemble Kalman filter; however, the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage-based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of the utilized ensembles and in terms of error convergence rates. © 2013 Elsevier Inc.
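The ensemble-of-directional-derivatives idea can be sketched in a stripped-down form: estimate the gradient of the misfit by least squares over random directions, treating the simulator as a black box, then descend. This toy uses plain gradient descent on a 2-parameter quadratic misfit rather than the paper's regularized Gauss-Newton update:

```python
import random

def ensemble_gradient(f, x, n_dirs=30, eps=1e-3):
    """Estimate grad f(x) for 2 parameters from an ensemble of directional
    derivatives: solve D g ~ df in the least-squares sense (normal equations)."""
    fx = f(x)
    rows, rhs = [], []
    for _ in range(n_dirs):
        d = [random.gauss(0.0, 1.0) for _ in x]           # random direction
        xp = [xi + eps * di for xi, di in zip(x, d)]
        rows.append(d)
        rhs.append((f(xp) - fx) / eps)                    # directional derivative
    # 2x2 normal equations solved in closed form
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

random.seed(0)
misfit = lambda p: (p[0] - 1.0) ** 2 + 4.0 * (p[1] + 0.5) ** 2  # minimum at (1, -0.5)
x = [0.0, 0.0]
for _ in range(100):
    g = ensemble_gradient(misfit, x)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
print("estimated parameters:", x)
```

The key property carried over from ISEM is that only forward evaluations of `f` are needed, so no adjoint code has to be written for the simulator.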

  1. Boundary methods for mode estimation

    Science.gov (United States)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to those techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for both the MOG and k-means techniques is the Akaike Information Criterion (AIC).

  2. Data-driven method based on particle swarm optimization and k-nearest neighbor regression for estimating capacity of lithium-ion battery

    International Nuclear Information System (INIS)

    Hu, Chao; Jain, Gaurav; Zhang, Puqiang; Schmidt, Craig; Gomadam, Parthasarathy; Gorka, Tom

    2014-01-01

    Highlights: • We develop a data-driven method for battery capacity estimation. • Five charge-related features that are indicative of the capacity are defined. • The kNN regression model captures the dependency of the capacity on the features. • Results with 10 years’ continuous cycling data verify the effectiveness of the method. - Abstract: Reliability of lithium-ion (Li-ion) rechargeable batteries used in implantable medical devices has been recognized as of high importance by a broad range of stakeholders, including medical device manufacturers, regulatory agencies, physicians, and patients. To ensure that Li-ion batteries in these devices operate reliably, it is important to be able to assess the battery health condition by estimating the battery capacity over the life-time. This paper presents a data-driven method for estimating the capacity of a Li-ion battery based on the charge voltage and current curves. The contributions of this paper are three-fold: (i) the definition of five characteristic features of the charge curves that are indicative of the capacity, (ii) the development of a non-linear kernel regression model, based on k-nearest neighbor (kNN) regression, that captures the complex dependency of the capacity on the five features, and (iii) the adaptation of particle swarm optimization (PSO) to finding the optimal combination of feature weights for creating a kNN regression model that minimizes the cross-validation (CV) error in the capacity estimation. Verification with 10 years’ continuous cycling data suggests that the proposed method is able to accurately estimate the capacity of a Li-ion battery throughout its whole life-time.
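
A minimal sketch of the regression step (hypothetical two-feature data; the paper's five charge-curve features and its PSO weight search are not reproduced here, a plain weighted kNN and its cross-validation objective stand in):

```python
def knn_predict(x, train, weights, k=3):
    """Weighted-feature kNN regression: the capacity estimate is the mean
    capacity of the k training points closest in weighted feature space."""
    ranked = sorted(train, key=lambda p: sum(
        w * (a - b) ** 2 for w, a, b in zip(weights, p[0], x)))
    return sum(y for _, y in ranked[:k]) / k

def loocv_error(train, weights, k=3):
    """Leave-one-out cross-validation error: the objective that the paper's
    PSO would minimize over the feature weights."""
    total = 0.0
    for i, (x, y) in enumerate(train):
        rest = train[:i] + train[i + 1:]
        total += (knn_predict(x, rest, weights, k) - y) ** 2
    return total / len(train)
```

In the paper, PSO searches for the weight vector minimizing this CV error; any global optimizer could be plugged into this sketch in its place.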

  3. Estimating local noise power spectrum from a few FBP-reconstructed CT scans

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Rongping, E-mail: rongping.zeng@fda.hhs.gov; Gavrielides, Marios A.; Petrick, Nicholas; Sahiner, Berkman; Li, Qin; Myers, Kyle J. [Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, CDRH, FDA, Silver Spring, Maryland 20993 (United States)

    2016-01-15

    Purpose: Traditional ways to estimate the 2D CT noise power spectrum (NPS) involve an ensemble average of the power spectra of many noisy scans. When only a few scans are available, regions of interest are often extracted from different locations to obtain sufficient samples to estimate the NPS. Using image samples from different locations ignores the nonstationarity of CT noise and thus cannot accurately characterize its local properties. The purpose of this work is to develop a method to estimate the local NPS using only a few fan-beam CT scans. Methods: As a result of FBP reconstruction, the CT NPS has the same radial profile shape for all projection angles, with the magnitude varying with the noise level in the raw data measurement. This allows a 2D CT NPS to be factored into the product of a 1D angular and a 1D radial function in polar coordinates. The polar separability of the CT NPS greatly reduces the data requirement for estimating the NPS. The authors use this property and derive a radial NPS estimation method: in brief, the radial profile shape is estimated from a traditional NPS based on image samples extracted at multiple locations. The amplitudes are estimated by fitting the traditional local NPS to the estimated radial profile shape. The estimated radial profile shape and amplitudes are then combined to form a final estimate of the local NPS. The authors evaluate the accuracy of the radial NPS method and compare it to traditional NPS methods in terms of normalized mean squared error (NMSE) and signal detectability index. Results: For both simulated and real CT data sets, the local NPS estimated with no more than six scans using the radial NPS method was very close to the reference NPS, according to the metrics of NMSE and detectability index. Even with only two scans, the radial NPS method was able to achieve fairly good accuracy. Compared to those estimated using traditional NPS methods, the accuracy improvement was substantial when only a few scans were available.
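
The separability idea can be sketched in one dimension (synthetic radial profiles and amplitudes, not the authors' implementation): estimate a shared radial shape from several noisy local NPS estimates, then fit each location's amplitude to that shape by least squares.

```python
def radial_shape(local_nps):
    """Average the sum-normalized local NPS estimates into a shared radial
    profile shape, itself normalized to unit sum."""
    n = len(local_nps[0])
    shape = [0.0] * n
    for nps in local_nps:
        s = sum(nps)
        for j in range(n):
            shape[j] += nps[j] / s
    total = sum(shape)
    return [v / total for v in shape]

def fit_amplitude(nps, shape):
    """Least-squares scalar fit of one noisy local NPS to the shared shape."""
    return sum(a * b for a, b in zip(nps, shape)) / sum(b * b for b in shape)
```

Because all locations contribute to the single shape estimate, each local amplitude needs far fewer samples than a full local NPS would, which is the data saving the abstract describes.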

  4. Speed Control Analysis of Brushless DC Motor Based on Maximum Amplitude DC Current Feedback

    Directory of Open Access Journals (Sweden)

    Hassan M.A.A.

    2014-07-01

    Full Text Available This paper describes an approach to developing an accurate and simple current-controlled modulation technique for a brushless DC (BLDC) motor drive. The approach is applied to control the phase currents based on the generation of a quasi-square wave current, using only one current controller for the three phases. Unlike the vector control method, which is complicated to implement, this simple current modulation technique keeps the phase currents in balance and controls the current through a single DC signal, which represents the maximum amplitude of the trapezoidal current (Imax). The technique is implemented with a Proportional Integral (PI) control algorithm and the triangular carrier comparison method to generate the Pulse Width Modulation (PWM) signal. In addition, a PI speed controller is incorporated with the current controller to achieve a desirable non-overshoot speed response. The performance and functionality of the BLDC motor driver are verified via simulation in MATLAB/SIMULINK. The simulation results show that the developed control system achieves the desired non-overshoot speed response and good current waveforms.
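
A bare-bones sketch of the two pieces named above, PI control and triangular-carrier comparison (all gains and the carrier period are invented; a real drive would run this per switching interval against the measured DC-link current):

```python
def pi_step(err, state, kp=0.8, ki=20.0, dt=1e-4, lo=0.0, hi=1.0):
    """Discrete PI update with simple anti-windup clamping on the integrator."""
    state["i"] = max(lo, min(hi, state["i"] + ki * err * dt))
    return max(lo, min(hi, kp * err + state["i"]))

def carrier(t, period=1e-3):
    """Unit-amplitude triangular carrier in [0, 1]."""
    x = (t % period) / period
    return 2 * x if x < 0.5 else 2 * (1 - x)

def pwm_gate(u, t):
    """Triangular-carrier comparison: gate is on while u exceeds the carrier."""
    return 1 if u > carrier(t) else 0
```

Comparing a constant command u against the carrier yields a duty cycle equal to u, which is the mechanism that shapes the quasi-square phase currents from the single Imax command.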

  5. Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels

    Directory of Open Access Journals (Sweden)

    Petrella Angelo

    2010-01-01

    Full Text Available The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed-form approximate ML (AML) CFO estimator is obtained. By exploiting these closed-form solutions, a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for the AWGN channel and by a previously derived joint estimator for OFDM systems.
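
One standard data-aided building block, shown here only for orientation (this is the classic repeated-preamble correlator, not the authors' joint AML estimator), has exactly the kind of closed form the abstract refers to: the CFO estimate is the phase of a half-preamble correlation.

```python
import cmath

def cfo_from_preamble(rx, half):
    """CFO (normalized to the sampling rate) from a preamble whose second half
    repeats the first: the phase of the half-lag correlation equals
    2*pi*cfo*half, so the estimate is unambiguous for |cfo| < 1/(2*half)."""
    corr = sum(rx[n + half] * rx[n].conjugate() for n in range(half))
    return cmath.phase(corr) / (2 * cmath.pi * half)
```

The correlation averages over the preamble, so in noise the estimator's variance shrinks with the preamble length, at the price of the reduced acquisition range noted in the docstring.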

  6. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage, unlike operational reliability: there may be initial failures, which are normally neglected in storage reliability models. In this paper, a new integrated technique is proposed, in which a non-parametric measure based on E-Bayesian estimates of the current failure probabilities is combined with a parametric measure based on the exponential reliability function, to estimate and predict the storage reliability of products with possible initial failures. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of storage products, including products unexamined before and during the storage process, are available, providing more accurate estimates of both the initial failure probability and the storage failure probability. For storage reliability prediction, which is the main concern in this field, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison with the traditional method examines the rationality of the assessment and prediction of storage reliability. The results should be useful for planning a storage environment, making decisions concerning the maximum length of storage, and identifying production quality.
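
The parametric half of such a scheme can be sketched as follows (synthetic numbers; the paper's E-Bayesian estimator is replaced here by plain reliability fractions): fit ln R(t) = ln R0 - lam*t by least squares, so that an intercept R0 < 1 captures possible initial failures.

```python
import math

def fit_storage_reliability(times, rel):
    """Least-squares fit of ln R(t) = ln R0 - lam * t. An intercept R0 < 1
    captures possible initial failures at the start of storage; `rel` are
    nonparametric reliability estimates at each testing time."""
    n = len(times)
    ys = [math.log(r) for r in rel]
    xbar = sum(times) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in times)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(times, ys))
    lam = -sxy / sxx
    r0 = math.exp(ybar + lam * xbar)
    return r0, lam
```

Extrapolating the fitted R(t) beyond the last testing time is the prediction step the abstract emphasizes.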

  7. Higher-Twist Distribution Amplitudes of the K Meson in QCD

    CERN Document Server

    Ball, P; Lenz, A; Ball, Patricia

    2006-01-01

    We present a systematic study of twist-3 and twist-4 light-cone distribution amplitudes of the K meson in QCD. The structure of SU(3)-breaking corrections is studied in detail. Non-perturbative input parameters are estimated from QCD sum rules and renormalons. As a by-product, we give a complete reanalysis of the twist-3 and -4 parameters of the pi-meson distribution amplitudes; some of the results differ from those usually quoted in the literature.

  8. Estimating the impact of extreme events on crude oil price. An EMD-based event analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xun; Wang, Shouyang [Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); School of Mathematical Sciences, Graduate University of Chinese Academy of Sciences, Beijing 100190 (China); Yu, Lean [Institute of Systems Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China); Lai, Kin Keung [Department of Management Sciences, City University of Hong Kong, Tat Chee Avenue, Kowloon (China)

    2009-09-15

    The impact of extreme events on crude oil markets is of great importance in crude oil price analysis due to the fact that those events generally exert a strong impact on crude oil markets. For better estimation of the impact of events on crude oil price volatility, this study attempts to use an EMD-based event analysis approach for this task. In the proposed method, the time series to be analyzed is first decomposed into several intrinsic modes with different time scales, from fine to coarse, and an average trend. The decomposed modes respectively capture the fluctuations caused by the extreme event or by other factors during the analyzed period. It is found that the total impact of an extreme event is included in only one or several dominant modes, but the secondary modes provide valuable information on subsequent factors. For overlapping events with influences lasting for different periods, their impacts are separated and located in different modes. For illustration and verification purposes, two extreme events, the Persian Gulf War in 1991 and the Iraq War in 2003, are analyzed step by step. The empirical results reveal that the EMD-based event analysis method provides a feasible solution to estimating the impact of extreme events on crude oil price variation. (author)
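
A stripped-down version of the sifting step EMD is built on can be sketched as follows (linear envelopes instead of the usual cubic splines, and a single sifting pass instead of the full iteration; purely illustrative):

```python
def extrema(x):
    """Indices of interior local maxima and minima."""
    mx, mn = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] > x[i + 1]:
            mx.append(i)
        if x[i] < x[i - 1] and x[i] < x[i + 1]:
            mn.append(i)
    return mx, mn

def envelope(idx, vals, n):
    """Piecewise-linear envelope through (idx, vals), held flat at the ends.
    (Standard EMD uses cubic splines; linear keeps the sketch short.)"""
    pts = [(0, vals[0])] + list(zip(idx, vals)) + [(n - 1, vals[-1])]
    env, k = [0.0] * n, 0
    for i in range(n):
        while pts[k + 1][0] < i:
            k += 1
        (x0, y0), (x1, y1) = pts[k], pts[k + 1]
        env[i] = y0 if x1 == x0 else y0 + (y1 - y0) * (i - x0) / (x1 - x0)
    return env

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower envelopes,
    leaving a first candidate intrinsic mode and pushing the trend out."""
    mx, mn = extrema(x)
    up = envelope(mx, [x[i] for i in mx], len(x))
    lo = envelope(mn, [x[i] for i in mn], len(x))
    return [xi - (u + l) / 2.0 for xi, u, l in zip(x, up, lo)]
```

Repeating the sift and peeling off successive modes yields the fine-to-coarse decomposition the abstract describes, with the residual trend left over at the end.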

  9. Estimating the impact of extreme events on crude oil price. An EMD-based event analysis method

    International Nuclear Information System (INIS)

    Zhang, Xun; Wang, Shouyang; Yu, Lean; Lai, Kin Keung

    2009-01-01

    The impact of extreme events on crude oil markets is of great importance in crude oil price analysis due to the fact that those events generally exert a strong impact on crude oil markets. For better estimation of the impact of events on crude oil price volatility, this study attempts to use an EMD-based event analysis approach for this task. In the proposed method, the time series to be analyzed is first decomposed into several intrinsic modes with different time scales, from fine to coarse, and an average trend. The decomposed modes respectively capture the fluctuations caused by the extreme event or by other factors during the analyzed period. It is found that the total impact of an extreme event is included in only one or several dominant modes, but the secondary modes provide valuable information on subsequent factors. For overlapping events with influences lasting for different periods, their impacts are separated and located in different modes. For illustration and verification purposes, two extreme events, the Persian Gulf War in 1991 and the Iraq War in 2003, are analyzed step by step. The empirical results reveal that the EMD-based event analysis method provides a feasible solution to estimating the impact of extreme events on crude oil price variation. (author)

  10. δ-Generalized Labeled Multi-Bernoulli Filter Using Amplitude Information of Neighboring Cells

    Directory of Open Access Journals (Sweden)

    Chao Liu

    2018-04-01

    Full Text Available The amplitude information (AI) of echoed signals plays an important role in radar target detection and tracking. A large body of research shows that introducing AI enables a tracking algorithm to distinguish targets from clutter better and thus improves the performance of data association. Current AI-aided tracking algorithms only consider the signal amplitude in the range-azimuth cell where the measurement exists. However, since radar echoes always contain backscattered signals from multiple cells, the useful information of neighboring cells would be lost if those existing methods were applied directly. To solve this issue, a new δ-generalized labeled multi-Bernoulli (δ-GLMB) filter is proposed. It exploits the AI of radar echoes from neighboring cells to construct a united amplitude likelihood ratio, and then plugs it into the update process and the measurement-track assignment cost matrix of the δ-GLMB filter. Simulation results show that the proposed approach performs better in estimating the target states and number than the δ-GLMB using only single-cell AI in low signal-to-clutter-ratio (SCR) environments.
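
The single-cell ingredient can be sketched with the textbook Swerling-I amplitude likelihood ratio for Rayleigh-normalized clutter (the SNR values are illustrative, and forming the "united" ratio as a plain product over cells assumes independent cells; the paper's construction is more involved):

```python
import math

def cell_ratio(a, snr):
    """Amplitude likelihood ratio for one cell: Swerling-I target-plus-clutter
    vs. Rayleigh clutter, for a clutter-normalized amplitude a."""
    return math.exp(a * a / 2.0 * snr / (1.0 + snr)) / (1.0 + snr)

def united_ratio(amps, snrs):
    """Combine the target cell and its neighbors (independence assumed)."""
    r = 1.0
    for a, s in zip(amps, snrs):
        r *= cell_ratio(a, s)
    return r
```

Ratios above one favor the target hypothesis during data association; weak amplitudes pull the ratio below one, which is how AI helps suppress clutter-origin measurements.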

  11. A new method for measuring the amplitude of de Haas-van Alphen oscillations

    International Nuclear Information System (INIS)

    Wilde, J. de; Meredith, D.J.

    1975-01-01

    Quantum (dHvA) oscillations in the diamagnetic susceptibility of a metal at low temperatures are usually studied by a torque balance or by the field modulation technique of Shoenberg and Stiles. A new method of measuring dHvA amplitudes in indium using a superconducting flux transformer and a ferrite core flux gate magnetometer is reported. The magnitude of the magnetization is typically 10{sup -6} T at 1 K, which is considerably greater than the minimum detectable signal of the magnetometer; shielding the sensor from the magnetizing field of up to 4 T is the main experimental problem. (Auth.)

  12. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy of conventional conjugate view methods is limited by the overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image-degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates from the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and with a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millennium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of {sup 111}In ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method, with accurate modelling of the physical factors, increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based on ML estimation with accurate modelling of the image-degrading factors, provides quantitative accuracy approaching that of QSPECT at a much lower acquisition and computation cost.
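
The ML estimation step can be illustrated with a tiny ML-EM iteration on an invented 3-pixel, 2-organ system matrix (the paper's projector models attenuation, scatter and organ overlap; here P is simply made up):

```python
def mlem(P, y, n_iter=300):
    """ML-EM estimates of organ activities a from projections y ~ Poisson(P a);
    P[i][k] is the contribution of organ k to pixel i."""
    n_pix, n_org = len(P), len(P[0])
    a = [1.0] * n_org
    sens = [sum(P[i][k] for i in range(n_pix)) for k in range(n_org)]
    for _ in range(n_iter):
        proj = [sum(P[i][k] * a[k] for k in range(n_org)) for i in range(n_pix)]
        a = [a[k] * sum(P[i][k] * y[i] / proj[i] for i in range(n_pix)) / sens[k]
             for k in range(n_org)]
    return a
```

Each iteration compares measured to predicted projections and rescales the activities, which is the multiplicative update that makes the estimates consistent with the modelled physics.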

  13. Nonlinear (super)symmetries and amplitudes

    Energy Technology Data Exchange (ETDEWEB)

    Kallosh, Renata [Physics Department, Stanford University,382 Via Pueblo Mall, Stanford, CA 94305-4060 (United States)

    2017-03-07

    There is an increasing interest in nonlinear supersymmetries in cosmological model building. Independently, elegant expressions for the all-tree amplitudes in models with nonlinear symmetries, like D3 brane Dirac-Born-Infeld-Volkov-Akulov theory, were recently discovered. Using the generalized background field method we show how, in general, nonlinear symmetries of the action, bosonic and fermionic, constrain amplitudes beyond soft limits. The same identities control, for example, bosonic E{sub 7(7)} scalar sector symmetries as well as the fermionic goldstino symmetries. We present a universal derivation of the vanishing amplitudes in the single (bosonic or fermionic) soft limit. We explain why, universally, the double-soft limit probes the coset space algebra. We also provide identities describing the multiple-soft limit. We discuss loop corrections to N≥5 supergravity, to the D3 brane, and the UV completion of constrained multiplets in string theory.

  14. GENERAL APPROACH TO MODELING NONLINEAR AMPLITUDE- AND FREQUENCY-DEPENDENT HYSTERESIS EFFECTS BASED ON EXPERIMENTAL RESULTS

    Directory of Open Access Journals (Sweden)

    Christopher Heine

    2014-08-01

    Full Text Available A detailed description of the properties of rubber parts is gaining in importance in current multi-body simulation models. One application example is a multi-body simulation of washing machine movement. Inside the washing machine, there are different force transmission elements which consist completely or partly of rubber. Rubber parts or, generally, elastomers usually have amplitude-dependent and frequency-dependent force transmission properties. Rheological models are used to describe these properties. A method for characterizing the amplitude and frequency dependence of such a rheological model is presented in this paper. Within this method, the rheological model can be reduced or expanded in order to capture various non-linear effects. An original result is the automated parameter identification, which is fully implemented in Matlab. The identified rheological models are intended for subsequent implementation in a multi-body model, which allows a significant enhancement of the overall model quality.

  15. Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm

    International Nuclear Information System (INIS)

    Kulis, S.; Idzik, M.

    2011-01-01

    In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (~500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method the output of a relatively slow shaper (spanning many bunch crossing periods) is sampled and digitised in an ADC, and the deconvolution procedure is then applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The theoretical and Monte Carlo studies performed are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, an ADC and further digital processing implemented on a PC. (author)
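
For a single-pole (RC) shaper the idea reduces to a two-tap inverse filter, sketched below with invented pulses (the paper's shaper and deconvolution weights differ; this only shows the mechanism of recovering event time and amplitude from slow shaped samples, including pile-up):

```python
import math

def shaped(events, n, tau=8.0):
    """Samples at the output of a single-pole (RC) shaper for a list of
    (time, amplitude) events; a stand-in for the real front-end."""
    x = [0.0] * n
    for t0, amp in events:
        for t in range(t0, n):
            x[t] += amp * math.exp(-(t - t0) / tau)
    return x

def deconvolve(x, tau=8.0):
    """Two-tap inverse filter for the single-pole shaper: each nonzero output
    sample directly gives the time (index) and amplitude of an event."""
    a = math.exp(-1.0 / tau)
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]
```

Even two overlapping pulses are separated exactly in this idealized setting, which is why deconvolution allows triggerless operation despite a shaping time spanning many bunch crossings.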

  16. N-loop string amplitude

    International Nuclear Information System (INIS)

    Mandelstam, S.

    1986-06-01

    Work on the derivation of an explicit perturbation series for string and superstring amplitudes is reviewed. The light-cone approach is emphasized, but some work on the Polyakov approach is also mentioned, and the two methods are compared. The calculation of the measure factor is outlined in the interacting-string picture

  17. Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation

    Science.gov (United States)

    Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou

    2018-06-01

    Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than fast Fourier transform (FFT) based methods, provided the data length is not too short. It enables about a threefold improvement over FFT at a moderate spatial resolution.
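
A minimal AR(2) version of the idea (Yule-Walker on a synthetic tone; a real BOTDR trace would need model-order selection and noise handling):

```python
import math

def ar2_frequency(x):
    """Fit an AR(2) model by Yule-Walker and return the normalized frequency
    of the spectral peak implied by the AR coefficients."""
    n = len(x)
    def r(k):
        return sum(x[i] * x[i + k] for i in range(n - k)) / (n - k)
    r0, r1, r2 = r(0), r(1), r(2)
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r1 * r2) / det
    # For a resonance, a1 = 2 |pole| cos(w); with |pole| near 1 the peak
    # frequency follows from a1 alone.
    return math.acos(max(-1.0, min(1.0, a1 / 2.0))) / (2 * math.pi)
```

Because the peak location comes from the model poles rather than from a gridded periodogram, the estimate is not limited to FFT bin spacing, which is the source of the accuracy gain the abstract reports.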

  18. Brillouin Scattering Spectrum Analysis Based on Auto-Regressive Spectral Estimation

    Science.gov (United States)

    Huang, Mengyun; Li, Wei; Liu, Zhangyun; Cheng, Linghao; Guan, Bai-Ou

    2018-03-01

    Auto-regressive (AR) spectral estimation is proposed to analyze the Brillouin scattering spectrum in Brillouin optical time-domain reflectometry. It is shown that the AR-based method can reliably estimate the Brillouin frequency shift with an accuracy much better than fast Fourier transform (FFT) based methods, provided the data length is not too short. It enables about a threefold improvement over FFT at a moderate spatial resolution.

  19. A simple method to estimate interwell autocorrelation

    Energy Technology Data Exchange (ETDEWEB)

    Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)

    1997-08-01

    The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
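
For reference, two of the three semivariogram models named above can be written directly (the truncated fractal variant is omitted; the exponential model uses the common "practical range" convention):

```python
import math

def spherical(h, rng, sill):
    """Spherical semivariogram: rises to the sill exactly at the range."""
    if h >= rng:
        return sill
    u = h / rng
    return sill * (1.5 * u - 0.5 * u ** 3)

def exponential(h, rng, sill):
    """Exponential semivariogram, 'practical range' convention
    (reaches about 95% of the sill at h = rng)."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))
```

Curves like these, evaluated in the vertical and lateral directions, are the ingredients behind the estimation charts relating the areal-to-vertical variance ratio to the autocorrelation ranges.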

  20. An energy estimation framework for event-based methods in Non-Intrusive Load Monitoring

    International Nuclear Information System (INIS)

    Giri, Suman; Bergés, Mario

    2015-01-01

    Highlights: • Energy estimation in NILM has not yet accounted for the complexity of appliance models. • We present a data-driven framework for appliance modeling in supervised NILM. • We test the framework on 3 houses and report average accuracies of 5.9–22.4%. • Appliance models facilitate the estimation of the energy consumed by the appliance. - Abstract: Non-Intrusive Load Monitoring (NILM) is a set of techniques used to estimate the electricity consumed by individual appliances in a building from measurements of the total electrical consumption. Most commonly, NILM works by first attributing any significant change in the total power consumption (also known as an event) to a specific load and subsequently using these attributions (i.e., the labels for the events) to estimate the energy consumed by each load. For this last step, most published work in the field makes simplifying assumptions to make the problem more tractable. In this paper, we present a framework for creating appliance models based on classification labels and aggregate power measurements that can help to relax many of these assumptions. Our framework automatically builds models for appliances to perform energy estimation. The model relies on feature extraction, clustering via affinity propagation, perturbation of extracted states to ensure that they mimic appliance behavior, creation of finite state models, correction of any errors in classification that might violate the model, and estimation of energy based on the corrected labels. We evaluate our framework on 3 houses from standard datasets in the field and show that the framework can learn data-driven models based on event labels and use them to estimate energy with lower error margins (e.g., 1.1–42.3%) than when using the heuristic models used by others.
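
The final step, turning corrected event labels into per-appliance energy, can be sketched with a toy event log (hypothetical appliances and timestamps; the paper's feature extraction, clustering and finite-state models sit upstream of this):

```python
def energy_from_events(events, t_end):
    """Integrate per-appliance energy (watt-seconds) from labeled events,
    each event being (time, appliance, delta_watts)."""
    cur, energy, last_t = {}, {}, 0.0
    for t, app, dp in sorted(events):
        # accrue energy for every running appliance up to this event
        for a, p in cur.items():
            energy[a] = energy.get(a, 0.0) + p * (t - last_t)
        cur[app] = cur.get(app, 0.0) + dp
        last_t = t
    for a, p in cur.items():
        energy[a] = energy.get(a, 0.0) + p * (t_end - last_t)
    return energy
```

Errors in the event labels propagate directly into these integrals, which is why the framework's label-correction step matters for the final energy figures.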

  1. V and V based Fault Estimation Method for Safety-Critical Software using BNs

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Park, Gee Yong; Jang, Seung Cheol; Kang, Hyun Gook

    2011-01-01

    Quantitative software reliability measurement approaches have severe limitations in demonstrating the proper level of reliability for safety-critical software. These limitations can be overcome by using other means of assessment, one of the most promising being assessment based on the quality of the software development process. Particularly in the nuclear industry, regulatory bodies in most countries do not accept the concept of quantitative goals as a sole means of meeting their regulations for the reliability of digital computers in NPPs, and use deterministic criteria for both hardware and software. The point of deterministic criteria is to assess the whole development process and its related activities during the software development life cycle for the acceptance of safety-critical software, and software V and V plays an important role in this process. In this light, we studied a V and V based fault estimation method using Bayesian Nets (BNs) to assess the reliability of safety-critical software, especially reactor protection system software in an NPP. The BNs in this study were built to estimate software faults and were based on the V and V frame, which governs the development of safety-critical software in the nuclear field. A case study was carried out for a reactor protection system that was developed as a part of the Korea Nuclear Instrumentation and Control System. The insight from the case study is that the important factors affecting the fault count of the target software include the residual faults in the system specification, the maximum number of faults introduced in the development phase, the ratio between process and function characteristics, uncertainty sizing, and the fault elimination rate of inspection activities.

  2. Fusion rule estimation using vector space methods

    International Nuclear Information System (INIS)

    Rao, N.S.V.

    1997-01-01

    In a system of N sensors, sensor S_j, j = 1, 2, ..., N, outputs Y^(j) ∈ R according to an unknown probability distribution P(Y^(j) | X), corresponding to input X ∈ [0, 1]. A training n-sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is given, where Y_i = (Y_i^(1), Y_i^(2), ..., Y_i^(N)) and Y_i^(j) is the output of S_j in response to input X_i. The problem is to estimate a fusion rule f : R^N → [0, 1], based on the sample, such that the expected square error is minimized over a family of functions F that constitute a vector space. The function f* that minimizes the expected error cannot be computed since the underlying densities are unknown, and only an approximation f to f* is feasible. We estimate the sample size sufficient to ensure that f provides a close approximation to f* with a high probability. The advantages of vector space methods are two-fold: (a) the sample size estimate is a simple function of the dimensionality of F, and (b) the estimate f can be easily computed by well-known least squares methods in polynomial time. The results are applicable to the classical potential function methods and also to a recently proposed special class of sigmoidal feedforward neural networks.
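
A least-squares fit of a linear fusion rule, one concrete choice of the vector space F (the sensor models in the test data are invented), can be sketched as:

```python
def fit_linear_fusion(samples):
    """Least-squares fit of a linear fusion rule f(y) = w0 + sum_j wj * y_j
    over training pairs (y, x), via the normal equations."""
    rows = [[1.0] + list(y) for y, _ in samples]
    targets = [x for _, x in samples]
    m = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    b = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for c in range(m):
        p = max(range(c, m), key=lambda q: abs(A[q][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for q in range(c + 1, m):
            f = A[q][c] / A[c][c]
            for j in range(c, m):
                A[q][j] -= f * A[c][j]
            b[q] -= f * b[c]
    w = [0.0] * m
    for c in range(m - 1, -1, -1):
        w[c] = (b[c] - sum(A[c][j] * w[j] for j in range(c + 1, m))) / A[c][c]
    return w

def fuse(w, y):
    """Apply a fitted linear fusion rule to a sensor output vector y."""
    return w[0] + sum(wi * yi for wi, yi in zip(w[1:], y))
```

This polynomial-time computability of the estimate is exactly advantage (b) claimed in the abstract; richer vector spaces F just add more basis functions as columns.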

  3. Variable aperture-based ptychographical iterative engine method

    Science.gov (United States)

    Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-02-01

    A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step, and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since many fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; therefore, the proposed technique can potentially be applied in various fields of scientific research.

  4. PyPWA: A partial-wave/amplitude analysis software framework

    Science.gov (United States)

    Salgado, Carlos

    2016-05-01

    The PyPWA project aims to develop a software framework for Partial Wave and Amplitude Analysis of data, providing the user with software tools to identify resonances from multi-particle final states in photoproduction. Most of the code is written in Python. The software is divided into two main branches: one general shell in which amplitude parameters (or any parametric model) are estimated from the data; this branch also includes software to produce simulated data sets using the fitted amplitudes. The second branch contains a specific realization of the isobar model (with room to include Deck-type and other isobar-model extensions) to perform PWA with an interface into the computer resources at Jefferson Lab. We are currently implementing parallelism and vectorization using Intel's Xeon Phi family of coprocessors.

  5. Comparison of methods for estimating herbage intake in grazing dairy cows

    DEFF Research Database (Denmark)

    Hellwing, Anne Louise Frydendahl; Lund, Peter; Weisbjerg, Martin Riis

    2015-01-01

    Estimation of herbage intake is a challenge both under practical and experimental conditions. The aim of this study was to estimate herbage intake with different methods for cows grazing 7 h daily on either spring or autumn pastures. In order to generate variation between cows, the 20 cows per...... season, and the herbage intake was estimated twice during each season. Cows were on pasture from 8:00 until 15:00, and were subsequently housed inside and fed a mixed ration (MR) based on maize silage ad libitum. Herbage intake was estimated with nine different methods: (1) animal performance (2) intake...

  6. Attitude tracking control of flexible spacecraft with large amplitude slosh

    Science.gov (United States)

    Deng, Mingle; Yue, Baozeng

    2017-12-01

    This paper is focused on attitude tracking control of a spacecraft that is equipped with a flexible appendage and a partially filled liquid propellant tank. The large amplitude liquid slosh is included by using a moving pulsating ball model that is further improved to estimate the settling location of liquid in a microgravity or zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modal method is employed. A hybrid controller that combines sliding mode control with an adaptive algorithm is designed for the spacecraft to perform attitude tracking. The proposed controller is proved to be asymptotically stable. A nonlinear model for the overall coupled system including spacecraft attitude dynamics, liquid slosh, structural vibration and control action is established. Numerical simulation results are presented to show the dynamic behaviors of the coupled system and to verify the effectiveness of the control approach when the spacecraft undergoes the disturbance produced by large amplitude slosh and appendage vibration. Lastly, the designed adaptive algorithm is found to be effective in improving the precision of attitude tracking.
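
    The flavor of a sliding-mode tracking law can be conveyed with a minimal single-axis sketch. The unit inertia, the gains, and the tanh boundary layer below are assumptions for illustration only, not the paper's hybrid adaptive controller for the full coupled model:

```python
import numpy as np

def smc_torque(theta, omega, theta_ref, omega_ref, lam=2.0, k=5.0, eps=0.01):
    """Single-axis sliding-mode control torque for unit inertia (illustrative).

    Sliding surface s = de + lam*e; with u = -k*tanh(s/eps) - lam*de the
    closed loop gives s' = -k*tanh(s/eps), driving s (and then e) to zero.
    """
    e, de = theta - theta_ref, omega - omega_ref
    s = de + lam * e
    return -k * np.tanh(s / eps) - lam * de
```

    The tanh term replaces the discontinuous sign function with a boundary layer, a common choice to suppress chattering.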

  7. Time-domain hybrid method for simulating large amplitude motions of ships advancing in waves

    Directory of Open Access Journals (Sweden)

    Shukui Liu

    2011-03-01

    Full Text Available Typical results obtained by a newly developed, nonlinear time-domain hybrid method for simulating large amplitude motions of ships advancing with constant forward speed in waves are presented. The method is hybrid in that it combines a time-domain transient Green function method and a Rankine source method. The present approach employs a simple double integration algorithm with respect to time to simulate the free-surface boundary condition. During the simulation, the diffraction and radiation forces are computed by pressure integration over the mean wetted surface, whereas the incident wave and hydrostatic restoring forces/moments are calculated on the instantaneously wetted surface of the hull. Typical numerical results of applying the method to the seakeeping performance of a standard containership, namely the ITTC S175, are herein presented. Comparisons have been made between the results from the present method, the frequency-domain 3D panel method (NEWDRIFT) of NTUA-SDL, and available experimental data, and good agreement has been observed for all studied cases.

  8. ANFIS-Based Modeling for Photovoltaic Characteristics Estimation

    Directory of Open Access Journals (Sweden)

    Ziqiang Bi

    2016-09-01

    Full Text Available Due to the high cost of photovoltaic (PV) modules, an accurate performance estimation method is significantly valuable for studying the electrical characteristics of PV generation systems. Conventional analytical PV models are usually composed of nonlinear exponential functions, and a good number of unknown parameters must be identified before use. In this paper, an adaptive-network-based fuzzy inference system (ANFIS)-based modeling method is proposed to predict the current-voltage characteristics of PV modules. The effectiveness of the proposed modeling method is evaluated through comparison with Villalva’s model, a radial basis function neural network (RBFNN)-based model and a support vector regression (SVR)-based model. Simulation and experimental results confirm both the feasibility and the effectiveness of the proposed method.

  9. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.

  10. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  11. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  12. Methods to estimate breeding values in honey bees

    NARCIS (Netherlands)

    Brascamp, E.W.; Bijma, P.

    2014-01-01

    Background Efficient methodologies based on animal models are widely used to estimate breeding values in farm animals. These methods are not applicable in honey bees because of their mode of reproduction. Observations are recorded on colonies, which consist of a single queen and thousands of workers

  13. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    Full Text Available It is not known how accurate any of the numerous previously proposed methods for predicting pile capacity are when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen’s method (1963), Chin’s method (1970), Decourt’s Extrapolation Method (1999), Mazurkiewicz’s method (1972), Van der Veen’s method (1953), and the Quadratic Hyperbolic Method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests that were loaded to failure. However, when applied to data from pile loading tests that were not loaded to failure, the methods that yielded lower values for the correction factor N are more recommended. Finally, the empirical method of Reese and O’Neill (1988) was found to be reliable enough to be used to estimate the Qult of a pile foundation based on soil data only.
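
    One of the listed methods, Chin's (1970) extrapolation, is easy to sketch: for a hyperbolic load-settlement curve, s/Q plotted against s is a straight line whose inverse slope is the ultimate capacity. The load-test numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical load-test record: settlement s (mm) versus applied load Q (kN),
# generated from a hyperbola Q = s / (C1*s + C2), for which Q_ult = 1/C1.
C1, C2 = 1.0 / 2500.0, 0.002            # assumed values: Q_ult = 2500 kN
s = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 18.0, 25.0])
Q = s / (C1 * s + C2)

# Chin's method: fit a line to s/Q versus s; the ultimate capacity is the
# inverse of the fitted slope, extrapolating beyond the maximum test load.
slope, intercept = np.polyfit(s, s / Q, 1)
Q_ult = 1.0 / slope
```

    The appeal of the method is exactly what the abstract discusses: it predicts Q_ult even when the test was stopped before the pile reached failure.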

  14. A novel method of methanol concentration control through feedback of the amplitudes of output voltage fluctuations for direct methanol fuel cells

    International Nuclear Information System (INIS)

    An, Myung-Gi; Mehmood, Asad; Hwang, Jinyeon; Ha, Heung Yong

    2016-01-01

    This study proposes a novel method for controlling the methanol concentration without using methanol sensors for DMFC (direct methanol fuel cell) systems that have a recycling methanol-feed loop. This method utilizes the amplitudes of output voltage fluctuations of DMFC as a feedback parameter to control the methanol concentration. The relationship between the methanol concentrations and the amplitudes of output voltage fluctuations is correlated under various operating conditions and, based on the experimental correlations, an algorithm to control the methanol concentration with no sensor is established. Feasibility tests of the algorithm have been conducted under various operating conditions including varying ambient temperature with a 200 W-class DMFC system. It is demonstrated that the sensor-less controller is able to control the methanol-feed concentration precisely and to run the DMFC systems more energy-efficiently as compared with other control systems. - Highlights: • A new sensor-less algorithm is proposed to control the methanol concentration without using a sensor. • The algorithm utilizes the voltage fluctuations of DMFC as a feedback parameter to control the methanol feed concentration. • A 200 W DMFC system is operated to evaluate the validity of the sensor-less algorithm. • The algorithm successfully controls the methanol feed concentration within a small error bound.
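
    The feedback idea can be caricatured in a few lines. Everything here is an assumption for illustration: the window length, the thresholds, and the sign convention that larger voltage fluctuations indicate a methanol-lean feed:

```python
import numpy as np

def fluctuation_amplitude(v, window=50):
    """Peak-to-peak amplitude of the detrended output voltage over the
    most recent `window` samples (a simple proxy for the fluctuation level)."""
    w = np.asarray(v[-window:], dtype=float)
    w = w - w.mean()
    return float(w.max() - w.min())

def feed_adjustment(amplitude, threshold=0.05, step_ml_min=0.1):
    """Illustrative bang-bang rule (assumed sign convention): fluctuations
    above the calibrated threshold raise the methanol feed, very small
    fluctuations lower it, and anything in between holds it steady."""
    if amplitude > threshold:
        return +step_ml_min
    return -step_ml_min if amplitude < 0.5 * threshold else 0.0
```

    In the paper, the amplitude-to-concentration correlation is calibrated experimentally across operating conditions before being used for control.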

  15. A method for state-of-charge estimation of Li-ion batteries based on multi-model switching strategy

    International Nuclear Information System (INIS)

    Wang, Yujie; Zhang, Chenbin; Chen, Zonghai

    2015-01-01

    Highlights: • Build a multi-model switching SOC estimation method for Li-ion batteries. • Build an improved interpretative structural modeling method for model switching. • The feedback strategy of bus delay is applied to improve the real-time performance. • The EKF method is used for SOC estimation to improve the estimation accuracy. - Abstract: Accurate state-of-charge (SOC) estimation and real-time performance are critical evaluation indexes for Li-ion battery management systems (BMS). High-accuracy algorithms often require a long program execution time (PET) in resource-constrained embedded application systems, which will undoubtedly reduce the time slots of other processes and thereby degrade the overall performance of the BMS. Considering resource optimization and computational load balance, this paper proposes a multi-model switching SOC estimation method for Li-ion batteries. Four typical battery models are employed to build a closed-loop SOC estimation system. The extended Kalman filter (EKF) method is employed to eliminate the effect of current noise and improve the accuracy of the SOC estimate. Experiments under dynamic current conditions are conducted to verify the accuracy and real-time performance of the proposed method. The experimental results indicate that accurate estimation results and a reasonable PET can be obtained by the proposed method.
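
    The EKF update at the core of such estimators can be sketched with a deliberately simple single-state model. The coulomb-counting prediction and the affine OCV(SOC) curve below are assumptions, far simpler than the paper's four switched battery models:

```python
class SocEKF:
    """Minimal single-state EKF for state-of-charge (illustrative sketch):
    coulomb-counting prediction plus a linearized OCV(SOC) voltage update."""

    def __init__(self, soc0, capacity_As, q=1e-7, r=1e-4):
        self.soc = soc0            # state estimate, in [0, 1]
        self.P = 1e-2              # state covariance
        self.cap = capacity_As     # cell capacity in ampere-seconds
        self.q, self.r = q, r      # process / measurement noise variances

    def ocv(self, soc):
        # Assumed affine open-circuit-voltage curve: 3.2 V empty, 4.2 V full.
        return 3.2 + 1.0 * soc

    def step(self, current_A, voltage_V, dt=1.0):
        # Predict: discharge current (positive) removes charge.
        self.soc -= current_A * dt / self.cap
        self.P += self.q
        # Update: H = d(OCV)/d(SOC) = 1.0 for the affine curve above.
        H = 1.0
        K = self.P * H / (H * self.P * H + self.r)
        self.soc += K * (voltage_V - self.ocv(self.soc))
        self.P *= (1.0 - K * H)
        return self.soc
```

    The voltage update is what pulls a wrong initial SOC back toward the value implied by the measured terminal voltage, which is how the filter suppresses current-sensor noise and drift.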

  16. Particle filter based MAP state estimation: A comparison

    NARCIS (Netherlands)

    Saha, S.; Boers, Y.; Driessen, J.N.; Mandal, Pranab K.; Bagchi, Arunabha

    2009-01-01

    MAP estimation is a good alternative to MMSE for certain applications involving nonlinear non-Gaussian systems. Recently a new particle filter based MAP estimator has been derived. This new method extracts the MAP directly from the output of a running particle filter. In the recent past, a Viterbi

  17. History based batch method preserving tally means

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Choi, Sung Hoon

    2012-01-01

    In Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real variance estimation method named the history-based batch method, in which a MC run is treated as multiple runs with a small number of histories per cycle to generate independent tally estimates. In this paper, the history-based batch method based on the weight correction is presented to preserve the tally mean from the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other available schemes.
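
    The batching idea itself (though not the weight-correction scheme, which needs access to the MC histories) can be sketched generically: group the correlated cycle-wise tally estimates into batches large enough to be nearly independent, then use the scatter of the batch means:

```python
import numpy as np

def batched_variance_of_mean(cycle_tallies, batch_size):
    """Real-variance estimate from correlated cycle-wise tallies: average
    within batches, then use the variance of the (near-independent) batch
    means. The tally mean is preserved when batches tile the cycles evenly."""
    x = np.asarray(cycle_tallies, dtype=float)
    m = len(x) // batch_size
    batches = x[: m * batch_size].reshape(m, batch_size).mean(axis=1)
    return batches.mean(), batches.var(ddof=1) / m
```

    On positively correlated series the batched estimate exceeds the naive var/n, which is exactly the bias in the apparent variance that this line of work addresses.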

  18. A different approach to estimate nonlinear regression model using numerical methods

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper concerns the computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent algorithm method, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis, used to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems. However, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gorden K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].

  19. Large amplitude oscillatory motion along a solar filament

    Science.gov (United States)

    Vršnak, B.; Veronig, A. M.; Thalmann, J. K.; Žic, T.

    2007-08-01

    Context: Large amplitude oscillations of solar filaments are a phenomenon that has been known for more than half a century. Recently, a new mode of oscillations, characterized by periodical plasma motions along the filament axis, was discovered. Aims: We analyze such an event, recorded on 23 January 2002 in Big Bear Solar Observatory Hα filtergrams, to infer the triggering mechanism and the nature of the restoring force. Methods: Motion along the filament axis of a distinct bulge-like feature was traced to quantify the kinematics of the oscillatory motion. The data were fitted by a damped sine function to estimate the basic parameters of the oscillations. To identify the triggering mechanism, morphological changes in the vicinity of the filament were analyzed. Results: The observed oscillations of the plasma along the filament were characterized by an initial displacement of 24 Mm, an initial velocity amplitude of 51 km s-1, a period of 50 min, and a damping time of 115 min. We interpret the trigger in terms of poloidal magnetic flux injection by magnetic reconnection at one of the filament legs. The restoring force is caused by the magnetic pressure gradient along the filament axis. The period of oscillations, derived from the linearized equation of motion (harmonic oscillator), can be expressed as P = π√2 L/v_Aφ ≈ 4.4 L/v_Aφ, where v_Aφ = B_φ0/√(μ0 ρ) represents the Alfvén speed based on the equilibrium poloidal field B_φ0. Conclusions: Combining our measurements with some previous observations of the same kind of oscillations shows good agreement with the proposed interpretation. The movie accompanying Fig. 1 is only available in electronic form at http://www.aanda.org
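
    The quoted period formula is easy to evaluate numerically. The filament length, poloidal field, and density below are hypothetical placeholders, not the event's measured values:

```python
import math

MU0 = 4e-7 * math.pi                      # vacuum permeability (SI)

def alfven_speed(B_T, rho_kg_m3):
    """Poloidal Alfven speed v_A = B / sqrt(mu0 * rho)."""
    return B_T / math.sqrt(MU0 * rho_kg_m3)

def oscillation_period(L_m, v_A):
    """Harmonic-oscillator period P = pi * sqrt(2) * L / v_A ~ 4.44 L / v_A."""
    return math.pi * math.sqrt(2.0) * L_m / v_A

# Hypothetical filament parameters: L = 60 Mm, B = 5 G, rho = 5e-11 kg/m^3.
v_A = alfven_speed(5e-4, 5e-11)           # ~63 km/s
P_minutes = oscillation_period(60e6, v_A) / 60.0
```

    With these placeholder values P comes out near 70 min, the same order as the observed 50-min period, which is the kind of consistency check the interpretation rests on.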

  20. Relative amplitude of medium-scale traveling ionospheric disturbances as deduced from global GPS network

    Science.gov (United States)

    Voeykov, S. V.; Afraimovich, E. L.; Kosogorov, E. A.; Perevalova, N. P.; Zhivetiev, I. V.

    We worked out a new method for estimating the relative amplitude dI/I of total electron content (TEC) variations corresponding to medium-scale (30-300 km) traveling ionospheric disturbances (MS TIDs). Daily and latitudinal dependences of dI/I and dI/I probability distributions were obtained for 52 days of 1999-2005 with different levels of geomagnetic activity. Statistical estimations were obtained from the analysis of 10^6 TEC series of 2.3-hour duration. To obtain statistically significant results, three latitudinal regions were chosen: a North American high-latitude region (50-80° N, 200-300° E; 59 GPS receivers), a North American mid-latitude region (20-50° N, 200-300° E; 817 receivers), and the equatorial belt (-20 to 20° N, 0-360° E; 76 receivers). We found that the average daily value of the relative amplitude of TEC variations dI/I changes from 0.3 to 10 in proportion to the value of the geomagnetic index Kp. This dependence is strong at high latitudes (dI/I ≈ 0.37·Kp^1.5), somewhat weaker at mid latitudes (dI/I ≈ 0.2·Kp^0.35), and weakest at the equatorial belt (dI/I ≈ 0.1·Kp^0.6). The most important and most interesting result of our work is that during geomagnetically quiet conditions the relative amplitude of TEC variations at night considerably exceeds daytime values, by 3-5 times at equatorial and high latitudes and by 2 times at mid latitudes. But during strong magnetic storms the relative amplitude dI/I at high

  1. Methods of albumin estimation in clinical biochemistry: Past, present, and future.

    Science.gov (United States)

    Kumar, Deepak; Banerjee, Dibyajyoti

    2017-06-01

    Estimation of serum and urinary albumin is routinely performed in clinical biochemistry laboratories. In the past, precipitation-based methods were popular for estimation of human serum albumin (HSA). Currently, dye-binding or immunochemical methods are widely practiced. Each of these methods has its limitations. Research endeavors to overcome such limitations are on-going. The current trends in methodological aspects of albumin estimation guiding the field have not been reviewed. Therefore, it is the need of the hour to review several aspects of albumin estimation. The present review focuses on the modern trends of research from a conceptual point of view and gives an overview of recent developments to offer the readers a comprehensive understanding of the subject. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A feasibility study of mutual information based setup error estimation for radiotherapy

    International Nuclear Information System (INIS)

    Kim, Jeongtae; Fessler, Jeffrey A.; Lam, Kwok L.; Balter, James M.; Haken, Randall K. ten

    2001-01-01

    We have investigated a fully automatic setup error estimation method that aligns DRRs (digitally reconstructed radiographs) from a three-dimensional planning computed tomography image onto two-dimensional radiographs that are acquired in a treatment room. We chose a MI (mutual information)-based image registration method, hoping for robustness to intensity differences between the DRRs and the radiographs. The MI-based estimator is fully automatic since it is based on the image intensity values without segmentation. Using 10 repeated scans of an anthropomorphic chest phantom in one position and two single scans in two different positions, we evaluated the performance of the proposed method and a correlation-based method against the setup error determined by a fiducial marker-based method. The mean differences between the proposed method and the fiducial marker-based method were smaller than 1 mm for the translational parameters and 0.8 degrees for the rotational parameters. The standard deviations of estimates from the proposed method due to detector noise were smaller than 0.3 mm and 0.07 degrees for the translational and rotational parameters, respectively.
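
    The similarity measure at the core of such registration can be sketched generically. This is a plain joint-histogram MI estimate with an assumed bin count, not the paper's DRR/radiograph pipeline:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images' intensities, estimated from
    their joint histogram; no segmentation is needed, only intensity values."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal over a
    py = pxy.sum(axis=0, keepdims=True)     # marginal over b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

    Registration maximizes this quantity over the transform parameters; it peaks when the images are aligned even if their intensity scales differ, which is why MI tolerates DRR-versus-radiograph intensity mismatch.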

  3. Iteration of planar amplitudes in maximally supersymmetric Yang-Mills theory at three loops and beyond

    International Nuclear Information System (INIS)

    Bern, Zvi; Dixon, Lance J.; Smirnov, Vladimir A.

    2005-01-01

    We compute the leading-color (planar) three-loop four-point amplitude of N=4 supersymmetric Yang-Mills theory in 4-2ε dimensions, as a Laurent expansion about ε=0 including the finite terms. The amplitude was constructed previously via the unitarity method, in terms of two Feynman loop integrals, one of which has been evaluated already. Here we use the Mellin-Barnes integration technique to evaluate the Laurent expansion of the second integral. Strikingly, the amplitude is expressible, through the finite terms, in terms of the corresponding one- and two-loop amplitudes, which provides strong evidence for a previous conjecture that higher-loop planar N=4 amplitudes have an iterative structure. The infrared singularities of the amplitude agree with the predictions of Sterman and Tejeda-Yeomans based on resummation. Based on the four-point result and the exponentiation of infrared singularities, we give an exponentiated Ansatz for the maximally helicity-violating n-point amplitudes to all loop orders. The 1/ε² pole in the four-point amplitude determines the soft, or cusp, anomalous dimension at three loops in N=4 supersymmetric Yang-Mills theory. The result confirms a prediction by Kotikov, Lipatov, Onishchenko and Velizhanin, which utilizes the leading-twist anomalous dimensions in QCD computed by Moch, Vermaseren and Vogt. Following similar logic, we are able to predict a term in the three-loop quark and gluon form factors in QCD.

  4. Interferometric Imaging Directly with Closure Phases and Closure Amplitudes

    Science.gov (United States)

    Chael, Andrew A.; Johnson, Michael D.; Bouman, Katherine L.; Blackburn, Lindy L.; Akiyama, Kazunori; Narayan, Ramesh

    2018-04-01

    Interferometric imaging now achieves angular resolutions as fine as ∼10 μas, probing scales that are inaccessible to single telescopes. Traditional synthesis imaging methods require calibrated visibilities; however, interferometric calibration is challenging, especially at high frequencies. Nevertheless, most studies present only a single image of their data after a process of “self-calibration,” an iterative procedure where the initial image and calibration assumptions can significantly influence the final image. We present a method for efficient interferometric imaging directly using only closure amplitudes and closure phases, which are immune to station-based calibration errors. Closure-only imaging provides results that are as noncommittal as possible and allows for reconstructing an image independently from separate amplitude and phase self-calibration. While closure-only imaging eliminates some image information (e.g., the total image flux density and the image centroid), this information can be recovered through a small number of additional constraints. We demonstrate that closure-only imaging can produce high-fidelity results, even for sparse arrays such as the Event Horizon Telescope, and that the resulting images are independent of the level of systematic amplitude error. We apply closure imaging to VLBA and ALMA data and show that it is capable of matching or exceeding the performance of traditional self-calibration and CLEAN for these data sets.
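
    The immunity of closure phases to station-based errors follows in one line from the triple product. The visibilities and gains below are invented complex numbers for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented true visibilities on the station triangle (1,2), (2,3), (3,1).
V12, V23, V31 = 1.0 + 0.3j, 0.5 - 0.2j, 0.8 + 0.1j

def closure_phase(v12, v23, v31):
    """Closure phase: argument of the visibility triple product."""
    return float(np.angle(v12 * v23 * v31))

# Corrupt each baseline with arbitrary station gains g_i = a_i * exp(i*phi_i);
# baseline (i, j) picks up the factor g_i * conj(g_j).
g = rng.uniform(0.5, 2.0, 3) * np.exp(1j * rng.uniform(-np.pi, np.pi, 3))
C12 = V12 * g[0] * g[1].conj()
C23 = V23 * g[1] * g[2].conj()
C31 = V31 * g[2] * g[0].conj()
# In the triple product every g_i meets its own conjugate, leaving only real
# |g_i|^2 factors, so the closure phase is unchanged by the corruption.
```

    Closure amplitudes play the analogous role for the gain magnitudes, but need quadrangles of four stations rather than triangles.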

  5. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  6. Estimation of deuterium content in organic compounds by mass spectrometric methods

    International Nuclear Information System (INIS)

    Dave, S.M.; Goomer, N.C.

    1979-01-01

    Many organic compounds are finding increasing importance in the heavy water enrichment programme. New methods based on quantitative chemical conversion have been developed and standardized for estimating the deuterium content of exchanging organic molecules by mass spectrometry. The methods have been selected in such a way that the deuterium contents of both the exchangeable and the total hydrogens in the molecule can be conveniently estimated. (auth.)

  7. Estimating fractional vegetation cover and the vegetation index of bare soil and highly dense vegetation with a physically based method

    Science.gov (United States)

    Song, Wanjuan; Mu, Xihan; Ruan, Gaiyan; Gao, Zhan; Li, Linyuan; Yan, Guangjian

    2017-06-01

    Normalized difference vegetation index (NDVI) values of highly dense vegetation (NDVIv) and bare soil (NDVIs), identified as the key parameters for fractional vegetation cover (FVC) estimation, are usually obtained with empirical statistical methods. However, it is often difficult to obtain reasonable values of NDVIv and NDVIs at a coarse resolution (e.g., 1 km), or in arid, semiarid, and evergreen areas. The uncertainty of estimated NDVIs and NDVIv can cause substantial errors in FVC estimations when a simple linear mixture model is used. To address this problem, this paper proposes a physically based method. The leaf area index (LAI) and directional NDVI are introduced in a gap fraction model and a linear mixture model for FVC estimation to calculate NDVIv and NDVIs. The model incorporates the Moderate Resolution Imaging Spectroradiometer (MODIS) Bidirectional Reflectance Distribution Function (BRDF) model parameters product (MCD43B1) and LAI product, which are convenient to acquire. Two types of evaluation experiments are designed: 1) with data simulated by a canopy radiative transfer model and 2) with satellite observations. The root-mean-square deviation (RMSD) for simulated data is less than 0.117, depending on the type of noise added to the data. In the real data experiment, the RMSD is 0.127 for cropland, 0.075 for grassland, and 0.107 for forest. The experimental areas respectively lack fully vegetated and non-vegetated pixels at 1 km resolution. Consequently, a relatively large uncertainty is found while using the statistical methods, and the RMSD ranges from 0.110 to 0.363 based on the real data. The proposed method is convenient for producing NDVIv and NDVIs maps for FVC estimation on regional and global scales.
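
    The two model ingredients the method combines reduce, in their simplest textbook forms, to a pair of one-liners. The extinction coefficient k below is an assumed constant, whereas the paper derives the endmembers NDVIv and NDVIs physically from the BRDF and LAI products:

```python
import numpy as np

def fvc_linear_mixture(ndvi, ndvi_s, ndvi_v):
    """Linear mixture model: FVC = (NDVI - NDVIs) / (NDVIv - NDVIs),
    clipped to the physical range [0, 1]."""
    fvc = (np.asarray(ndvi, dtype=float) - ndvi_s) / (ndvi_v - ndvi_s)
    return np.clip(fvc, 0.0, 1.0)

def fvc_gap_fraction(lai, k=0.5):
    """Gap-fraction form FVC = 1 - exp(-k * LAI), with an assumed
    constant extinction coefficient k."""
    return 1.0 - np.exp(-k * np.asarray(lai, dtype=float))
```

    Errors in the endmembers propagate directly through the linear form, which is why poorly chosen NDVIv and NDVIs produce the large RMSDs reported for the statistical methods.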

  8. Dual ant colony operational modal analysis parameter estimation method

    Science.gov (United States)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in the object's ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in the time domain while others operate in the frequency domain. The former use correlation functions, the latter spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. The dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the intervals of the estimated parameters, thus reducing the problem to an optimisation task which is conducted with dedicated software based on an ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  9. A test of the ADV-based Reynolds flux method for in situ estimation of sediment settling velocity in a muddy estuary

    Science.gov (United States)

    Cartwright, Grace M.; Friedrichs, Carl T.; Smith, S. Jarrell

    2013-12-01

    Under conditions common in muddy coastal and estuarine environments, acoustic Doppler velocimeters (ADVs) can serve to estimate sediment settling velocity (w_s) by assuming a balance between upward turbulent Reynolds flux and downward gravitational settling. Advantages of this method include simple instrument deployment, lack of flow disturbance, and relative insensitivity to biofouling and water column stratification. Although this method is being used with increasing frequency in coastal and estuarine environments, to date it has received little direct ground truthing. This study compared in situ estimates of w_s inferred by a 5-MHz ADV to independent in situ observations from a high-definition video settling column over the course of a flood tide in the bottom boundary layer of the York River estuary, Virginia, USA. The ADV-based measurements were found to agree with those of the settling column when the current speed at about 40 cm above the bed was greater than about 20 cm/s. This corresponded to periods when the estimated magnitude of the settling term in the suspended sediment continuity equation was four or more times larger than the time rate of change of concentration. For ADV observations restricted to these conditions, ADV-based estimates of w_s (mean 0.48±0.04 mm/s) were highly consistent with those observed by the settling column (mean 0.45±0.02 mm/s). However, the ADV-based method for estimating w_s was sensitive to the prescribed concentration of the non-settling washload, C_wash. In an objective operational definition, C_wash can be set equal to the lowest suspended solids concentration observed around slack water.
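
    The balance underlying the method, settling flux w_s·(C − C_wash) equal to the upward Reynolds flux ⟨w'c'⟩, is direct to sketch on synthetic burst data. The numbers and the constructed w'-c' correlation below are invented, chosen only so the answer lands near the paper's ~0.5 mm/s scale:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ADV-like burst: vertical velocity w (m/s) and sediment
# concentration c (kg/m^3), constructed with a known w'-c' correlation.
n = 20000
w = 0.001 * rng.standard_normal(n)
c = 0.050 + 25.0 * w + 0.002 * rng.standard_normal(n)

def settling_velocity(w, c, c_wash=0.0):
    """w_s from the Reynolds-flux balance w_s * (C - C_wash) = <w'c'>;
    the prescribed non-settling washload C_wash is subtracted, which is
    the sensitivity the study highlights."""
    wp = w - w.mean()
    cp = c - c.mean()
    return float(np.mean(wp * cp)) / (float(c.mean()) - c_wash)

ws = settling_velocity(w, c)
```

    Raising c_wash shrinks the denominator and inflates the inferred w_s, which is why the choice of washload concentration matters operationally.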

  10. Parallel Factor-Based Model for Two-Dimensional Direction Estimation

    Directory of Open Access Journals (Sweden)

    Nizar Tayem

    2017-01-01

    Full Text Available Two-dimensional (2D) direction-of-arrival (DOA) estimation of elevation and azimuth angles for noncoherent, mixed coherent and noncoherent, and coherent sources using three extended parallel uniform linear arrays (ULAs) is proposed. Most existing schemes have drawbacks in estimating 2D DOAs for multiple narrowband incident sources: they require a large number of snapshots, suffer estimation failure for elevation and azimuth angles in the range typical of mobile communication, and cannot handle coherent sources. Moreover, DOA estimation for multiple sources requires complex pair-matching methods. The algorithm proposed in this paper is based on a first-order data matrix and overcomes these problems. The main contributions of the proposed method are as follows: (1) it avoids the estimation failure problem by using a new antenna configuration and estimates elevation and azimuth angles for coherent sources; (2) it reduces the estimation complexity by constructing Toeplitz data matrices based on a single snapshot or a few snapshots; (3) it derives a parallel factor (PARAFAC) model to avoid pair-matching problems between multiple sources. Simulation results demonstrate the effectiveness of the proposed algorithm.
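
    The single-snapshot structuring idea in contribution (2) can be illustrated in a generic form: stacking shifted windows of one array snapshot yields a structured data matrix whose rank equals the number of sources, even when the sources are fully coherent. This sketch is not the paper's exact construction; the array geometry and angles are illustrative.

```python
import numpy as np

def shifted_data_matrix(x, m):
    """Stack m shifted windows of one snapshot x into an m x (N-m+1) matrix.

    (This is a Hankel arrangement; reversing the rows gives a Toeplitz
    matrix of the same rank.) For sums of complex exponentials the matrix
    rank equals the number of sources, independent of their coherence.
    """
    cols = len(x) - m + 1
    return np.array([x[i:i + cols] for i in range(m)])

# One snapshot of a 10-element ULA receiving two fully coherent plane waves
n, m = 10, 5
k = np.arange(n)
x = np.exp(1j * np.pi * k * np.sin(np.deg2rad(20.0))) \
    + np.exp(1j * np.pi * k * np.sin(np.deg2rad(-35.0)))
T = shifted_data_matrix(x, m)
rank = np.linalg.matrix_rank(T)   # 2, despite coherence and a single snapshot
```

    A sample covariance of this one snapshot would be rank one; the structured matrix restores the rank needed for subspace or PARAFAC processing.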

  11. The efficiency of different estimation methods of hydro-physical limits

    Directory of Open Access Journals (Sweden)

    Emma María Martínez

    2012-12-01

    Full Text Available The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated using indices that allow the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and the quality of information obtained. For direct methods, increasing the sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evident in the efficiency estimation.

  12. Discussion and a new method of optical cryptosystem based on interference

    Science.gov (United States)

    Lu, Dajiang; He, Wenqi; Liao, Meihua; Peng, Xiang

    2017-02-01

    A discussion and an objective security analysis of the well-known optical image encryption based on interference are presented in this paper. A new method is also proposed to eliminate the security risk of the original cryptosystem. For a possible practical application, we expand this new method into a hierarchical authentication scheme. In this authentication system, with a pre-generated and fixed random phase lock, different target images indicating different authentication levels are analytically encoded into corresponding phase-only masks (phase keys) and amplitude-only masks (amplitude keys). In the authentication process, a legal user can obtain a specified target image at the output plane if his/her phase key and amplitude key, which should be placed close against the fixed internal phase lock, are respectively illuminated by two coherent beams. By comparing the target image with all the standard certification images in the database, the system can verify the user's legality and even his/her identity level. Moreover, although the internal phase lock of this system is fixed, the crosstalk between different pairs of keys held by different users is low. Theoretical analysis and numerical simulation are both provided to demonstrate the validity of this method.
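
    The analytic encoding step can be illustrated with the classic two-beam interference scheme on which such cryptosystems build: a target amplitude image, given a random phase, is split into two phase-only masks whose coherent superposition reproduces the target. This simplified sketch omits the propagation between planes, the fixed phase lock, and the amplitude keys of the proposed scheme; it only demonstrates the underlying interference identity.

```python
import numpy as np

def encode_two_phase_masks(target, rng):
    """Analytically split a target image into two phase-only masks.

    Attach a random phase to the normalised target amplitude c, then solve
    exp(i*P1) + exp(i*P2) = c, which requires |c| <= 2 and gives
    P1,2 = arg(c) -/+ arccos(|c| / 2).
    """
    amp = target / target.max()               # normalise so |c| <= 1 <= 2
    c = amp * np.exp(1j * 2 * np.pi * rng.random(target.shape))
    half = np.arccos(np.abs(c) / 2.0)
    return np.angle(c) - half, np.angle(c) + half

rng = np.random.default_rng(1)
target = rng.random((32, 32))                 # stand-in for a target image
p1, p2 = encode_two_phase_masks(target, rng)

# Decoding: interfere the two phase-only beams and take the amplitude
recovered = np.abs(np.exp(1j * p1) + np.exp(1j * p2))
err = np.max(np.abs(recovered - target / target.max()))
```

    The superposition reconstructs the target amplitude exactly (up to floating-point error), which is also why the original two-mask scheme leaks information and motivates the security analysis above.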

  13. Improvement of economic potential estimation methods for enterprise with potential branch clusters use

    Directory of Open Access Journals (Sweden)

    V.Ya. Nusinov

    2017-08-01

    Full Text Available The research determines that the existing methods of estimating an enterprise's economic potential are based on the use of additive, multiplicative and rating models. It is determined that the existing methods have a number of shortcomings; for example, not all of them take into account the branch features of the analysis, or the level of development of the enterprise compared with other enterprises. It is suggested that such shortcomings be remedied by taking into account, when estimating the integral level of potential, not only the branch features of the enterprises' activity but also the intra-branch economic clusterization of such enterprises. Scientific works connected with the use of clusters for the estimation of economic potential are generalized. According to the results of this generalization, nine scientific approaches can be distinguished in this direction: the use of natural clusterization of enterprises for the estimation and increase of region potential; the use of natural clusterization of enterprises for the estimation and increase of industry potential; the use of artificial clusterization of enterprises for the estimation and increase of region potential; the use of artificial clusterization of enterprises for the estimation and increase of industry potential; the use of artificial clusterization of enterprises for the estimation of clustering potential; the use of artificial clusterization of enterprises for the estimation of clustering competitiveness potential; the use of natural (artificial) clusterization for the estimation of clustering efficiency; the use of natural (artificial) clusterization for the increase of the level of region (industry) development; and the use of methods of estimating the economic potential of a region (industry) or its constituents for the construction of the clusters. It is determined that the use of clusterization method in

  14. The detection and estimation of spurious pulses

    International Nuclear Information System (INIS)

    1976-01-01

    Spurious pulses which may interfere with the counting of particles can sometimes easily be detected by integral counting as a function of amplification or by pulse-height analysis. However, in order to estimate their count rate, more elaborate methods based on their time relationship are needed. Direct techniques (delayed coincidences, use of a multichannel analyser in time mode, time-to-amplitude conversion) and gating techniques (simple subtraction, correlation counting, pulsed sources, modulo counting) are discussed. These techniques are compared to each other and their application to various detectors is studied as well as the influence of a dead time on spurious pulses

  15. Investigation on method of estimating the excitation spectrum of vibration source

    International Nuclear Information System (INIS)

    Zhang Kun; Sun Lei; Lin Song

    2010-01-01

    In practical engineering, it is difficult to obtain the excitation spectrum of the auxiliary machines of a nuclear reactor through direct measurement. To solve this problem, a general method of estimating the excitation spectrum of a vibration source through indirect measurement is proposed. First, the dynamic transfer matrix between the virtual excitation points and the measurement points is obtained through experiment. This matrix, combined with the response spectrum at the measurement points under practical working conditions, can be used to calculate the excitation spectrum acting on the virtual excitation points. Then a simplified method is proposed, based on the assumption that the vibrating machine can be regarded as a rigid body. This method treats the centroid as the excitation point, and the dynamic transfer matrix is derived using the substructure mobility synthesis method. Thus, the excitation spectrum can be obtained from the inverse of the transfer matrix combined with the response spectrum at the measurement points. Based on the above method, a computational example is carried out to estimate the excitation spectrum acting on the centroid of an electrical pump. By comparing the input excitation with the estimated excitation, the reliability of this method is verified. (authors)
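
    At a single frequency, the core computation is a matrix inversion: measured responses X relate to the unknown excitations F through the transfer matrix H, so F is recovered by a (pseudo)inverse. The sketch below uses noise-free synthetic values and illustrative dimensions, not data from the study.

```python
import numpy as np

# Inverse identification: responses X = H @ F at one frequency line,
# with more response points than excitation DOFs (overdetermined).
rng = np.random.default_rng(2)
n_resp, n_exc = 6, 3
H = rng.normal(size=(n_resp, n_exc)) + 1j * rng.normal(size=(n_resp, n_exc))
F_true = np.array([1.0 + 0.5j, -0.3j, 2.0])   # unknown excitation spectrum
X = H @ F_true                                # responses measured in operation
F_est = np.linalg.pinv(H) @ X                 # least-squares inverse of H
```

    With measurement noise the pseudoinverse gives the least-squares excitation estimate; ill-conditioning of H is what makes the rigid-body simplification in the abstract attractive in practice.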

  16. Estimation of daily reference evapotranspiration (ETo) using artificial intelligence methods: Offering a new approach for lagged ETo data-based modeling

    Science.gov (United States)

    Mehdizadeh, Saeid

    2018-04-01

    Evapotranspiration (ET) is considered a key factor in hydrological and climatological studies, agricultural water management, irrigation scheduling, etc. It can be directly measured using lysimeters; alternatively, empirical equations and artificial intelligence methods can be used to model it. In recent years, artificial intelligence methods have been widely utilized to estimate reference evapotranspiration (ETo). In the present study, the local and external performances of multivariate adaptive regression splines (MARS) and gene expression programming (GEP) were assessed for estimating daily ETo. To this end, daily weather data of six stations with different climates in Iran, namely Urmia and Tabriz (semi-arid), Isfahan and Shiraz (arid), and Yazd and Zahedan (hyper-arid), were employed during 2000-2014. Two types of input patterns, weather data-based and lagged ETo data-based scenarios, were considered to develop the models. Four statistical indicators, root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and mean absolute percentage error (MAPE), were used to check the accuracy of the models. The local performance of the models revealed that the MARS and GEP approaches are capable of estimating daily ETo using the meteorological parameters and the lagged ETo data as inputs. Nevertheless, MARS had the best performance in the weather data-based scenarios, while no considerable differences were observed in the models' accuracy for the lagged ETo data-based scenarios. As the main innovation of this study, novel hybrid models were proposed in the lagged ETo data-based scenarios by combining the MARS and GEP models with the autoregressive conditional heteroscedasticity (ARCH) time series model. It was concluded that the proposed models, named MARS-ARCH and GEP-ARCH, improved the performance of ETo modeling compared to the single MARS and GEP models. In addition, the external
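
    The lagged ETo data-based scenario amounts to building a lagged design matrix, predicting today's ETo from the values of previous days. A minimal sketch on a synthetic series follows; the lag set and series are illustrative, not the paper's configuration.

```python
import numpy as np

def lagged_design(series, lags):
    """Build X, y so that row t of X holds series[t - l] for each lag l,
    aligned with target y[t] = series[t]."""
    max_lag = max(lags)
    X = np.column_stack([series[max_lag - l:len(series) - l] for l in lags])
    y = series[max_lag:]
    return X, y

eto = np.sin(np.linspace(0.0, 20.0, 200)) + 3.0   # synthetic daily ETo, mm/day
X, y = lagged_design(eto, lags=[1, 2, 3])
```

    Any regressor (MARS, GEP, or otherwise) can then be fit to (X, y); the ARCH component in the hybrid models would additionally model the variance of the residuals.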

  17. Simple method for quick estimation of aquifer hydrogeological parameters

    Science.gov (United States)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a univariate linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
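
    The paper's own fitting function is not reproduced here, but the general idea of linearising the Theis solution and reading the aquifer parameters off a regression can be illustrated with the well-known Cooper-Jacob straight-line form, which is valid for small u. The data below are synthetic and all values are illustrative.

```python
import numpy as np

def theis_W(u):
    """Theis well function W(u) via its small-u series (adequate for u < 0.1)."""
    return -0.5772156649 - np.log(u) + u - u**2 / 4 + u**3 / 18

Q, r = 0.01, 30.0                 # pumping rate m^3/s, observation distance m
T_true, S_true = 1e-3, 2e-4       # transmissivity m^2/s, storativity (-)
t = np.logspace(3, 5, 30)         # late pumping times, s
u = r**2 * S_true / (4 * T_true * t)
s = Q / (4 * np.pi * T_true) * theis_W(u)    # synthetic drawdowns, m

# Cooper-Jacob: for small u, s ~ a + b*log10(t), with
# b = 2.3*Q / (4*pi*T) and a = b*log10(2.25*T / (r^2 * S))
b, a = np.polyfit(np.log10(t), s, 1)
T_est = 2.3 * Q / (4 * np.pi * b)
S_est = 2.25 * T_est / (r**2 * 10 ** (a / b))
```

    Slope and intercept of the drawdown-versus-log-time regression recover T and S to within a few percent here; the method in the paper extends this regression idea beyond the small-u range.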

  18. Heuristic introduction to estimation methods

    International Nuclear Information System (INIS)

    Feeley, J.J.; Griffith, J.M.

    1982-08-01

    The methods and concepts of optimal estimation and control have been very successfully applied in the aerospace industry during the past 20 years. Although similarities exist between the problems (control, modeling, measurements) in the aerospace and nuclear power industries, the methods and concepts have found only scant acceptance in the nuclear industry. Differences in technical language seem to be a major reason for the slow transfer of estimation and control methods to the nuclear industry. Therefore, this report was written to present certain important and useful concepts with a minimum of specialized language. By employing a simple example throughout the report, the importance of several information and uncertainty sources is stressed and optimal ways of using or allowing for these sources are presented. This report discusses optimal estimation problems. A future report will discuss optimal control problems
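
    As a minimal concrete example of the optimal-estimation idea the report introduces (this example is generic, not taken from the report), a scalar Kalman filter estimating a constant level from noisy measurements shows how prediction and measurement are blended according to their uncertainties:

```python
import numpy as np

rng = np.random.default_rng(3)
true_level = 5.0
z = true_level + rng.normal(0.0, 0.5, 200)   # noisy measurements, R = 0.25

x, P = 0.0, 100.0        # initial estimate and its variance (very uncertain)
Q, R = 1e-6, 0.25        # process and measurement noise variances
for zk in z:
    P += Q               # predict: constant state, uncertainty grows slightly
    K = P / (P + R)      # Kalman gain: how much to trust the measurement
    x += K * (zk - x)    # update the estimate with the innovation
    P *= (1 - K)         # uncertainty shrinks after each measurement
```

    Early on K is near 1 (trust the data), and as P shrinks the filter averages measurements ever more heavily, converging on the true level.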

  19. Estimation method of state-of-charge for lithium-ion battery used in hybrid electric vehicles based on variable structure extended kalman filter

    Science.gov (United States)

    Sun, Yong; Ma, Zilin; Tang, Gongyou; Chen, Zheng; Zhang, Nong

    2016-07-01

    Since the main power of a hybrid electric vehicle (HEV) is supplied by the power battery, the prediction of battery performance, especially state-of-charge (SOC) estimation, has attracted great attention in the HEV field. However, SOC estimates are often insufficiently precise, which significantly affects the running performance of the HEV. A variable structure extended Kalman filter (VSEKF)-based estimation method, which can be used to analyze the SOC of a lithium-ion battery under a fixed driving condition, is presented. First, a general lower-order battery equivalent circuit model (GLM), which includes a coulomb accumulation model, an open-circuit voltage model and the SOC output model, is established, and the off-line and online model parameters are calculated with hybrid pulse power characterization (HPPC) test data. Next, a VSEKF estimation method for SOC, which integrates the ampere-hour (Ah) integration method and the extended Kalman filter (EKF) method, is executed with different adaptive weighting coefficients, determined according to the different values of open-circuit voltage obtained in the corresponding charging or discharging processes. According to the experimental analysis, faster convergence and more accurate simulation results are obtained using the VSEKF method. The error rate of SOC estimation with the VSEKF method lies in the range of 5% to 10%, compared with 20% to 30% for the EKF method and the Ah integration method. In summary, the accuracy of SOC estimation for the lithium-ion cell and for the pack of a lithium-ion battery system obtained using the VSEKF method is significantly improved compared with the Ah integration method and the EKF method, and the VSEKF method can be widely used for SOC estimation in the lithium-ion packs of HEVs under practical driving conditions.
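
    The ampere-hour integration component, and a weighted blend with a voltage-derived SOC standing in for the filter's correction step, can be sketched as below. The blend weight is a fixed placeholder for the paper's adaptive, open-circuit-voltage-dependent coefficients; all values are illustrative.

```python
def soc_ah_update(soc, current_a, dt_s, capacity_ah):
    """One ampere-hour (Ah) integration step; discharge current is positive."""
    return soc - current_a * dt_s / (capacity_ah * 3600.0)

def soc_blend(soc_ah, soc_from_ocv, weight):
    """Blend coulomb-counted SOC with a voltage-derived SOC estimate."""
    return weight * soc_from_ocv + (1.0 - weight) * soc_ah

soc = 0.9
for _ in range(3600):                      # 1 h at 2 A from a 10 Ah cell
    soc = soc_ah_update(soc, 2.0, 1.0, 10.0)
# soc is now ~0.70: 2 A for 1 h removes 2 Ah, i.e. 20% of capacity
```

    Pure Ah integration drifts with current-sensor bias; the voltage-based correction (in the paper, an EKF update weighted by operating region) pulls the estimate back toward an observable quantity.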

  20. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
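
    The building block behind the full Bayesian comparator is the conjugate gamma prior for a homogeneous Poisson process rate. A single-rate sketch of the update follows; the hyperparameters and data are illustrative, and the correlated multivariate-gamma structure of the paper is not modeled here.

```python
# Conjugate gamma-Poisson update for one event rate.
alpha, beta = 2.0, 4.0        # prior: rate ~ Gamma(alpha, beta), mean 0.5
events, exposure = 7, 10.0    # observed 7 events over 10 time units

alpha_post = alpha + events   # posterior shape: add the event count
beta_post = beta + exposure   # posterior rate: add the exposure time
posterior_mean = alpha_post / beta_post   # 9/14, approximately 0.643
```

    The Bayes linear Bayes method replaces the intractable joint update across correlated rates with moment-based adjustments, while each marginal behaves like this conjugate update.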