WorldWideScience

Sample records for down-hole waveform deconvolution

  1. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2016-09-06

    The lack of low frequency components in seismic data usually leads full waveform inversion into the local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long wavelength updates for waveform inversion. Another feature of exponential damping is that the energy of each trace also decreases exponentially with source-receiver offset, a situation in which the least-squares misfit function does not work well. Thus, we propose a deconvolution-based objective function for waveform inversion with exponential damping. Since the deconvolution filter includes a division process, it can properly address the unbalanced energy levels of the individual traces of the damped wavefield. Numerical examples demonstrate that our proposed FWI based on the deconvolution filter can generate a convergent long wavelength structure from the artificial low frequency components introduced by the exponential damping.
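
    As an illustrative sketch of the two ingredients above (exponential time damping and a deconvolution-based misfit), the Python snippet below damps synthetic traces and compares a least-squares misfit with a misfit built on a regularized deconvolution filter. The function names, the damping constant alpha, and the toy traces are assumptions made for this example; this is not the authors' implementation.

```python
import numpy as np

def damp(trace, dt, alpha):
    """Apply exponential time damping exp(-alpha * t) to a single trace."""
    t = np.arange(trace.size) * dt
    return trace * np.exp(-alpha * t)

def least_squares_misfit(d_mod, d_obs):
    return 0.5 * np.sum((d_mod - d_obs) ** 2)

def deconvolution_misfit(d_mod, d_obs, eps=1e-8):
    """Misfit built on a regularized deconvolution filter between modeled and
    observed traces: a perfect match gives a unit spike at zero lag, so the
    misfit penalizes the filter's deviation from that spike.  The division by
    the observed spectrum is what rebalances weak (e.g. far-offset) traces."""
    D_mod, D_obs = np.fft.rfft(d_mod), np.fft.rfft(d_obs)
    filt = np.fft.irfft(D_mod * np.conj(D_obs) /
                        (np.abs(D_obs) ** 2 + eps), n=d_mod.size)
    target = np.zeros_like(filt)
    target[0] = 1.0                       # ideal filter: unit spike at zero lag
    return 0.5 * np.sum((filt - target) ** 2)

# Toy comparison on a time-shifted arrival (values are made up).
dt, alpha = 0.004, 2.0
t = np.arange(500) * dt
obs = np.exp(-((t - 1.0) / 0.05) ** 2)    # "observed" arrival at 1.0 s
mod = np.exp(-((t - 1.1) / 0.05) ** 2)    # "modeled" arrival shifted to 1.1 s
print(least_squares_misfit(damp(mod, dt, alpha), damp(obs, dt, alpha)))
print(deconvolution_misfit(damp(mod, dt, alpha), damp(obs, dt, alpha)))
```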

  2. A Robust Gold Deconvolution Approach for LiDAR Waveform Data Processing to Characterize Vegetation Structure

    Science.gov (United States)

    Zhou, T.; Popescu, S. C.; Krause, K.; Sheridan, R.; Ku, N. W.

    2014-12-01

    Increasing attention has been paid in the remote sensing community to the next generation Light Detection and Ranging (lidar) waveform data systems for extracting information on topography and the vertical structure of vegetation. However, processing waveform lidar data raises some challenges compared to analyzing discrete return data. The overall goal of this study was to present a robust deconvolution algorithm, the Gold algorithm, used to deconvolve waveforms in a lidar dataset acquired within a 60 x 60 m study area located in the Harvard Forest in Massachusetts. The waveform lidar data were collected by the National Ecological Observatory Network (NEON). Specific objectives were to: (1) explore advantages and limitations of various waveform processing techniques to derive topography and canopy height information; (2) develop and implement a novel deconvolution algorithm, the Gold algorithm, to extract elevation and canopy metrics; and (3) compare results and assess accuracy. We modeled lidar waveforms with a mixture of Gaussian functions using a nonlinear least squares (NLS) algorithm implemented in R and derived a Digital Terrain Model (DTM) and canopy height. We compared our waveform-derived topography and canopy height measurements using the Gold deconvolution algorithm to results using the Richardson-Lucy algorithm. Our findings show that the Gold algorithm performed better than the Richardson-Lucy algorithm in terms of recovering the hidden echoes and detecting false echoes for generating a DTM, which indicates that the Gold algorithm could potentially be applied to processing of waveform lidar data to derive information on terrain elevation and canopy characteristics.
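
    The Gaussian decomposition step mentioned above can be sketched with an off-the-shelf nonlinear least-squares fit; the synthetic two-return waveform, initial guesses and noise level below are invented for illustration and do not come from the NEON dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_mixture(t, *params):
    """Sum of Gaussians; params = (A1, mu1, sigma1, A2, mu2, sigma2, ...)."""
    y = np.zeros_like(t)
    for A, mu, sigma in zip(params[0::3], params[1::3], params[2::3]):
        y += A * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return y

# Synthetic two-return waveform: a canopy echo and a ground echo plus noise.
t = np.linspace(0, 100, 500)                     # waveform sample axis (a.u.)
rng = np.random.default_rng(0)
waveform = gaussian_mixture(t, 1.0, 35.0, 3.0, 0.6, 60.0, 4.0) \
           + 0.02 * rng.standard_normal(t.size)

p0 = [1.0, 30.0, 2.0, 0.5, 65.0, 2.0]            # rough initial guesses
popt, _ = curve_fit(gaussian_mixture, t, waveform, p0=p0)
print(popt.reshape(-1, 3))                       # fitted (A, mu, sigma) per echo
```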

  3. A study of the real-time deconvolution of digitized waveforms with pulse pile up for digital radiation spectroscopy

    International Nuclear Information System (INIS)

    Guo Weijun; Gardner, Robin P.; Mayo, Charles W.

    2005-01-01

    Two new real-time approaches have been developed and compared to the least-squares fit approach for the deconvolution of experimental waveforms with pile-up pulses. The single pulse shape chosen is typical for scintillators such as LSO and NaI(Tl). Simulated waveforms with pulse pile up were also generated and deconvolved to compare these three different approaches under cases where the single pulse component has a constant shape and the digitization error dominates. The effects of temporal separation and amplitude ratio between pile-up component pulses were also investigated and statistical tests were applied to quantify the consistency of deconvolution results for each case. Monte Carlo simulation demonstrated that applications of these pile-up deconvolution techniques to radiation spectroscopy are effective in extending the counting-rate range while preserving energy resolution for scintillation detectors
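
    A minimal sketch of the least-squares view of pile-up deconvolution described above, assuming a known single-pulse template and known trial arrival times; the pulse shape, amplitudes and noise level are made up, and this is not the authors' real-time algorithm.

```python
import numpy as np

def single_pulse(n, tau_rise=5.0, tau_decay=40.0):
    """Toy scintillator-like pulse shape (fast rise, exponential decay)."""
    t = np.arange(n, dtype=float)
    return (1.0 - np.exp(-t / tau_rise)) * np.exp(-t / tau_decay)

def fit_pileup(waveform, arrival_samples, pulse):
    """Least-squares amplitudes of template pulses starting at given samples."""
    n = waveform.size
    basis = np.zeros((n, len(arrival_samples)))
    for j, k in enumerate(arrival_samples):
        m = min(pulse.size, n - k)
        basis[k:k + m, j] = pulse[:m]
    amps, *_ = np.linalg.lstsq(basis, waveform, rcond=None)
    return amps

pulse = single_pulse(200)
wave = np.zeros(400)
wave[50:250] += 1.0 * pulse              # first pulse
wave[90:290] += 0.4 * pulse              # piled-up second pulse
wave += 0.01 * np.random.randn(400)      # digitization/noise stand-in
print(fit_pileup(wave, [50, 90], pulse)) # approximately [1.0, 0.4]
```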

  4. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok

    2017-11-15

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency-band of data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly handle the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and we selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long wavelength structure without low frequency information in the recorded data.
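
    The band-selective normalization described above can be sketched as follows; the cut-off f_max, the regularization eps and the function name are placeholders chosen for this example rather than values from the paper.

```python
import numpy as np

def band_limited_deconv_filter(d_mod, d_obs, dt, f_max=5.0, eps=1e-8):
    """Estimate the deconvolution filter only over frequencies <= f_max,
    i.e. the band carrying the artificial low frequencies created by the
    exponential damping; higher frequencies are simply zeroed out."""
    D_mod, D_obs = np.fft.rfft(d_mod), np.fft.rfft(d_obs)
    freqs = np.fft.rfftfreq(d_mod.size, dt)
    ratio = D_mod * np.conj(D_obs) / (np.abs(D_obs) ** 2 + eps)
    ratio[freqs > f_max] = 0.0
    return np.fft.irfft(ratio, n=d_mod.size)
```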

  5. Time-domain full waveform inversion of exponentially damped wavefield using the deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) suffers from the cycle-skipping problem when the available frequency-band of data is not low enough. We apply an exponential damping to the data to generate artificial low frequencies, which helps FWI avoid cycle skipping. In this case, the least-squares misfit function does not properly handle the exponentially damped wavefield in FWI, because the amplitude of traces decays almost exponentially with increasing offset in a damped wavefield. Thus, we use a deconvolution-based objective function for FWI of the exponentially damped wavefield. The deconvolution filter inherently includes a normalization between the modeled and observed data, so it can address the unbalanced amplitude of a damped wavefield. Specifically, we normalize the modeled data with the observed data in the frequency domain to estimate the deconvolution filter, and we selectively choose a frequency band for normalization that mainly includes the artificial low frequencies. We calculate the gradient of the objective function using the adjoint-state method. The synthetic and benchmark data examples show that our FWI algorithm generates a convergent long wavelength structure without low frequency information in the recorded data.

  6. Waveform inversion with exponential damping using a deconvolution-based objective function

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2016-01-01

    The lack of low frequency components in seismic data usually leads full waveform inversion into the local minima of its objective function. An exponential damping of the data, on the other hand, generates artificial low frequencies, which can be used to admit long wavelength updates for waveform inversion.

  7. Gamma-ray spectrometry applied to down-hole logging

    International Nuclear Information System (INIS)

    Dumesnil, P.; Umiastowsky, K.

    1983-11-01

    Gamma-ray spectrometry makes it possible to improve the accuracy of natural gamma, gamma-gamma and neutron-gamma geophysical measurements. The probe developed at Centre d'Etudes Nucleaires de Saclay allows down-hole gamma-ray spectrometry. Among other applications, this probe can be used for determining the uranium content by the selective natural gamma method, for down-hole determination of the ash content of coal by the selective gamma-gamma method, and for elemental analysis by the neutron-gamma method. For the calibration and an exact interpretation of the measurements, it is important to know the gamma-ray and neutron characteristics of the different kinds of rocks, treated as probabilistic variables.

  8. Down-hole catalytic upgrading of heavy crude oil

    Energy Technology Data Exchange (ETDEWEB)

    Weissman, J.G.; Kessler, R.V.; Sawicki, R.A.; Belgrave, J.D.M.; Laureshen, C.J.; Mehta, S.A.; Moore, R.G.; Ursenbach, M.G. [University of Calgary, Calgary, AB (Canada). Dept. of Chemical and Petroleum Engineering

    1996-07-01

    Several processing options have been developed to accomplish near-well bore in-situ upgrading of heavy crude oils. These processes are designed to pass oil over a fixed bed of catalyst before it enters the production well, the catalyst being placed by conventional gravel pack methods. The presence of brine and the need to provide heat and reactant gases in a down-hole environment pose challenges not present in conventional processing. These issues were addressed and the processes demonstrated by use of a modified combustion tube apparatus. Middle-Eastern heavy crude oil and the corresponding brine were used at the appropriate reservoir conditions. In-situ combustion was used to generate reactive gases and to drive fluids over a heated sand or catalyst bed, simulating the catalyst contacting portion of the proposed processes. The heavy crude oil was found to be amenable to in-situ combustion at anticipated reservoir conditions, with a relatively low air requirement. Forcing the oil to flow over a heated zone prior to production results in some upgrading of the oil, as compared to the original oil, due to thermal effects. Passing the oil over a hydroprocessing catalyst located in the heated zone results in a product that is significantly upgraded as compared to either the original oil or the thermally processed oil. Catalytic upgrading is due to hydrogenation and results in about 50% sulfur removal and an 8° API gravity increase. Additionally, the heated catalyst was found to be efficient at converting CO to additional H₂. While all of the technologies needed for a successful field trial of in-situ catalytic upgrading exist, a demonstration has yet to be undertaken. 27 refs., 5 figs., 5 tabs.

  9. INTEGRATED DRILLING SYSTEM USING MUD ACTUATED DOWN HOLE HAMMER AS PRIMARY ENGINE

    Energy Technology Data Exchange (ETDEWEB)

    John V. Fernandez; David S. Pixton

    2005-12-01

    A history and project summary of the development of an integrated drilling system using a mud-actuated down-hole hammer as its primary engine are given. The summary includes laboratory test results, including atmospheric tests of component parts and simulated borehole tests of the hammer system. Several remaining technical hurdles are enumerated. A brief explanation of commercialization potential is included. The primary conclusion of this work is that a mud-actuated hammer can yield substantial improvements to drilling rate in overbalanced, hard rock formations. A secondary conclusion is that the down-hole mud-actuated hammer can serve to provide other useful down-hole functions, including generation of high pressure mud jets, generation of seismic and sonic signals, and generation of diagnostic information based on hammer velocity profiles.

  10. Numerical and experimental investigation of thermoelectric cooling in down-hole measuring tools; a case study

    Directory of Open Access Journals (Sweden)

    Rohitha Weerasinghe

    2017-09-01

    The use of Peltier cooling in down-hole seismic tooling has been restricted by the performance of such devices at elevated temperatures. The present paper analyses the performance of Peltier cooling at temperatures relevant to down-hole measuring equipment using measurements, predicted manufacturer data and computational fluid dynamics analysis. Peltier performance prediction techniques are presented together with measurements. The validity of extrapolating thermoelectric cooling performance to elevated temperatures has been tested using computational models of the thermoelectric cooling device. This method has been used to model the cooling characteristics of a prototype down-hole tool, and the computational technique used has been shown to be valid.

  11. Blind source deconvolution for deep Earth seismology

    Science.gov (United States)

    Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.

    2007-12-01

    We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which exploits the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications and waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
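
    For reference, a minimal version of the water-level deconvolution that the study compares against TV regularization; the water level, the Gaussian source and the two-spike reflectivity are assumptions made for this example, not data from the study.

```python
import numpy as np

def water_level_deconv(trace, source, level=0.01):
    """Spectral division of the trace by the estimated source; spectral power
    below level * max(power) is clipped to that floor (the 'water level')."""
    n = trace.size
    T, S = np.fft.rfft(trace, n), np.fft.rfft(source, n)
    power = np.abs(S) ** 2
    floor = level * power.max()
    return np.fft.irfft(T * np.conj(S) / np.maximum(power, floor), n)

# Toy check: a two-spike reflectivity convolved with a Gaussian source; the
# deconvolution returns band-limited spikes at the correct lags.
src = np.exp(-0.5 * ((np.arange(64) - 10) / 3.0) ** 2)
refl = np.zeros(256)
refl[[40, 120]] = [1.0, -0.5]
trace = np.convolve(refl, src)[:256]
out = water_level_deconv(trace, src)
print(int(np.argmax(out)), int(np.argmin(out)))   # 40 and 120
```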

  12. Casingless down-hole for sealing an ablation volume and obtaining a sample for analysis

    Science.gov (United States)

    Noble, Donald T.; Braymen, Steven D.; Anderson, Marvin S.

    1996-10-01

    A casing-less down-hole sampling system for acquiring a subsurface sample for analysis using an inductively coupled plasma system is disclosed. The system includes a probe which is pushed into the formation to be analyzed using a hydraulic ram system. The probe includes a detachable tip member which has a soil point and a barb, with the soil point aiding the penetration of the earth, and the barb causing the tip member to disengage from the probe and remain in the formation when the probe is pulled up. The probe is forced into the formation to be tested, and then pulled up slightly, to disengage the tip member and expose a column of the subsurface formation to be tested. An instrumentation tube mounted in the probe is then extended outward from the probe to longitudinally extend into the exposed column. A balloon seal mounted on the end of the instrumentation tube allows the bottom of the column to be sealed. A source of laser radiation is emitted from the instrumentation tube to ablate a sample from the exposed column. The instrumentation tube can be rotated in the probe to sweep the laser source across the surface of the exposed column. An aerosol transport system carries the ablated sample from the probe to the surface for testing in an inductively coupled plasma system. By testing at various levels in the down-hole as the probe is extracted from the soil, a profile of the subsurface formation may be obtained.

  13. Harsh-Environment Solid-State Gamma Detector for Down-hole Gas and Oil Exploration

    International Nuclear Information System (INIS)

    Peter Sandvik; Stanislav Soloviev; Emad Andarawis; Ho-Young Cha; Jim Rose; Kevin Durocher; Robert Lyons; Bob Pieciuk; Jim Williams; David O'Connor

    2007-01-01

    The goal of this program was to develop a revolutionary solid-state gamma-ray detector suitable for use in down-hole gas and oil exploration. This advanced detector would employ wide-bandgap semiconductor technology to extend the gamma sensor's temperature capability up to 200 °C and to provide extended reliability that significantly exceeds current designs based on photomultiplier tubes. In Phase II, project tasks were focused on optimization of the final APD design, growing and characterizing the full scintillator crystals of the selected composition, arranging the APD device packaging, developing the needed optical coupling between the scintillator and the APD, and characterizing the combined elements as a full detector system in preparation for commercialization. What follows is a summary report from the second 18-month phase of this program.

  14. Harsh-Environment Solid-State Gamma Detector for Down-hole Gas and Oil Exploration

    Energy Technology Data Exchange (ETDEWEB)

    Peter Sandvik; Stanislav Soloviev; Emad Andarawis; Ho-Young Cha; Jim Rose; Kevin Durocher; Robert Lyons; Bob Pieciuk; Jim Williams; David O'Connor

    2007-08-10

    The goal of this program was to develop a revolutionary solid-state gamma-ray detector suitable for use in down-hole gas and oil exploration. This advanced detector would employ wide-bandgap semiconductor technology to extend the gamma sensor's temperature capability up to 200 °C and to provide extended reliability that significantly exceeds current designs based on photomultiplier tubes. In Phase II, project tasks were focused on optimization of the final APD design, growing and characterizing the full scintillator crystals of the selected composition, arranging the APD device packaging, developing the needed optical coupling between the scintillator and the APD, and characterizing the combined elements as a full detector system in preparation for commercialization. What follows is a summary report from the second 18-month phase of this program.

  15. Down-Hole Heat Exchangers: Modelling of a Low-Enthalpy Geothermal System for District Heating

    Directory of Open Access Journals (Sweden)

    M. Carlini

    2012-01-01

    In order to face the growing energy demands, renewable energy sources can provide an alternative to fossil fuels. Thus, low-enthalpy geothermal plants may play a fundamental role in those areas—such as the Province of Viterbo—where shallow groundwater basins occur and conventional geothermal plants cannot be developed. Such areas could then be fuelled by locally available sources. The aim of the present paper is to exploit the heat coming from a low-enthalpy geothermal system. The experimental plant consists of a down-hole heat exchanger for civil purposes and can supply thermal needs by district heating. An implementation in the MATLAB environment is provided in order to develop a mathematical model. As a consequence, the amount of withdrawable heat can be successfully calculated.

  16. Assessment and interpretation of cross- and down-hole seismograms at the Paducah Gaseous Diffusion Plant

    Energy Technology Data Exchange (ETDEWEB)

    Staub, W.P.; Wang, J.C. (Oak Ridge National Lab., TN (United States)); Selfridge, R.J. (Automated Sciences Group, (United States))

    1991-09-01

    This paper is an assessment and interpretation of cross- and down-hole seismograms recorded at four sites in the vicinity of the Paducah Gaseous Diffusion Plant (PGDP). Arrival times of shear (S-) and compressional (P-) waves are recorded on these seismograms in milliseconds. Together with known distances between energy sources and seismometers lowered into boreholes, these arrival times are used to calculate S- and P-wave velocities in unconsolidated soils and sediments that overlie bedrock approximately 320 ft beneath PGDP. The soil columns are modified after an earlier draft by ERC Environmental and Energy Services Company (ERCE), 1990. In addition to the S- and P-wave velocity estimates from this paper, the soil columns contain ERCE's lithologic and other geotechnical data for unconsolidated soils and sediments from the surface to bedrock. Soil columns for Sites 1 through 4 and a site location map are in Plates 1 through 5 of Appendix 6. The velocities in the four columns are input parameters for the SHAKE computer program, a nationally recognized computer model that simulates the ground response of unconsolidated materials to earthquake-generated seismic waves. The results of the SHAKE simulation are combined with predicted ground responses on rock foundations (caused by a given design earthquake) to predict ground responses of facilities with foundations placed on unconsolidated materials. 3 refs.
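
    A toy version of the velocity calculation described above (not the PGDP processing itself): interval velocities follow from the change in arrival time between successive geophone depths; the depths and picks below are hypothetical.

```python
import numpy as np

# Hypothetical down-hole picks; real surveys also correct each travel time for
# the slant path from the offset surface source to the borehole geophone.
depths_ft = np.array([20.0, 60.0, 100.0, 140.0])   # geophone depths
s_picks_ms = np.array([25.0, 70.0, 110.0, 145.0])  # S-wave arrival times

interval_vs = np.diff(depths_ft) / (np.diff(s_picks_ms) / 1000.0)  # ft/s
print(interval_vs)      # S-wave velocity of each depth interval
```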

  17. The discrete Kalman filtering approach for seismic signals deconvolution

    International Nuclear Information System (INIS)

    Kurniadi, Rizal; Nurhandoko, Bagus Endar B.

    2012-01-01

    Seismic signals are a convolution of reflectivity and a seismic wavelet. One of the most important stages in seismic data processing is deconvolution; the deconvolution process uses inverse filters based on Wiener filter theory. This theory is limited by certain modelling assumptions, which may not always be valid. The discrete form of the Kalman filter is then used to generate an estimate of the reflectivity function. The main advantages of Kalman filtering are its capability to handle continually time-varying models and its high resolution. In this work, we use a discrete Kalman filter combined with primitive deconvolution. The filtering process works on the reflectivity function, so the workflow starts with primitive deconvolution using the inverse of the wavelet. The seismic signals are then obtained by convolving the filtered reflectivity function with the energy waveform, which is referred to as the seismic wavelet. A higher-frequency wavelet gives a smaller wavelength; graphs of these results are presented.
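
    To illustrate the predict/update recursion that the paper applies to the reflectivity function, here is a generic scalar Kalman filter for a random-walk state observed in noise; the process and measurement variances and the step-signal test are assumptions for this example, not the seismic implementation.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter for x_k = x_{k-1} + w_k (var q), z_k = x_k + v_k (var r)."""
    x, p = 0.0, 1.0                  # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                    # predict step (random-walk model)
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

true_state = np.concatenate([np.zeros(50), np.ones(50)])   # step signal
z = true_state + 0.1 * np.random.randn(100)
print(np.round(kalman_1d(z)[[10, 60, 99]], 2))   # gradually tracks the 0 -> 1 step
```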

  18. Deconvolution of time series in the laboratory

    Science.gov (United States)

    John, Thomas; Pietschmann, Dirk; Becker, Volker; Wagner, Christian

    2016-10-01

    In this study, we present two practical applications of the deconvolution of time series in Fourier space. First, using a software approach, we reconstruct a filtered input signal of sound cards that has been heavily distorted by a built-in high-pass filter. Using deconvolution, we can partially bypass the filter and extend the dynamic frequency range by two orders of magnitude. Second, we construct the required input signals for a mechanical shaker in order to obtain arbitrary acceleration waveforms, referred to as feedforward control. For both situations, experimental and theoretical approaches are discussed to determine the system-dependent frequency response. Moreover, for the shaker, we propose a simple feedback loop as an extension to the feedforward control in order to handle nonlinearities of the system.
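
    A hedged sketch of deconvolution in Fourier space with a known frequency response, in the spirit of the sound-card example above; the first-order high-pass response, corner frequency and regularization used here are stand-ins, not the measured response of any actual device.

```python
import numpy as np

def deconvolve_by_response(recorded, H, eps=1e-3):
    """Regularized division of the recorded spectrum by the system response H
    (one complex value per rfft bin)."""
    Y = np.fft.rfft(recorded)
    return np.fft.irfft(Y * np.conj(H) / (np.abs(H) ** 2 + eps), n=recorded.size)

fs, n = 48000, 4096
f = np.fft.rfftfreq(n, 1.0 / fs)
fc = 100.0                                  # assumed high-pass corner frequency
H = (1j * f / fc) / (1.0 + 1j * f / fc)     # first-order high-pass response

k = 2                                       # bin-aligned test tone (~23 Hz)
x = np.sin(2 * np.pi * k * np.arange(n) / n)
y = np.fft.irfft(np.fft.rfft(x) * H, n=n)   # what the "sound card" records
x_rec = deconvolve_by_response(y, H)
print(round(np.abs(y).max(), 2), round(np.abs(x_rec).max(), 2))  # ~0.23 vs ~0.98
```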

  19. Deconvoluting double Doppler spectra

    International Nuclear Information System (INIS)

    Ho, K.F.; Beling, C.D.; Fung, S.; Chan, K.L.; Tang, H.W.

    2001-01-01

    The successful deconvolution of data from double Doppler broadening of annihilation radiation (D-DBAR) spectroscopy is a promising area of endeavour aimed at producing momentum distributions of a quality comparable to those of the angular correlation technique. The deconvolution procedure we test in the present study is the constrained generalized least-squares method. Trials with computer-simulated D-DBAR spectra are generated and deconvolved in order to find the best form of regularizer and the regularization parameter. For these trials the Neumann (reflective) boundary condition is used to give a single matrix operation in Fourier space. Experimental D-DBAR spectra are also subjected to the same type of deconvolution after carrying out a background subtraction and using a symmetrized resolution function obtained from an 85Sr source with wide coincidence windows. (orig.)

  20. Deconvolution of Positrons' Lifetime spectra

    International Nuclear Information System (INIS)

    Calderin Hidalgo, L.; Ortega Villafuerte, Y.

    1996-01-01

    In this paper, we explain the iterative method previously developed for the deconvolution of Doppler broadening spectra using mathematical optimization theory. We also begin the adaptation and application of this method to the deconvolution of positron lifetime annihilation spectra.

  1. Studying Regional Wave Source Time Functions Using A Massive Automated EGF Deconvolution Procedure

    Science.gov (United States)

    Xie, J.; Schaff, D. P.

    2010-12-01

    Reliably estimated source time functions (STF) from high-frequency regional waveforms, such as Lg, Pn and Pg, provide important input for seismic source studies, explosion detection, and minimization of parameter trade-off in attenuation studies. The empirical Green’s function (EGF) method can be used for estimating the STF, but it requires a strict recording condition: waveforms from pairs of events that are similar in focal mechanism but different in magnitude must be recorded on-scale at the same stations for the method to work. Searching for such waveforms can be very time consuming, particularly for regional waves that contain complex path effects and have reduced S/N ratios due to attenuation. We have developed a massive, automated procedure to conduct inter-event waveform deconvolution calculations from many candidate event pairs. The procedure automatically evaluates the “spikiness” of the deconvolutions by calculating their “sdc”, which is defined as the peak divided by the background value. The background value is calculated as the mean absolute value of the deconvolution, excluding 10 s around the source time function. When the sdc values are about 10 or higher, the deconvolutions are found to be sufficiently spiky (pulse-like), indicating similar path Green’s functions and good estimates of the STF. We have applied this automated procedure to Lg waves and full regional wavetrains from 989 M ≥ 5 events in and around China, calculating about a million deconvolutions. Of these we found about 2700 deconvolutions with sdc greater than 9 which, if they have a sufficiently broad frequency band, can be used to estimate the STF of the larger events. We are currently refining our procedure, as well as the estimated STFs. We will infer the source scaling using the STFs. We will also explore the possibility that the deconvolution procedure could complement cross-correlation in a real-time event-screening process.
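
    The sdc measure defined above translates directly into code; the sampling interval, the interpretation of the 10 s exclusion as +/- 5 s around the peak, and the synthetic trace are assumptions made for this example.

```python
import numpy as np

def sdc(deconv, dt, exclude_s=10.0):
    """Peak of |deconvolution| divided by the mean absolute background,
    excluding a 10 s window (interpreted here as +/- 5 s) around the STF."""
    peak_idx = int(np.argmax(np.abs(deconv)))
    half = int(round(0.5 * exclude_s / dt))
    mask = np.ones(deconv.size, dtype=bool)
    mask[max(0, peak_idx - half):peak_idx + half + 1] = False
    return np.abs(deconv[peak_idx]) / np.mean(np.abs(deconv[mask]))

dt = 0.1
deconv = 0.05 * np.random.randn(3000)   # 300 s of background "deconvolution"
deconv[1500] = 1.0                      # a spiky, pulse-like STF estimate
print(round(sdc(deconv, dt), 1))        # well above the ~10 acceptance level
```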

  2. Deconvolution using a neural network

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, S.K.

    1990-11-15

    Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural network backpropagation matrix inverse with LMS and pseudo-inverse approaches. This is largely an exercise in understanding how our neural network code works. 1 ref.
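
    The matrix-inversion view of one-dimensional deconvolution can be sketched as follows (the neural-network and LMS comparisons are not reproduced); the kernel and the sparse input are invented for the example.

```python
import numpy as np
from scipy.linalg import toeplitz

kernel = np.array([1.0, 0.6, 0.3, 0.1])        # assumed blurring kernel
x = np.zeros(50)
x[[10, 25, 26]] = [1.0, -0.5, 0.8]             # sparse "true" input
y = np.convolve(x, kernel)                     # observed output, length 53

# Convolution as a Toeplitz matrix multiply: y = H @ x
col = np.concatenate([kernel, np.zeros(y.size - kernel.size)])
row = np.zeros(x.size)
row[0] = kernel[0]
H = toeplitz(col, row)                         # shape (53, 50)

x_hat = np.linalg.pinv(H) @ y                  # pseudo-inverse deconvolution
print(np.round(x_hat[[10, 25, 26]], 3))        # recovers [1.0, -0.5, 0.8]
```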

  3. Application of deconvolution interferometry with both Hi-net and KiK-net data

    Science.gov (United States)

    Nakata, N.

    2013-12-01

    Application of deconvolution interferometry to wavefields observed by KiK-net, a strong-motion recording network in Japan, is useful for estimating wave velocities and S-wave splitting in the near surface. Using this technique, for example, Nakata and Snieder (2011, 2012) found changes in velocities caused by the Tohoku-Oki earthquake in Japan. At the location of the borehole accelerometer of each KiK-net station, a velocity sensor is also installed as part of a high-sensitivity seismograph network (Hi-net). I present a technique that uses both Hi-net and KiK-net records for computing deconvolution interferometry. The deconvolved waveform obtained from the combination of Hi-net and KiK-net data is similar to the waveform computed from KiK-net data only, which indicates that one can use Hi-net wavefields for deconvolution interferometry. Because Hi-net records have a high signal-to-noise ratio (S/N) and high dynamic resolution, the S/N and the quality of the amplitude and phase of deconvolved waveforms can be improved with Hi-net data. These advantages are especially important for short-time moving-window seismic interferometry and deconvolution interferometry using later coda waves.

  4. Harmonic arbitrary waveform generator

    Science.gov (United States)

    Roberts, Brock Franklin

    2017-11-28

    High frequency arbitrary waveforms have applications in radar, communications, medical imaging, therapy, electronic warfare, and charged particle acceleration and control. State-of-the-art arbitrary waveform generators are limited in the frequency at which they can operate by the speed of the digital-to-analog converters that directly create their arbitrary waveforms. The architecture of the Harmonic Arbitrary Waveform Generator allows the phase and amplitude of the high frequency content of waveforms to be controlled without taxing the digital-to-analog converters that control them. The Harmonic Arbitrary Waveform Generator converts a high frequency input into a precision, adjustable, high frequency arbitrary waveform.
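
    As a toy illustration of the underlying idea of controlling the amplitude and phase of individual harmonics (not the patented architecture), a waveform can be synthesized by Fourier summation; the fundamental frequency, sample rate and harmonic settings below are placeholders.

```python
import numpy as np

def synthesize(f0, fs, duration, harmonics):
    """harmonics: iterable of (order, amplitude, phase_rad) triples."""
    t = np.arange(int(duration * fs)) / fs
    return sum(a * np.cos(2 * np.pi * n * f0 * t + p) for n, a, p in harmonics)

# Fundamental at 1 MHz sampled at 50 MS/s, with adjustable 3rd and 5th harmonics.
wave = synthesize(f0=1e6, fs=50e6, duration=5e-6,
                  harmonics=[(1, 1.0, 0.0), (3, 0.3, np.pi / 2), (5, 0.1, np.pi)])
print(wave.shape)      # (250,) samples of the synthesized waveform
```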

  5. Contribution of the Surface and Down-Hole Seismic Networks to the Location of Earthquakes at the Soultz-sous-Forêts Geothermal Site (France)

    Science.gov (United States)

    Kinnaert, X.; Gaucher, E.; Kohl, T.; Achauer, U.

    2018-03-01

    Seismicity induced in geo-reservoirs can be a valuable observation to image fractured reservoirs, to characterize hydrological properties, or to mitigate seismic hazard. However, this requires accurate location of the seismicity, which is nowadays an important seismological task in reservoir engineering. The earthquake location (determination of the hypocentres) depends on the model used to represent the medium in which the seismic waves propagate and on the seismic monitoring network. In this work, location uncertainties and location inaccuracies are modeled to investigate the impact of several parameters on the determination of the hypocentres: the picking uncertainty, the numerical precision of picked arrival times, a velocity perturbation and the seismic network configuration. The method is applied to the geothermal site of Soultz-sous-Forêts, which is located in the Upper Rhine Graben (France) and which was subject to detailed scientific investigations. We focus on a massive water injection performed in the year 2000 to enhance the productivity of the well GPK2 in the granitic basement, at approximately 5 km depth, and which induced more than 7000 earthquakes recorded by down-hole and surface seismic networks. We compare the location errors obtained from the joint or the separate use of the down-hole and surface networks. Besides the quantification of location uncertainties caused by picking uncertainties, the impact of the numerical precision of the picked arrival times as provided in a reference catalogue is investigated. The velocity model is also modified to mimic possible effects of a massive water injection and to evaluate its impact on earthquake hypocentres. It is shown that the use of the down-hole network in addition to the surface network provides smaller location uncertainties but can also lead to larger inaccuracies. Hence, location uncertainties would not be well representative of the location errors and interpretation of the seismicity

  6. Is deconvolution applicable to renography?

    NARCIS (Netherlands)

    Kuyvenhoven, JD; Ham, H; Piepsz, A

    The feasibility of deconvolution depends on many factors, but the technique cannot provide accurate results if the maximal transit time (MaxTT) is longer than the duration of the acquisition. This study evaluated whether, on the basis of a 20 min renogram, it is possible to predict in which cases

  7. Convolution-deconvolution in DIGES

    International Nuclear Information System (INIS)

    Philippacopoulos, A.J.; Simos, N.

    1995-01-01

    Convolution and deconvolution operations are by all means a very important aspect of SSI analysis, since they influence the input to the seismic analysis. This paper documents some of the convolution/deconvolution procedures which have been implemented in the DIGES code. The 1-D propagation of shear and dilatational waves in typical layered configurations involving a stack of layers overlying rock is treated by DIGES in a similar fashion to that of available codes, e.g. CARES, SHAKE. For certain configurations, however, there is no need to perform such analyses since the corresponding solutions can be obtained in analytic form. Typical cases involve deposits which can be modeled by a uniform halfspace or simple layered halfspaces. For such cases DIGES uses closed-form solutions. These solutions are given for one- as well as two-dimensional deconvolution. The types of waves considered include P, SV and SH waves. Non-vertical incidence is given special attention, since deconvolution can be defined differently depending on the problem of interest. For all wave cases considered, the corresponding transfer functions are presented in closed form. Transient solutions are obtained in the frequency domain. Finally, a variety of forms are considered for representing the free-field motion, both in terms of deterministic as well as probabilistic representations. These include (a) acceleration time histories, (b) response spectra, (c) Fourier spectra and (d) cross-spectral densities

  8. Deconvolution algorithms applied in ultrasonics

    International Nuclear Information System (INIS)

    Perrot, P.

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse evolves. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, and minimum variance deconvolution. All the algorithms were first tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data have been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs

  9. Deconvolution using the complex cepstrum

    Energy Technology Data Exchange (ETDEWEB)

    Riley, H B

    1980-12-01

    The theory, description, and implementation of a generalized linear filtering system for the nonlinear filtering of convolved signals are presented. A detailed look at the problems and requirements associated with the deconvolution of signal components is undertaken. Related properties are also developed. A synthetic example is shown and is followed by an application using real seismic data. 29 figures.
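
    A minimal complex-cepstrum sketch in the spirit of the report (not its implementation): taking the log of the spectrum turns a convolution into an addition in the cepstral domain, which is the property homomorphic deconvolution exploits; the wavelet and reflectivity below are made up, and the usual linear-phase removal is omitted for brevity.

```python
import numpy as np

def complex_cepstrum(x, n):
    """Cepstrum from the log magnitude and unwrapped phase of the spectrum.
    (A full implementation would also remove the linear phase trend.)"""
    X = np.fft.fft(x, n)
    log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
    return np.fft.ifft(log_X).real

wavelet = np.array([1.0, 0.8, 0.4, 0.1])
refl = np.zeros(64)
refl[[5, 20]] = [1.0, 0.5]
trace = np.convolve(refl, wavelet)             # convolution in the time domain

c_trace = complex_cepstrum(trace, 128)
c_parts = complex_cepstrum(wavelet, 128) + complex_cepstrum(refl, 128)
# In the cepstral domain the convolution has become an addition:
print(round(float(np.max(np.abs(c_trace - c_parts))), 6))   # ~0
```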

  10. Programmable waveform controller

    International Nuclear Information System (INIS)

    Yeh, H.T.

    1979-01-01

    A programmable waveform controller (PWC) was developed for voltage waveform generation in the laboratory. It is based on the Intel 8080 family of chips. The hardware uses the modular board approach, sharing a common 44-pin bus. The software contains two separate programs: the first generates a single connected linear ramp waveform and is capable of bipolar operation, linear interpolation between input data points, extended time range, and cycling; the second generates four independent square waveforms with variable duration and amplitude

  11. Deconvolution under Poisson noise using exact data fidelity and synthesis or analysis sparsity priors

    OpenAIRE

    Dupé , François-Xavier; Fadili , Jalal M.; Starck , Jean-Luc

    2012-01-01

    In this paper, we propose a Bayesian MAP estimator for solving the deconvolution problems when the observations are corrupted by Poisson noise. Towards this goal, a proper data fidelity term (log-likelihood) is introduced to reflect the Poisson statistics of the noise. On the other hand, as a prior, the images to restore are assumed to be positive and sparsely represented in a dictionary of waveforms such as wavelets or curvelets. Both analysis and synthesis-type spars...

  12. Blind Deconvolution With Model Discrepancies

    Czech Academy of Sciences Publication Activity Database

    Kotera, Jan; Šmídl, Václav; Šroubek, Filip

    2017-01-01

    Vol. 26, No. 5 (2017), pp. 2533-2544 ISSN 1057-7149 R&D Projects: GA ČR GA13-29225S; GA ČR GA15-16928S Institutional support: RVO:67985556 Keywords: blind deconvolution * variational Bayes * automatic relevance determination Subject RIV: JD - Computer Applications, Robotics OBOR OECD: Computer hardware and architecture Impact factor: 4.828, year: 2016 http://library.utia.cas.cz/separaty/2017/ZOI/kotera-0474858.pdf

  13. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    Science.gov (United States)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another advantage is that it is not necessary to assume that the medium is lossless. The approaches that have been developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference from crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is fulfilled, for example, when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination without requiring knowledge about the positions and the spectra of the sources.

  14. Machine Learning Approaches to Image Deconvolution

    OpenAIRE

    Schuler, Christian

    2017-01-01

    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. To reconstruct the original sharp image, the field of image deconvolution tries to recover recorded photographs algorithmically. When the blur is known, this problem is called non-blind deconvolution. When the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to r...

  15. A new deconvolution method applied to ultrasonic images

    International Nuclear Information System (INIS)

    Sallard, J.

    1999-01-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing in order to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account to solve it. The a priori information reflects the physical properties of the ultrasonic signals. The defect impulse response is modeled as a Double-Bernoulli-Gaussian sequence. Deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a large amount of data to be processed quickly. Many experimental ultrasonic data sets that reflect usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm makes it possible not only to remove the waveform emitted by the transducer but also to estimate the phase. This parameter is useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  16. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  17. A new deconvolution method applied to ultrasonic images; Etude d'une methode de deconvolution adaptee aux images ultrasonores

    Energy Technology Data Exchange (ETDEWEB)

    Sallard, J

    1999-07-01

    This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing in order to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account to solve it. The a priori information reflects the physical properties of the ultrasonic signals. The defect impulse response is modeled as a Double-Bernoulli-Gaussian sequence. Deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a large amount of data to be processed quickly. Many experimental ultrasonic data sets that reflect usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm makes it possible not only to remove the waveform emitted by the transducer but also to estimate the phase. This parameter is useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating the information, so automatic characterization should be possible in the future. (author)

  18. Surrogate waveform models

    Science.gov (United States)

    Blackman, Jonathan; Field, Scott; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    With the advanced detector era just around the corner, there is a strong need for fast and accurate models of gravitational waveforms from compact binary coalescence. Fast surrogate models can be built out of an accurate but slow waveform model with minimal to no loss in accuracy, but they may require a large number of evaluations of the underlying model. This may be prohibitively expensive if the underlying model is extremely slow, for example if we wish to build a surrogate for numerical relativity. We examine alternative choices for building surrogate models which allow for a sparser set of input waveforms. Research supported in part by NSERC.

  19. Streaming Multiframe Deconvolutions on GPUs

    Science.gov (United States)

    Lee, M. A.; Budavári, T.

    2015-09-01

    Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away a lot of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. This is done in a streaming setting, which makes it practical for large numbers of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.

  20. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
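
    The Toeplitz/Levinson machinery mentioned above can be sketched with a prediction-error filter computed from the Yule-Walker normal equations; the test signal and filter order are assumptions for this example, not the paper's receiver-function code.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def prediction_error_filter(x, order):
    """Prediction-error filter a = [1, a1, ..., ap] from the Toeplitz
    (Yule-Walker) normal equations; scipy solves them by Levinson recursion."""
    r = np.correlate(x, x, mode='full')[x.size - 1:]       # autocorrelation
    a_tail = solve_toeplitz((r[:order], r[:order]), -r[1:order + 1])
    return np.concatenate([[1.0], a_tail])

x = np.cos(0.3 * np.arange(400)) + 0.1 * np.random.randn(400)
a = prediction_error_filter(x, order=4)
residual = np.convolve(x, a, mode='valid')                 # whitened trace
print(np.round(a, 3), round(float(np.var(residual) / np.var(x)), 3))
```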

  1. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  2. Parsimonious Charge Deconvolution for Native Mass Spectrometry

    Science.gov (United States)

    2018-01-01

    Charge deconvolution infers the mass from mass over charge (m/z) measurements in electrospray ionization mass spectra. When applied over a wide input m/z or broad target mass range, charge-deconvolution algorithms can produce artifacts, such as false masses at one-half or one-third of the correct mass. Indeed, a maximum entropy term in the objective function of MaxEnt, the most commonly used charge deconvolution algorithm, favors a deconvolved spectrum with many peaks over one with fewer peaks. Here we describe a new “parsimonious” charge deconvolution algorithm that produces fewer artifacts. The algorithm is especially well-suited to high-resolution native mass spectrometry of intact glycoproteins and protein complexes. Deconvolution of native mass spectra poses special challenges due to salt and small molecule adducts, multimers, wide mass ranges, and fewer and lower charge states. We demonstrate the performance of the new deconvolution algorithm on a range of samples. On the heavily glycosylated plasma properdin glycoprotein, the new algorithm could deconvolve monomer and dimer simultaneously and, when focused on the m/z range of the monomer, gave accurate and interpretable masses for glycoforms that had previously been analyzed manually using m/z peaks rather than deconvolved masses. On therapeutic antibodies, the new algorithm facilitated the analysis of extensions, truncations, and Fab glycosylation. The algorithm facilitates the use of native mass spectrometry for the qualitative and quantitative analysis of protein and protein assemblies. PMID:29376659
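
    As a back-of-the-envelope illustration of what charge deconvolution computes (the parsimonious algorithm itself is far more involved), each m/z peak of a protonated species maps to a neutral mass once a charge state is assigned; the peaks below are hypothetical.

```python
PROTON_MASS = 1.007276  # Da

def neutral_mass(mz, z):
    """Neutral (deconvolved) mass of a protonated ion observed at m/z with charge z."""
    return z * (mz - PROTON_MASS)

# Hypothetical peaks from one charge-state envelope; a consistent mass across
# several assumed charge states is what a deconvolution algorithm searches for.
peaks = [(1325.92, 22), (1389.01, 21), (1458.41, 20)]
for mz, z in peaks:
    print(z, round(neutral_mass(mz, z), 1))   # each gives ~29148 Da
```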

  3. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid the need for a high-bandwidth detector, a fast A/D converter, and a large memory, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered under low-illumination conditions.

  4. Parallelization of a blind deconvolution algorithm

    Science.gov (United States)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  5. Multichannel waveform display system

    International Nuclear Information System (INIS)

    Kolvankar, V.G.

    1989-01-01

    For any multichannel data acquisition system, a multichannel paper chart recorder undoubtedly forms an essential part of the system. When deployed on-line, it instantaneously provides, for visual inspection, hard copies of the signal waveforms on a common time base at any desired sensitivity and time resolution. Within the country, only a small range of these strip chart recorders is available, and under stringent specifications imported recorders are often procured. The cost of such recorders may range from 1 to 5 lakhs of rupees in foreign exchange. A system is developed to provide on the oscilloscope a steady display of multichannel waveforms, refreshed from the digital data stored in memory. The merits and demerits of the display system are compared with those of a system built around a conventional paper chart recorder. Various illustrations of multichannel seismic event data acquired at the Gauribidanur seismic array station are also presented. (author). 2 figs

  6. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.

  7. Preliminary study of some problems in deconvolution

    International Nuclear Information System (INIS)

    Gilly, Louis; Garderet, Philippe; Lecomte, Alain; Max, Jacques

    1975-07-01

    After defining the convolution operator, its physical meaning and principal properties are given. Several deconvolution methods are analysed: the Fourier transform method and iterative numerical methods. The positivity of the measured magnitude has been the object of a new method by Yvon Biraud. Analytic continuation of the Fourier transform applied to the unknown function has been studied by M. Jean-Paul Sheidecker. A substantial bibliography is given [fr]

  8. Electronics via waveform analysis

    CERN Document Server

    Craig, Edwin C

    1993-01-01

    The author believes that a good basic understanding of electronics can be achieved by detailed visual analyses of the actual voltage waveforms present in selected circuits. The voltage waveforms included in this text were photographed using a 35-mm camera in an attempt to make the book more attractive. This book is intended for the use of students with a variety of backgrounds. For this reason considerable material has been placed in the Appendix for those students who find it useful. The Appendix includes many basic electricity and electronic concepts as well as mathematical derivations that are not vital to the understanding of the circuit being discussed in the text at that time. Also, some derivations might be so long that, if included in the text, they could affect the concentration of the student on the circuit being studied. The author has tried to make the book comprehensive enough that a student could use it as a self-study course, provided one has access to adequate laboratory equipment.

  9. Simultaneous super-resolution and blind deconvolution

    International Nuclear Information System (INIS)

    Sroubek, F; Flusser, J; Cristobal, G

    2008-01-01

    In many real applications, blur in the input low-resolution images is a nuisance, which prevents traditional super-resolution methods from working correctly. This paper presents a unifying approach to the blind deconvolution and superresolution problem of multiple degraded low-resolution frames of the original scene. We introduce a method which assumes no prior information about the shape of the degradation blurs and which is properly defined for any rational (fractional) resolution factor. The method minimizes a regularized energy function with respect to the high-resolution image and blurs, where regularization is carried out in both the image and blur domains. The blur regularization is based on a generalized multichannel blind deconvolution constraint. Experiments on real data illustrate the robustness and utility of the method

  10. Convex blind image deconvolution with inverse filtering

    Science.gov (United States)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  11. Constrained blind deconvolution using Wirtinger flow methods

    KAUST Repository

    Walk, Philipp; Jung, Peter; Hassibi, Babak

    2017-01-01

    In this work we consider one-dimensional blind deconvolution with prior knowledge of signal autocorrelations in the classical framework of polynomial factorization. In particular this univariate case highly suffers from several non-trivial ambiguities and therefore blind deconvolution is known to be ill-posed in general. However, if additional autocorrelation information is available and the corresponding polynomials are co-prime, blind deconvolution is uniquely solvable up to global phase. Using lifting, the outer product of the unknown vectors is the solution to a (convex) semi-definite program (SDP) demonstrating that -theoretically- recovery is computationally tractable. However, for practical applications efficient algorithms are required which should operate in the original signal space. To this end we also discuss a gradient descent algorithm (Wirtinger flow) for the original non-convex problem. We demonstrate numerically that such an approach has performance comparable to the semidefinite program in the noisy case. Our work is motivated by applications in blind communication scenarios and we will discuss a specific signaling scheme where information is encoded into polynomial roots.

  13. Deconvolution of the vestibular evoked myogenic potential.

    Science.gov (United States)

    Lütkenhöner, Bernd; Basel, Türker

    2012-02-07

    The vestibular evoked myogenic potential (VEMP) and the associated variance modulation can be understood by a convolution model. Two functions of time are incorporated into the model: the motor unit action potential (MUAP) of an average motor unit, and the temporal modulation of the MUAP rate of all contributing motor units, briefly called rate modulation. The latter is the function of interest, whereas the MUAP acts as a filter that distorts the information contained in the measured data. Here, it is shown how to recover the rate modulation by undoing the filtering using a deconvolution approach. The key aspects of our deconvolution algorithm are as follows: (1) the rate modulation is described in terms of just a few parameters; (2) the MUAP is calculated by Wiener deconvolution of the VEMP with the rate modulation; (3) the model parameters are optimized using a figure-of-merit function where the most important term quantifies the difference between measured and model-predicted variance modulation. The effectiveness of the algorithm is demonstrated with simulated data. An analysis of real data confirms the view that there are basically two components, which roughly correspond to the waves p13-n23 and n34-p44 of the VEMP. The rate modulation corresponding to the first, inhibitory component is much stronger than that corresponding to the second, excitatory component. But the latter is more extended so that the two modulations have almost the same equivalent rectangular duration. Copyright © 2011 Elsevier Ltd. All rights reserved.
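
    The Wiener-deconvolution step described above can be sketched in a few lines. The following is a minimal, illustrative Python example rather than the authors' implementation: it assumes a synthetic MUAP kernel, a toy rate modulation, and a constant noise-to-signal ratio in place of the data-driven regularization and the parametric figure-of-merit optimization used in the paper.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, noise_to_signal=1e-2):
    """Recover r(t) from measured ~= kernel * r(t) + noise by Wiener filtering.

    `noise_to_signal` is an assumed constant noise-to-signal power ratio;
    the study cited above tunes its regularization against the data instead.
    """
    n = len(measured)
    K = np.fft.rfft(kernel, n)
    M = np.fft.rfft(measured, n)
    R = M * np.conj(K) / (np.abs(K) ** 2 + noise_to_signal)  # Wiener filter
    return np.fft.irfft(R, n)

# Synthetic demonstration (all shapes and constants are illustrative only).
t = np.arange(0.0, 0.1, 1e-4)                              # 100 ms at 10 kHz
muap = np.exp(-t / 0.003) * np.sin(2 * np.pi * 150 * t)    # toy MUAP kernel
rate = np.exp(-((t - 0.03) / 0.005) ** 2)                  # toy rate modulation
vemp = np.convolve(rate, muap)[: len(t)]
vemp += 0.01 * np.random.default_rng(0).normal(size=len(t))
rate_estimate = wiener_deconvolve(vemp, muap)
```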

  14. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  15. Waveform Sampler CAMAC Module

    International Nuclear Information System (INIS)

    Freytag, D.R.; Haller, G.M.; Kang, H.; Wang, J.

    1985-09-01

    A Waveform Sampler Module (WSM) for the measurement of signal shapes coming from the multi-hit drift chambers of the SLAC SLC detector is described. The module uses a high speed, high resolution analog storage device (AMU) developed in collaboration between SLAC and Stanford University. The AMU devices together with high speed TTL clocking circuitry are packaged in a hybrid which is also suitable for mounting on the detector. The module is in CAMAC format and provides eight signal channels, each recording signal amplitude versus time in 512 cells at a sampling rate of up to 360 MHz. Data are digitized by a 12-bit ADC with a 1 μs conversion time and stored in an on-board memory accessible through CAMAC

  16. Quantitative fluorescence microscopy and image deconvolution.

    Science.gov (United States)

    Swedlow, Jason R

    2013-01-01

    Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used
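
    As a concrete example of a restoration-type algorithm, the sketch below implements Richardson-Lucy iteration, one common restoration scheme chosen here purely for illustration and not necessarily the one used in the chapter. It assumes the PSF is known and the data are non-negative; image size, PSF width and noise level are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30, eps=1e-12):
    """Restoration-type deconvolution: iteratively seek an object whose
    convolution with the PSF reproduces the observed image."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# Toy example: a few point emitters blurred by a Gaussian PSF.
rng = np.random.default_rng(1)
obj = np.zeros((64, 64))
obj[20, 20] = obj[40, 45] = 1.0
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
observed = fftconvolve(obj, psf / psf.sum(), mode="same")
observed += 0.001 * rng.normal(size=obj.shape)
restored = richardson_lucy(np.clip(observed, 0, None), psf)
```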

  17. Waveform Catalog, Extreme Mass Ratio Binary (Capture)

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerically-generated gravitational waveforms for circular inspiral into Kerr black holes. These waveforms were developed using Scott Hughes' black hole perturbation...

  18. Multiples waveform inversion

    KAUST Repository

    Zhang, Dongliang

    2013-01-01

    To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI is, respectively, faster and more accurate then FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. The possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.

  19. Deconvolution algorithms applied in ultrasonics; Methodes de deconvolution en echographie ultrasonore

    Energy Technology Data Exchange (ETDEWEB)

    Perrot, P

    1993-12-01

    In a complete system of acquisition and processing of ultrasonic signals, it is often necessary at one stage to use some processing tools to get rid of the influence of the different elements of that system. By that means, the final quality of the signals in terms of resolution is improved. There are two main characteristics of ultrasonic signals which make this task difficult. Firstly, the signals generated by transducers are very often non-minimum phase. The classical deconvolution algorithms are unable to deal with such characteristics. Secondly, depending on the medium, the shape of the propagating pulse is evolving. The spatial invariance assumption often used in classical deconvolution algorithms is rarely valid. Many classical algorithms, parametric and non-parametric, have been investigated: the Wiener-type, the adaptive predictive techniques, the Oldenburg technique in the frequency domain, the minimum variance deconvolution. All the algorithms have been firstly tested on simulated data. One specific experimental set-up has also been analysed. Simulated and real data has been produced. This set-up demonstrated the interest in applying deconvolution, in terms of the achieved resolution. (author). 32 figs., 29 refs.

  20. Constrained variable projection method for blind deconvolution

    International Nuclear Information System (INIS)

    Cornelio, A; Piccolomini, E Loli; Nagy, J G

    2012-01-01

    This paper is focused on the solution of the blind deconvolution problem, here modeled as a separable nonlinear least squares problem. The well-known ill-posedness, in recovering both the blurring operator and the true image, makes the problem very difficult to handle. We show that, by imposing appropriate constraints on the variables and with well chosen regularization parameters, it is possible to obtain an objective function that is fairly well behaved. Hence, the resulting nonlinear minimization problem can be effectively solved by classical methods, such as the Gauss-Newton algorithm.
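
    The separable structure can be illustrated with a small 1-D variable-projection sketch: for each trial blur parameter the image subproblem is linear and is solved with Tikhonov regularization, while an outer least-squares solver (SciPy's bound-constrained routine here, standing in for the Gauss-Newton scheme of the paper) updates the blur parameter. The Gaussian blur model, the dimensions and the regularization weight are assumptions made only for this demonstration.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import least_squares

def blur_matrix(sigma, n, half=10):
    """Convolution matrix of a Gaussian blur of width `sigma` (assumed parametric form)."""
    t = np.arange(-half, half + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    col = np.zeros(n); col[: half + 1] = k[half:]
    row = np.zeros(n); row[: half + 1] = k[half::-1]
    return toeplitz(col, row)

def residual(theta, b, lam):
    """Variable projection: eliminate the image for the current blur parameter."""
    K = blur_matrix(theta[0], len(b))
    x = np.linalg.solve(K.T @ K + lam * np.eye(len(b)), K.T @ b)  # Tikhonov image step
    return K @ x - b

# Toy data: a piecewise-constant signal blurred with sigma = 3 plus noise.
rng = np.random.default_rng(2)
x_true = np.zeros(200); x_true[60:90] = 1.0; x_true[120:160] = 0.5
b = blur_matrix(3.0, 200) @ x_true + 0.01 * rng.normal(size=200)

fit = least_squares(residual, x0=[1.5], args=(b, 1e-2), bounds=(0.5, 10.0))
print("estimated blur width:", fit.x[0])
```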

  1. Blind image deconvolution methods and convergence

    CERN Document Server

    Chaudhuri, Subhasis; Rameshan, Renu

    2014-01-01

    Blind deconvolution is a classical image processing problem which has been investigated by a large number of researchers over the last four decades. The purpose of this monograph is not to propose yet another method for blind image restoration. Rather the basic issue of deconvolvability has been explored from a theoretical view point. Some authors claim very good results while quite a few claim that blind restoration does not work. The authors clearly detail when such methods are expected to work and when they will not. In order to avoid the assumptions needed for convergence analysis in the

  2. Comparison of Deconvolution Filters for Photoacoustic Tomography.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    Full Text Available In this work, we compare the merits of three temporal data deconvolution methods for use in the filtered backprojection algorithm for photoacoustic tomography (PAT. We evaluate the standard Fourier division technique, the Wiener deconvolution filter, and a Tikhonov L-2 norm regularized matrix inversion method. Our experiments were carried out on subjects of various appearances, namely a pencil lead, two man-made phantoms, an in vivo subcutaneous mouse tumor model, and a perfused and excised mouse brain. All subjects were scanned using an imaging system with a rotatable hemispherical bowl, into which 128 ultrasound transducer elements were embedded in a spiral pattern. We characterized the frequency response of each deconvolution method, compared the final image quality achieved by each deconvolution technique, and evaluated each method's robustness to noise. The frequency response was quantified by measuring the accuracy with which each filter recovered the ideal flat frequency spectrum of an experimentally measured impulse response. Image quality under the various scenarios was quantified by computing noise versus resolution curves for a point source phantom, as well as the full width at half maximum (FWHM and contrast-to-noise ratio (CNR of selected image features such as dots and linear structures in additional imaging subjects. It was found that the Tikhonov filter yielded the most accurate balance of lower and higher frequency content (as measured by comparing the spectra of deconvolved impulse response signals to the ideal flat frequency spectrum, achieved a competitive image resolution and contrast-to-noise ratio, and yielded the greatest robustness to noise. While the Wiener filter achieved a similar image resolution, it tended to underrepresent the lower frequency content of the deconvolved signals, and hence of the reconstructed images after backprojection. In addition, its robustness to noise was poorer than that of the Tikhonov
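
    The contrast between plain Fourier division and a Tikhonov-regularized matrix inversion can be sketched on a 1-D toy deconvolution problem. The impulse response, the signal and the regularization weight below are invented for illustration and have nothing to do with the paper's imaging chain (the Wiener variant evaluated in the paper is omitted here).

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n = 256
t = np.arange(n)
h = np.exp(-t / 8.0) * np.sin(2 * np.pi * t / 16.0)     # toy transducer impulse response
x = np.zeros(n); x[40] = 1.0; x[140:150] = 0.6           # "true" pressure signal
y = np.convolve(x, h)[:n] + 0.02 * rng.normal(size=n)    # measured trace

# 1) Plain Fourier division: fast, but it amplifies noise where |H| is small.
H = np.fft.rfft(h, n)
x_div = np.fft.irfft(np.fft.rfft(y, n) / (H + 1e-12), n)

# 2) Tikhonov-regularized matrix inversion: solve (A^T A + lam I) x = A^T y.
col = h.copy()
row = np.zeros(n); row[0] = h[0]
A = toeplitz(col, row)                                   # lower-triangular convolution matrix
lam = 1e-1
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```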

  3. Enhancement of the Signal-to-Noise Ratio in Sonic Logging Waveforms by Seismic Interferometry

    KAUST Repository

    Aldawood, Ali

    2012-04-01

    Sonic logs are essential tools for reliably identifying interval velocities which, in turn, are used in many seismic processes. One problem that arises while logging is irregularities due to washout zones along the borehole surfaces that scatter the transmitted energy and hence weaken the signal recorded at the receivers. To alleviate this problem, I have extended the theory of super-virtual refraction interferometry to enhance the signal-to-noise ratio (SNR) of sonic waveforms. Tests on synthetic and real data show noticeable SNR enhancements of refracted P-wave arrivals in the sonic waveforms. The theory of super-virtual interferometric stacking is composed of two redatuming steps followed by a stacking procedure. The first redatuming procedure is of correlation type, where traces are correlated together to get virtual traces with the sources datumed to the refractor. The second datuming step is of convolution type, where traces are convolved together to redatum the sources back to their original positions. The stacking procedure following each step enhances the SNR of the refracted P-wave first arrivals. Datuming with correlation and convolution of traces introduces severe artifacts, denoted as correlation artifacts, in super-virtual data. To overcome this problem, I replace the datuming-with-correlation step by datuming with deconvolution. Although the former datuming method is more robust, the latter reduces the artifacts significantly. Moreover, deconvolution can be a noise amplifier, which is why a regularization term is utilized, rendering the datuming with deconvolution more stable. Tests of datuming with deconvolution instead of correlation on synthetic and real data examples show significant reduction of these artifacts. This is especially true when compared with the conventional way of applying the super-virtual refraction interferometry method.
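
    A minimal sketch of the two redatuming flavours discussed above, for a single pair of traces, is given below. The traces, delays and water-level fraction are invented for illustration, and a real super-virtual workflow would sum such contributions over many source positions before stacking.

```python
import numpy as np

def datum_correlate(trace_a, trace_b):
    """Correlation-type redatuming: cross-correlate two traces (frequency domain)."""
    n = len(trace_a)
    A, B = np.fft.rfft(trace_a, 2 * n), np.fft.rfft(trace_b, 2 * n)
    return np.fft.irfft(np.conj(A) * B, 2 * n)

def datum_deconvolve(trace_a, trace_b, eps_frac=1e-2):
    """Deconvolution-type redatuming: regularized spectral division of B by A.

    `eps_frac` sets an assumed water level relative to the peak power of A;
    the regularization keeps the division stable where |A| is small.
    """
    n = len(trace_a)
    A, B = np.fft.rfft(trace_a, 2 * n), np.fft.rfft(trace_b, 2 * n)
    eps = eps_frac * np.max(np.abs(A)) ** 2
    return np.fft.irfft(np.conj(A) * B / (np.abs(A) ** 2 + eps), 2 * n)

# Toy traces: the same wavelet arriving with different delays at two receivers.
rng = np.random.default_rng(4)
t = np.arange(0, 0.5, 1e-3)
wavelet = np.exp(-((t - 0.05) / 0.01) ** 2)
trace_a = np.roll(wavelet, 30) + 0.02 * rng.normal(size=len(t))
trace_b = np.roll(wavelet, 80) + 0.02 * rng.normal(size=len(t))
virtual_corr = datum_correlate(trace_a, trace_b)
virtual_dec = datum_deconvolve(trace_a, trace_b)
```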

  4. Propagation compensation by waveform predistortion

    Science.gov (United States)

    Halpin, Thomas F.; Urkowitz, Harry; Maron, David E.

    Certain modifications of the Cobra Dane radar are considered, particularly modernization of the waveform generator. For wideband waveforms, the dispersive effects of the ionosphere become increasingly significant. The technique of predistorting the transmitted waveform so that a linear chirp is received after two-way passage is one way to overcome that dispersion. This approach is maintained for the modified system, but with a specific predistortion waveform well suited to the modification. The appropriate form of predistortion was derived in an implicit form of time as a function of frequency. The exact form was approximated by Taylor series and pseudo-Chebyshev approximation. The latter proved better, as demonstrated by the resulting smaller loss in detection sensitivity, less coarsening of range resolution, and a lower peak sidelobe. The effects of error in determining the plasma delay constant were determined and are given in graphical form. A suggestion for in-place determination of the plasma delay constant is given.

  5. Data-driven efficient score tests for deconvolution hypotheses

    NARCIS (Netherlands)

    Langovoy, M.

    2008-01-01

    We consider testing statistical hypotheses about densities of signals in deconvolution models. A new approach to this problem is proposed. We construct score tests for deconvolution density testing with known noise density, and efficient score tests for the case of unknown noise density. The

  6. Improving the efficiency of deconvolution algorithms for sound source localization

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren; Agerkvist, Finn T.

    2015-01-01

    of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., point-spread function. A significant limitation of deconvolution is, however, an additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms...

  7. Advanced Source Deconvolution Methods for Compton Telescopes

    Science.gov (United States)

    Zoglauer, Andreas

    The next generation of space telescopes utilizing Compton scattering for astrophysical observations is destined to one day unravel the mysteries behind Galactic nucleosynthesis, to determine the origin of the positron annihilation excess near the Galactic center, and to uncover the hidden emission mechanisms behind gamma-ray bursts. Besides astrophysics, Compton telescopes are establishing themselves in heliophysics, planetary sciences, medical imaging, accelerator physics, and environmental monitoring. Since the COMPTEL days, great advances in the achievable energy and position resolution have been possible, creating an extremely vast, but also extremely sparsely sampled, data space. Unfortunately, the optimum way to analyze the data from the next generation of Compton telescopes, one which retrieves all source parameters (location, spectrum, polarization, flux) and at the same time achieves the best possible resolution and sensitivity, has not yet been found. This is especially important for all science objectives looking at the inner Galaxy: the large number of expected sources, the high background (internal and Galactic diffuse emission), and the limited angular resolution make it the most taxing case for data analysis. In general, two key challenges exist: first, what are the best data space representations to answer the specific science questions? Second, what is the best way to deconvolve the data to fully retrieve the source parameters? For modern Compton telescopes, the existing data space representations can either correctly reconstruct the absolute flux (binned mode) or achieve the best possible resolution (list mode), but not both at once. Here we propose to develop a two-stage hybrid reconstruction method which combines the best aspects of both. Using a proof-of-concept implementation we can for the first time show that it is possible to alternate during each deconvolution step between a binned-mode approach to get the flux right and a

  8. Determine Earthquake Rupture Directivity Using Taiwan TSMIP Strong Motion Waveforms

    Science.gov (United States)

    Chang, Kaiwen; Chi, Wu-Cheng; Lai, Ying-Ju; Gung, YuanCheng

    2013-04-01

    Inverting seismic waveforms for finite-fault source parameters is important for studying the physics of earthquake rupture processes. It is also important for imaging seismogenic structures in urban areas. Here we analyze the finite-source process and test for the causative fault plane using accelerograms recorded by the Taiwan Strong-Motion Instrumentation Program (TSMIP) stations. The point-source parameters for the mainshock and aftershocks were first obtained by complete waveform moment tensor inversions. We then use the seismograms generated by the aftershocks as empirical Green's functions (EGFs) to retrieve the apparent source time functions (ASTFs) at near-field stations using a projected Landweber deconvolution approach. The method for identifying the fault plane relies on the spatial patterns of the apparent source time function durations, which depend on the angle between the rupture direction and the take-off angle and azimuth of the ray. These derived duration patterns are then compared with the theoretical patterns, which are functions of the following parameters: focal depth, epicentral distance, average crustal 1D velocity, fault plane attitude, and rupture direction on the fault plane. As a result, the ASTFs derived from EGFs can be used to infer the ruptured fault plane and the rupture direction. Finally, we used part of the catalogs to study important seismogenic structures in the area near Chiayi, Taiwan, where a damaging earthquake occurred about a century ago. The preliminary results show that a strike-slip earthquake on 22 October 1999 (Mw 5.6) ruptured unilaterally toward the SSW on a sub-vertical fault. The procedure developed in this study can be applied to strong motion waveforms recorded from other earthquakes to better understand their kinematic source parameters.
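
    The projected Landweber deconvolution used to retrieve the ASTFs can be sketched as gradient steps on the data misfit followed by a non-negativity projection. The example below is a generic 1-D illustration with an invented EGF-like wavelet and a boxcar source time function, not the authors' code; step size, iteration count and noise level are assumptions.

```python
import numpy as np

def projected_landweber(data, egf, iterations=200, tau=None):
    """Estimate a non-negative source time function s with data ~= egf * s."""
    n = len(data)
    G = np.fft.rfft(egf, 2 * n)
    D = np.fft.rfft(data, 2 * n)
    if tau is None:
        tau = 1.0 / np.max(np.abs(G)) ** 2               # step size small enough to converge
    s = np.zeros(n)
    for _ in range(iterations):
        S = np.fft.rfft(s, 2 * n)
        resid = D - G * S                                # data misfit in the frequency domain
        grad = np.fft.irfft(np.conj(G) * resid, 2 * n)[:n]
        s = np.maximum(s + tau * grad, 0.0)              # projection onto s(t) >= 0
    return s

# Toy example: a boxcar "rupture" convolved with an EGF-like wavelet.
rng = np.random.default_rng(5)
t = np.arange(512)
egf = np.exp(-t / 20.0) * np.sin(2 * np.pi * t / 40.0)
stf_true = np.zeros(512); stf_true[50:80] = 1.0
data = np.convolve(stf_true, egf)[:512] + 0.02 * rng.normal(size=512)
stf_estimate = projected_landweber(data, egf)
```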

  9. Towards robust deconvolution of low-dose perfusion CT: Sparse perfusion deconvolution using online dictionary learning

    Science.gov (United States)

    Fang, Ruogu; Chen, Tsuhan; Sanelli, Pina C.

    2014-01-01

    Computed tomography perfusion (CTP) is an important functional imaging modality in the evaluation of cerebrovascular diseases, particularly in acute stroke and vasospasm. However, the post-processed parametric maps of blood flow tend to be noisy, especially in low-dose CTP, due to the noisy contrast enhancement profile and the oscillatory nature of the results generated by current computational methods. In this paper, we propose a robust sparse perfusion deconvolution method (SPD) to estimate cerebral blood flow in CTP performed at low radiation dose. We first build a dictionary from high-dose perfusion maps using online dictionary learning and then perform deconvolution-based hemodynamic parameter estimation on the low-dose CTP data. Our method is validated on clinical data of patients with normal and pathological CBF maps. The results show that we achieve superior performance compared to existing methods, and potentially improve the differentiation between normal and ischemic tissue in the brain. PMID:23542422

  10. Solving a Deconvolution Problem in Photon Spectrometry

    CERN Document Server

    Aleksandrov, D; Hille, P T; Polichtchouk, B; Kharlov, Y; Sukhorukov, M; Wang, D; Shabratova, G; Demanov, V; Wang, Y; Tveter, T; Faltys, M; Mao, Y; Larsen, D T; Zaporozhets, S; Sibiryak, I; Lovhoiden, G; Potcheptsov, T; Kucheryaev, Y; Basmanov, V; Mares, J; Yanovsky, V; Qvigstad, H; Zenin, A; Nikolaev, S; Siemiarczuk, T; Yuan, X; Cai, X; Redlich, K; Pavlinov, A; Roehrich, D; Manko, V; Deloff, A; Ma, K; Maruyama, Y; Dobrowolski, T; Shigaki, K; Nikulin, S; Wan, R; Mizoguchi, K; Petrov, V; Mueller, H; Ippolitov, M; Liu, L; Sadovsky, S; Stolpovsky, P; Kurashvili, P; Nomokonov, P; Xu, C; Torii, H; Il'kaev, R; Zhang, X; Peresunko, D; Soloviev, A; Vodopyanov, A; Sugitate, T; Ullaland, K; Huang, M; Zhou, D; Nystrand, J; Punin, V; Yin, Z; Batyunya, B; Karadzhev, K; Nazarov, G; Fil'chagin, S; Nazarenko, S; Buskenes, J I; Horaguchi, T; Djuvsland, O; Chuman, F; Senko, V; Alme, J; Wilk, G; Fehlker, D; Vinogradov, Y; Budilov, V; Iwasaki, T; Ilkiv, I; Budnikov, D; Vinogradov, A; Kazantsev, A; Bogolyubsky, M; Lindal, S; Polak, K; Skaali, B; Mamonov, A; Kuryakin, A; Wikne, J; Skjerdal, K

    2010-01-01

    We solve numerically a deconvolution problem to extract the undisturbed spectrum from the measured distribution contaminated by the finite resolution of the measuring device. A problem of this kind emerges when one wants to infer the momentum distribution of neutral pions by detecting their decay photons using the photon spectrometer of the ALICE LHC experiment at CERN [1]. The underlying integral equation connecting the sought-for pion spectrum and the measured gamma spectrum has been discretized and subsequently reduced to a system of linear algebraic equations. The latter system, however, is known to be ill-posed and must be regularized to obtain a stable solution. This task has been accomplished here by means of the Tikhonov regularization scheme combined with the L-curve method. The resulting pion spectrum is in excellent quantitative agreement with the pion spectrum obtained from a Monte Carlo simulation. (C) 2010 Elsevier B.V. All rights reserved.
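
    A compact sketch of the Tikhonov-plus-L-curve machinery on an invented smearing (resolution) matrix is shown below; the spectrum shape, the resolution model and the lambda grid are placeholders and are not the detector response of the ALICE photon spectrometer.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 120
e = np.linspace(0.2, 10.0, n)                       # "energy" bins (arbitrary units)
true_spectrum = np.exp(-e / 2.0)                    # toy steeply falling spectrum

# Toy resolution matrix: Gaussian smearing whose width grows with energy.
sigma = 0.05 + 0.03 * e
R = np.exp(-0.5 * ((e[:, None] - e[None, :]) / sigma[None, :]) ** 2)
R /= R.sum(axis=0, keepdims=True)
measured = R @ true_spectrum + 0.01 * rng.normal(size=n)

# Tikhonov solutions for a range of regularization strengths (L-curve scan).
lambdas = np.logspace(-6, 1, 30)
residual_norms, solution_norms = [], []
for lam in lambdas:
    x = np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ measured)
    residual_norms.append(np.linalg.norm(R @ x - measured))
    solution_norms.append(np.linalg.norm(x))
# The L-curve "corner" of the log-log plot of residual norm versus solution norm
# picks the lambda that balances data fit against solution smoothness.
```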

  11. Optimal filtering values in renogram deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Puchal, R.; Pavia, J.; Gonzalez, A.; Ros, D.

    1988-07-01

    The evaluation of the isotopic renogram by means of the renal retention function (RRF) is a technique that supplies valuable information about renal function. It is not unusual to perform a smoothing of the data because of the sensitivity of the deconvolution algorithms with respect to noise. The purpose of this work is to confirm the existence of an optimal smoothing which minimises the error between the calculated RRF and the theoretical value for two filters (linear and non-linear). In order to test the effectiveness of these optimal smoothing values, some parameters of the calculated RRF were considered using this optimal smoothing. The comparison of these parameters with the theoretical ones revealed a better result in the case of the linear filter than in the non-linear case. The study was carried out simulating the input and output curves which would be obtained when using hippuran and DTPA as tracers.

  12. Z-transform Zeros in Mixed Phase Deconvolution of Speech

    DEFF Research Database (Denmark)

    Pedersen, Christian Fischer

    2013-01-01

    The present thesis addresses mixed phase deconvolution of speech by z-transform zeros. This includes investigations into stability, accuracy, and time complexity of a numerical bijection between time domain and the domain of z-transform zeros. Z-transform factorization is by no means esoteric......, but employing zeros of the z-transform (ZZT) as a signal representation, analysis, and processing domain per se, is only scarcely researched. A notable property of this domain is the translation of time domain convolution into union of sets; thus, the ZZT domain is appropriate for convolving and deconvolving...... discrimination achieves mixed phase deconvolution and equivalates complex cepstrum based deconvolution by causality, which has lower time and space complexities as demonstrated. However, deconvolution by ZZT prevents phase wrapping. Existence and persistence of ZZT domain immiscibility of the opening and closing...

  13. Elastic reflection waveform inversion with variable density

    KAUST Repository

    Li, Yuanyuan; Li, Zhenchun; Alkhalifah, Tariq Ali; Guo, Qiang

    2017-01-01

    Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle-skipping problem compared with the latter. Reflection waveform inversion

  14. High-spatial-resolution localization algorithm based on cascade deconvolution in a distributed Sagnac interferometer invasion monitoring system.

    Science.gov (United States)

    Pi, Shaohua; Wang, Bingjie; Zhao, Jiang; Sun, Qi

    2016-10-10

    In the Sagnac fiber optic interferometer system, the phase difference signal can be expressed as the convolution of the invasion waveform with a transfer function h(t) associated with the position at which the invasion occurs; deconvolution is introduced to improve the spatial resolution of the localization. In general, to get a 26 m spatial resolution at a sampling rate of 4×10^6 s^-1, the algorithm goes through three main steps after the preprocessing operations. First, the decimated phase difference signal is transformed from the time domain into the real cepstrum domain, where a probable region of invasion distance can be ascertained. Second, a narrower region of invasion distance is obtained by coarsely assuming and sweeping a transfer function h(t) within the probable region and examining where the restored invasion waveform x(t) reaches its minimum standard deviation. Third, the narrow region is finely swept point by point with the same criterion to get the final localization. As a by-product, the original waveform of the invasion can be restored for the first time, which provides more accurate and cleaner characteristics for further processing, such as subsequent pattern recognition.

  15. Genomics assisted ancestry deconvolution in grape.

    Directory of Open Access Journals (Sweden)

    Jason Sawler

    Full Text Available The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world's most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars.

  16. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  17. Genomics Assisted Ancestry Deconvolution in Grape

    Science.gov (United States)

    Sawler, Jason; Reisch, Bruce; Aradhya, Mallikarjuna K.; Prins, Bernard; Zhong, Gan-Yuan; Schwaninger, Heidi; Simon, Charles; Buckler, Edward; Myles, Sean

    2013-01-01

    The genus Vitis (the grapevine) is a group of highly diverse, diploid woody perennial vines consisting of approximately 60 species from across the northern hemisphere. It is the world’s most valuable horticultural crop with ~8 million hectares planted, most of which is processed into wine. To gain insights into the use of wild Vitis species during the past century of interspecific grape breeding and to provide a foundation for marker-assisted breeding programmes, we present a principal components analysis (PCA) based ancestry estimation method to calculate admixture proportions of hybrid grapes in the United States Department of Agriculture grape germplasm collection using genome-wide polymorphism data. We find that grape breeders have backcrossed to both the domesticated V. vinifera and wild Vitis species and that reasonably accurate genome-wide ancestry estimation can be performed on interspecific Vitis hybrids using a panel of fewer than 50 ancestry informative markers (AIMs). We compare measures of ancestry informativeness used in selecting SNP panels for two-way admixture estimation, and verify the accuracy of our method on simulated populations of admixed offspring. Our method of ancestry deconvolution provides a first step towards selection at the seed or seedling stage for desirable admixture profiles, which will facilitate marker-assisted breeding that aims to introgress traits from wild Vitis species while retaining the desirable characteristics of elite V. vinifera cultivars. PMID:24244717

  18. X-ray scatter removal by deconvolution

    International Nuclear Information System (INIS)

    Seibert, J.A.; Boone, J.M.

    1988-01-01

    The distribution of scattered x rays detected in a two-dimensional projection radiograph at diagnostic x-ray energies is measured as a function of field size and object thickness at a fixed x-ray potential and air gap. An image intensifier-TV based imaging system is used for image acquisition, manipulation, and analysis. A scatter point spread function (PSF) with an assumed linear, spatially invariant response is modeled as a modified Gaussian distribution, and is characterized by two parameters describing the width of the distribution and the fraction of scattered events detected. The PSF parameters are determined from analysis of images obtained with radio-opaque lead disks centrally placed on the source side of a homogeneous phantom. Analytical methods are used to convert the PSF into the frequency domain. Numerical inversion provides an inverse filter that operates on frequency transformed, scatter degraded images. Resultant inverse transformed images demonstrate the nonarbitrary removal of scatter, increased radiographic contrast, and improved quantitative accuracy. The use of the deconvolution method appears to be clinically applicable to a variety of digital projection images
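
    The deconvolution step can be sketched as an inverse filter built from the scatter PSF. The example below assumes the simplest variant, a single normalized Gaussian scatter kernel and a constant scatter-to-primary ratio, rather than the modified-Gaussian PSF fitted in the paper; the widths, ratio and image content are invented.

```python
import numpy as np
from scipy.signal import fftconvolve

def remove_scatter(detected, spr=0.8, width=25.0):
    """Deconvolve a detected image D = P * (delta + spr * g), where g is a broad
    normalized Gaussian scatter kernel (width in pixels) and `spr` is an assumed
    scatter-to-primary ratio, by applying the inverse filter in the frequency domain."""
    ny, nx = detected.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    # Analytical transfer function of a normalized Gaussian of the given width.
    G = np.exp(-2 * (np.pi * width) ** 2 * (fx**2 + fy**2))
    inverse_filter = 1.0 / (1.0 + spr * G)
    return np.real(np.fft.ifft2(np.fft.fft2(detected) * inverse_filter))

# Toy demonstration: a disc "primary" image plus a convolved scatter component.
ny = nx = 128
yy, xx = np.mgrid[:ny, :nx]
primary = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30**2).astype(float)
gy, gx = np.mgrid[-64:64, -64:64]
g = np.exp(-(gx**2 + gy**2) / (2 * 25.0**2)); g /= g.sum()
detected = primary + 0.8 * fftconvolve(primary, g, mode="same")
primary_estimate = remove_scatter(detected)
```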

  19. Workflows for Full Waveform Inversions

    Science.gov (United States)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  20. Full cycle rapid scan EPR deconvolution algorithm.

    Science.gov (United States)

    Tseytlin, Mark

    2017-08-01

    Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan during up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or the down-field scan. The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan

  1. Resolving deconvolution ambiguity in gene alternative splicing

    Directory of Open Access Journals (Sweden)

    Hubbell Earl

    2009-08-01

    Full Text Available Background: For many gene structures it is impossible to resolve intensity data uniquely to establish abundances of splice variants. This was empirically noted by Wang et al., who called it a "degeneracy problem". The ambiguity results from an ill-posed problem where additional information is needed in order to obtain a unique answer in splice variant deconvolution. Results: In this paper, we analyze the situations under which the problem occurs and perform a rigorous mathematical study which gives necessary and sufficient conditions on how many and what type of constraints are needed to resolve all ambiguity. This analysis is generally applicable to matrix models of splice variants. We explore the proposal that probe sequence information may provide sufficient additional constraints to resolve real-world instances. However, probe behavior cannot be predicted with sufficient accuracy by any existing probe sequence model, and so we present a Bayesian framework for estimating variant abundances by incorporating the prediction uncertainty from the micro-model of probe responsiveness into the macro-model of probe intensities. Conclusion: The matrix analysis of constraints provides a tool for detecting real-world instances in which additional constraints may be necessary to resolve splice variants. While purely mathematical constraints can be stated without error, real-world constraints may themselves be poorly resolved. Our Bayesian framework provides a generic solution to the problem of uniquely estimating transcript abundances given additional constraints that themselves may be uncertain, such as regression fit to probe sequence models. We demonstrate its efficacy by extensive simulations as well as on various biological data.

  2. Scalar flux modeling in turbulent flames using iterative deconvolution

    Science.gov (United States)

    Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.

    2018-04-01

    In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.

  3. Evaluation of deconvolution modelling applied to numerical combustion

    Science.gov (United States)

    Mehl, Cédric; Idier, Jérôme; Fiorina, Benoît

    2018-01-01

    A possible modelling approach in the large eddy simulation (LES) of reactive flows is to deconvolve resolved scalars. Indeed, by inverting the LES filter, scalars such as mass fractions are reconstructed. This information can be used to close budget terms of filtered species balance equations, such as the filtered reaction rate. Being ill-posed in the mathematical sense, the problem is very sensitive to any numerical perturbation. The objective of the present study is to assess the ability of this kind of methodology to capture the chemical structure of premixed flames. For that purpose, three deconvolution methods are tested on a one-dimensional filtered laminar premixed flame configuration: the approximate deconvolution method based on Van Cittert iterative deconvolution, a Taylor decomposition-based method, and the regularised deconvolution method based on the minimisation of a quadratic criterion. These methods are then extended to the reconstruction of subgrid scale profiles. Two methodologies are proposed: the first one relies on subgrid scale interpolation of deconvolved profiles and the second uses parametric functions to describe small scales. Conducted tests analyse the ability of the method to capture the filtered flame chemical structure and front propagation speed. Results show that the deconvolution model should include information about small scales in order to regularise the filter inversion. A priori and a posteriori tests showed that the filtered flame propagation speed and structure cannot be captured if the filter size is too large.
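
    The Van Cittert variant referenced above is simple enough to sketch directly. In the example below, the Gaussian filter definition, the filter width and the iteration count are assumptions for a 1-D toy progress-variable profile, not the configuration used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_filter_kernel(width, half=20):
    """Discrete Gaussian LES-like filter (a commonly used convention, assumed here)."""
    x = np.arange(-half, half + 1, dtype=float)
    g = np.exp(-6.0 * x**2 / width**2)
    return g / g.sum()

def van_cittert(filtered, kernel, iterations=5):
    """Approximate deconvolution: phi_{k+1} = phi_k + (filtered - G * phi_k)."""
    estimate = filtered.copy()
    for _ in range(iterations):
        estimate = estimate + (filtered - fftconvolve(estimate, kernel, mode="same"))
    return estimate

# Toy filtered flame-like profile: a sharp reaction-progress front, then filtered.
x = np.linspace(0.0, 1.0, 400)
progress = 0.5 * (1 + np.tanh((x - 0.5) / 0.01))     # sharp front
kernel = gaussian_filter_kernel(width=20)            # filter width in grid points (illustrative)
filtered = fftconvolve(progress, kernel, mode="same")
deconvolved = van_cittert(filtered, kernel)
```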

  4. Deconvolution of neutron scattering data: a new computational approach

    International Nuclear Information System (INIS)

    Weese, J.; Hendricks, J.; Zorn, R.; Honerkamp, J.; Richter, D.

    1996-01-01

    In this paper we address the problem of reconstructing the scattering function S_Q(E) from neutron spectroscopy data which represent a convolution of the former function with an instrument dependent resolution function. It is well known that this kind of deconvolution is an ill-posed problem. Therefore, we apply the Tikhonov regularization technique to get an estimate of S_Q(E) from the data. Special features of the neutron spectroscopy data require modifications of the basic procedure, the most important one being a transformation to a non-linear problem. The method is tested by deconvolution of actual data from the IN6 time-of-flight spectrometer (resolution: 90 μeV) and simulated data. As a result the deconvolution is shown to be feasible down to an energy transfer of ∼100 μeV for this instrument without recognizable error and down to ∼20 μeV with 10% relative error. (orig.)

  5. Variation of High-Intensity Therapeutic Ultrasound (HITU) Pressure Field Characterization: Effects of Hydrophone Choice, Nonlinearity, Spatial Averaging and Complex Deconvolution.

    Science.gov (United States)

    Liu, Yunbo; Wear, Keith A; Harris, Gerald R

    2017-10-01

    Reliable acoustic characterization is fundamental for patient safety and clinical efficacy during high-intensity therapeutic ultrasound (HITU) treatment. Technical challenges, such as measurement variation and signal analysis, still exist for HITU exposimetry using ultrasound hydrophones. In this work, four hydrophones were compared for pressure measurement: a robust needle hydrophone, a small polyvinylidene fluoride capsule hydrophone and two fiberoptic hydrophones. The focal waveform and beam distribution of a single-element HITU transducer (1.05 MHz and 3.3 MHz) were evaluated. Complex deconvolution between the hydrophone voltage signal and the frequency-dependent complex sensitivity was performed to obtain pressure waveforms. Compressional pressure (p+), rarefactional pressure (p-) and focal beam distribution were compared up to 10.6/-6.0 MPa (p+/p-) (1.05 MHz) and 20.65/-7.20 MPa (3.3 MHz). The effects of spatial averaging, local non-linear distortion, complex deconvolution and hydrophone damage thresholds were investigated. This study showed that agreement among hydrophones during HITU pressure characterization was no better than 10%-15%. Published by Elsevier Inc.
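
    A minimal sketch of the complex-deconvolution step (dividing the voltage spectrum by a frequency-dependent complex sensitivity, restricted to the calibrated band) is given below. The calibration table, band limits and waveform are entirely invented and are not the hydrophone data of the study.

```python
import numpy as np

def voltage_to_pressure(voltage, fs, freq_cal, sens_cal, band=(1e6, 40e6)):
    """Complex deconvolution P(f) = V(f) / M(f), with the complex sensitivity M
    interpolated from a calibration table and the division restricted to the
    calibrated band to avoid amplifying out-of-band noise."""
    n = len(voltage)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    V = np.fft.rfft(voltage)
    mag = np.interp(f, freq_cal, np.abs(sens_cal))
    phase = np.interp(f, freq_cal, np.unwrap(np.angle(sens_cal)))
    M = mag * np.exp(1j * phase)
    in_band = (f >= band[0]) & (f <= band[1])
    P = np.zeros_like(V)
    P[in_band] = V[in_band] / M[in_band]
    return np.fft.irfft(P, n)

# Invented calibration table (V/Pa) and a toy two-harmonic focal waveform.
fs = 200e6
freq_cal = np.linspace(1e6, 40e6, 40)
sens_cal = (50e-9 + 5e-9 * np.sin(freq_cal / 5e6)) * np.exp(1j * 0.1 * freq_cal / 1e6)
t = np.arange(0, 10e-6, 1.0 / fs)
pressure = 5e6 * np.sin(2 * np.pi * 1.05e6 * t) + 1e6 * np.sin(2 * np.pi * 3.15e6 * t)
f_grid = np.fft.rfftfreq(len(t), d=1.0 / fs)
M_grid = np.interp(f_grid, freq_cal, np.abs(sens_cal)) * np.exp(
    1j * np.interp(f_grid, freq_cal, np.unwrap(np.angle(sens_cal))))
voltage = np.fft.irfft(np.fft.rfft(pressure) * M_grid, len(t))    # simulated hydrophone output
pressure_estimate = voltage_to_pressure(voltage, fs, freq_cal, sens_cal)
```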

  6. Seismic waveform modeling over cloud

    Science.gov (United States)

    Luo, Cong; Friederich, Wolfgang

    2016-04-01

    With fast-growing computational technologies, numerical simulation of seismic wave propagation has achieved huge successes. Obtaining synthetic waveforms through numerical simulation is receiving an increasing amount of attention from seismologists. However, computational seismology is a data-intensive research field, and the numerical packages usually come with a steep learning curve. Users are expected to master a considerable amount of computer knowledge and data-processing skill. Training users to use the numerical packages and to correctly access and utilize the computational resources is a troublesome task. In addition, access to HPC is a common difficulty for many users. To solve these problems, a cloud-based solution dedicated to shallow seismic waveform modeling has been developed with state-of-the-art web technologies. It is a web platform integrating both software and hardware in a multilayer architecture: a well-designed SQL database serves as the data layer, while HPC and a dedicated pipeline for it form the business layer. Through this platform, users no longer need to compile and manipulate various packages on a local machine within a local network to perform a simulation. By providing professional access to the computational code through its interfaces and delivering our computational resources to users over the cloud, the platform lets users customize a simulation at expert level and submit and run the job through it.

  7. PBX-M waveform generator

    International Nuclear Information System (INIS)

    Feng, H.; Frank, K.T.; Kaye, S.

    1987-01-01

    The PBX-M (Princeton Beta Experiment) is a unique tokamak experiment designed to run with a highly indented plasma. The shaping control will be accomplished through a closed-loop power supply control system. The system will make use of sixteen pre-programmed reference signals and twenty signals taken from direct measurements as input to an analog computer. Through a matrix conversion in the analog computer, these input signals will be used to generate eight control signals to control the eight power supplies. The pre-programmed reference signals will be created using a Macintosh personal computer interfaced to CAMAC (Computer Automated Measurement And Control) hardware for down-loading waveforms. The reference signals will be created on the Macintosh by the physics operators, utilizing the full graphics capability of the system. These waveforms are transferred to CAMAC memory and are then strobed in real time through digital-to-analog converters and fed into the analog computer. The overall system (both hardware and software) is designed to be fail-safe. Specific features of the system, such as load inhibit and discharge inhibit, are discussed

  8. Acquisition and deconvolution of seismic signals by different methods to perform direct ground-force measurements

    Science.gov (United States)

    Poletto, Flavio; Schleifer, Andrea; Zgauc, Franco; Meneghini, Fabio; Petronio, Lorenzo

    2016-12-01

    We present the results of a novel borehole-seismic experiment in which we used different types of onshore transient (impulsive and non-impulsive) surface sources together with direct ground-force recordings. The ground-force signals were obtained by baseplate load cells located beneath the sources, and by buried soil-stress sensors installed in the very shallow subsurface together with accelerometers. The aim was to characterize the source's emission by its complex impedance, a function of the near-field vibrations and soil-stress components, and above all to obtain appropriate deconvolution operators to remove the signature of the sources from the far-field seismic signals. The data analysis shows the differences among the reference measurements utilized to deconvolve the source signature. As downgoing waves, we process the signals of vertical seismic profiles (VSP) recorded in the far-field approximation by an array of permanent geophones cemented at shallow-medium depth outside the casing of an instrumented well. We obtain a significant improvement in the waveform of the radiated seismic-vibrator signals deconvolved by ground force, similar to that of the seismograms generated by the impulsive sources, and we demonstrate that the results obtained with different sources show low values of the repeatability norm. The comparison evidences the potential of the direct ground-force measurement approach to effectively remove the far-field source signature in onshore VSP data, and to increase the performance of permanent acquisition installations for time-lapse applications.

  9. Gamma-ray spectra deconvolution by maximum-entropy methods

    International Nuclear Information System (INIS)

    Los Arcos, J.M.

    1996-01-01

    A maximum-entropy method which includes the response of detectors and the statistical fluctuations of spectra is described and applied to the deconvolution of γ-ray spectra. Resolution enhancement of 25% can be reached for experimental peaks and up to 50% for simulated ones, while the intensities are conserved within 1-2%. (orig.)

  10. Filtering and deconvolution for bioluminescence imaging of small animals

    International Nuclear Information System (INIS)

    Akkoul, S.

    2010-01-01

    This thesis is devoted to the analysis of bioluminescence images applied to small animals. This kind of imaging modality is used in cancerology studies. Nevertheless, some problems are related to the diffusion and absorption, by the tissues, of the light emitted by internal bioluminescent sources. In addition, system noise and cosmic-ray noise are present. This degrades the quality of the images and makes them difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise which corrupts the acquired images; this filter forms the first block of the proposed chain. For the deconvolution stage, we have performed a comparative study of various deconvolution algorithms. It allowed us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated our global approach by comparing our results with the ground truth. Through various clinical tests, we have shown that the processing chain allows a significant improvement of the spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)

  11. Deconvolution of astronomical images using SOR with adaptive relaxation.

    Science.gov (United States)

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution where stationarity of the object is a necessity.
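
    As a minimal sketch of the underlying iteration (not the authors' adaptive-relaxation strategy, which is the paper's contribution), plain SOR applied to a small one-dimensional deconvolution problem H x = b might look as follows; the fixed relaxation parameter omega and the toy Gaussian blur are assumptions for illustration.

    # Illustrative sketch only: plain SOR for H x = b, where H is a symmetric
    # Gaussian blurring matrix. A fixed relaxation parameter is used here; the
    # paper's contribution is a strategy for choosing and updating it.
    import numpy as np
    from scipy.linalg import toeplitz

    def sor_deconvolve(H, b, omega=1.5, n_iter=200):
        """Successive over-relaxation for H x = b (assumes positive diagonal)."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(n_iter):
            for i in range(n):
                sigma = H[i, :i] @ x[:i] + H[i, i + 1:] @ x[i + 1:]
                x[i] = (1.0 - omega) * x[i] + omega * (b[i] - sigma) / H[i, i]
            x = np.clip(x, 0.0, None)   # optional positivity constraint (+SOR)
        return x

    # toy example: Gaussian blur of a two-spike signal
    n = 64
    psf = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)
    H = toeplitz(psf)                   # symmetric positive-definite blur matrix
    x_true = np.zeros(n); x_true[20] = 1.0; x_true[40] = 0.5
    b = H @ x_true
    x_rec = sor_deconvolve(H, b)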

  12. Deconvolution of EPR spectral lines with an approximate method

    International Nuclear Information System (INIS)

    Jimenez D, H.; Cabral P, A.

    1990-10-01

    A recently reported approximate expression for the deconvolution of Lorentzian-Gaussian spectral lines with a small Gaussian contribution is applied to study an EPR line shape. The potassium-ammonium solution line reported in the literature by other authors was used, and the results are compared with those obtained by employing a precise method. (Author)

  13. Euler deconvolution and spectral analysis of regional aeromagnetic ...

    African Journals Online (AJOL)

    Existing regional aeromagnetic data from the south-central Zimbabwe craton has been analysed using 3D Euler deconvolution and spectral analysis to obtain quantitative information on the geological units and structures for depth constraints on the geotectonic interpretation of the region. The Euler solution maps confirm ...
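
    For context (standard background rather than material from this record), 3D Euler deconvolution is based on the homogeneity equation relating the observed field T and its gradients to the source position (x0, y0, z0), the regional background B and the structural index N:

    % Standard 3-D Euler homogeneity equation used in Euler deconvolution
    % (illustrative; N is the structural index, B the regional background).
    \begin{equation}
      (x - x_0)\frac{\partial T}{\partial x}
      + (y - y_0)\frac{\partial T}{\partial y}
      + (z - z_0)\frac{\partial T}{\partial z}
      = N\,(B - T)
    \end{equation}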

  14. Improvement in volume estimation from confocal sections after image deconvolution

    Czech Academy of Sciences Publication Activity Database

    Difato, Francesco; Mazzone, F.; Scaglione, S.; Fato, M.; Beltrame, F.; Kubínová, Lucie; Janáček, Jiří; Ramoino, P.; Vicidomini, G.; Diaspro, A.

    2004-01-01

    Roč. 64, č. 2 (2004), s. 151-155 ISSN 1059-910X Institutional research plan: CEZ:AV0Z5011922 Keywords : confocal microscopy * image deconvolution * point spread function Subject RIV: EA - Cell Biology Impact factor: 2.609, year: 2004

  15. Pulsatile pipe flow transition: Flow waveform effects

    Science.gov (United States)

    Brindise, Melissa C.; Vlachos, Pavlos P.

    2018-01-01

    Although transition is known to exist in various hemodynamic environments, the mechanisms that govern this flow regime and their subsequent effects on biological parameters are not well understood. Previous studies have investigated transition in pulsatile pipe flow using non-physiological sinusoidal waveforms at various Womersley numbers but have produced conflicting results, and multiple input waveform shapes have yet to be explored. In this work, we investigate the effect of the input pulsatile waveform shape on the mechanisms that drive the onset and development of transition using particle image velocimetry, three pulsatile waveforms, and six mean Reynolds numbers. The turbulent kinetic energy budget including dissipation rate, production, and pressure diffusion was computed. The results show that the waveform with a longer deceleration phase duration induced the earliest onset of transition, while the waveform with a longer acceleration period delayed the onset of transition. In accord with the findings of prior studies, for all test cases, turbulence was observed to be produced at the wall and either dissipated or redistributed into the core flow by pressure waves, depending on the mean Reynolds number. Turbulent production increased with increasing temporal velocity gradients until an asymptotic limit was reached. The turbulence dissipation rate was shown to be independent of mean Reynolds number, but a relationship between the temporal gradients of the input velocity waveform and the rate of turbulence dissipation was found. In general, these results demonstrated that the shape of the input pulsatile waveform directly affected the onset and development of transition.

  16. Waveform digitizing at 500 MHz

    International Nuclear Information System (INIS)

    Atiya, M.; Ito, M.; Haggerty, J.; Ng, C.; Sippach, F.W.

    1988-01-01

    Experiment E787 at Brookhaven National Laboratory is designed to study the decay K⁺ → π⁺νν̄ to a sensitivity of 2 × 10⁻¹⁰. To achieve acceptable muon rejection it is necessary to couple traditional methods (range/energy/momentum correlation) with observation of the (π⁺ → μ⁺ν, μ⁺ → e⁺νν̄) decay sequence in scintillator. We report on the design and construction of 200 channels of relatively low cost solid state waveform digitizers. The distinguishing features are: 8 bits dynamic range, 500 MHz sampling, zero suppression on the fly, deep memory (up to 0.5 msec), and fast readout time (100 μsec for the entire system). We report on data obtained during the February-May 1988 run showing performance of the system for the observation of the above decay. 8 figs

  17. Waveform digitizing at 500 MHz

    International Nuclear Information System (INIS)

    Atiya, M.; Ito, M.; Haggerty, J.; Ng, C.; Sippach, F.W.

    1988-01-01

    Experiment E787 at Brookhaven National Laboratory is designed to study the decay K⁺ → π⁺νν̄ to a sensitivity of 2 × 10⁻¹⁰. To achieve acceptable muon rejection it is necessary to couple traditional methods (range/energy/momentum correlation) with observation of the π⁺ → μ⁺ → e⁺νν̄ decay sequence in scintillator. We report on the design and construction of over 200 channels of relatively low cost solid state waveform digitizers. The distinguishing features are: 8 bits dynamic range, 500 MHz sampling, zero suppression on the fly, deep memory (up to 0.5 msec), and fast readout time (100 μsec for the entire system). We report on data obtained during the February-May 1988 run showing performance of the system for the observation of the above decay. 9 figs

  18. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function compared to the synthetic point spread function indicating that the extracted point spread function is a better fit to the brightfield deconvolution model than the synthetic point spread function.

  19. Filtering and deconvolution for bioluminescence imaging of small animals; Filtrage et deconvolution en imagerie de bioluminescence chez le petit animal

    Energy Technology Data Exchange (ETDEWEB)

    Akkoul, S.

    2010-06-22

    This thesis is devoted to the analysis of bioluminescence images of small animals. This imaging modality is used in cancerology studies. However, the light from internal bioluminescent sources is diffused and absorbed by the tissues; in addition, system noise and cosmic-ray noise are present. These effects degrade the quality of the images and make them difficult to analyze. The purpose of this thesis is to overcome these disturbing effects. We first propose an image formation model for bioluminescence images. The processing chain consists of a filtering stage followed by a deconvolution stage. We propose a new median filter to suppress the random-valued impulsive noise that corrupts the acquired images; this filter constitutes the first block of the proposed chain. For the deconvolution stage, we carried out a comparative study of various deconvolution algorithms, which led us to choose a blind deconvolution algorithm initialized with the estimated point spread function of the acquisition system. We first validated our global approach by comparing the obtained results with the ground truth. Through various clinical tests, we have shown that the processing chain yields a significant improvement in spatial resolution and a better distinction of very close tumor sources, which represents a considerable contribution for users of bioluminescence images. (author)

  20. Multifunction waveform generator for EM receiver testing

    Science.gov (United States)

    Chen, Kai; Jin, Sheng; Deng, Ming

    2018-01-01

    In many electromagnetic (EM) methods - such as magnetotelluric, spectral-induced polarization (SIP), time-domain-induced polarization (TDIP), and controlled-source audio magnetotelluric (CSAMT) methods - it is important to evaluate and test the EM receivers during their development stage. To assess the performance of the developed EM receivers, controlled synthetic data that simulate the observed signals in different modes are required. In CSAMT and SIP mode testing, the waveform generator should use GPS time as the reference for the repeating schedule. Based on our testing, the frequency range, frequency precision, and time synchronization of the function waveform generators currently available on the market are deficient. This paper presents a multifunction waveform generator with three waveforms: (1) a wideband, low-noise electromagnetic field signal to be used for magnetotelluric, audio-magnetotelluric, and long-period magnetotelluric studies; (2) a repeating frequency-sweep square waveform for CSAMT and SIP studies; and (3) a positive-zero-negative-zero signal that contains primary and secondary fields for TDIP studies. In this paper, we provide the principles of the above three waveforms along with a hardware design for the generator. Furthermore, testing of the EM receiver was conducted with the waveform generator, and the results of the experiment were compared with those calculated from simulation and theory in the frequency band of interest.
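
    A rough sketch of the three signal families described above is given below; all sample rates, frequencies and period lengths are illustrative assumptions, not the instrument's specifications.

    # Illustrative sketch (parameter values are assumptions, not taken from the
    # paper): the three test-signal families described above, generated in numpy.
    import numpy as np

    fs = 10_000                      # sample rate [Hz], assumed
    t = np.arange(0, 10.0, 1 / fs)   # 10 s of signal

    # (1) wideband low-noise "EM field" test signal: white Gaussian noise as a
    #     simple stand-in for a broadband MT/AMT test record
    rng = np.random.default_rng(0)
    wideband = rng.standard_normal(t.size)

    # (2) repeating frequency-sweep square waveform for CSAMT/SIP-style testing:
    #     a square wave stepping through a list of frequencies, one second each
    sweep_freqs = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]   # Hz, assumed
    sweep = np.concatenate([
        np.sign(np.sin(2 * np.pi * f * t[:fs]))             # 1 s per frequency
        for f in sweep_freqs
    ])

    # (3) positive-zero-negative-zero (PZNZ) waveform for TDIP-style testing
    period = 8.0                                             # s, assumed
    phase = (t % period) / period
    pznz = np.select([phase < 0.25, phase < 0.5, phase < 0.75], [1.0, 0.0, -1.0], 0.0)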

  1. Developed vibration waveform monitoring unit for CBM

    International Nuclear Information System (INIS)

    Hamada, T.; Hotsuta, K.; Hirose, I.; Morita, E.

    2007-01-01

    In nuclear power plants, many rotating machines such as pumps and fans are in use. Shikoku Research Institute Inc. has recently developed easy-to-use tools to facilitate the maintenance of such equipment. They include a battery-operated vibration waveform monitoring unit which allows unmanned vibration monitoring on a regular basis and data collection even from intermittently operating equipment, a waveform data collector which can be used for easy collection, storage, control, and analysis of raw vibration waveform data during normal operation, and vibration analysis and evaluation tools. A combination of these tools has a high potential for optimization of rotating equipment maintenance. (author)

  2. Flow pumping system for physiological waveforms.

    Science.gov (United States)

    Tsai, William; Savaş, Omer

    2010-02-01

    A pulsatile flow pumping system is developed to replicate flow waveforms with reasonable accuracy for experiments simulating physiological blood flows at numerous points in the body. The system divides the task of flow waveform generation between two pumps: a gear pump generates the mean component and a piston pump generates the oscillatory component. The system is driven by two programmable servo controllers. The frequency response of the system is used to characterize its operation. The system has been successfully tested in vascular flow experiments where sinusoidal, carotid, and coronary flow waveforms are replicated.
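
    The division of labour between the two pumps can be illustrated with a minimal sketch (the waveform and numbers below are placeholders, not the authors' data): the gear pump receives the mean component and the piston pump the zero-mean oscillatory remainder.

    # Minimal sketch of the waveform split described above (assumed interface):
    # the gear pump reproduces the mean flow, the piston pump the zero-mean
    # oscillatory remainder of a target physiological flow waveform.
    import numpy as np

    def split_waveform(q_target):
        """Split a periodic flow waveform into mean and oscillatory components."""
        q_mean = np.mean(q_target)          # command for the gear pump
        q_osc = q_target - q_mean           # command for the piston pump
        return q_mean, q_osc

    # example: a crude carotid-like waveform (illustrative numbers only)
    t = np.linspace(0, 1, 1000, endpoint=False)                            # one cycle [s]
    q = 6 + 8 * np.exp(-((t - 0.15) / 0.05) ** 2) + np.sin(2 * np.pi * t)  # ml/s
    q_mean, q_osc = split_waveform(q)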

  3. An alternating minimization method for blind deconvolution from Poisson data

    International Nuclear Information System (INIS)

    Prato, Marco; La Camera, Andrea; Bonettini, Silvia

    2014-01-01

    Blind deconvolution is a particularly challenging inverse problem since information on both the desired target and the acquisition system has to be inferred from the measured data. When the collected data are affected by Poisson noise, this problem is typically addressed by the minimization of the Kullback-Leibler divergence, in which the unknowns are sought in particular feasible sets depending on the a priori information provided by the specific application. If these sets are separated, then the resulting constrained minimization problem can be addressed with an inexact alternating strategy. In this paper we apply this optimization tool to the problem of reconstructing astronomical images from adaptive optics systems, and we show that the proposed approach succeeds in providing very good results in the blind deconvolution of nondense stellar clusters
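
    For reference (a standard formulation, not quoted from the record), the data-fidelity term minimized for Poisson noise is the generalized Kullback-Leibler divergence between the measured image y and the blurred model Hx + b, with the object x and the PSF h each constrained to their feasible sets:

    % Generalized Kullback-Leibler divergence for Poisson data (illustrative):
    % y: measured image, x: object, h: PSF, b: background, Hx = h * x (convolution).
    \begin{equation}
      D_{\mathrm{KL}}(y \,\|\, Hx + b)
      = \sum_{i} \Bigl[ y_i \ln\frac{y_i}{(Hx + b)_i} + (Hx + b)_i - y_i \Bigr],
      \qquad
      (\hat{x}, \hat{h}) = \arg\min_{x \in X,\; h \in \mathcal{H}} D_{\mathrm{KL}}(y \,\|\, Hx + b)
    \end{equation}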

  4. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    Science.gov (United States)

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
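
    The core of water reference deconvolution can be sketched as follows (a hedged illustration of the Morris-type correction, not the authors' full pipeline; the ideal linewidth and normalization are assumptions): the water FID acquired from the same voxel carries the same lineshape distortion as the metabolite FID, so dividing by it and multiplying by an ideal reference decay removes the distortion.

    # Hedged sketch of the water-reference deconvolution idea (after Morris, 1988),
    # not the authors' exact processing chain.
    import numpy as np

    def water_reference_deconvolve(fid_metab, fid_water, t, lw_ideal_hz=2.0, eps=1e-8):
        """Correct a metabolite FID using the water FID acquired from the same voxel."""
        ideal = np.exp(-np.pi * lw_ideal_hz * t)     # ideal Lorentzian decay (assumed linewidth)
        ref = fid_water / fid_water[0]               # normalise water FID to unit amplitude at t=0
        correction = ideal / (ref + eps)             # lineshape correction factor
        return fid_metab * correction

    # usage (fid_m, fid_w, t are placeholders): corrected spectrum via FFT of the corrected FID
    # spec = np.fft.fftshift(np.fft.fft(water_reference_deconvolve(fid_m, fid_w, t)))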

  5. Deconvolution of In Vivo Ultrasound B-Mode Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Stage, Bjarne; Mathorne, Jan

    1993-01-01

    An algorithm for deconvolution of medical ultrasound images is presented. The procedure involves estimation of the basic one-dimensional ultrasound pulse, determining the ratio of the covariance of the noise to the covariance of the reflection signal, and finally deconvolution of the rf signal from the transducer. Using pulse and covariance estimators makes the approach self-calibrating, as all parameters for the procedure are estimated from the patient under investigation. An example of use on a clinical, in-vivo image is given. A 2 × 2 cm region of the portal vein in a liver is deconvolved. An increase in axial resolution by a factor of 2.4 is obtained. The procedure can also be applied to whole images, when it is ensured that the rf signal is properly measured. A method for doing that is outlined.

  6. Deconvolution of shift-variant broadening for Compton scatter imaging

    International Nuclear Information System (INIS)

    Evans, Brian L.; Martin, Jeffrey B.; Roggemann, Michael C.

    1999-01-01

    A technique is presented for deconvolving shift-variant Doppler broadening of singly Compton scattered gamma rays from their recorded energy distribution. Doppler broadening is important in Compton scatter imaging techniques employing gamma rays with energies below roughly 100 keV. The deconvolution unfolds an approximation to the angular distribution of scattered photons from their recorded energy distribution in the presence of statistical noise and background counts. Two unfolding methods are presented, one based on a least-squares algorithm and one based on a maximum likelihood algorithm. Angular distributions unfolded from measurements made on small scattering targets show less evidence of Compton broadening. This deconvolution is shown to improve the quality of filtered backprojection images in multiplexed Compton scatter tomography. Improved sharpness and contrast are evident in the images constructed from unfolded signals
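
    As an illustration of the least-squares variant (the setup below is an assumption, not the authors' implementation), the unfolding can be posed as a non-negative least-squares problem that relates angular bins to recorded energy bins through a response matrix whose columns are Doppler-broadened energy profiles.

    # Illustrative non-negative least-squares unfolding: recover an angular
    # distribution from a recorded energy spectrum given a response matrix R
    # whose columns are the broadened energy profiles of each scattering angle.
    import numpy as np
    from scipy.optimize import nnls

    def unfold_angular_distribution(R, energy_spec):
        """Solve min ||R x - d||_2 subject to x >= 0."""
        x, residual_norm = nnls(R, energy_spec)
        return x, residual_norm

    # toy example: 3 angle bins with Gaussian energy responses of different widths
    E = np.linspace(0, 1, 50)
    centers, widths = [0.3, 0.5, 0.7], [0.05, 0.08, 0.12]
    R = np.column_stack([np.exp(-0.5 * ((E - c) / w) ** 2) for c, w in zip(centers, widths)])
    true_angles = np.array([1.0, 0.2, 0.5])
    d = R @ true_angles + 0.01 * np.random.default_rng(0).standard_normal(E.size)
    theta_est, _ = unfold_angular_distribution(R, d)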

  7. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.
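
    For reference (the textbook definition rather than the paper's specific derivation), the generalized cross-validation function for a linear restoration step y_hat = A(lambda) b has the form

    % Standard GCV function for a linear restoration operator A(lambda)
    % applied to the blurred data b (illustrative form).
    \begin{equation}
      \mathrm{GCV}(\lambda)
      = \frac{\bigl\| \bigl(I - A(\lambda)\bigr) b \bigr\|_2^2}
             {\Bigl[\operatorname{trace}\bigl(I - A(\lambda)\bigr)\Bigr]^2},
      \qquad
      \hat{\lambda} = \arg\min_{\lambda > 0} \mathrm{GCV}(\lambda)
    \end{equation}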

  8. Retinal image restoration by means of blind deconvolution

    Czech Academy of Sciences Publication Activity Database

    Marrugo, A.; Šorel, Michal; Šroubek, Filip; Millan, M.

    2011-01-01

    Roč. 16, č. 11 (2011), 116016-1-116016-11 ISSN 1083-3668 R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * image restoration * retinal image * deblurring Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.157, year: 2011 http://library.utia.cas.cz/separaty/2011/ZOI/sorel-0366061.pdf

  9. XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling

    Science.gov (United States)

    Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.

    2017-08-01

    XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with the scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends the BaseEstimator class from scikit-learn so that cross-validation methods work. It allows the user to produce a conditioned model if the values of some parameters are known.

  10. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization

    Czech Academy of Sciences Publication Activity Database

    Šroubek, Filip; Milanfar, P.

    2012-01-01

    Roč. 21, č. 4 (2012), s. 1687-1700 ISSN 1057-7149 R&D Projects: GA MŠk 1M0572; GA ČR GAP103/11/1552; GA MV VG20102013064 Institutional research plan: CEZ:AV0Z10750506 Keywords : blind deconvolution * augmented Lagrangian * sparse representation Subject RIV: JD - Computer Applications, Robotics Impact factor: 3.199, year: 2012 http://library.utia.cas.cz/separaty/2012/ZOI/sroubek-0376080.pdf

  11. Real Time Deconvolution of In-Vivo Ultrasound Images

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    2013-01-01

    and two wavelengths. This can be improved by deconvolution, which increases the bandwidth and equalizes the phase to increase resolution under the constraint of the electronic noise in the received signal. A fixed-interval Kalman filter based deconvolution routine written in C is employed. It uses a state...... resolution has been determined from the in-vivo liver image using the auto-covariance function. From the envelope of the estimated pulse the axial resolution at Full-Width-Half-Max is 0.581 mm, corresponding to 1.13 λ at 3 MHz. The algorithm increases the resolution to 0.116 mm or 0.227 λ, corresponding to a factor of 5.1. The basic pulse can be estimated in roughly 0.176 seconds on a single CPU core of an Intel i5 CPU running at 1.8 GHz. An in-vivo image consisting of 100 lines of 1600 samples can be processed in roughly 0.1 seconds making it possible to perform real-time deconvolution on ultrasound data...

  12. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater than or equal to twice the near-field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far-field approximation.
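
    As a quick, hedged illustration of the restoration step (not the authors' code: the PSF below is a placeholder Gaussian rather than the analytic planar-transducer PSF, and no total-variation regularization is included), Richardson-Lucy deconvolution is available off the shelf, for example in scikit-image.

    # Hedged example: Richardson-Lucy deconvolution of a C-scan-like image with
    # a placeholder Gaussian PSF.
    import numpy as np
    from scipy.signal import convolve2d
    from skimage import restoration

    # placeholder PSF: isotropic Gaussian (assumption, not the analytic PSF)
    x = np.arange(-7, 8)
    g = np.exp(-0.5 * (x / 2.0) ** 2)
    psf = np.outer(g, g)
    psf /= psf.sum()

    rng = np.random.default_rng(1)
    obj = np.zeros((64, 64)); obj[20:25, 30:35] = 1.0        # toy reflector
    cscan = convolve2d(obj, psf, mode="same") + 0.01 * rng.standard_normal((64, 64))
    cscan = np.clip(cscan, 0.0, None)                        # RL assumes non-negative data

    restored = restoration.richardson_lucy(cscan, psf, 30)   # 30 iterations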

  13. Designing a stable feedback control system for blind image deconvolution.

    Science.gov (United States)

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to the undesired trivial solution. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which decide the sparsity and sharp features of the latent image, respectively. Furthermore, the formation model of the blurred image is introduced into the feedback process to avoid the image restoration deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors. Thus the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation, and performs favorably against the state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Seismic waveform classification using deep learning

    Science.gov (United States)

    Kong, Q.; Allen, R. M.

    2017-12-01

    MyShake is a global smartphone seismic network that harnesses the power of crowdsourcing. It has an Artificial Neural Network (ANN) algorithm running on the phone to distinguish earthquake motion from human activities recorded by the accelerometer on board. Once the ANN detects earthquake-like motion, it sends a 5-min chunk of acceleration data back to the server for further analysis. The time-series data collected contain both earthquake data and human activity data that the ANN confused. In this presentation, we will show the Convolutional Neural Network (CNN) that we built under the umbrella of supervised learning to identify the earthquake waveforms. The waveforms of the recorded motion can easily be treated as images, and by taking advantage of the power of CNNs for image processing, we achieved a very high success rate in selecting the earthquake waveforms. Since there are many more non-earthquake waveforms than earthquake waveforms, we also built an anomaly detection algorithm using the CNN. Both methods can be easily extended to other waveform classification problems.
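
    A minimal sketch of such a waveform classifier is shown below; the architecture, window length and channel count are illustrative assumptions and not the MyShake model.

    # Illustrative sketch only: a small 1-D CNN that classifies 3-component
    # accelerometer windows as "earthquake" vs "human activity".
    import tensorflow as tf
    from tensorflow.keras import layers

    def build_waveform_classifier(n_samples=1000, n_channels=3):
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(n_samples, n_channels)),
            layers.Conv1D(16, 9, activation="relu"),
            layers.MaxPooling1D(4),
            layers.Conv1D(32, 9, activation="relu"),
            layers.MaxPooling1D(4),
            layers.GlobalAveragePooling1D(),
            layers.Dense(32, activation="relu"),
            layers.Dense(1, activation="sigmoid"),   # P(earthquake)
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # model.fit(X_train, y_train, ...) with X_train of shape (n_windows, 1000, 3)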

  15. MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    Minimum entropy deconvolution is considered one of the methods for decomposing non-Gaussian linear processes. The concept of the peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of minimum entropy deconvolution is established. The problem of minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is investigated for the first time and the corresponding theory is given. In addition, the relation between minimum entropy deconvolution and the parameter method is discussed.
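
    For orientation (a standard formulation, not quoted from the paper), minimum entropy deconvolution chooses the inverse filter f that maximizes the peakedness (varimax norm) of the deconvolved output y = f * x:

    % Wiggins-style varimax (peakedness) objective commonly used in minimum
    % entropy deconvolution; y = f * x is the deconvolved output (illustrative).
    \begin{equation}
      V(f) = \frac{\sum_{i} y_i^4}{\Bigl(\sum_{i} y_i^2\Bigr)^{2}},
      \qquad
      \hat{f} = \arg\max_{f}\, V(f)
    \end{equation}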

  16. Design of pulse waveform for waveform division multiple access UWB wireless communication system.

    Science.gov (United States)

    Yin, Zhendong; Wang, Zhirui; Liu, Xiaohui; Wu, Zhilu

    2014-01-01

    A new multiple access scheme, Waveform Division Multiple Access (WDMA), based on orthogonal wavelet functions, is presented. After studying the correlation properties of different categories of single wavelet functions, the one with the best correlation properties is chosen as the foundation for the combined waveform. In the communication system, each user is assigned a different combined orthogonal waveform. As demonstrated by simulation, a combined waveform is more suitable than a single wavelet function as a communication medium in a WDMA system. Due to the excellent orthogonality, the bit error rate (BER) of multiple users with combined waveforms is very close to that of a single user in a synchronous system; that is to say, the multiple access interference (MAI) is almost eliminated. Furthermore, even in an asynchronous system without multiuser detection after the matched filters, the result is still satisfactory when using the third combination mode described in the study.

  17. SCA Waveform Development for Space Telemetry

    Science.gov (United States)

    Mortensen, Dale J.; Kifle, Multi; Hall, C. Steve; Quinn, Todd M.

    2004-01-01

    The NASA Glenn Research Center is investigating and developing suitable reconfigurable radio architectures for future NASA missions. This effort is examining software-based open-architectures for space based transceivers, as well as common hardware platform architectures. The Joint Tactical Radio System's (JTRS) Software Communications Architecture (SCA) is a candidate for the software approach, but may need modifications or adaptations for use in space. An in-house SCA compliant waveform development focuses on increasing understanding of software defined radio architectures and more specifically the JTRS SCA. Space requirements put a premium on size, mass, and power. This waveform development effort is key to evaluating tradeoffs with the SCA for space applications. Existing NASA telemetry links, as well as Space Exploration Initiative scenarios, are the basis for defining the waveform requirements. Modeling and simulations are being developed to determine signal processing requirements associated with a waveform and a mission-specific computational burden. Implementation of the waveform on a laboratory software defined radio platform is proceeding in an iterative fashion. Parallel top-down and bottom-up design approaches are employed.

  18. WFCatalog: A catalogue for seismological waveform data

    Science.gov (United States)

    Trani, Luca; Koymans, Mathijs; Atkinson, Malcolm; Sleeman, Reinoud; Filgueira, Rosa

    2017-09-01

    This paper reports advances in seismic waveform description and discovery leading to a new seismological service and presents the key steps in its design, implementation and adoption. This service, named WFCatalog, which stands for waveform catalogue, accommodates features of seismological waveform data. Therefore, it meets the need for seismologists to be able to select waveform data based on seismic waveform features as well as sensor geolocations and temporal specifications. We describe the collaborative design methods and the technical solution showing the central role of seismic feature catalogues in framing the technical and operational delivery of the new service. Also, we provide an overview of the complex environment wherein this endeavour is scoped and the related challenges discussed. As multi-disciplinary, multi-organisational and global collaboration is necessary to address today's challenges, canonical representations can provide a focus for collaboration and conceptual tools for agreeing directions. Such collaborations can be fostered and formalised by rallying intellectual effort into the design of novel scientific catalogues and the services that support them. This work offers an example of the benefits generated by involving cross-disciplinary skills (e.g. data and domain expertise) from the early stages of design, and by sustaining the engagement with the target community throughout the delivery and deployment process.

  19. Fatal defect in computerized glow curve deconvolution of thermoluminescence

    International Nuclear Information System (INIS)

    Sakurai, T.

    2001-01-01

    The method of computerized glow curve deconvolution (CGCD) is a powerful tool in the study of thermoluminescence (TL). In a system where several trapping levels have a probability of retrapping, electrons trapped at one level can transfer to another level through retrapping via the conduction band during TL readout. However, at present, the method of CGCD takes no account of electron transitions between the trapping levels; this is a fatal defect. It is shown by computer simulation that CGCD using general-order kinetics thus cannot yield the correct trap parameters. (author)

  20. Seeing deconvolution of globular clusters in M31

    International Nuclear Information System (INIS)

    Bendinelli, O.; Zavatti, F.; Parmeggiani, G.; Djorgovski, S.

    1990-01-01

    The morphology of six M31 globular clusters is examined using seeing-deconvolved CCD images. The deconvolution techniques developed by Bendinelli (1989) are reviewed and applied to the M31 globular clusters to demonstrate the methodology. It is found that the effective resolution limit of the method is about 0.1-0.3 arcsec for CCD images obtained in FWHM = 1 arcsec seeing, and sampling of 0.3 arcsec/pixel. Also, the robustness of the method is discussed. The implications of the technique for future studies using data from the Hubble Space Telescope are considered. 68 refs

  1. Nuclear pulse signal processing techniques based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Qi Zhong; Meng Xiangting; Fu Yanyan; Li Dongcang

    2012-01-01

    This article presents a method for the measurement and analysis of nuclear pulse signals. An FPGA controls a high-speed ADC that measures the nuclear radiation signals and drives the USB interface in Slave FIFO mode for high-speed transmission. Online data processing and display are performed in LabVIEW, and a blind deconvolution method is used to remove pulse pile-up from the acquired signal and restore the original nuclear pulse signal. Real-time measurements at high transmission speed demonstrate the advantages of the method. (authors)

  2. Nuclear pulse signal processing technique based on blind deconvolution method

    International Nuclear Information System (INIS)

    Hong Pengfei; Yang Lei; Fu Tingyan; Qi Zhong; Li Dongcang; Ren Zhongguo

    2012-01-01

    In this paper, we present a method for measurement and analysis of nuclear pulse signal, with which pile-up signal is removed, the signal baseline is restored, and the original signal is obtained. The data acquisition system includes FPGA, ADC and USB. The FPGA controls the high-speed ADC to sample the signal of nuclear radiation, and the USB makes the ADC work on the Slave FIFO mode to implement high-speed transmission status. Using the LabVIEW, it accomplishes online data processing of the blind deconvolution algorithm and data display. The simulation and experimental results demonstrate advantages of the method. (authors)

  3. Approximate deconvolution models of turbulence analysis, phenomenology and numerical analysis

    CERN Document Server

    Layton, William J

    2012-01-01

    This volume presents a mathematical development of a recent approach to the modeling and simulation of turbulent flows based on methods for the approximate solution of inverse problems. The resulting Approximate Deconvolution Models or ADMs have some advantages over more commonly used turbulence models – as well as some disadvantages. Our goal in this book is to provide a clear and complete mathematical development of ADMs, while pointing out the difficulties that remain. In order to do so, we present the analytical theory of ADMs, along with its connections, motivations and complements in the phenomenology of and algorithms for ADMs.

  4. Deconvolution map-making for cosmic microwave background observations

    International Nuclear Information System (INIS)

    Armitage, Charmaine; Wandelt, Benjamin D.

    2004-01-01

    We describe a new map-making code for cosmic microwave background observations. It implements fast algorithms for convolution and transpose convolution of two functions on the sphere [B. Wandelt and K. Gorski, Phys. Rev. D 63, 123002 (2001)]. Our code can account for arbitrary beam asymmetries and can be applied to any scanning strategy. We demonstrate the method using simulated time-ordered data for three beam models and two scanning patterns, including a coarsened version of the WMAP strategy. We quantitatively compare our results with a standard map-making method and demonstrate that the true sky is recovered with high accuracy using deconvolution map-making

  5. Stable Blind Deconvolution over the Reals from Additional Autocorrelations

    KAUST Repository

    Walk, Philipp

    2017-10-22

    Recently the one-dimensional time-discrete blind deconvolution problem was shown to be solvable uniquely, up to a global phase, by a semi-definite program for almost any signal, provided its autocorrelation is known. We show in this work that under a sufficient zero separation of the corresponding signal in the $z$-domain, a stable reconstruction against additive noise is possible. Moreover, the stability constant depends on the signal dimension and on the magnitudes of the signal's first and last coefficients. We give an analytical expression for this constant by using spectral bounds of Vandermonde matrices.

  6. Photonic arbitrary waveform generator based on Taylor synthesis method

    DEFF Research Database (Denmark)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji

    2016-01-01

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into several cascaded microrings to obtain first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and its successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as a square waveform, triangular waveform, flat-top waveform, sawtooth waveform, Gaussian waveform and so on. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method. Our scheme does not require any spectral disperser or large
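
    A hedged numerical sketch of the Taylor-synthesis idea follows; the pulse width and combination coefficients are illustrative assumptions, not the device parameters, and the optical differentiators are emulated with numerical derivatives.

    # Hedged sketch of Taylor synthesis: a target waveform is approximated by a
    # weighted sum of a Gaussian pulse and its first few time-derivatives,
    # mimicking what the cascaded microring differentiators provide optically.
    import numpy as np

    t = np.linspace(-50e-12, 50e-12, 2001)        # time axis [s]
    dt = t[1] - t[0]
    g = np.exp(-0.5 * (t / 10e-12) ** 2)          # input Gaussian pulse (assumed width)

    # successive differentiations (emulating the cascaded microrings)
    d1 = np.gradient(g, dt)
    d2 = np.gradient(d1, dt)
    d3 = np.gradient(d2, dt)

    # Taylor-series combination: y(t) ~ c0*g + c1*g' + c2*g'' + c3*g'''
    # tuning the coefficients shapes the output (square, triangular, flat-top, ...)
    c = [1.0, 0.0, 4e-23, 0.0]                    # illustrative coefficients only
    y = c[0] * g + c[1] * d1 + c[2] * d2 + c[3] * d3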

  7. Wavelet analysis of the impedance cardiogram waveforms

    Science.gov (United States)

    Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.

    2012-12-01

    Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require the determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. The objective of this work is the validation of a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for the hemodynamic registrations. The use of an original wavelet differentiation algorithm makes it possible to combine filtering with the calculation of the derivatives of the rheocardiogram. The proposed approach can be used in clinical practice for the early diagnosis of cardiovascular system remodelling in the course of different pathologies.

  8. Wavelet analysis of the impedance cardiogram waveforms

    International Nuclear Information System (INIS)

    Podtaev, S; Stepanov, R; Dumler, A; Chugainov, S; Tziberkin, K

    2012-01-01

    Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require the determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z. These points are related to distinct physiological events in the cardiac cycle. The objective of this work is the validation of a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for the hemodynamic registrations. The use of an original wavelet differentiation algorithm makes it possible to combine filtering with the calculation of the derivatives of the rheocardiogram. The proposed approach can be used in clinical practice for the early diagnosis of cardiovascular system remodelling in the course of different pathologies.

  9. Krylov subspace acceleration of waveform relaxation

    Energy Technology Data Exchange (ETDEWEB)

    Lumsdaine, A.; Wu, Deyun [Univ. of Notre Dame, IN (United States)

    1996-12-31

    Standard solution methods for numerically solving time-dependent problems typically begin by discretizing the problem on a uniform time grid and then sequentially solving for successive time points. The initial time discretization imposes a serialization to the solution process and limits parallel speedup to the speedup available from parallelizing the problem at any given time point. This bottleneck can be circumvented by the use of waveform methods in which multiple time-points of the different components of the solution are computed independently. With the waveform approach, a problem is first spatially decomposed and distributed among the processors of a parallel machine. Each processor then solves its own time-dependent subsystem over the entire interval of interest using previous iterates from other processors as inputs. Synchronization and communication between processors take place infrequently, and communication consists of large packets of information - discretized functions of time (i.e., waveforms).
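
    A minimal serial sketch of (Jacobi) waveform relaxation for a linear system x' = A x is given below; in an actual parallel implementation each subsystem would be integrated on its own processor and only whole waveforms would be exchanged between sweeps.

    # Toy serial sketch of Gauss-Jacobi waveform relaxation with forward Euler:
    # each component is integrated over the whole interval using the other
    # components' waveforms from the previous sweep.
    import numpy as np

    def jacobi_waveform_relaxation(A, x0, t, n_sweeps=20):
        A = np.asarray(A, float)
        n, m = len(x0), len(t)
        X = np.tile(np.asarray(x0, float)[:, None], (1, m))   # initial guess: constant waveforms
        dt = t[1] - t[0]
        for _ in range(n_sweeps):
            X_old = X.copy()                                    # waveforms from the previous sweep
            for i in range(n):                                  # each "subsystem" independently
                xi = np.empty(m)
                xi[0] = x0[i]
                for k in range(m - 1):                          # forward Euler in time
                    coupling = A[i] @ X_old[:, k] - A[i, i] * X_old[i, k]
                    xi[k + 1] = xi[k] + dt * (A[i, i] * xi[k] + coupling)
                X[i] = xi
        return X

    # example: a weakly coupled 2-component system
    A = np.array([[-2.0, 1.0], [1.0, -2.0]])
    t = np.linspace(0.0, 2.0, 201)
    X = jacobi_waveform_relaxation(A, [1.0, 0.0], t)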

  10. Waveform Design for Wireless Power Transfer

    Science.gov (United States)

    Clerckx, Bruno; Bayguzina, Ekaterina

    2016-12-01

    Far-field Wireless Power Transfer (WPT) has attracted significant attention in recent years. Despite the rapid progress, the emphasis of the research community in the last decade has remained largely concentrated on improving the design of energy harvester (so-called rectenna) and has left aside the effect of transmitter design. In this paper, we study the design of transmit waveform so as to enhance the DC power at the output of the rectenna. We derive a tractable model of the non-linearity of the rectenna and compare with a linear model conventionally used in the literature. We then use those models to design novel multisine waveforms that are adaptive to the channel state information (CSI). Interestingly, while the linear model favours narrowband transmission with all the power allocated to a single frequency, the non-linear model favours a power allocation over multiple frequencies. Through realistic simulations, waveforms designed based on the non-linear model are shown to provide significant gains (in terms of harvested DC power) over those designed based on the linear model and over non-adaptive waveforms. We also compute analytically the theoretical scaling laws of the harvested energy for various waveforms as a function of the number of sinewaves and transmit antennas. Those scaling laws highlight the benefits of CSI knowledge at the transmitter in WPT and of a WPT design based on a non-linear rectenna model over a linear model. Results also motivate the study of a promising architecture relying on large-scale multisine multi-antenna waveforms for WPT. As a final note, results stress the importance of modeling and accounting for the non-linearity of the rectenna in any system design involving wireless power.

  11. Principles of waveform diversity and design

    CERN Document Server

    Wicks, Michael

    2011-01-01

    This is the first book to discuss current and future applications of waveform diversity and design in subjects such as radar and sonar, communications systems, passive sensing, and many other technologies. Waveform diversity allows researchers and system designers to optimize electromagnetic and acoustic systems for sensing, communications, electronic warfare or combinations thereof. This book enables solutions to problems, explaining how each system performs its own particular function, as well as how it is affected by other systems and how those other systems may likewise be affected. It is

  12. Signal processing in noise waveform radar

    CERN Document Server

    Kulpa, Krzysztof

    2013-01-01

    This book is devoted to the emerging technology of noise waveform radar and its signal processing aspects. It is a new kind of radar, which uses a noise-like waveform to illuminate the target. The book includes an introduction to basic radar theory, starting from classical pulse radar, signal compression, and wave radar. The book then discusses the properties, difficulties and potential of noise radar systems, primarily for low-power and short-range civil applications. The contribution of modern signal processing techniques to making noise radar practical is emphasized, and application examples

  13. Multi-Channel Deconvolution for Forward-Looking Phase Array Radar Imaging

    Directory of Open Access Journals (Sweden)

    Jie Xia

    2017-07-01

    The cross-range resolution of forward-looking phase array radar (PAR) is limited by the effective antenna beamwidth, since the azimuth echo is the convolution of the antenna pattern and the targets' backscattering coefficients. Therefore, deconvolution algorithms are proposed to improve the imaging resolution under the limited antenna beamwidth. However, as a typical inverse problem, deconvolution is essentially a highly ill-posed problem which is sensitive to noise and cannot ensure a reliable and robust estimation. In this paper, multi-channel deconvolution is proposed to improve the performance of deconvolution, with the aim of considerably alleviating the ill-posedness of single-channel deconvolution. To describe the performance improvement obtained with the multi-channel approach more effectively, evaluation parameters are generalized to characterize the angular spectrum of the antenna pattern or the singular value distribution of the observation matrix, and these are used to compare different deconvolution systems. Here we present two multi-channel deconvolution algorithms which improve upon traditional deconvolution algorithms by combining them with the multi-channel technique. Extensive simulations and experimental results based on real data are presented to verify the effectiveness of the proposed imaging methods.

  14. Sparse spectral deconvolution algorithm for noncartesian MR spectroscopic imaging.

    Science.gov (United States)

    Bhave, Sampada; Eslami, Ramin; Jacob, Mathews

    2014-02-01

    To minimize line shape distortions and spectral leakage artifacts in MR spectroscopic imaging (MRSI). A spatially and spectrally regularized non-Cartesian MRSI algorithm that uses the line shape distortion priors, estimated from water reference data, to deconvolve the spectra is introduced. Sparse spectral regularization is used to minimize noise amplification associated with deconvolution. A spiral MRSI sequence that heavily oversamples the central k-space regions is used to acquire the MRSI data. The spatial regularization term uses the spatial supports of brain and extracranial fat regions to recover the metabolite spectra and nuisance signals at two different resolutions. Specifically, the nuisance signals are recovered at the maximum resolution to minimize spectral leakage, while the point spread functions of metabolites are controlled to obtain acceptable signal-to-noise ratio. The comparisons of the algorithm against Tikhonov regularized reconstructions demonstrate considerably reduced line-shape distortions and improved metabolite maps. The proposed sparsity constrained spectral deconvolution scheme is effective in minimizing the line-shape distortions. The dual resolution reconstruction scheme is capable of minimizing spectral leakage artifacts. Copyright © 2013 Wiley Periodicals, Inc.

  15. Retinal image restoration by means of blind deconvolution

    Science.gov (United States)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

  16. A soft double regularization approach to parametric blind image deconvolution.

    Science.gov (United States)

    Chen, Li; Yap, Kim-Hui

    2005-05-01

    This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.

  17. Method for the deconvolution of incompletely resolved CARS spectra in chemical dynamics experiments

    International Nuclear Information System (INIS)

    Anda, A.A.; Phillips, D.L.; Valentini, J.J.

    1986-01-01

    We describe a method for deconvoluting incompletely resolved CARS spectra to obtain quantum state population distributions. No particular form for the rotational and vibrational state distribution is assumed, the population of each quantum state is treated as an independent quantity. This method of analysis differs from previously developed approaches for the deconvolution of CARS spectra, all of which assume that the population distribution is Boltzmann, and thus are limited to the analysis of CARS spectra taken under conditions of thermal equilibrium. The method of analysis reported here has been developed to deconvolute CARS spectra of photofragments and chemical reaction products obtained in chemical dynamics experiments under nonequilibrium conditions. The deconvolution procedure has been incorporated into a computer code. The application of that code to the deconvolution of CARS spectra obtained for samples at thermal equilibrium and not at thermal equilibrium is reported. The method is accurate and computationally efficient

  18. Waveform relaxation methods for implicit differential equations

    NARCIS (Netherlands)

    P.J. van der Houwen; W.A. van der Veen

    1996-01-01

    We apply a Runge-Kutta-based waveform relaxation method to initial-value problems for implicit differential equations. In the implementation of such methods, a sequence of nonlinear systems has to be solved iteratively in each step of the integration process. The size of these systems

  19. A multi-channel waveform digitizer system

    International Nuclear Information System (INIS)

    Bieser, F.; Muller, W.F.J.

    1990-01-01

    The authors report on the design and performance of a multichannel waveform digitizer system for use with the Multiple Sample Ionization Chamber (MUSIC) Detector at the Bevalac. 128 channels of 20 MHz Flash ADC plus 256 word deep memory are housed in a single crate. Digital thresholds and hit pattern logic facilitate zero suppression during readout which is performed over a standard VME bus

  20. Resolution analysis in full waveform inversion

    NARCIS (Netherlands)

    Fichtner, A.; Trampert, J.

    2011-01-01

    We propose a new method for the quantitative resolution analysis in full seismic waveform inversion that overcomes the limitations of classical synthetic inversions while being computationally more efficient and applicable to any misfit measure. The method rests on (1) the local quadratic

  1. Classification of morphologic changes in photoplethysmographic waveforms

    Directory of Open Access Journals (Sweden)

    Tigges Timo

    2016-09-01

    An ever-increasing amount of research is examining to what extent physiological information beyond blood oxygen saturation can be drawn from the photoplethysmogram. One important approach to eliciting that information from the photoplethysmogram is the analysis of its waveform. One prominent example of the value of photoplethysmographic waveform analysis in cardiovascular monitoring is the assessment of hemodynamic compensation in the peri-operative setting or in trauma situations, as the digital pulse waveform changes dynamically with alterations in vascular tone or pulse wave velocity. In this work, we present an algorithm based on modern machine learning techniques that automatically finds individual digital volume pulses in photoplethysmographic signals and sorts them into one of the pulse classes defined by Dawber et al. We evaluate our approach on two major datasets: a measurement study that we conducted ourselves and data from the PhysioNet MIMIC II database. The satisfactory results demonstrate the capabilities of classification algorithms in the automated assessment of the digital volume pulse waveform measured by photoplethysmographic devices.

  2. Full-waveform inversion: Filling the gaps

    KAUST Repository

    Beydoun, Wafik B.; Alkhalifah, Tariq Ali

    2015-01-01

    After receiving an outstanding response to its inaugural workshop in 2013, SEG once again achieved great success with its 2015 SEG Middle East Workshop, “Full-waveform inversion: Filling the gaps,” which took place 30 March–1 April 2015 in Abu Dhabi

  3. Source-independent elastic waveform inversion using a logarithmic wavefield

    KAUST Repository

    Choi, Yun Seok; Min, Dong Joon

    2012-01-01

    The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating

  4. Multiparameter Elastic Full Waveform Inversion with Facies-based Constraints

    KAUST Repository

    Zhang, Zhendong; Alkhalifah, Tariq Ali; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-01-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, like

  5. Waveform inversion for acoustic VTI media in frequency domain

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2016-01-01

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of the standard full waveform inversion (FWI) by inverting for the background model using a single scattered wavefield from an inverted perturbation. However, current

  6. Multiparameter Elastic Full Waveform Inversion With Facies Constraints

    KAUST Repository

    Zhang, Zhendong; Alkhalifah, Tariq Ali; Naeini, Ehsan Zabihi

    2017-01-01

    Full waveform inversion (FWI) aims to fully benefit from all the data characteristics to estimate the parameters describing the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion as a tool beyond acoustic

  7. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

    2014-01-01

    , the proposed scheme is general, the main focus of this paper is to generate finite alphabet waveforms for multiple-input multiple-output radar, where correlated waveforms are used to achieve desired beampatterns. © 2014 IEEE.
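
    As a rough illustration of the general idea in this record, the sketch below draws correlated Gaussian random variables and hard-limits them to a finite alphabet (BPSK) so that the resulting waveforms approximate a desired correlation matrix. The arcsine-law pre-warping is a standard result for hard-limited Gaussians and is an assumption here, not a transcription of the paper's derivation.

```python
# Hedged sketch: correlated finite-alphabet (binary) waveforms from Gaussian RVs.
import numpy as np

def correlated_bpsk(R_target, n_snapshots, rng=None):
    """R_target: desired MxM correlation matrix of the +/-1 waveforms (one row per antenna)."""
    rng = np.random.default_rng(rng)
    # Pre-warp: if x is Gaussian with correlation r, sign(x) has correlation (2/pi)*arcsin(r).
    R_gauss = np.sin(0.5 * np.pi * np.clip(R_target, -1, 1))
    L = np.linalg.cholesky(R_gauss + 1e-10 * np.eye(R_gauss.shape[0]))
    g = L @ rng.standard_normal((R_gauss.shape[0], n_snapshots))
    return np.sign(g)  # finite-alphabet (binary) waveforms

# Example: two transmit waveforms with target correlation 0.6
R = np.array([[1.0, 0.6], [0.6, 1.0]])
x = correlated_bpsk(R, n_snapshots=100000, rng=0)
print(np.corrcoef(x))  # empirical correlation should be close to R
```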

  8. Towards full waveform ambient noise inversion

    Science.gov (United States)

    Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas

    2018-01-01

    In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure

  9. Retrieving rupture history using waveform inversions in time sequence

    Science.gov (United States)

    Yi, L.; Xu, C.; Zhang, X.

    2017-12-01

    The rupture history of large earthquakes is generally reconstructed by waveform inversion of seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized with the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green function. According to the superposition principle, the forward waveforms generated from the fault plane are summed into the recorded waveforms after aligning the arrival times. The slip history is then retrieved by waveform inversion after superposing all forward waveforms for each corresponding seismological record. Apart from the isolation of the forward waveforms generated by each sub-fault, we also note that these waveforms are gradually and sequentially superimposed in the recorded waveforms. We therefore propose the idea that the rupture model may be separable into sequential rupture times. According to the constrained waveform length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time, and one essential prior condition is the predetermined fault plane, which limits the duration of the rupture; this means the waveform inversion is restricted to a pre-set rupture duration. We therefore propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We have designed a synthetic inversion to test the feasibility of the method. Our test result shows the promise of this idea, which requires further investigation.
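
    The linear forward problem sketched in this record (multi-time-window source time functions convolved with sub-fault Green's functions and summed) can be illustrated as follows. This is a hedged sketch under assumed shapes and window counts, not the authors' code; the Green's functions and timing conventions are placeholders.

```python
# Hedged sketch of the multi-time-window forward synthesis described above.
import numpy as np

def triangle(n_half):
    """Unit triangle of total length 2*n_half+1 samples."""
    up = np.linspace(0.0, 1.0, n_half + 1)
    return np.concatenate([up, up[-2::-1]])

def synthesize_record(slip_windows, greens, n_half, shift_per_window, n_samples):
    """
    slip_windows: (n_subfaults, n_windows) slip amplitude of each time window
    greens:       list of Green's functions, one 1-D array per sub-fault
    """
    tri = triangle(n_half)
    record = np.zeros(n_samples)
    for i, g in enumerate(greens):
        stf = np.zeros(n_samples)
        for k, amp in enumerate(slip_windows[i]):
            start = k * shift_per_window  # mutually overlapping triangles
            stf[start:start + len(tri)] += amp * tri[:max(0, min(len(tri), n_samples - start))]
        record += np.convolve(stf, g)[:n_samples]  # sub-fault STF convolved with its Green function
    return record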

  10. Optimized coincidence Doppler broadening spectroscopy using deconvolution algorithms

    International Nuclear Information System (INIS)

    Ho, K.F.; Ching, H.M.; Cheng, K.W.; Beling, C.D.; Fung, S.; Ng, K.P.

    2004-01-01

    In the last few years a number of excellent deconvolution algorithms have been developed for use in ''de-blurring'' 2D images. Here we report briefly on one such algorithm we have studied, which uses the non-negativity constraint to optimize the regularization and which is applied to the 2D image-like data produced in Coincidence Doppler Broadening Spectroscopy (CDBS). The system instrumental resolution functions are obtained using the 514 keV line from 85Sr. The technique, when applied to a series of well-annealed polycrystalline metals, gives two-photon momentum data of a quality comparable to that obtainable using 1D Angular Correlation of Annihilation Radiation (ACAR). (orig.)

  11. Double spike with isotope pattern deconvolution for mercury speciation

    International Nuclear Information System (INIS)

    Castillo, A.; Rodriguez-Gonzalez, P.; Centineo, G.; Roig-Navarro, A.F.; Garcia Alonso, J.I.

    2009-01-01

    Full text: A double-spiking approach, based on an isotope pattern deconvolution numerical methodology, has been developed and applied for the accurate and simultaneous determination of inorganic mercury (IHg) and methylmercury (MeHg). Isotopically enriched mercury species (199IHg and 201MeHg) are added before sample preparation to quantify the extent of methylation and demethylation processes. Focused microwave digestion was evaluated to perform the quantitative extraction of such compounds from solid matrices of environmental interest. Satisfactory results were obtained for different certified reference materials (dogfish liver DOLT-4 and tuna fish CRM-464) both by GC-ICPMS and GC-MS, demonstrating the suitability of the proposed analytical method. (author)
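
    Isotope pattern deconvolution amounts to expressing the measured isotope abundances as a linear mixture of the natural and spike patterns and solving for the molar contributions. The sketch below shows that step only, with placeholder abundance vectors rather than certified values.

```python
# Hedged sketch: least-squares isotope pattern deconvolution (abundances are illustrative only).
import numpy as np

isotopes = ["198Hg", "199Hg", "200Hg", "201Hg", "202Hg"]
A_natural = np.array([0.10, 0.17, 0.23, 0.13, 0.30])   # placeholder pattern
A_199spike = np.array([0.02, 0.91, 0.04, 0.02, 0.01])  # placeholder pattern
A_201spike = np.array([0.01, 0.02, 0.04, 0.90, 0.03])  # placeholder pattern

def deconvolve_pattern(measured_abundances):
    """Return molar fractions (natural, 199-spike, 201-spike) explaining the measurement."""
    A = np.column_stack([A_natural, A_199spike, A_201spike])
    x, *_ = np.linalg.lstsq(A, measured_abundances, rcond=None)
    return x / x.sum()

print(deconvolve_pattern(0.6 * A_natural + 0.3 * A_199spike + 0.1 * A_201spike))
```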

  12. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function, F(t) = (1 + e^(-αβ))/(1 + e^((t-α)β)), was chosen to characterize the small intestinal impulse response function. By changing only two parameters, α and β, it is possible to obtain configurations from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the Fermi function referred to. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, that is, the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
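
    A minimal sketch of the iterative convolving idea described above: convolve the gastric emptying (input) curve with the Fermi function and adjust α and β to minimize the squared misfit to the observed intestinal curve. Sampling interval, starting values and bounds are assumptions.

```python
# Hedged sketch of fitting the Fermi impulse response by convolve-and-compare.
import numpy as np
from scipy.optimize import least_squares

def fermi(t, alpha, beta):
    return (1.0 + np.exp(-alpha * beta)) / (1.0 + np.exp((t - alpha) * beta))

def fit_impulse_response(t, gastric_input, intestinal_observed, dt):
    def residual(p):
        alpha, beta = p
        model = np.convolve(gastric_input, fermi(t, alpha, beta))[:len(t)] * dt
        return model - intestinal_observed
    fit = least_squares(residual, x0=[2.0, 2.0], bounds=([0.1, 0.01], [10.0, 20.0]))
    alpha, beta = fit.x
    mean_transit_time = np.trapz(fermi(t, alpha, beta), t)  # area under the retention curve
    return alpha, beta, mean_transit_time
```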

  13. SU-F-T-478: Effect of Deconvolution in Analysis of Mega Voltage Photon Beam Profiles

    Energy Technology Data Exchange (ETDEWEB)

    Muthukumaran, M [Apollo Speciality Hospitals, Chennai, Tamil Nadu (India); Manigandan, D [Fortis Cancer Institute, Mohali, Punjab (India); Murali, V; Chitra, S; Ganapathy, K [Apollo Speciality Hospital, Chennai, Tamil Nadu (India); Vikraman, S [JAYPEE HOSPITAL- RADIATION ONCOLOGY, Noida, UTTAR PRADESH (India)

    2016-06-15

    Purpose: To study and compare the penumbra of 6 MV and 15 MV photon beam profiles after deconvolving measurements from different volume ionization chambers. Methods: A 0.125 cc Semiflex chamber, a Markus chamber and a PTW Farmer chamber were used to measure the in-plane and cross-plane profiles at 5 cm depth for 6 MV and 15 MV photons. The profiles were measured for field sizes from 2 × 2 cm up to 30 × 30 cm. PTW TBA scan software was used for the measurements, and the "deconvolution" functionality in the software was used to remove the volume-averaging effect due to the finite volume of the chamber along the lateral and longitudinal directions for all the ionization chambers. The predicted true profile was compared and the change in penumbra before and after deconvolution was studied. Results: After deconvolution the penumbra decreased by 1 mm for field sizes from 2 × 2 cm up to 20 × 20 cm, along both the lateral and longitudinal directions. For field sizes from 20 × 20 cm up to 30 × 30 cm the difference in penumbra was around 1.2 to 1.8 mm. This was observed for both 6 MV and 15 MV photon beams. The penumbra was always smaller in the deconvolved profiles for all the ionization chambers in the study. The difference in penumbral values between the deconvolved profiles along the lateral and longitudinal directions was on the order of 0.1 to 0.3 mm for all chambers. Deconvolution of the profiles along the longitudinal direction for the Farmer chamber was poor and not comparable with the other deconvolved profiles. Conclusion: The results of the deconvolved profiles for the 0.125 cc and Markus chambers were comparable, and the deconvolution functionality can be used to overcome the volume-averaging effect.

  14. UDECON: deconvolution optimization software for restoring high-resolution records from pass-through paleomagnetic measurements

    Science.gov (United States)

    Xuan, Chuang; Oda, Hirokuni

    2015-11-01

    The rapid accumulation of continuous paleomagnetic and rock magnetic records acquired from pass-through measurements on superconducting rock magnetometers (SRM) has greatly contributed to our understanding of the paleomagnetic field and paleo-environment. Pass-through measurements are inevitably smoothed and altered by the convolution effect of SRM sensor response, and deconvolution is needed to restore high-resolution paleomagnetic and environmental signals. Although various deconvolution algorithms have been developed, the lack of easy-to-use software has hindered the practical application of deconvolution. Here, we present standalone graphical software UDECON as a convenient tool to perform optimized deconvolution for pass-through paleomagnetic measurements using the algorithm recently developed by Oda and Xuan (Geochem Geophys Geosyst 15:3907-3924, 2014). With the preparation of a format file, UDECON can directly read pass-through paleomagnetic measurement files collected at different laboratories. After the SRM sensor response is determined and loaded to the software, optimized deconvolution can be conducted using two different approaches (i.e., "Grid search" and "Simplex method") with adjustable initial values or ranges for smoothness, corrections of sample length, and shifts in measurement position. UDECON provides a suite of tools to view conveniently and check various types of original measurement and deconvolution data. Multiple steps of measurement and/or deconvolution data can be compared simultaneously to check the consistency and to guide further deconvolution optimization. Deconvolved data together with the loaded original measurement and SRM sensor response data can be saved and reloaded for further treatment in UDECON. Users can also export the optimized deconvolution data to a text file for analysis in other software.

  15. Prototype of a transient waveform recording ASIC

    Science.gov (United States)

    Qin, J.; Zhao, L.; Cheng, B.; Chen, H.; Guo, Y.; Liu, S.; An, Q.

    2018-01-01

    The paper presents the design and measurement results of a transient waveform recording ASIC based on the Switched Capacitor Array (SCA) architecture. This 0.18 μm CMOS prototype device contains two channels, and each channel employs an SCA 128 samples deep, a 12-bit Wilkinson ADC and a serial data readout. A series of tests has been conducted and the results indicate that a full 1 V signal voltage range is available, the input analog bandwidth is approximately 450 MHz and the sampling speed is adjustable from 0.076 to 3.2 Gsps (gigasamples per second). For precise waveform timing extraction, careful calibration of the timing intervals between samples is conducted to improve the timing resolution of such chips, and the timing precision of this ASIC is shown to be better than 15 ps RMS.

  16. Digitizing and analysis of neutron generator waveforms

    International Nuclear Information System (INIS)

    Bryant, T.C.

    1977-11-01

    All neutron generator waveforms from units tested at the SLA neutron generator test site are digitized and the digitized data stored in the CDC 6600 tape library for display and analysis using the CDC 6600 computer. The digitizing equipment consists mainly of seven Biomation Model 8100 transient recorders, Digital Equipment Corporation PDP 11/20 computer, RK05 disk, seven-track magnetic tape transport, and appropriate DEC and SLA controllers and interfaces. The PDP 11/20 computer is programmed in BASIC with assembly language drivers. In addition to digitizing waveforms, this equipment is used for other functions such as the automated testing of multiple-operation electronic neutron generators. Although other types of analysis have been done, the largest use of the digitized data has been for various types of graphical displays using the CDC 6600 and either the SD4020 or DX4460 plotters

  17. Programmable Clock Waveform Generation for CCD Readout

    Energy Technology Data Exchange (ETDEWEB)

    Vicente, J. de; Castilla, J.; Martinez, G.; Marin, J.

    2006-07-01

    Charge transfer efficiency in CCDs is closely related to the clock waveform. In this paper, an experimental framework to explore different FPGA-based clock waveform generator designs is described. Two alternative design approaches for controlling the rise/fall edge times and pulse width of the CCD clock signal have been implemented: level-control and time-control. Both approaches provide similar characteristics regarding edge linearity and noise. Nevertheless, dissimilarities have been found with respect to area and the frequency range of application. Thus, while the time-control approach consumes less area, the level-control approach provides a wider range of clock frequencies since it does not suffer from the capacitor discharge effect. (Author) 8 refs.

  18. Induced waveform transitions of dissipative solitons

    Science.gov (United States)

    Kochetov, Bogdan A.; Tuz, Vladimir R.

    2018-01-01

    The effect of an externally applied force upon the dynamics of dissipative solitons is analyzed in the framework of the one-dimensional cubic-quintic complex Ginzburg-Landau equation supplemented by a potential term with an explicit coordinate dependence. The potential accounts for the external force manipulations and consists of three symmetrically arranged potential wells whose depth varies along the longitudinal coordinate. It is found that under the influence of such a potential, a transition between different soliton waveforms coexisting under the same physical conditions can be achieved. A low-dimensional phase-space analysis is applied in order to demonstrate that by only changing the potential profile, transitions between different soliton waveforms can be performed in a controllable way. In particular, it is shown that by means of a selected potential, a stationary dissipative soliton can be transformed into another stationary soliton as well as into periodic, quasi-periodic, and chaotic spatiotemporal dissipative structures.

  19. Anatomic and energy variation of scatter compensation for digital chest radiography with Fourier deconvolution

    International Nuclear Information System (INIS)

    Floyd, C.E.; Beatty, P.T.; Ravin, C.E.

    1988-01-01

    The Fourier deconvolution algorithm for scatter compensation in digital chest radiography has been evaluated in four anatomically different regions at three energies. A shift-invariant scatter distribution shape, optimized for the lung region at 140 kVp, was applied at 90 kVp and 120 kVp in the lung, retrocardiac, subdiaphragmatic, and thoracic spine regions. Scatter estimates from the deconvolution were compared with measured values. While some regional variation is apparent, the use of a shift-invariant scatter distribution shape (optimized for a given energy) produces reasonable scatter compensation in the chest. A different set of deconvolution parameters was required at the different energies.
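
    A minimal sketch of shift-invariant Fourier scatter deconvolution: model the detected image as primary plus primary convolved with a fixed scatter kernel, then recover the primary by a division in the Fourier domain. The exponential kernel shape and scatter fraction below are assumptions, not the parameters optimized in this study.

```python
# Hedged sketch of Fourier-domain scatter deconvolution with a shift-invariant kernel.
import numpy as np

def scatter_kernel(shape, sigma_pixels):
    """Isotropic, normalized scatter spread kernel (illustrative exponential shape)."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    r = np.hypot(y - cy, x - cx)
    k = np.exp(-r / sigma_pixels)
    return k / k.sum()

def deconvolve_scatter(image, scatter_fraction, sigma_pixels):
    """Return estimates of the primary (scatter-compensated) image and the scatter."""
    K = np.fft.fft2(np.fft.ifftshift(scatter_kernel(image.shape, sigma_pixels)))
    denom = 1.0 + scatter_fraction * K   # detected = primary * (delta + SF * kernel)
    primary = np.real(np.fft.ifft2(np.fft.fft2(image) / denom))
    return primary, image - primary
```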

  20. Advanced Waveform Simulation for Seismic Monitoring

    Science.gov (United States)

    2008-09-01

    velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and...ranges out to 10°, including extensive observations of crustal thinning and thickening and various Pnl complexities. Broadband modeling in 1D, 2D...existing models perform in predicting the various regional phases, Rayleigh waves, Love waves, and Pnl waves. Previous events from this Basin-and-Range

  1. Full-waveform inversion: Filling the gaps

    KAUST Repository

    Beydoun, Wafik B.

    2015-09-01

    After receiving an outstanding response to its inaugural workshop in 2013, SEG once again achieved great success with its 2015 SEG Middle East Workshop, “Full-waveform inversion: Filling the gaps,” which took place 30 March–1 April 2015 in Abu Dhabi, UAE. The workshop was organized by SEG, and its partner sponsors were Saudi Aramco (gold sponsor), ExxonMobil, and CGG. Read More: http://library.seg.org/doi/10.1190/tle34091106.1

  2. Integration and interpolation of sampled waveforms

    International Nuclear Information System (INIS)

    Stearns, S.D.

    1978-01-01

    Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
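
    The frequency-domain integration mentioned in this record can be sketched as dividing the spectrum by i2πf, zeroing the DC bin, and transforming back. This is a generic illustration under that assumption; on real seismic records a high-pass or bandpass step (as the record notes for noise reduction) is usually needed as well.

```python
# Hedged sketch: frequency-domain integration of a digitized waveform.
import numpy as np

def integrate_fd(x, dt):
    """Return the running integral of x(t) computed in the frequency domain."""
    n = len(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(n, dt)
    with np.errstate(divide="ignore", invalid="ignore"):
        Y = X / (2j * np.pi * f)
    Y[0] = 0.0                      # discard the undefined DC term
    return np.fft.irfft(Y, n)
```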

  3. Time-dependent phase error correction using digital waveform synthesis

    Science.gov (United States)

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, the amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction look-up table incorporated into a waveform phase generator.
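
    A hedged sketch of the pre-distortion idea: store the measured time-dependent phase error in a lookup table and apply its complement to the complex baseband waveform so that the downstream error cancels. The droop model and table resolution are assumptions, not details from the patent.

```python
# Hedged sketch: complementary phase pre-distortion from a lookup table.
import numpy as np

def build_phase_lut(t, measured_phase_error_rad):
    """Tabulate the correction (negative of the measured error) versus time."""
    return t, -measured_phase_error_rad

def predistort(waveform, t, lut_t, lut_phase):
    correction = np.interp(t, lut_t, lut_phase)   # time-domain lookup
    return waveform * np.exp(1j * correction)     # apply complementary phase
```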

  4. Sparse Frequency Waveform Design for Radar-Embedded Communication

    Directory of Open Access Journals (Sweden)

    Chaoyun Mai

    2016-01-01

    For Tag applications that require covert communication, a method for sparse frequency waveform design based on radar-embedded communication is proposed. First, sparse frequency waveforms are designed based on power spectral density fitting and a quasi-Newton method. Second, the eigenvalue decomposition of the sparse frequency waveform sequence is used to obtain the dominant subspace. Finally, the communication waveforms are designed by projecting orthogonal pseudorandom vectors onto the complementary (vertical) subspace. Compared with the linear frequency modulation waveform, the sparse frequency waveform can further improve the bandwidth occupation of the communication signals, thus achieving a higher communication rate. A certain correlation exists between the mutually orthogonal communication signal samples and the sparse frequency waveform, which guarantees a low SER (signal error rate) and LPI (low probability of intercept). The simulation results verify the effectiveness of this method.
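
    The subspace step can be illustrated as follows: take the dominant eigenvectors of the radar waveform sample covariance and project pseudorandom vectors onto the orthogonal complement of that subspace. This is a hedged sketch; the rank threshold, vector lengths and normalization are illustrative choices, not the paper's design.

```python
# Hedged sketch: embed communication waveforms in the complement of the dominant radar subspace.
import numpy as np

def embedded_comm_waveforms(radar_snapshots, n_comm, dominant_rank, rng=0):
    """radar_snapshots: (n_samples, n_snapshots) matrix of sparse-frequency radar waveforms."""
    rng = np.random.default_rng(rng)
    C = radar_snapshots @ radar_snapshots.conj().T / radar_snapshots.shape[1]
    eigval, eigvec = np.linalg.eigh(C)                 # eigenvalues in ascending order
    dominant = eigvec[:, -dominant_rank:]              # dominant radar subspace
    P_perp = np.eye(C.shape[0]) - dominant @ dominant.conj().T
    v = rng.standard_normal((C.shape[0], n_comm))      # pseudorandom vectors
    comm = P_perp @ v                                  # lies in the complementary subspace
    comm, _ = np.linalg.qr(comm)                       # orthonormalize the comm waveforms
    return comm
```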

  5. Image-domain full waveform inversion

    KAUST Repository

    Zhang, Sanzong

    2013-08-20

    The main difficulty with the data-domain full waveform inversion (FWI) is that it tends to get stuck in the local minima associated with the waveform misfit function. This is because the waveform misfit function is highly nonlinear with respect to changes in the velocity model. To reduce this nonlinearity, we define the image-domain objective function to minimize the difference of the suboffset-domain common image gathers (CIGs) obtained by migrating the observed data and the calculated data. The derivation shows that the gradient of this new objective function is the combination of the gradient of the conventional FWI and the image-domain differential semblance optimization (DSO). Compared to the conventional FWI, the image-domain FWI is immune to cycle skipping problems by smearing the nonzero suboffset images along the wavepath. It also can avoid the edge effects and the gradient artifacts that are inherent in DSO due to the falsely over-penalized focused images. This is achieved by subtracting the focused image associated with the calculated data from the unfocused image associated with the observed data in the image-domain misfit function. The numerical results of the Marmousi model show that image-domain FWI is less sensitive to the initial model than the conventional FWI. © 2013 SEG.

  6. Image-domain full waveform inversion

    KAUST Repository

    Zhang, Sanzong; Schuster, Gerard T.

    2013-01-01

    The main difficulty with the data-domain full waveform inversion (FWI) is that it tends to get stuck in the local minima associated with the waveform misfit function. This is because the waveform misfit function is highly nonlinear with respect to changes in the velocity model. To reduce this nonlinearity, we define the image-domain objective function to minimize the difference of the suboffset-domain common image gathers (CIGs) obtained by migrating the observed data and the calculated data. The derivation shows that the gradient of this new objective function is the combination of the gradient of the conventional FWI and the image-domain differential semblance optimization (DSO). Compared to the conventional FWI, the image-domain FWI is immune to cycle skipping problems by smearing the nonzero suboffset images along the wavepath. It also can avoid the edge effects and the gradient artifacts that are inherent in DSO due to the falsely over-penalized focused images. This is achieved by subtracting the focused image associated with the calculated data from the unfocused image associated with the observed data in the image-domain misfit function. The numerical results of the Marmousi model show that image-domain FWI is less sensitive to the initial model than the conventional FWI. © 2013 SEG.

  7. Blind Deconvolution of Anisoplanatic Images Collected by a Partially Coherent Imaging System

    National Research Council Canada - National Science Library

    MacDonald, Adam

    2004-01-01

    ... have limited emissivity or reflectivity. This research proposes a novel blind deconvolution algorithm that is based on a maximum a posteriori Bayesian estimator constructed upon a physically based statistical model for the intensity...

  8. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics-processing-unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
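
    For context, the two baseline methods the abstract compares against (Wiener filtering and Richardson-Lucy) are readily available in scikit-image; the sketch below shows only those baselines, not the GRBF method itself, and the PSF width and parameter values are illustrative.

```python
# Hedged sketch: Wiener and Richardson-Lucy baselines with scikit-image (not the GRBF method).
import numpy as np
from skimage.restoration import richardson_lucy, wiener

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def baseline_deconvolutions(blurred, sigma=2.0):
    """blurred: 2-D float image scaled to [0, 1]."""
    psf = gaussian_psf(sigma=sigma)
    rl = richardson_lucy(blurred, psf, num_iter=30)
    wi = wiener(blurred, psf, balance=0.1)
    return rl, wi
```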

  9. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with the Solver utility, has been used to perform deconvolution analysis of both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison project. The simple interface of this programme combined with the powerful Solver utility allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters.
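
    An analogous fit can be scripted outside a spreadsheet. The hedged sketch below fits a glow curve as a sum of first-order-kinetics peaks (the single-peak approximation of Kitis et al.) with scipy instead of the Excel Solver; the peak count and starting values are illustrative assumptions.

```python
# Hedged sketch: glow-curve deconvolution into first-order TL peaks via least squares.
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant, eV/K

def first_order_peak(T, Im, E, Tm):
    """First-order-kinetics single-peak approximation (Kitis et al.)."""
    x = (E / (K_B * T)) * (T - Tm) / Tm
    return Im * np.exp(1.0 + x
                       - (T**2 / Tm**2) * np.exp(x) * (1.0 - 2.0 * K_B * T / E)
                       - 2.0 * K_B * Tm / E)

def two_peak_model(T, Im1, E1, Tm1, Im2, E2, Tm2):
    return first_order_peak(T, Im1, E1, Tm1) + first_order_peak(T, Im2, E2, Tm2)

def fit_glow_curve(T, intensity):
    p0 = [intensity.max(), 1.0, T[np.argmax(intensity)],                 # peak 1 guesses
          0.5 * intensity.max(), 1.2, T[np.argmax(intensity)] + 40.0]    # peak 2 guesses
    popt, pcov = curve_fit(two_peak_model, T, intensity, p0=p0, maxfev=20000)
    return popt
```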

  10. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program

    International Nuclear Information System (INIS)

    Afouxenidis, D.; Polymeris, G. S.; Tsirliganis, N. C.; Kitis, G.

    2012-01-01

    This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, with the Solver utility, has been used to perform deconvolution analysis of both experimental and reference glow curves resulting from the Glow Curve Analysis Intercomparison project. The simple interface of this programme combined with the powerful Solver utility allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters. (authors)

  11. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  12. New Lagrange Multipliers for the Blind Adaptive Deconvolution Problem Applicable for the Noisy Case

    Directory of Open Access Journals (Sweden)

    Monika Pinchas

    2016-02-01

    Recently, a new blind adaptive deconvolution algorithm was proposed based on a new closed-form approximated expression for the conditional expectation (the expectation of the source input given the equalized or deconvolutional output), where the output and input probability density functions (pdfs) of the deconvolutional process were approximated with the maximum entropy density approximation technique. The Lagrange multipliers for the output pdf were set to those used for the input pdf. Although this new blind adaptive deconvolution method has been shown to have improved equalization performance compared to the maximum entropy blind adaptive deconvolution algorithm recently proposed by the same author, it is not applicable to the very noisy case. In this paper, we derive new Lagrange multipliers for the output and input pdfs, where the Lagrange multipliers related to the output pdf are a function of the channel noise power. Simulation results indicate that the newly obtained blind adaptive deconvolution algorithm using these new Lagrange multipliers is robust to the signal-to-noise ratio (SNR), unlike the previously proposed method, and is applicable for the whole range of SNRs down to 7 dB. In addition, we also obtain new closed-form approximated expressions for the conditional expectation and the mean square error (MSE).

  13. Hybrid sparse blind deconvolution: an implementation of SOOT algorithm to real data

    Science.gov (United States)

    Pakmanesh, Parvaneh; Goudarzi, Alireza; Kourki, Meisam

    2018-06-01

    Extracting information from seismic data depends on deconvolution as an important processing step; it provides the reflectivity series by signal compression. This compression can be obtained by removing the wavelet effects from the traces. Recently, blind deconvolution has provided reliable performance for sparse signal recovery. In this study, two deconvolution methods have been applied to seismic data; their combination provides a robust spiking deconvolution approach. This hybrid deconvolution is applied using the sparse deconvolution (MM algorithm) and the Smoothed-One-Over-Two (SOOT) algorithm in a chain. The MM algorithm is based on the minimization of a cost function defined by the l1 and l2 norms. After applying the two algorithms to the seismic data, the SOOT algorithm provided well-compressed data with a higher resolution than the MM algorithm. The SOOT algorithm requires initial values when applied to real data, such as the wavelet coefficients and reflectivity series, which can be obtained through the MM algorithm. The computational cost of the hybrid method is high, and it needs to be implemented on post-stack or pre-stack seismic data from regions of complex structure.
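
    For orientation, sparse spiking deconvolution of this kind can be sketched with a generic l1-regularized solver: recover a sparse reflectivity series from a trace and a known wavelet by minimizing 0.5*||w*r - trace||_2^2 + lam*||r||_1 with ISTA. This is not the MM or SOOT algorithm of the paper; the step size and lam are illustrative.

```python
# Hedged sketch: generic ISTA for l1-regularized spiking deconvolution.
import numpy as np

def ista_spiking_deconvolution(trace, wavelet, lam=0.05, n_iter=500):
    n = len(trace)
    # Convolution matrix of the wavelet (output truncated to trace length)
    W = np.array([np.convolve(np.eye(n)[k], wavelet)[:n] for k in range(n)]).T
    step = 1.0 / np.linalg.norm(W, 2) ** 2          # 1 / Lipschitz constant of the gradient
    r = np.zeros(n)
    for _ in range(n_iter):
        grad = W.T @ (W @ r - trace)
        z = r - step * grad
        r = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return r
```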

  14. PERT: A Method for Expression Deconvolution of Human Blood Samples from Varied Microenvironmental and Developmental Conditions

    Science.gov (United States)

    Csaszar, Elizabeth; Yu, Mei; Morris, Quaid; Zandstra, Peter W.

    2012-01-01

    The cellular composition of heterogeneous samples can be predicted using an expression deconvolution algorithm to decompose their gene expression profiles based on pre-defined, reference gene expression profiles of the constituent populations in these samples. However, the expression profiles of the actual constituent populations are often perturbed from those of the reference profiles due to gene expression changes in cells associated with microenvironmental or developmental effects. Existing deconvolution algorithms do not account for these changes and give incorrect results when benchmarked against those measured by well-established flow cytometry, even after batch correction was applied. We introduce PERT, a new probabilistic expression deconvolution method that detects and accounts for a shared, multiplicative perturbation in the reference profiles when performing expression deconvolution. We applied PERT and three other state-of-the-art expression deconvolution methods to predict cell frequencies within heterogeneous human blood samples that were collected under several conditions (uncultured mono-nucleated and lineage-depleted cells, and culture-derived lineage-depleted cells). Only PERT's predicted proportions of the constituent populations matched those assigned by flow cytometry. Genes associated with cell cycle processes were highly enriched among those with the largest predicted expression changes between the cultured and uncultured conditions. We anticipate that PERT will be widely applicable to expression deconvolution strategies that use profiles from reference populations that vary from the corresponding constituent populations in cellular state but not cellular phenotypic identity. PMID:23284283
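
    For contrast with PERT, plain reference-based expression deconvolution (the kind of baseline PERT improves upon) can be sketched as non-negative least squares against fixed reference profiles; this is not PERT itself and ignores the perturbation term the paper introduces.

```python
# Hedged sketch: baseline reference-based deconvolution of cell fractions with NNLS.
import numpy as np
from scipy.optimize import nnls

def deconvolve_fractions(bulk_profile, reference_profiles):
    """
    bulk_profile:       (n_genes,) expression of the heterogeneous sample
    reference_profiles: (n_genes, n_populations) reference expression matrix
    """
    coeffs, _ = nnls(reference_profiles, bulk_profile)
    return coeffs / coeffs.sum()    # normalize to cell-fraction estimates
```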

  15. Best waveform score for diagnosing keratoconus

    Directory of Open Access Journals (Sweden)

    Allan Luz

    2013-12-01

    PURPOSE: To test whether corneal hysteresis (CH) and corneal resistance factor (CRF) can discriminate between keratoconus and normal eyes and to evaluate whether the averages of two consecutive measurements perform differently from the one with the best waveform score (WS) for diagnosing keratoconus. METHODS: ORA measurements for one eye per individual were selected randomly from 53 normal patients and from 27 patients with keratoconus. Two groups were considered: the average (CH-Avg, CRF-Avg) and the best waveform score (CH-WS, CRF-WS) groups. The Mann-Whitney U-test was used to evaluate whether the variables had similar distributions in the normal and keratoconus groups. Receiver operating characteristic (ROC) curves were calculated for each parameter to assess the efficacy for diagnosing keratoconus, and the curves obtained for each variable were compared pairwise using the Hanley-McNeil test. RESULTS: CH-Avg, CRF-Avg, CH-WS and CRF-WS differed significantly between the normal and keratoconus groups (p<0.001). The areas under the ROC curve (AUROC) for CH-Avg, CRF-Avg, CH-WS, and CRF-WS were 0.824, 0.873, 0.891, and 0.931, respectively. CH-WS and CRF-WS had significantly better AUROCs than CH-Avg and CRF-Avg, respectively (p=0.001 and 0.002). CONCLUSION: The analysis of the biomechanical properties of the cornea through the ORA method has proved to be an important aid in the diagnosis of keratoconus, regardless of the method used. The best waveform score (WS) measurements were superior to the average of consecutive ORA measurements for diagnosing keratoconus.

  16. Pixel-by-pixel mean transit time without deconvolution.

    Science.gov (United States)

    Dobbeleir, Andre A; Piepsz, Amy; Ham, Hamphrey R

    2008-04-01

    Mean transit time (MTT) within a kidney is given by the integral of the renal activity on a well-corrected renogram between time zero and time t divided by the integral of the plasma activity between zero and t, provided that t is close to infinity. However, as the data acquisition of a renogram is finite, the MTT calculated using this approach might result in an underestimation of the true MTT. To evaluate the degree of this underestimation we conducted a simulation study. One thousand renograms were created by convolving various plasma curves obtained from patients with different renal clearance levels with simulated retention curves having different shapes and mean transit times. For a 20 min renogram, the calculated MTT started to underestimate the MTT when the MTT was higher than 6 min. The longer the MTT, the greater was the underestimation. Up to an MTT value of 6 min, the error on the MTT estimation is negligible. As normal cortical transit is less than 2 min, this approach is used in patients to calculate the pixel-by-pixel cortical mean transit time and to create an MTT parametric image without deconvolution.
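
    The integral-ratio estimate described above reduces to one division per pixel. A minimal sketch, assuming background- and attenuation-corrected frames and a measured plasma curve on the same time base:

```python
# Hedged sketch: pixel-by-pixel MTT as the ratio of time integrals (no deconvolution).
import numpy as np

def mtt_map(renogram_frames, plasma_curve, t):
    """
    renogram_frames: (n_frames, ny, nx) corrected renal activity per pixel
    plasma_curve:    (n_frames,) plasma activity
    t:               (n_frames,) acquisition times
    Returns a (ny, nx) map of estimated MTT (underestimated when the true MTT is long).
    """
    kidney_integral = np.trapz(renogram_frames, t, axis=0)
    plasma_integral = np.trapz(plasma_curve, t)
    return kidney_integral / plasma_integral
```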

  17. Toward fully automated genotyping: Genotyping microsatellite markers by deconvolution

    Energy Technology Data Exchange (ETDEWEB)

    Perlin, M.W.; Lancia, G.; See-Kiong, Ng [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1995-11-01

    Dense genetic linkage maps have been constructed for the human and mouse genomes, with average densities of 2.9 cM and 0.35 cM, respectively. These genetic maps are crucial for mapping both Mendelian and complex traits and are useful in clinical genetic diagnosis. Current maps are largely comprised of abundant, easily assayed, and highly polymorphic PCR-based microsatellite markers, primarily dinucleotide (CA)n repeats. One key limitation of these length polymorphisms is the PCR stutter (or slippage) artifact that introduces additional stutter bands. With two (or more) closely spaced alleles, the stutter bands overlap, and it is difficult to accurately determine the correct alleles; this stutter phenomenon has all but precluded full automation, since a human must visually inspect the allele data. We describe here novel deconvolution methods for accurate genotyping that mathematically remove PCR stutter artifact from microsatellite markers. These methods overcome the manual interpretation bottleneck and thereby enable full automation of genetic map construction and use. New functionalities, including the pooling of DNAs and the pooling of markers, are described that may greatly reduce the associated experimentation requirements. 32 refs., 5 figs., 3 tabs.

  18. Blind deconvolution of seismograms regularized via minimum support

    International Nuclear Information System (INIS)

    Royer, A A; Bostock, M G; Haber, E

    2012-01-01

    The separation of earthquake source signature and propagation effects (the Earth’s ‘Green’s function’) that encode a seismogram is a challenging problem in seismology. The task of separating these two effects is called blind deconvolution. By considering seismograms of multiple earthquakes from similar locations recorded at a given station, which therefore share the same Green’s function, we may write a linear relation in the time domain u_i(t)*s_j(t) − u_j(t)*s_i(t) = 0, where u_i(t) is the seismogram for the ith source and s_j(t) is the jth unknown source. The symbol * represents the convolution operator. From two or more seismograms, we obtain a homogeneous linear system where the unknowns are the sources. This system is subject to a scaling constraint to deliver a non-trivial solution. Since source durations are not known a priori and must be determined, we augment our system by introducing the source durations as unknowns and we solve the combined system (sources and source durations) using separation of variables. Our solution is derived using direct linear inversion to recover the sources and Newton’s method to recover source durations. This method is tested using two sets of synthetic seismograms created by convolution of (i) random Gaussian source-time functions and (ii) band-limited sources with a simplified Green’s function and signal-to-noise levels up to 10%, with encouraging results. (paper)
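
    The homogeneous system described above can be illustrated for a pair of seismograms: the relation u_1*s_2 − u_2*s_1 = 0 is linear in the stacked source vector, so the sources are recovered (up to scale) from the null space. The sketch below assumes fixed, equal source durations for simplicity, unlike the paper, which treats durations as unknowns.

```python
# Hedged sketch: two-seismogram blind deconvolution via the null space of the linear system.
import numpy as np

def convolution_matrix(u, n_source, n_out):
    """Matrix C such that C @ s equals (u * s) truncated to n_out samples."""
    C = np.zeros((n_out, n_source))
    for k in range(n_source):
        seg = u[:n_out - k]
        C[k:k + len(seg), k] = seg
    return C

def blind_deconvolve_pair(u1, u2, n_source):
    n_out = len(u1) + n_source - 1
    C1 = convolution_matrix(np.r_[u1, np.zeros(n_source - 1)], n_source, n_out)
    C2 = convolution_matrix(np.r_[u2, np.zeros(n_source - 1)], n_source, n_out)
    A = np.hstack([-C2, C1])        # A @ [s1; s2] = u1*s2 - u2*s1 = 0
    _, _, Vt = np.linalg.svd(A)
    s = Vt[-1]                      # null-space (smallest singular) vector, up to scale
    return s[:n_source], s[n_source:]
```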

  19. Early Cambrian wave-formed shoreline deposits

    DEFF Research Database (Denmark)

    Clemmensen, Lars B; Glad, Aslaug Clemmensen; Pedersen, Gunver Krarup

    2017-01-01

    -preserved subaqueous dunes and wave ripples indicates deposition in a wave-dominated upper shoreface (littoral zone) environment, and the presence of interference ripples indicates that the littoral zone environment experienced water level fluctuations due to tides and/or changing meteorological conditions. Discoidal....... During this period, wave-formed shoreline sediments (the Vik Member, Hardeberga Formation) were deposited on Bornholm and are presently exposed at Strøby quarry. The sediments consist of fine- and medium-grained quartz-cemented arenites in association with a few silt-rich mudstones. The presence of well...

  20. Waveform design for wireless power transfer

    OpenAIRE

    Clerckx, B; Bayguzina, E

    2016-01-01

    Far-field Wireless Power Transfer (WPT) has attracted significant attention in recent years. Despite the rapid progress, the emphasis of the research community in the last decade has remained largely concentrated on improving the design of energy harvester (so-called rectenna) and has left aside the effect of transmitter design. In this paper, we study the design of transmit waveform so as to enhance the DC power at the output of the rectenna. We derive a tractable model of the non-linearity ...

  1. Performance Prediction of Constrained Waveform Design for Adaptive Radar

    Science.gov (United States)

    2016-11-01

    the famous Woodward quote, having a ubiquitous feeling for all radar waveform design (and performance prediction) researchers, that is found at the end...discuss research that develops performance prediction models to quantify the impact on SINR when an amplitude constraint is placed on a radar waveform...optimize the radar performance for the particular scenario and tasks. There have also been several survey papers on various topics in waveform design for

  2. On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology

    Science.gov (United States)

    Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2016-08-01

    We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of signals with finite length of the numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.

  3. Advances in waveform-agile sensing for tracking

    CERN Document Server

    Sira, Sandeep Prasad

    2009-01-01

    Recent advances in sensor technology and information processing afford a new flexibility in the design of waveforms for agile sensing. Sensors are now developed with the ability to dynamically choose their transmit or receive waveforms in order to optimize an objective cost function. This has exposed a new paradigm of significant performance improvements in active sensing: dynamic waveform adaptation to environment conditions, target structures, or information features. The manuscript provides a review of recent advances in waveform-agile sensing for target tracking applications. A dynamic wav

  4. Wavelet-Based Signal Processing of Electromagnetic Pulse Generated Waveforms

    National Research Council Canada - National Science Library

    Ardolino, Richard S

    2007-01-01

    This thesis investigated and compared alternative signal processing techniques that used wavelet-based methods instead of traditional frequency domain methods for processing measured electromagnetic pulse (EMP) waveforms...

  5. Elastic reflection waveform inversion with variable density

    KAUST Repository

    Li, Yuanyuan

    2017-08-17

    Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle-skipping problem than the latter. Reflection waveform inversion (RWI) provides a method to build a good background model, which can serve as an initial model for elastic FWI. Therefore, we introduce the concept of RWI for elastic media, and propose elastic RWI with variable density. We apply Born modeling to generate the synthetic reflection data by using optimized perturbations of P- and S-wave velocities and density. The inversion for the perturbations in P- and S-wave velocities and density is similar to elastic least-squares reverse time migration (LSRTM). An incorrect initial model will lead to misfits at the far offsets of reflections, which can thus be utilized to update the background velocity. We optimize the perturbation and background models in a nested approach. Numerical tests on the Marmousi model demonstrate that our method is able to build reasonably good background models for elastic FWI in the absence of low frequencies, and that it can deal with the variable density, which is needed in real cases.

  6. A sheath model for arbitrary radiofrequency waveforms

    Science.gov (United States)

    Turner, M. M.; Chabert, Pascal

    2012-10-01

    The sheath is often the most important region of a rf plasma, because discharge impedance, power absorption and ion acceleration are critically affected by the behaviour of the sheath. Consequently, models of the sheath are central to any understanding of the physics of rf plasmas. Lieberman has supplied an analytical model for a radio-frequency sheath driven by a single frequency, but in recent years interest has been increasing in radio-frequency discharges excited by increasingly complex waveforms. There has been limited success in generalizing the Lieberman model in this direction, because of mathematical complexities. So there is essentially no sheath model available to describe many modern experiments. In this paper we present a new analytical sheath model, based on a simpler mathematical framework than that of Lieberman. For the single frequency case, this model yields scaling laws that are identical in form to those of Lieberman, differing only by numerical coefficients close to one. However, the new model may be straightforwardly solved for arbitrary current waveforms, and may be used to derive scaling laws for such complex waveforms. In this paper, we will describe the model and present some illustrative examples.

  7. Use of new spectral analysis methods in gamma spectra deconvolution

    International Nuclear Information System (INIS)

    Pinault, J.L.

    1991-01-01

    A general deconvolution method applicable to X and gamma ray spectrometry is proposed. Using new spectral analysis methods, it is applied to an actual case: the accurate on-line analysis of three elements (Ca, Si, Fe) in a cement plant using neutron capture gamma rays. Neutrons are provided by a low activity (5 μg) 252Cf source; the detector is a BGO 3 in. × 8 in. scintillator. The principle of the methods rests on the Fourier transform of the spectrum. The search for peaks and determination of peak areas are worked out in the Fourier representation, which enables separation of background and peaks and very efficiently discriminates peaks, or elements represented by several peaks. First the spectrum is transformed so that in the new representation the full width at half maximum (FWHM) is independent of energy. Thus, the spectrum is arranged symmetrically and transformed into the Fourier representation. The latter is multiplied by a function in order to transform the original Gaussian peaks into Lorentzian peaks. An autoregressive filter is calculated, leading to a characteristic polynomial whose complex roots represent both the location and the width of each peak, provided that the absolute value is lower than unity. The amplitude of each component (the area of each peak or the sum of areas of peaks characterizing an element) is fitted by the weighted least squares method, taking into account that errors in spectra are independent and follow a Poisson law. Very accurate results are obtained, which would be hard to achieve by other methods. The DECO FORTRAN code has been developed for compatible PC microcomputers. Some features of the code are given. (orig.)

  8. Breast image feature learning with adaptive deconvolutional networks

    Science.gov (United States)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristically extracted lesion features. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolution sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets on two different modalities (739 full field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local structure preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect image relationships learned. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  9. Deconvolution of the tree ring based delta13C record

    International Nuclear Information System (INIS)

    Peng, T.; Broecker, W.S.; Freyer, H.D.; Trumbore, S.

    1983-01-01

    We assumed the tree-ring based 13C/12C record constructed by Freyer and Belacy (1983) to be representative of the fossil fuel and forest-soil induced 13C/12C change for atmospheric CO2. Through the use of a modification of the Oeschger et al. ocean model, we have computed the contribution of the combustion of coal, oil, and natural gas to this observed 13C/12C change. A large residual remains when the tree-ring-based record is corrected for the contribution of fossil fuel CO2. A deconvolution was performed on this residual to determine the time history and magnitude of the forest-soil reservoir changes over the past 150 years. Several important conclusions were reached. (1) The magnitude of the integrated CO2 input from these sources was about 1.6 times that from fossil fuels. (2) The forest-soil contribution reached a broad maximum centered at about 1900. (3) Over the 2 decade period covered by the Mauna Loa atmospheric CO2 content record, the input from forests and soils was about 30% that from fossil fuels. (4) The 13C/12C trend over the last 20 years was dominated by the input of fossil fuel CO2. (5) The forest-soil release did not contribute significantly to the secular increase in atmospheric CO2 observed over the last 20 years. (6) The pre-1850 atmospheric pCO2 values must have been in the range 245 to 270 × 10^-6 atmospheres

  10. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
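
    The Monte-Carlo error propagation idea can be sketched generically: perturb the observed channel counts with Poisson noise, repeat the (truncated SVD) unfolding for each realization, and take the spread of the results as energy-dependent error bars. The truncation rank and the post-unfolding non-negativity clip below are illustrative choices, not the paper's exact procedure.

```python
# Hedged sketch: Monte-Carlo error bars for SVD-based spectral deconvolution.
import numpy as np

def mc_unfold(response, counts, rank, n_trials=1000, rng=0):
    """response: (m_channels, n_energies) matrix; counts: (m_channels,) observed data."""
    rng = np.random.default_rng(rng)
    U, s, Vt = np.linalg.svd(response, full_matrices=False)
    pinv = Vt[:rank].T @ np.diag(1.0 / s[:rank]) @ U[:, :rank].T   # truncated SVD inverse
    trials = np.empty((n_trials, response.shape[1]))
    for i in range(n_trials):
        noisy = rng.poisson(np.clip(counts, 0, None))              # Poisson-perturbed data
        trials[i] = np.clip(pinv @ noisy, 0.0, None)               # non-negativity after unfolding
    return trials.mean(axis=0), trials.std(axis=0)                 # spectrum and error bars
```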

  11. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Science.gov (United States)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  12. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    International Nuclear Information System (INIS)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-01-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  13. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    Energy Technology Data Exchange (ETDEWEB)

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: John.Votaw@Emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  14. Method and apparatus for resonant frequency waveform modulation

    Science.gov (United States)

    Taubman, Matthew S [Richland, WA

    2011-06-07

    A resonant modulator device and process are described that provide enhanced resonant frequency waveforms to electrical devices including, e.g., laser devices. Faster, larger, and more complex modulation waveforms are obtained than can be obtained by use of conventional current controllers alone.

  15. Frequency-domain waveform inversion using the unwrapped phase

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2011-01-01

    Phase wrapping in the frequency domain (or cycle skipping in the time domain) is the major cause of the local minima problem in waveform inversion. The unwrapped phase has the potential to provide a robust and reliable waveform inversion
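
    A small illustration of the distinction, assuming an arbitrary delayed wavelet: the wrapped frequency-domain phase jumps by 2π, whereas the unwrapped phase varies smoothly with frequency (numpy's unwrap is used here as a stand-in for a robust unwrapping scheme).

```python
import numpy as np

dt, n = 0.002, 1024
t = np.arange(n) * dt
f0, t0 = 10.0, 0.6                         # dominant frequency and delay (assumed)
wavelet = (1 - 2 * (np.pi * f0 * (t - t0)) ** 2) * np.exp(-(np.pi * f0 * (t - t0)) ** 2)

spec = np.fft.rfft(wavelet)
freq = np.fft.rfftfreq(n, dt)

wrapped = np.angle(spec)                   # jumps by 2*pi; a source of local minima
unwrapped = np.unwrap(wrapped)             # smooth trend, roughly -2*pi*f*t0 plus a constant

print(wrapped[:5])
print(unwrapped[:5])
print(-2 * np.pi * freq[:5] * t0)          # expected linear-phase trend for comparison
```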

  16. An Overview of Radar Waveform Optimization for Target Detection

    Directory of Open Access Journals (Sweden)

    Wang Lulu

    2016-10-01

    An optimal waveform design method that fully employs knowledge of the target and the environment can further improve target detection performance and is therefore of vital importance. In this paper, methods of radar waveform optimization for target detection are reviewed and summarized, providing a basis for further research.

  17. A pulse generator of arbitrary shaped waveform

    International Nuclear Information System (INIS)

    Jiang Jiayou; Chen Zhihao

    2011-01-01

    The three bump magnets in the booster extraction system of SSRF are driven by a signal generator with an external trigger. The signal generator must have three independent and controllable outputs; both the amplitude and the on/off switching should be controllable, and the current state information should be readable. In this paper, we describe a signal generator based on FPGA and DAC boards. It makes use of the FPGA's flexible programmability and rich reconfigurable IO resources. The system has a 16-bit DAC with four outputs, and a Matlab GUI based on the RS232 protocol is used for control. The design was simulated in Modelsim and tested on board. The results indicate that the system is well designed and all requirements are met: the arbitrary waveform is writable, and the pulse width and period can be controlled. (authors)

  18. Facies Constrained Elastic Full Waveform Inversion

    KAUST Repository

    Zhang, Z.

    2017-05-26

    Current efforts to utilize full waveform inversion (FWI) as a tool beyond acoustic imaging applications, for example for reservoir analysis, face inherent limitations on resolution and also on the potential trade-off between elastic model parameters. Adding rock physics constraints does help to mitigate these issues. However, current approaches to add such constraints are based on averaged type rock physics regularization terms. Since the true earth model consists of different facies, averaging over those facies naturally leads to smoothed models. To overcome this, we propose a novel way to utilize facies based constraints in elastic FWI. A so-called confidence map is calculated and updated at each iteration of the inversion using both the inverted models and the prior information. The numerical example shows that the proposed method can reduce the cross-talks and also can improve the resolution of inverted elastic properties.

  19. Facies Constrained Elastic Full Waveform Inversion

    KAUST Repository

    Zhang, Z.; Zabihi Naeini, E.; Alkhalifah, Tariq Ali

    2017-01-01

    Current efforts to utilize full waveform inversion (FWI) as a tool beyond acoustic imaging applications, for example for reservoir analysis, face inherent limitations on resolution and also on the potential trade-off between elastic model parameters. Adding rock physics constraints does help to mitigate these issues. However, current approaches to add such constraints are based on averaged type rock physics regularization terms. Since the true earth model consists of different facies, averaging over those facies naturally leads to smoothed models. To overcome this, we propose a novel way to utilize facies based constraints in elastic FWI. A so-called confidence map is calculated and updated at each iteration of the inversion using both the inverted models and the prior information. The numerical example shows that the proposed method can reduce the cross-talks and also can improve the resolution of inverted elastic properties.

  20. Rectangular waveform linear transformer driver module design

    International Nuclear Information System (INIS)

    Zhao Yue; Xie Weiping; Zhou Liangji; Chen Lin

    2014-01-01

    The Linear Transformer Driver (LTD) is a novel pulsed power technology whose main merits are a parallel LC discharge array and an Inductive Voltage Adder (IVA). The parallel LC discharge array lowers the equivalent inductance of the whole circuit, while the IVA connects the modules in series to create a high electric field gradient and confines the high voltage within a small space. The lower inductance favors a fast output waveform, and the IVA confines the high voltage to the secondary cavity. Several LTD-based pulsed power systems have recently been developed. The usual LTD architecture provides damped-sine-shaped output pulses that may not be suitable for flash radiography, high-power microwave production, z-pinch drivers, and certain other applications. A more suitable driver output pulse would have a flat or inclined top (slightly rising or falling). In this paper, we present the design of an LTD cavity that generates this type of output pulse by including, within its circular array, a number of harmonic bricks in addition to the standard bricks, following Fourier series theory. The circuit equations of the parallel LC discharge array are derived from Kirchhoff's laws, the sum of the harmonics is obtained as an analytic result, and the soundness of the design is verified by simulation. The variation of the gas-spark-discharge dynamic resistance with harmonic order, and the effect of switch jitter, are analyzed. The results are as follows: a higher harmonic order approaches the ideal rectangular waveform more closely, but leads to greater system complexity. The capacitance decreases as the harmonic order increases, and the gas-spark-discharge dynamic resistance rises as the capacitance decreases. When the switch jitter is high, the rise time lengthens, the flat top decays or even vanishes, and the shot-to-shot reproducibility degrades. (authors)
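
    The Fourier-series intuition behind the harmonic bricks can be illustrated with a toy partial sum: adding properly weighted odd harmonics to the fundamental flattens the top of the pulse at the cost of more components. The harmonic counts and amplitudes below follow the ideal square-wave series and are not the cavity design values.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)            # one period, arbitrary units
f = 1.0

def flat_top(t, n_harmonics):
    """Partial Fourier sum of a square wave: 4/pi * sum sin((2k+1)*w*t)/(2k+1)."""
    y = np.zeros_like(t)
    for k in range(n_harmonics):
        m = 2 * k + 1
        y += (4.0 / np.pi) * np.sin(2 * np.pi * m * f * t) / m
    return y

for n in (1, 2, 4):                        # more harmonic "bricks" -> flatter top
    y = flat_top(t, n)
    ripple = float(np.ptp(y[(t > 0.1) & (t < 0.4)]))
    print(n, round(ripple, 3))
```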

  1. Synthetic tsunami waveform catalogs with kinematic constraints

    Science.gov (United States)

    Baptista, Maria Ana; Miranda, Jorge Miguel; Matias, Luis; Omira, Rachid

    2017-07-01

    In this study we present a comprehensive methodology to produce a synthetic tsunami waveform catalogue in the northeast Atlantic, east of the Azores islands. The method uses a synthetic earthquake catalogue compatible with plate kinematic constraints of the area. We use it to assess the tsunami hazard from the transcurrent boundary located between Iberia and the Azores, whose western part is known as the Gloria Fault. This study focuses only on earthquake-generated tsunamis. Moreover, we assume that the time and space distribution of the seismic events is known. To do this, we compute a synthetic earthquake catalogue including all fault parameters needed to characterize the seafloor deformation covering the time span of 20 000 years, which we consider long enough to ensure the representability of earthquake generation on this segment of the plate boundary. The computed time and space rupture distributions are made compatible with global kinematic plate models. We use the tsunami empirical Green's functions to efficiently compute the synthetic tsunami waveforms for the dataset of coastal locations, thus providing the basis for tsunami impact characterization. We present the results in the form of offshore wave heights for all coastal points in the dataset. Our results focus on the northeast Atlantic basin, showing that earthquake-induced tsunamis in the transcurrent segment of the Azores-Gibraltar plate boundary pose a minor threat to coastal areas north of Portugal and beyond the Strait of Gibraltar. However, in Morocco, the Azores, and the Madeira islands, we can expect wave heights between 0.6 and 0.8 m, leading to precautionary evacuation of coastal areas. The advantages of the method are its easy application to other regions and the low computation effort needed.
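
    A heavily simplified sketch of the empirical Green's function superposition: a synthetic waveform at a coastal point is built as a slip-weighted sum of precomputed unit-source responses. The unit responses and slip values below are invented placeholders, not results from this study.

```python
import numpy as np

t = np.linspace(0.0, 7200.0, 2048)                    # 2 h record, in seconds

def unit_response(t, arrival, period, amp):
    """Placeholder unit-source waveform: damped sine starting at the arrival time."""
    tau = np.clip(t - arrival, 0.0, None)
    return amp * np.sin(2 * np.pi * tau / period) * np.exp(-tau / 3000.0)

# Three hypothetical unit sources along the rupture and their slips (m).
units = [unit_response(t, 1800.0, 900.0, 0.10),
         unit_response(t, 2100.0, 900.0, 0.10),
         unit_response(t, 2500.0, 1100.0, 0.08)]
slips = [1.5, 2.0, 0.8]

waveform = sum(s * u for s, u in zip(slips, units))   # linear superposition of Green's functions
print(float(waveform.max()))                          # crude offshore wave-height proxy
```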

  2. Design of a 9-loop quasi-exponential waveform generator.

    Science.gov (United States)

    Banerjee, Partha; Shukla, Rohit; Shyam, Anurag

    2015-12-01

    In an under-damped L-C-R series circuit, the current follows a damped sinusoidal waveform. However, if a number of sinusoidal waveforms of decreasing period, each generated in an L-C-R circuit, are combined within the first quarter cycle, a quasi-exponential output current waveform can be achieved. In an L-C-R series circuit, a quasi-exponential current waveform shows a rising current derivative and thereby finds many applications in pulsed power. Here, we describe the design and experimental details of a 9-loop quasi-exponential waveform generator, including the design details of the magnetic switches. In the experiment, an output current of 26 kA has been achieved, and it is shown how well the experimentally obtained output current profile matches the numerically computed output.

  3. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    export terminals, 275 flow stations, 10 gas plants, 3 refineries and a massive natural ..... clear of the Niger Delta', adding that, 'The Chinese government by investing in stolen .... Development Agency (SMEDAN); to help boost the growth of both countries' ..... “The Rule of Oil: Petro-Politics and the Anatomy of an Insurgency”.

  4. Geothermal pump down-hole energy regeneration system

    Science.gov (United States)

    Matthews, Hugh B.

    1982-01-01

    Geothermal deep well energy extraction apparatus is provided of the general kind in which solute-bearing hot water is pumped to the earth's surface from a subterranean location by utilizing thermal energy extracted from the hot water for operating a turbine motor for driving an electrical power generator at the earth's surface, the solute bearing water being returned into the earth by a reinjection well. Efficiency of operation of the total system is increased by an arrangement of coaxial conduits for greatly reducing the flow of heat from the rising brine into the rising exhaust of the down-well turbine motor.

  5. Active cooling of a down hole well tractor

    DEFF Research Database (Denmark)

    Soprani, Stefano; Nesgaard, Carsten

    Wireline interventions in high temperature wells represent one of today’s biggest challenges for the oil and gas industry. The high wellbore temperatures, which can reach 200 °C, drastically reduce the life of the electronic components contained in the wireline downhole tools, which can cause...... the intervention to fail. Active cooling systems represent a possible solution to the electronics overheating, as they could maintain the sensitive electronics at a tolerable temperature, while operating in hotter environments. This work presents the design, construction and testing of an actively cooled downhole......-width-modulation circuit was developed to adapt the downhole power source to a suitable voltage for the thermoelectric cooler. The implementation of the active cooling system was supported by the study of the thermal interaction between the downhole tool and the well environment, which was relevant to define the heat...

  6. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    specific purposes (ESP) as an approach to language teaching, and the differences between ... Occupational Purposes (EOP), English for Vocational Purposes (EVP), Vocational ... theories of functionalism and communicative competence. ... primary, secondary and tertiary institutions with a syllabus handed down by the.

  7. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    way forward for the emerging relations between universities and their communities. ... Governing council: Universities in the public sector like other higher ... provide grants-in-aid, study fellowship and sponsorship, etc (Akintayo, 2002). ... First, we could have university town, in which case the siteing of the university actually ...

  8. Hydraulic Workover Unit (HWU) Application in Down-hole Milling ...

    African Journals Online (AJOL)

    ENGINEERS

    Key words: Trade Liberalisation, Economic Integration, Common External Tariff,. Openness to trade ... integration and development in the West African region. It is also to .... Also the region lacks a regional rail network, and the national rail ...

  9. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    were measured using appropriate field measurement techniques. ... nature, providing nutrients for plant to grow as well as habitat for millions of micro- .... respectively (i.e. the angel of the tree was measured using the Abney level while the.

  10. Hydraulic Workover Unit (HWU) Application in Down-hole Milling ...

    African Journals Online (AJOL)

    ENGINEERS

    drill pipe and pumping at a steady rate of 400gpm (10bls/min) we were able to ... innovation was now broadened the scope of work and application available to the ... welding of bell nipple. Contact with hot surfaces/heated metal or fire. Noise.

  11. Hydraulic Workover Unit (HWU) Application in Down-hole Milling ...

    African Journals Online (AJOL)

    ENGINEERS

    actions and activities of other arms of government and administration are in accordance .... independence, irrespective of the fusion of power in the country's ... Montesquieu's writings were not only an inspiration for critical and rational thought,.

  12. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    Micro-financing as Poverty Reduction Strategy: a Case Study of Oredo and Egor ... Inadequate access to market where the poor can sell goods and services. ▫ Low endowment of ... programmes would not be felt by the target beneficiaries.

  13. Continuation of down-hole geophysical testing for rock sockets.

    Science.gov (United States)

    2013-11-01

    Site characterization for the design of deep foundations is crucial for ensuring a reliable and economic substructure design, as unanticipated site conditions can cause significant problems and disputes during construction. Traditional invasive explo...

  14. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    transmit data instantaneously, prevent diseases through life style and behaviour, and .... as in drug abuse, poor diet and smoking and the change in health-related ... Health-related cognitions: investigating the processes which can explain, ... Wilson (1997) sees communication as having tremendous impact on society which.

  15. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    a principal tool for generating revenue for government for the prosecution of ..... The self-employed people do not keep proper accounting records. Most of them do not prepare the annual financial statements of their businesses, not to talk. of.

  16. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    that tertiary institutions offering Business Education programmes need to be ... facilities can make teaching and learning more efficient and productive by ... Towards Utilization of E-learning in Preparing Business Education Students for the ...

  17. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    2004:9). The second language (L2) status of English in Nigeria makes it imperative ... teach their students to be effective 'comprehenders' (p. 1). The importance of teachers in .... There exist many different methods of testing students' reading.

  18. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    identified threats to internal security in Nigeria to include: religious/political ... Frequent and persistent ethnic conflicts and religious clashes between the two dominant ..... advanced training, intelligence sharing, advanced technology, logistics, ...

  19. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    1962). Bulk density was determined by core method (Blake, 1965). The soil pH was measured using a pH meter in 1:2.5 soil to water ratio. Percentage organic matter was estimated by procedure described by walkley and Black (1934). The exchangeable bases (Ca, Mg, K and Na) were determined by procedure outlined by ...

  20. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    A questionnaire made up of twenty items was developed and responded to ... expectation unless adequate fore-thought and planning are put in place to ... sophisticated twenty-first century parent to question the 'magister dixit' image of the.

  1. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    ... used to collect data. The questionnaire was titled “Attitudes ... family expectations, activities and obligations required from members of a family. Crouter and ... These family responsibilities are crucial for adolescent-parent relationships, family.

  2. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    Students' retention amongst the competitive private Universities in Nigeria leads to ... service quality. Another study in the US conducted by Carey, Cambiano and De Vore (2002) compared campus satisfaction levels between students and faculty as ... The study adopted a positivist approach whereby 1,200 questionnaires.

  3. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    organizations that involved in CCB as check lists, and might serve as guides .... out from their empirical research study that the key issues that make partnership work ... Safety Initiative-AARSI partnership organization in order to understand the.

  4. Application in Down-hole Milling Operations In Niger Delta.

    African Journals Online (AJOL)

    ENGINEERS

    such an alarming rate that the bank becomes insolvent. Even loan default in form of doubtful debt in such an alarming dimension is sufficient to generate "cracks" in the. "walls" of the stability of a bank. There is therefore an acceptable need for an independent body to protect the interest of depositors and promote sound ...

  5. Tube Expansion Under Various Down-Hole End Conditions

    Directory of Open Access Journals (Sweden)

    FJ Sanchez

    2013-06-01

    Fossil hydrocarbons are indispensable commodities that drive the global economy, and oil and gas are two of the conventional fuels that have been extracted and processed for over a century. During the last decade, operators have faced challenges in discovering and developing reservoirs commonly found up to several kilometers underground, for which advanced technologies are developed through different research programs. In order to optimize the current processes used to drill and construct oil and gas wells, well engineers implement a large number of mechanical technologies discovered centuries ago by diverse sectors. In the petroleum industry, the ancient tube-forming manufacturing process finds an application when well engineers intend to produce from reservoirs that cannot be reached unless shallower troublesome formations are first isolated. Solid expandable tubular is, for instance, one of those technologies developed to mitigate drilling problems and optimize the well delivery process. It consists of the in-situ expansion of a steel-based tube, attained by pushing or pulling a solid mandrel, which permanently enlarges its diameter. This nonlinear expansion process is strongly affected by the material properties of the tubular, its geometry, and the pipe/mandrel contact surface. The anticipated force required to deform long sections of the pipe in an uncontrolled expansion environment might jeopardize the mechanical properties of the pipe and the structural integrity of the well. Scientifically based solutions, resting on sound theoretical formulation and validated through experiments, will help in understanding possible tubular failure mechanisms during its operational life. This work aims to study the effect of different loading and boundary conditions on the mechanical and physical properties of the pipe after expansion. First, full-scale experiments were conducted to evaluate the geometrical and behavioral changes. Second, the deformation process was simulated using the finite element method and validated against the experimental results to assess the effects on the post-expansion tubular properties. Finally, the authors present a comparison study wherein a semi-analytical model is used to predict the force required for expansion.

  6. Hydraulic Workover Unit (HWU) Application in Down-hole Milling ...

    African Journals Online (AJOL)

    ENGINEERS

    the top management to the lowest rung of the organizational hierarchy geared ... challenging changing society are required to exhibit leadership qualities and ..... has been sacrificed on the altar of rough politics leading to policy inconsistency,.

  7. Closed form of optimal current waveform for class-F PA up to fourth ...

    Indian Academy of Sciences (India)

    PA and its dual, usually referred as inverse class-F PA, current and voltage ... voltage waveforms provides a number of advantages in the process of PA design ... RF PA design approaches with waveform theory and experimental waveform.

  8. Performance evaluation of spectral deconvolution analysis tool (SDAT) software used for nuclear explosion radionuclide measurements

    International Nuclear Information System (INIS)

    Foltz Biegalski, K.M.; Biegalski, S.R.; Haas, D.A.

    2008-01-01

    The Spectral Deconvolution Analysis Tool (SDAT) software was developed to improve counting statistics and detection limits for nuclear explosion radionuclide measurements. SDAT utilizes spectral deconvolution spectroscopy techniques and can analyze both β-γ coincidence spectra for radioxenon isotopes and high-resolution HPGe spectra from aerosol monitors. Spectral deconvolution spectroscopy is an analysis method that utilizes the entire signal deposited in a gamma-ray detector rather than the small portion of the signal that is present in one gamma-ray peak. This method shows promise to improve detection limits over classical gamma-ray spectroscopy analytical techniques; however, this hypothesis has not been tested. To address this issue, we performed three tests to compare the detection ability and variance of SDAT results to those of commercial off-the-shelf (COTS) software which utilizes a standard peak search algorithm. (author)
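
    As a hedged illustration of full-spectrum analysis (not the SDAT implementation), the sketch below fits a measured spectrum as a non-negative combination of synthetic library templates, using every channel rather than isolated peaks.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
channels = np.arange(512)

def template(peaks):
    """Toy detector template: Gaussian peaks on a falling continuum, unit area."""
    s = 0.02 * np.exp(-channels / 300.0)
    for pos, amp in peaks:
        s += amp * np.exp(-0.5 * ((channels - pos) / 4.0) ** 2)
    return s / s.sum()

library = np.column_stack([template([(80, 1.0)]),
                           template([(250, 0.7), (300, 0.4)]),
                           template([(420, 1.2)])])

true_act = np.array([500.0, 120.0, 0.0])              # third nuclide absent
measured = rng.poisson(library @ true_act + 5.0)      # plus a flat 5 counts/channel background

coeffs, resid = nnls(library, measured.astype(float))
print(coeffs)                                         # estimated activity of each template
```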

  9. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  10. Triggerless Readout with Time and Amplitude Reconstruction of Event Based on Deconvolution Algorithm

    International Nuclear Information System (INIS)

    Kulis, S.; Idzik, M.

    2011-01-01

    In future linear colliders like CLIC, where the period between bunch crossings is in the sub-nanosecond range (about 500 ps), an appropriate detection technique with triggerless signal processing is needed. In this work we discuss a technique, based on a deconvolution algorithm, suitable for time and amplitude reconstruction of an event. In the implemented method, the output of a relatively slow shaper (spanning many bunch-crossing periods) is sampled and digitized in an ADC, and the deconvolution procedure is then applied to the digital data. The time of an event can be found with a precision of a few percent of the sampling time. The signal-to-noise ratio is only slightly decreased after passing through the deconvolution filter. The theoretical and Monte Carlo studies are confirmed by the results of preliminary measurements obtained with a dedicated system comprising a radiation source, silicon sensor, front-end electronics, an ADC, and further digital processing implemented on a PC. (author)
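
    A minimal sketch of the underlying idea, with an assumed CR-RC shaper response and arbitrary sampling choices rather than the actual CLIC front-end parameters: the digitized samples are deconvolved with the sampled single-pulse shape by least squares, recovering event times and amplitudes, including piled-up ones.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed CR-RC shaper response sampled once per bunch crossing, peak normalised to 1.
tau = 6.0
k = np.arange(1, 41)
h = (k / tau) * np.exp(1.0 - k / tau)

# Sparse events (amplitude per bunch crossing), including near pile-up at 50/53.
N = 200
x_true = np.zeros(N)
x_true[[50, 53, 120]] = [1.0, 0.6, 0.8]

# Convolution matrix of the shaper and noisy digitized samples.
H = np.zeros((N, N))
for j in range(N):
    H[j:j + h.size, j] = h[:min(h.size, N - j)]
y = H @ x_true + 0.01 * rng.standard_normal(N)

# Least-squares deconvolution recovers event times and amplitudes.
x_hat = np.linalg.lstsq(H, y, rcond=None)[0]
print(sorted(int(i) for i in np.argsort(np.abs(x_hat))[-3:]))   # expected near [50, 53, 120]
print(np.round(x_hat[[50, 53, 120]], 2))                        # expected near [1.0, 0.6, 0.8]
```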

  11. Deconvolution for the localization of sound sources using a circular microphone array

    DEFF Research Database (Denmark)

    Tiana Roig, Elisabet; Jacobsen, Finn

    2013-01-01

    During the last decade, the aeroacoustic community has examined various methods based on deconvolution to improve the visualization of acoustic fields scanned with planar sparse arrays of microphones. These methods assume that the beamforming map in an observation plane can be approximated by a c... that the beamformer's point-spread function is shift-invariant. This makes it possible to apply computationally efficient deconvolution algorithms that consist of spectral procedures in the entire region of interest, such as the deconvolution approach for the mapping of the acoustic sources 2, the Fourier-based non...-negative least squares, and the Richardson-Lucy. This investigation examines the matter with computer simulations and measurements...

  12. Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image

    Science.gov (United States)

    He, Xingwu; You, Junchen

    2018-03-01

    Ultrasonic image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of the blind deconvolution method for ultrasound image restoration. Experimental results demonstrate that, compared with traditional image restoration, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge. Even with an inaccurate small initial PSF, the results show that blind deconvolution can improve the overall image quality of ultrasound images, for example with much better SNR and image resolution. The time consumption of the methods is also reported, showing no significant increase on a GPU platform.
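
    For illustration only, the following sketch implements a generic blind deconvolution loop (alternating Richardson-Lucy updates of the image and the PSF); it is not the specific algorithm evaluated in the paper, and the simulated image and the inaccurate small initial PSF are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(4)

# Simulated "ultrasound-like" scene and an unknown PSF used only to create the data.
truth = np.zeros((64, 64))
truth[20:25, 30:35] = 1.0
truth[45, 10:50] = 0.8
psf_true = np.outer(np.hanning(7), np.hanning(7))
psf_true /= psf_true.sum()

observed = fftconvolve(truth, psf_true, mode="same")
observed = np.clip(observed + 0.01 * rng.standard_normal(observed.shape), 1e-6, None)

def rl_update_image(image, psf, data):
    """Richardson-Lucy update of the image estimate for a fixed PSF."""
    blurred = np.maximum(fftconvolve(image, psf, mode="same"), 1e-12)
    return image * fftconvolve(data / blurred, psf[::-1, ::-1], mode="same")

def rl_update_psf(psf, image, data):
    """Richardson-Lucy update of the PSF estimate for a fixed image."""
    blurred = np.maximum(fftconvolve(image, psf, mode="same"), 1e-12)
    corr = fftconvolve(data / blurred, image[::-1, ::-1], mode="full")
    n0, n1 = image.shape
    h0, h1 = psf.shape[0] // 2, psf.shape[1] // 2
    patch = corr[n0 - 1 - h0:n0 + h0, n1 - 1 - h1:n1 + h1]   # PSF-sized lags around zero
    new = np.clip(psf * patch, 0.0, None)
    return new / new.sum()

psf = np.ones((5, 5)) / 25.0            # inaccurate small initial PSF
image = observed.copy()
for _ in range(30):                     # alternate image and PSF updates
    image = rl_update_image(image, psf, observed)
    psf = rl_update_psf(psf, image, observed)

print(float(np.abs(image - truth).mean()), float(np.abs(observed - truth).mean()))
```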

  13. Blind deconvolution using the similarity of multiscales regularization for infrared spectrum

    International Nuclear Information System (INIS)

    Huang, Tao; Liu, Hai; Zhang, Zhaoli; Liu, Sanyan; Liu, Tingting; Shen, Xiaoxuan; Zhang, Jianfeng; Zhang, Tianxu

    2015-01-01

    Band overlap and random noise exist widely when the spectra are captured using an infrared spectrometer, especially since the aging of instruments has become a serious problem. In this paper, via introducing the similarity of multiscales, a blind spectral deconvolution method is proposed. Considering that there is a similarity between latent spectra at different scales, it is used as prior knowledge to constrain the estimated latent spectrum similar to pre-scale to reduce artifacts which are produced from deconvolution. The experimental results indicate that the proposed method is able to obtain a better performance than state-of-the-art methods, and to obtain satisfying deconvolution results with fewer artifacts. The recovered infrared spectra can easily extract the spectral features and recognize unknown objects. (paper)

  14. Image processing of globular clusters - Simulation for deconvolution tests (GlencoeSim)

    Science.gov (United States)

    Blazek, Martin; Pata, Petr

    2016-10-01

    This paper presents an algorithmic approach for efficiency tests of deconvolution algorithms in astronomic image processing. Due to the existence of noise in astronomical data there is no certainty that a mathematically exact result of stellar deconvolution exists and iterative or other methods such as aperture or PSF fitting photometry are commonly used. Iterative methods are important namely in the case of crowded fields (e.g., globular clusters). For tests of the efficiency of these iterative methods on various stellar fields, information about the real fluxes of the sources is essential. For this purpose a simulator of artificial images with crowded stellar fields provides initial information on source fluxes for a robust statistical comparison of various deconvolution methods. The "GlencoeSim" simulator and the algorithms presented in this paper consider various settings of Point-Spread Functions, noise types and spatial distributions, with the aim of producing as realistic an astronomical optical stellar image as possible.

  15. Optimisation of digital noise filtering in the deconvolution of ultrafast kinetic data

    International Nuclear Information System (INIS)

    Banyasz, Akos; Dancs, Gabor; Keszei, Erno

    2005-01-01

    Ultrafast kinetic measurements in the sub-picosecond time range are always distorted by a convolution with the instrumental response function. To restore the undistorted signal, deconvolution of the measured data is needed, which can be done via inverse filtering, using Fourier transforms, if experimental noise can be successfully filtered. However, in the case of experimental data when no underlying physical model is available, no quantitative criteria are known to find an optimal noise filter which would remove excessive noise without distorting the signal itself. In this paper, we analyse the Fourier transforms used during deconvolution and describe a graphical method to find such optimal noise filters. Comparison of graphically found optima to those found by quantitative criteria in the case of known synthetic kinetic signals shows the reliability of the proposed method to get fairly good deconvolved kinetic curves. A few examples of deconvolution of real-life experimental curves with the graphical noise filter optimisation are also shown
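
    A minimal sketch of deconvolution by inverse filtering with a Gaussian noise filter, in the spirit of the procedure described above; the instrument response, kinetics, and filter cutoff below are arbitrary assumptions, and choosing that cutoff is precisely the optimisation question the paper addresses.

```python
import numpy as np

dt, n = 0.01, 1024                                   # time step (ps) and record length, assumed
t = np.arange(n) * dt

kinetics = (t > 2.0) * np.exp(-(t - 2.0) / 1.5)      # "true" kinetic signal
irf = np.exp(-0.5 * ((t - 2.0) / 0.15) ** 2)
irf /= irf.sum()                                     # instrument response, unit area

rng = np.random.default_rng(5)
measured = np.convolve(kinetics, irf, mode="full")[:n] + 0.002 * rng.standard_normal(n)

# Inverse filtering: divide the spectra, then damp high frequencies with a Gaussian filter.
freq = np.fft.rfftfreq(n, dt)
f_cut = 0.8                                          # noise-filter width (hand tuned here)
noise_filter = np.exp(-0.5 * (freq / f_cut) ** 2)

H = np.fft.rfft(irf)
H_safe = np.where(np.abs(H) > 1e-12, H, 1.0)         # avoid 0/0 where the filter is ~0 anyway
deconvolved = np.fft.irfft(np.fft.rfft(measured) * noise_filter / H_safe, n)

print(float(np.abs(deconvolved - kinetics).mean()),
      float(np.abs(measured - kinetics).mean()))
```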

  16. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Science.gov (United States)

    Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a

  17. Iterative choice of the optimal regularization parameter in TV image deconvolution

    International Nuclear Information System (INIS)

    Sixou, B; Toma, A; Peyrin, F; Denis, L

    2013-01-01

    We present an iterative method for choosing the optimal regularization parameter for the linear inverse problem of Total Variation image deconvolution. This approach is based on the Morozov discrepancy principle and on an exponential model function for the data term. The Total Variation image deconvolution is performed with the Alternating Direction Method of Multipliers (ADMM). With a smoothed l2 norm, the differentiability of the value of the Lagrangian at the saddle point can be shown and an approximate model function obtained. The choice of the optimal parameter can be refined with a Newton method. The efficiency of the method is demonstrated on a blurred and noisy bone CT cross section

  18. Combined failure acoustical diagnosis based on improved frequency domain blind deconvolution

    International Nuclear Information System (INIS)

    Pan, Nan; Wu, Xing; Chi, YiLin; Liu, Xiaoqin; Liu, Chang

    2012-01-01

    To extract combined gearbox failures in a complex sound field, an acoustic fault detection method based on improved frequency-domain blind deconvolution is proposed. Following the frequency-domain blind deconvolution flow, morphological filtering is first used to extract modulation features embedded in the observed signals, then the CFPA algorithm is employed to perform complex-domain blind separation, and finally the J-divergence of the spectra is employed as a distance measure to resolve the permutation. Experiments using real machine sound signals were carried out. The results demonstrate that this algorithm can be efficiently applied to gearbox combined-failure detection in practice.

  19. Thermoluminescence glow-curve deconvolution functions for mixed order of kinetics and continuous trap distribution

    International Nuclear Information System (INIS)

    Kitis, G.; Gomez-Ros, J.M.

    2000-01-01

    New glow-curve deconvolution functions are proposed for mixed order of kinetics and for continuous-trap distribution. The only free parameters of the presented glow-curve deconvolution functions are the maximum peak intensity (I_m) and the maximum peak temperature (T_m), which can be estimated experimentally together with the activation energy (E). The other free parameter is the activation energy range (ΔE) for the case of the continuous-trap distribution or a constant α for the case of mixed-order kinetics

  20. Quantitative interpretation of nuclear logging data by adopting point-by-point spectrum striping deconvolution technology

    International Nuclear Information System (INIS)

    Tang Bin; Liu Ling; Zhou Shumin; Zhou Rongsheng

    2006-01-01

    The paper discusses the gamma-ray spectrum interpretation technology on nuclear logging. The principles of familiar quantitative interpretation methods, including the average content method and the traditional spectrum striping method, are introduced, and their limitation of determining the contents of radioactive elements on unsaturated ledges (where radioactive elements distribute unevenly) is presented. On the basis of the intensity gamma-logging quantitative interpretation technology by using the deconvolution method, a new quantitative interpretation method of separating radioactive elements is presented for interpreting the gamma spectrum logging. This is a point-by-point spectrum striping deconvolution technology which can give the logging data a quantitative interpretation. (authors)

  1. Electrochemical sensing using comparison of voltage-current time differential values during waveform generation and detection

    Science.gov (United States)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay; Wang, Gangqiang; Henderson, Brett Tamatea; Lourdhusamy, Anthoniraj; Steppan, James John; Allmendinger, Klaus Karl

    2018-01-02

    A device for signal processing. The device includes a signal generator, a signal detector, and a processor. The signal generator generates an original waveform. The signal detector detects an affected waveform. The processor is coupled to the signal detector. The processor receives the affected waveform from the signal detector. The processor also compares at least one portion of the affected waveform with the original waveform. The processor also determines a difference between the affected waveform and the original waveform. The processor also determines a value corresponding to a unique portion of the determined difference between the original and affected waveforms. The processor also outputs the determined value.

  2. Study of the Van Cittert and Gold iterative methods of deconvolution and their application in the deconvolution of experimental spectra of positron annihilation

    International Nuclear Information System (INIS)

    Bandzuch, P.; Morhac, M.; Kristiak, J.

    1997-01-01

    The study of deconvolution by the Van Cittert and Gold iterative algorithms and their use in the processing of experimental spectra of Doppler broadening of the annihilation line in positron annihilation measurements is described. By comparing results from both algorithms it was observed that the Gold algorithm was able to eliminate linear instability of the measuring equipment if one uses the 1274 keV 22Na peak, which was measured simultaneously with the annihilation peak, for deconvolution of the 511 keV annihilation peak. This permitted the measurement of small changes of the annihilation peak (e.g. the S-parameter) with high confidence. The dependence of γ-ray-like peak parameters on the number of iterations and the ability of these algorithms to distinguish a γ-ray doublet with different intensities and positions were also studied. (orig.)
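
    A compact sketch of the two iterative schemes on a generic 1-D peak with a Gaussian response (illustrative parameters only): the Van Cittert update is additive, while the Gold ratio update is multiplicative and therefore preserves non-negativity.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = np.arange(n)

truth = 50.0 * np.exp(-0.5 * ((x - 100) / 2.0) ** 2)     # narrow annihilation-like peak
resp = np.exp(-0.5 * (np.arange(-25, 26) / 6.0) ** 2)
resp /= resp.sum()                                       # broader instrumental response

H = np.zeros((n, n))
for j in range(n):
    lo, hi = max(0, j - 25), min(n, j + 26)
    H[lo:hi, j] = resp[lo - (j - 25):hi - (j - 25)]
measured = np.clip(H @ truth + rng.normal(0.0, 0.5, n), 0.0, None)

def van_cittert(y, H, n_iter=100, alpha=1.0):
    xk = y.copy()
    for _ in range(n_iter):
        xk = xk + alpha * (y - H @ xk)                   # additive update (may go negative)
    return xk

def gold(y, H, n_iter=500):
    xk = np.full_like(y, y.mean())
    HtH, Hty = H.T @ H, H.T @ y
    for _ in range(n_iter):
        xk = xk * Hty / np.maximum(HtH @ xk, 1e-12)      # multiplicative, stays non-negative
    return xk

print(float(truth.max()), float(measured.max()),
      float(van_cittert(measured, H).max()), float(gold(measured, H).max()))
```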

  3. Source-independent elastic waveform inversion using a logarithmic wavefield

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with the reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions using the following methods: combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for the modified version of elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data, the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.

  4. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Jardak, Seifallah

    2014-09-01

    Correlated waveforms have a number of applications in different fields, such as radar and communication. It is very easy to generate correlated waveforms using infinite alphabets, but for some applications it is very challenging to use them in practice. Moreover, to generate infinite alphabet constant-envelope correlated waveforms, the available research uses iterative algorithms, which are computationally very expensive. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant- and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of alphabets in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. To generate equiprobable symbols, the area of each region is kept the same. If the requirement is to have each symbol with its own unique probability, the proposed scheme allows that as well. Although the proposed scheme is general, the main focus of this paper is to generate finite alphabet waveforms for multiple-input multiple-output radar, where correlated waveforms are used to achieve desired beampatterns. © 2014 IEEE.
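
    A hedged sketch of the mapping idea for the PSK case: correlated Gaussian samples with a chosen covariance are quantized into M equiprobable regions of the Gaussian CDF and mapped to constant-envelope phases. The target covariance below is an arbitrary example, and the paper's derived Gaussian-to-alphabet correlation relationship is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n_tx, n_samples, M = 4, 10000, 8                 # transmit waveforms, samples, PSK order

# Desired Gaussian covariance across the transmit waveforms (example values only).
R = np.array([[1.0, 0.6, 0.3, 0.1],
              [0.6, 1.0, 0.6, 0.3],
              [0.3, 0.6, 1.0, 0.6],
              [0.1, 0.3, 0.6, 1.0]])
L = np.linalg.cholesky(R)
g = L @ rng.standard_normal((n_tx, n_samples))   # correlated Gaussian waveforms

# Split the Gaussian CDF into M equal-probability regions and map each region to a phase.
edges = norm.ppf(np.linspace(0.0, 1.0, M + 1))   # region boundaries (first/last are +-inf)
region = np.digitize(g, edges[1:-1])             # 0 .. M-1, equiprobable by construction
symbols = np.exp(1j * 2 * np.pi * region / M)    # finite-alphabet, constant envelope |s| = 1

print(np.round(np.corrcoef(symbols.real), 2))    # correlation induced across the waveforms
```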

  5. Waveform LiDAR across forest biomass gradients

    Science.gov (United States)

    Montesano, P. M.; Nelson, R. F.; Dubayah, R.; Sun, G.; Ranson, J.

    2011-12-01

    Detailed information on the quantity and distribution of aboveground biomass (AGB) is needed to understand how it varies across space and changes over time. Waveform LiDAR data is routinely used to derive the heights of scattering elements in each illuminated footprint, and the vertical structure of vegetation is related to AGB. Changes in LiDAR waveforms across vegetation structure gradients can demonstrate instrument sensitivity to land cover transitions. A close examination of LiDAR waveforms in footprints across a forest gradient can provide new insight into the relationship of vegetation structure and forest AGB. In this study we use field measurements of individual trees within Laser Vegetation Imaging Sensor (LVIS) footprints along transects crossing forest to non-forest gradients to examine changes in LVIS waveform characteristics at sites with low AGB. We compare field AGB measurements to original and adjusted LVIS waveforms to detect the forest AGB interval along a forest to non-forest transition in which the LVIS waveforms lose the ability to discern differences in AGB. Our results help identify the lower end of the forest biomass range that a ~20 m footprint waveform LiDAR can detect, which can help infer accumulation of biomass after disturbances and during forest expansion, and which can guide the use of LiDAR within a multi-sensor fusion biomass mapping approach.

  6. SURFACE FITTING FILTERING OF LIDAR POINT CLOUD WITH WAVEFORM INFORMATION

    Directory of Open Access Journals (Sweden)

    S. Xing

    2017-09-01

    Full-waveform LiDAR is an active technology of photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. High-quality point cloud and waveform information can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface-fitting filtering method using waveform information is proposed to exploit this advantage. First, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Second, the ground seed points are selected, and abnormal ones are detected using waveform parameters and robust estimation. Third, the terrain surface is fitted and the height-difference threshold is determined in consideration of the window size and the mean square error. Finally, the points are classified gradually as the window size increases; the filtering process finishes when the window size exceeds the threshold. Waveform data in urban, farmland and mountain areas from “WATER (Watershed Allied Telemetry Experimental Research)” are selected for the experiments. The results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.

  7. Statistical gravitational waveform models: What to simulate next?

    Science.gov (United States)

    Doctor, Zoheyr; Farr, Ben; Holz, Daniel E.; Pürrer, Michael

    2017-12-01

    Models of gravitational waveforms play a critical role in detecting and characterizing the gravitational waves (GWs) from compact binary coalescences. Waveforms from numerical relativity (NR), while highly accurate, are too computationally expensive to produce to be directly used with Bayesian parameter estimation tools like Markov-chain-Monte-Carlo and nested sampling. We propose a Gaussian process regression (GPR) method to generate reduced-order-model waveforms based only on existing accurate (e.g. NR) simulations. Using a training set of simulated waveforms, our GPR approach produces interpolated waveforms along with uncertainties across the parameter space. As a proof of concept, we use a training set of IMRPhenomD waveforms to build a GPR model in the 2-d parameter space of mass ratio q and equal-and-aligned spin χ1=χ2. Using a regular, equally-spaced grid of 120 IMRPhenomD training waveforms in q ∈ [1, 3] and χ1 ∈ [-0.5, 0.5], the GPR mean approximates IMRPhenomD in this space to mismatches below 4.3 × 10^-5. Our approach could in principle use training waveforms directly from numerical relativity. Beyond interpolation of waveforms, we also present a greedy algorithm that utilizes the errors provided by our GPR model to optimize the placement of future simulations. In a fiducial test case we find that using the greedy algorithm to iteratively add simulations achieves GPR errors that are ~1 order of magnitude lower than the errors from using Latin-hypercube or square training grids.
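
    A toy analogue of the proposed surrogate (no waveform approximant is involved): a Gaussian process is trained on the value of a placeholder waveform family at one fixed time node across a 1-D parameter, and returns a prediction together with an uncertainty estimate.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_waveform(q, t):
    """Placeholder waveform family parameterized by q (not a gravitational-wave approximant)."""
    return np.sin(2 * np.pi * (1.0 + 0.3 * q) * t ** 2) * np.exp(-t)

t_grid = np.linspace(0.0, 4.0, 200)
q_train = np.linspace(1.0, 3.0, 12).reshape(-1, 1)          # training parameter values
waveforms = np.array([toy_waveform(q[0], t_grid) for q in q_train])

# One GPR per interpolation time node; a single node is shown here.
node = 120
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gpr.fit(q_train, waveforms[:, node])

q_test = np.array([[1.7], [2.4]])
mean, std = gpr.predict(q_test, return_std=True)
print(mean, std)                                            # surrogate value and its uncertainty
print(toy_waveform(q_test[:, 0], t_grid[node]))             # toy "truth" for comparison
```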

  8. Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models

    Directory of Open Access Journals (Sweden)

    Scott E. Field

    2014-07-01

    We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform’s value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in
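
    The first offline step (greedy selection of a reduced basis) can be sketched on a toy waveform family as follows; the family, tolerance, and parameter grid are illustrative assumptions, and no effective-one-body model is involved.

```python
import numpy as np

def toy_waveform(q, t):
    """Placeholder waveform family; stands in for the fiducial model."""
    return np.sin(2 * np.pi * (1.0 + 0.3 * q) * t ** 2) * np.exp(-t)

t = np.linspace(0.0, 4.0, 500)
params = np.linspace(1.0, 3.0, 200)                        # dense training set in parameter space
training = np.array([toy_waveform(q, t) for q in params])
training /= np.linalg.norm(training, axis=1, keepdims=True)

def greedy_basis(training, tol=1e-6):
    """Greedily pick the training waveform worst represented by the current basis."""
    basis = [training[0]]
    while True:
        B = np.array(basis)                                # rows are orthonormal
        proj = training @ B.T @ B                          # projection onto span(B)
        errors = np.linalg.norm(training - proj, axis=1)
        worst = int(np.argmax(errors))
        if errors[worst] < tol:
            return np.array(basis), errors[worst]
        residual = training[worst] - proj[worst]           # Gram-Schmidt step
        basis.append(residual / np.linalg.norm(residual))

basis, err = greedy_basis(training)
print(basis.shape[0], float(err))                          # m basis elements, final greedy error
```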

  9. Full Waveform Inversion Using Nonlinearly Smoothed Wavefields

    KAUST Repository

    Li, Y.; Choi, Yun Seok; Alkhalifah, Tariq Ali; Li, Z.

    2017-01-01

    The lack of low frequency information in the acquired data makes full waveform inversion (FWI) only conditionally convergent to the accurate solution. An initial velocity model that produces data whose events lie within a half cycle of their locations in the observed data is required for convergence. The multiplication of wavefields with slightly different frequencies generates artificial low frequency components. This can be effectively utilized by multiplying the wavefield with itself, which is a nonlinear operation, followed by a smoothing operator to extract the artificially produced low frequency information. We construct the objective function using the nonlinearly smoothed wavefields with a global-correlation norm to properly handle the energy imbalance in the nonlinearly smoothed wavefield. Similar to the multi-scale strategy, we progressively reduce the smoothing width applied to the multiplied wavefield to allow for higher resolution. We calculate the gradient of the objective function using the adjoint-state technique, which is similar to conventional FWI except for the adjoint source. Examples on the Marmousi 2 model demonstrate the feasibility of the proposed FWI method to mitigate the cycle-skipping problem in the case of a lack of low frequency information.

  10. Full Waveform Inversion Using Nonlinearly Smoothed Wavefields

    KAUST Repository

    Li, Y.

    2017-05-26

    The lack of low frequency information in the acquired data makes full waveform inversion (FWI) only conditionally convergent to the accurate solution. An initial velocity model that produces data whose events lie within a half cycle of their locations in the observed data is required for convergence. The multiplication of wavefields with slightly different frequencies generates artificial low frequency components. This can be effectively utilized by multiplying the wavefield with itself, which is a nonlinear operation, followed by a smoothing operator to extract the artificially produced low frequency information. We construct the objective function using the nonlinearly smoothed wavefields with a global-correlation norm to properly handle the energy imbalance in the nonlinearly smoothed wavefield. Similar to the multi-scale strategy, we progressively reduce the smoothing width applied to the multiplied wavefield to allow for higher resolution. We calculate the gradient of the objective function using the adjoint-state technique, which is similar to conventional FWI except for the adjoint source. Examples on the Marmousi 2 model demonstrate the feasibility of the proposed FWI method to mitigate the cycle-skipping problem in the case of a lack of low frequency information.
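
    The core trick can be demonstrated numerically: squaring a band-limited trace (a nonlinear operation) and smoothing the result produces energy at frequencies below the band of the original signal. The wavelet and smoothing width below are arbitrary choices.

```python
import numpy as np

dt, n = 0.002, 2048
t = np.arange(n) * dt
f0 = 12.0                                              # band-limited Ricker source, little energy < 5 Hz
trace = (1 - 2 * (np.pi * f0 * (t - 1.0)) ** 2) * np.exp(-(np.pi * f0 * (t - 1.0)) ** 2)

squared = trace * trace                                # the nonlinear operation
width = 101                                            # smoothing operator length (samples)
smooth = np.convolve(squared, np.ones(width) / width, mode="same")

freq = np.fft.rfftfreq(n, dt)
for name, sig in [("original", trace), ("squared + smoothed", smooth)]:
    spec = np.abs(np.fft.rfft(sig))
    low_fraction = spec[freq < 5.0].sum() / spec.sum() # share of spectral amplitude below 5 Hz
    print(name, round(float(low_fraction), 3))
```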

  11. Femtosecond Nanofocusing with Full Optical Waveform Control

    International Nuclear Information System (INIS)

    Berweger, Samuel; Atkin, Joanna M.; Xu, Xiaoji G.; Olmon, Robert L.; Raschke, Markus Bernd

    2011-01-01

    The simultaneous nanometer spatial confinement and femtosecond temporal control of an optical excitation has been a long-standing challenge in optics. Previous approaches using surface plasmon polariton (SPP) resonant nanostructures or SPP waveguides have suffered from, for example, mode mismatch, or possible dependence on the phase of the driving laser field to achieve spatial localization. Here we take advantage of the intrinsic phase- and amplitude-independent nanofocusing ability of a conical noble metal tip with weak wavelength dependence over a broad bandwidth to achieve a 10 nm spatially and few-femtosecond temporally confined excitation. In combination with spectral pulse shaping and feedback on the second-harmonic response of the tip apex, we demonstrate deterministic arbitrary optical waveform control. In addition, the high efficiency of the nanofocusing tip provided by the continuous micro- to nanoscale mode transformation opens the door for spectroscopy of elementary optical excitations in matter on their natural length and time scales and enables applications from ultrafast nano-opto-electronics to single molecule quantum coherent control.

  12. Full waveform inversion for mechanized tunneling reconnaissance

    Science.gov (United States)

    Lamert, Andre; Musayev, Khayal; Lambrecht, Lasse; Friederich, Wolfgang; Hackl, Klaus; Baitsch, Matthias

    2016-04-01

    In mechanized tunnel drilling processes, exploration of soil structure and properties ahead of the tunnel boring machine can greatly help to lower costs and improve safety conditions during drilling. We present numerical full waveform inversion approaches in the time and frequency domains for synthetic acoustic data to detect different small-scale structures representing potential obstacles in front of the tunnel boring machine. With the use of sensitivity kernels based on the adjoint wave field in the time and frequency domains, it is possible to derive satisfactory models with a manageable amount of computational load. Convergence to a suitable model is assured by iterative model improvements and gradually increasing frequencies. Results of both the time- and frequency-domain approaches are compared for different obstacle and source/receiver setups. They show that image quality strongly depends on the receiver and source positions used and increases significantly when transmission waves are available, i.e. when receivers and sources are installed at the surface and/or in boreholes. Transmission waves lead to clearly identified structure and position of the obstacles and give satisfactory estimates of the wave speed. Setups using only reflected waves result in blurred objects and ambiguous positions of distant objects, but still allow heterogeneities with higher or lower wave speed to be distinguished.

  13. SeisFlows-Flexible waveform inversion software

    Science.gov (United States)

    Modrak, Ryan T.; Borisov, Dmitry; Lefebvre, Matthieu; Tromp, Jeroen

    2018-06-01

    SeisFlows is an open source Python package that provides a customizable waveform inversion workflow and framework for research in oil and gas exploration, earthquake tomography, medical imaging, and other areas. New methods can be rapidly prototyped in SeisFlows by inheriting from default inversion or migration classes, and code can be tested on 2D examples before application to more expensive 3D problems. Wave simulations must be performed using an external software package such as SPECFEM3D. The ability to interface with external solvers lends flexibility, and the choice of SPECFEM3D as a default option provides optional GPU acceleration and other useful capabilities. Through support for massively parallel solvers and interfaces for high-performance computing (HPC) systems, inversions with thousands of seismic traces and billions of model parameters can be performed. So far, SeisFlows has run on clusters managed by the Department of Defense, Chevron Corp., Total S.A., Princeton University, and the University of Alaska, Fairbanks.

  14. Nonnegative Matrix Factor 2-D Deconvolution for Blind Single Channel Source Separation

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for blind separation of instruments in polyphonic music based on a non-negative matrix factor 2-D deconvolution algorithm. Using a model which is convolutive in both time and frequency we factorize a spectrogram representation of music into components corresponding...

  15. Novel response function resolves by image deconvolution more details of surface nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    2010-01-01

    and to imaging by in situ STM of electrocrystallization of copper on gold in electrolytes containing copper sulfate and sulfuric acid. It is suggested that the observed peaks of the recorded image do not represent atoms, but the atomic structure may be recovered by image deconvolution followed by calibration...

  16. Inter-source seismic interferometry by multidimensional deconvolution (MDD) for borehole sources

    NARCIS (Netherlands)

    Liu, Y.; Wapenaar, C.P.A.; Romdhane, A.

    2014-01-01

    Seismic interferometry (SI) is usually implemented by crosscorrelation (CC) to retrieve the impulse response between pairs of receiver positions. An alternative approach by multidimensional deconvolution (MDD) has been developed and shown in various studies the potential to suppress artifacts due to

  17. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2008-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  18. Data-driven haemodynamic response function extraction using Fourier-wavelet regularised deconvolution

    NARCIS (Netherlands)

    Wink, Alle Meije; Hoogduin, Hans; Roerdink, Jos B.T.M.

    2010-01-01

    Background: We present a simple, data-driven method to extract haemodynamic response functions (HRF) from functional magnetic resonance imaging (fMRI) time series, based on the Fourier-wavelet regularised deconvolution (ForWaRD) technique. HRF data are required for many fMRI applications, such as

  19. Deconvolution in the presence of noise using the Maximum Entropy Principle

    International Nuclear Information System (INIS)

    Steenstrup, S.

    1984-01-01

    The main problem in deconvolution in the presence of noise is the nonuniqueness. This problem is overcome by the application of the Maximum Entropy Principle. The way the noise enters in the formulation of the problem is examined in some detail and the final equations are derived such that the necessary assumptions becomes explicit. Examples using X-ray diffraction data are shown. (orig.)

  20. Noise Quantification with Beamforming Deconvolution: Effects of Regularization and Boundary Conditions

    DEFF Research Database (Denmark)

    Lylloff, Oliver Ackermann; Fernandez Grande, Efren

    Delay-and-sum (DAS) beamforming can be described as a linear convolution of an unknown sound source distribution and the microphone array response to a point source, i.e., point-spread function. Deconvolution tries to compensate for the influence of the array response and reveal the true source...

  1. Lineshape estimation for magnetic resonance spectroscopy (MRS) signals: self-deconvolution revisited

    International Nuclear Information System (INIS)

    Sima, D M; Garcia, M I Osorio; Poullet, J; Van Huffel, S; Suvichakorn, A; Antoine, J-P; Van Ormondt, D

    2009-01-01

    Magnetic resonance spectroscopy (MRS) is an effective diagnostic technique for monitoring biochemical changes in an organism. The lineshape of MRS signals can deviate from the theoretical Lorentzian lineshape due to inhomogeneities of the magnetic field applied to patients and to tissue heterogeneity. We call this deviation a distortion and study the self-deconvolution method for automatic estimation of the unknown lineshape distortion. The method is embedded within a time-domain metabolite quantitation algorithm for short-echo-time MRS signals. Monte Carlo simulations are used to analyze whether estimation of the unknown lineshape can improve the overall quantitation result. We use a signal with eight metabolic components inspired by typical MRS signals from healthy human brain and allocate special attention to the step of denoising and spike removal in the self-deconvolution technique. To this end, we compare several modeling techniques, based on complex damped exponentials, splines and wavelets. Our results show that self-deconvolution performs well, provided that some unavoidable hyper-parameters of the denoising methods are well chosen. Comparison of the first and last iterations shows an improvement when considering iterations instead of a single step of self-deconvolution

  2. Sparse Non-negative Matrix Factor 2-D Deconvolution for Automatic Transcription of Polyphonic Music

    DEFF Research Database (Denmark)

    Schmidt, Mikkel N.; Mørup, Morten

    2006-01-01

    We present a novel method for automatic transcription of polyphonic music based on a recently published algorithm for non-negative matrix factor 2-D deconvolution. The method works by simultaneously estimating a time-frequency model for an instrument and a pattern corresponding to the notes which are played, based on a log-frequency spectrogram of the music.

  3. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    International Nuclear Information System (INIS)

    Looe, H.K.; Uphoff, Y.; Poppe, B.; Carl von Ossietzky Univ., Oldenburg; Harder, D.; Willborn, K.C.

    2012-01-01

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  4. Numerical deconvolution to enhance sharpness and contrast of portal images for radiotherapy patient positioning verification

    Energy Technology Data Exchange (ETDEWEB)

    Looe, H.K.; Uphoff, Y.; Poppe, B. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy; Carl von Ossietzky Univ., Oldenburg (Germany). WG Medical Radiation Physics; Harder, D. [Georg August Univ., Goettingen (Germany). Medical Physics and Biophysics; Willborn, K.C. [Pius Hospital, Oldenburg (Germany). Clinic for Radiation Therapy

    2012-02-15

    The quality of megavoltage clinical portal images is impaired by physical and geometrical effects. This image blurring can be corrected by a fast numerical two-dimensional (2D) deconvolution algorithm implemented in the electronic portal image device. We present some clinical examples of deconvolved portal images and evaluate the clinical advantages achieved by the improved sharpness and contrast. The principle of numerical 2D image deconvolution and the enhancement of sharpness and contrast thereby achieved are briefly explained. The key concept is the convolution kernel K(x,y), the mathematical equivalent of the smearing or blurring of a picture, and the computer-based elimination of this influence. Enhancements of sharpness and contrast were observed in all clinical portal images investigated. The images of fine bone structures were restored. The identification of organ boundaries and anatomical landmarks was improved, thereby permitting a more accurate comparison with the x-ray simulator radiographs. The visibility of prostate gold markers is also shown to be enhanced by deconvolution. The blurring effects of clinical portal images were eliminated by a numerical deconvolution algorithm that leads to better image sharpness and contrast. The fast algorithm permits the image blurring correction to be performed in real time, so that patient positioning verification with increased accuracy can be achieved in clinical practice. (orig.)

  5. A fast Fourier transform program for the deconvolution of IN10 data

    International Nuclear Information System (INIS)

    Howells, W.S.

    1981-04-01

    A deconvolution program based on the Fast Fourier Transform technique is described and some examples are presented to help users run the programs and interpret the results. Instructions are given for running the program on the RAL IBM 360/195 computer. (author)

  6. A Novel wave-form command shaper for overhead cranes

    Directory of Open Access Journals (Sweden)

    KHALED ALHAZZA

    2013-12-01

    In this work, a novel command shaping control strategy for oscillation reduction of simple harmonic oscillators is proposed, and validated experimentally. A wave-form acceleration command shaper is derived analytically. The performance of the proposed shaper is simulated numerically, and validated experimentally on a scaled model of an overhead crane. Amplitude modulation is used to enhance the shaper performance, which results in a modulated wave-form command shaper. It is determined that the proposed wave-form and modulated wave-form command shaper profiles are capable of eliminating travel and residual oscillations. Furthermore, unlike traditional impulse and step command shapers, the proposed command shaper has piecewise smoother acceleration, velocity, and displacement profiles. Experimental results using continuous and discrete commands are presented. Experiments with discrete commands involved embedding a saturation model-based feedback in the algorithm of the command shaper.

  7. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Ahmed, Sajid

    2016-01-13

    Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.

  8. Maass waveforms arising from sigma and related indefinite theta functions

    OpenAIRE

    Zwegers, Sander

    2010-01-01

    In this paper we consider an example of a Maass waveform which was constructed by Cohen from a function $\sigma$, studied by Andrews, Dyson and Hickerson, and its companion $\sigma^*$. We put this example in a more general framework.

  9. Efficient data retrieval method for similar plasma waveforms in EAST

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ying, E-mail: liuying-ipp@szu.edu.cn [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Huang, Jianjun; Zhou, Huasheng; Wang, Fan [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Wang, Feng [Institute of Plasma Physics Chinese Academy of Sciences, Hefei 230031 (China)

    2016-11-15

    Highlights: • The proposed method is carried out by means of a bounding envelope and an angle distance. • It allows whole similar waveforms of any time length to be retrieved. • In addition, the proposed method can also retrieve subsequences. - Abstract: Fusion research relies heavily on data analysis due to its massive database. In the present work, we propose an efficient method for searching and retrieving similar plasma waveforms in the Experimental Advanced Superconducting Tokamak (EAST). Based on Piecewise Linear Aggregate Approximation (PLAA) for extracting feature values, the searching process is accomplished in two steps. The first is a coarse search to narrow down the search space, which is carried out by means of a bounding envelope. The second step is a fine search to retrieve similar waveforms, which is implemented with the angle distance. The proposed method is tested on EAST databases and turns out to have good performance in retrieving similar waveforms.
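
    The two-step search can be illustrated with a short Python sketch, an illustration rather than the authors' code: piecewise aggregate features stand in for PLAA (the paper additionally keeps per-segment slopes), a bounding envelope around the query features does the coarse filtering, and the angle between feature vectors is used for the fine ranking.

        import numpy as np

        def paa_features(x, n_seg=32):
            # Per-segment means; the paper's PLAA also keeps per-segment slopes.
            segs = np.array_split(np.asarray(x, float), n_seg)
            return np.array([s.mean() for s in segs])

        def within_envelope(f_query, f_cand, tol):
            # Coarse search: a candidate survives only if its features stay inside
            # a bounding envelope of half-width tol around the query features.
            return np.all(np.abs(f_query - f_cand) <= tol)

        def angle_distance(f1, f2):
            # Fine search: angle between feature vectors (scale-insensitive).
            c = np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12)
            return np.arccos(np.clip(c, -1.0, 1.0))

        # fq = paa_features(query)
        # hits = [w for w in database if within_envelope(fq, paa_features(w), tol=0.1)]
        # hits.sort(key=lambda w: angle_distance(fq, paa_features(w)))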

  10. Conditioning the full-waveform inversion gradient to welcome anisotropy

    KAUST Repository

    Alkhalifah, Tariq Ali

    2015-01-01

    Multiparameter full-waveform inversion (FWI) suffers from complex nonlinearity in the objective function, compounded by the eventual trade-off between the model parameters. A hierarchical approach based on frequency and arrival time data decimation

  11. Anisotropic wave-equation traveltime and waveform inversion

    KAUST Repository

    Feng, Shihang; Schuster, Gerard T.

    2016-01-01

    The wave-equation traveltime and waveform inversion (WTW) methodology is developed to invert for anisotropic parameters in a vertical transverse isotropic (VTI) medium. The simultaneous inversion of anisotropic parameters v0, ε and δ is initially

  12. Full Waveform Inversion Using Oriented Time Migration Method

    KAUST Repository

    Zhang, Zhendong

    2016-01-01

    Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate the resulting gradient can have

  13. Interferometric full-waveform inversion of time-lapse data

    KAUST Repository

    Sinha, Mrinal

    2017-01-01

    surveys. To overcome this challenge, we propose the use of interferometric full waveform inversion (IFWI) for inverting the velocity model from data recorded by baseline and monitor surveys. A known reflector is used as the reference reflector for IFWI

  14. Velocity Building by Reflection Waveform Inversion without Cycle-skipping

    KAUST Repository

    Guo, Qiang; Alkhalifah, Tariq Ali; Wu, Zedong

    2017-01-01

    Reflection waveform inversion (RWI) provides estimation of low wavenumber model components using reflections generated from a migration/demigration process. The resulting model tends to be a good initial model for FWI. In fact, the optimization

  15. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate

  16. Spectral implementation of full waveform inversion based on reflections

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2014-01-01

    Using the reflection imaging process as a source to model reflections for full waveform inversion (FWI), referred to as reflection FWI (RFWI), allows us to update the background component of the model, and avoid using the relatively costly migration

  17. Solving seismological problems using sgraph program: II-waveform modeling

    International Nuclear Information System (INIS)

    Abdelwahed, Mohamed F.

    2012-01-01

    One of the seismological programs used to manipulate seismic data is the SGRAPH program. It consists of integrated tools to perform advanced seismological techniques. SGRAPH is a stand-alone, Windows-based system for maintaining and analyzing seismic waveform data that handles a wide range of data formats. SGRAPH was described in detail in the first part of this paper. In this part, I discuss the advanced techniques included in the program and its applications in seismology. Because of the numerous tools included, SGRAPH alone is sufficient to perform basic waveform analysis and to solve advanced seismological problems. The first part of this paper presented applications to source parameter estimation and hypocentral location. Here, I discuss the SGRAPH waveform modeling tools. This paper gives examples of how to apply the SGRAPH tools to perform waveform modeling for estimating the focal mechanism and crustal structure of local earthquakes.

  18. Generation of correlated finite alphabet waveforms using gaussian random variables

    KAUST Repository

    Ahmed, Sajid; Alouini, Mohamed-Slim; Jardak, Seifallah

    2016-01-01

    Various examples of methods and systems are provided for generation of correlated finite alphabet waveforms using Gaussian random variables in, e.g., radar and communication applications. In one example, a method includes mapping an input signal comprising Gaussian random variables (RVs) onto finite-alphabet non-constant-envelope (FANCE) symbols using a predetermined mapping function, and transmitting FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The FANCE waveforms can be based upon the mapping of the Gaussian RVs onto the FANCE symbols. In another example, a system includes a memory unit that can store a plurality of digital bit streams corresponding to FANCE symbols and a front end unit that can transmit FANCE waveforms through a uniform linear array of antenna elements to obtain a corresponding beampattern. The system can include a processing unit that can encode the input signal and/or determine the mapping function.

  19. Lane marking detection based on waveform analysis and CNN

    Science.gov (United States)

    Ye, Yang Yang; Chen, Hou Jin; Hao, Xiao Li

    2017-06-01

    Lane marking detection is an important part of advanced driver assistance systems (ADAS) for avoiding traffic accidents. In order to obtain accurate lane markings, a novel and efficient algorithm is proposed in this work, which analyses the waveform generated from the road image after inverse perspective mapping (IPM). The algorithm includes two main stages: the first stage applies image preprocessing, including a CNN, to suppress the background and enhance the lane markings; the second stage obtains the waveform of the road image and analyzes it to extract the lanes. The contribution of this work is the introduction of local and global features of the waveform to detect the lane markings. The results indicate the proposed method is robust in detecting and fitting the lane markings.
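
    The waveform stage lends itself to a very small illustration (Python; the CNN enhancement and IPM steps are assumed to have already produced a binary lane mask, and the thresholds are placeholders): summing the mask down its columns gives a 1-D waveform whose peaks mark candidate lane-marking positions.

        import numpy as np
        from scipy.signal import find_peaks

        def lane_waveform(ipm_mask):
            # Column-wise sum of the enhanced, inverse-perspective-mapped mask.
            return ipm_mask.astype(float).sum(axis=0)

        def candidate_lane_columns(ipm_mask, min_height=20, min_sep=40):
            # Peaks of the waveform are candidate lane-marking x positions;
            # local/global waveform features would be analysed from here.
            w = lane_waveform(ipm_mask)
            peaks, _ = find_peaks(w, height=min_height, distance=min_sep)
            return peaks, w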

  20. Full Waveform Inversion for Reservoir Characterization - A Synthetic Study

    KAUST Repository

    Zabihi Naeini, E.; Kamath, N.; Tsvankin, I.; Alkhalifah, Tariq Ali

    2017-01-01

    Most current reservoir-characterization workflows are based on classic amplitude-variation-with-offset (AVO) inversion techniques. Although these methods have generally served us well over the years, here we examine full-waveform inversion (FWI

  1. Anisotropic wave-equation traveltime and waveform inversion

    KAUST Repository

    Feng, Shihang

    2016-09-06

    The wave-equation traveltime and waveform inversion (WTW) methodology is developed to invert for anisotropic parameters in a vertical transverse isotropic (VTI) medium. The simultaneous inversion of anisotropic parameters v0, ε and δ is initially performed using the wave-equation traveltime inversion (WT) method. The WT tomograms are then used as starting background models for VTI full waveform inversion. Preliminary numerical tests on synthetic data demonstrate the feasibility of this method for multi-parameter inversion.

  2. A microcomputer-based waveform generator for Moessbauer spectrometers

    International Nuclear Information System (INIS)

    Huang Jianping; Chen Xiaomei

    1995-01-01

    A waveform generator for Moessbauer spectrometers based on an 8751 single-chip microcomputer is described. The reference waveform with high linearity is generated with a 12 bit DAC, and its amplitude is controlled with an 8 bit DAC. Because the channel advance and synchronization signals can be delayed arbitrarily, excellent folded spectra can be acquired. This waveform generator can be controlled with DIP switches on the faceplate or via the serial interface of an IBM-PC microcomputer.

  3. Optimising delineation accuracy of tumours in PET for radiotherapy planning using blind deconvolution

    International Nuclear Information System (INIS)

    Guvenis, A.; Koc, A.

    2015-01-01

    Positron emission tomography (PET) imaging has been proven to be useful in radiotherapy planning for the determination of the metabolically active regions of tumours. Delineation of tumours, however, is a difficult task in part due to high noise levels and the partial volume effects originating mainly from the low camera resolution. The goal of this work is to study the effect of blind deconvolution on tumour volume estimation accuracy for different computer-aided contouring methods. The blind deconvolution estimates the point spread function (PSF) of the imaging system in an iterative manner in a way that the likelihood of the given image being the convolution output is maximised. In this way, the PSF of the imaging system does not need to be known. Data were obtained from a NEMA NU-2 IQ-based phantom with a GE DSTE-16 PET/CT scanner. The artificial tumour diameters were 13, 17, 22, 28 and 37 mm with a target/background ratio of 4:1. The tumours were delineated before and after blind deconvolution. Student's two-tailed paired t-test showed a significant decrease in volume estimation error ( p < 0.001) when blind deconvolution was used in conjunction with computer-aided delineation methods. A manual delineation confirmation demonstrated an improvement from 26 to 16 % for the artificial tumour of size 37 mm while an improvement from 57 to 15 % was noted for the small tumour of 13 mm. Therefore, it can be concluded that blind deconvolution of reconstructed PET images may be used to increase tumour delineation accuracy. (authors)

  4. A Time Domain Waveform for Testing General Relativity

    International Nuclear Information System (INIS)

    Huwyler, Cédric; Jetzer, Philippe; Porter, Edward K

    2015-01-01

    Gravitational-wave parameter estimation is only as good as the theory the waveform generation models are based upon. It is therefore crucial to test General Relativity (GR) once data becomes available. Many previous works, such as studies connected with the ppE framework by Yunes and Pretorius, rely on the stationary phase approximation (SPA) to model deviations from GR in the frequency domain. As Fast Fourier Transform algorithms have become considerably faster and in order to circumvent possible problems with the SPA, we test GR with corrected time domain waveforms instead of SPA waveforms. Since a considerable amount of work has been done already in the field using SPA waveforms, we establish a connection between leading-order-corrected waveforms in time and frequency domain, concentrating on phase-only corrected terms. In a Markov Chain Monte Carlo study, whose results are preliminary and will only be available later, we will assess the ability of the eLISA detector to measure deviations from GR for signals coming from supermassive black hole inspirals using these corrected waveforms. (paper)

  5. Phase-space topography characterization of nonlinear ultrasound waveforms.

    Science.gov (United States)

    Dehghan-Niri, Ehsan; Al-Beer, Helem

    2018-03-01

    Fundamental understanding of ultrasound interaction with material discontinuities having closed interfaces has many engineering applications such as nondestructive evaluation of defects like kissing bonds and cracks in critical structural and mechanical components. In this paper, to analyze the acoustic field nonlinearities due to defects with closed interfaces, the use of a common technique in nonlinear physics, based on a phase-space topography construction of ultrasound waveform, is proposed. The central idea is to complement the "time" and "frequency" domain analyses with the "phase-space" domain analysis of nonlinear ultrasound waveforms. A nonlinear time series method known as pseudo phase-space topography construction is used to construct equivalent phase-space portrait of measured ultrasound waveforms. Several nonlinear models are considered to numerically simulate nonlinear ultrasound waveforms. The phase-space response of the simulated waveforms is shown to provide different topographic information, while the frequency domain shows similar spectral behavior. Thus, model classification can be substantially enhanced in the phase-space domain. Experimental results on high strength aluminum samples show that the phase-space transformation provides unique detection and classification capabilities. The Poincaré map of the phase-space domain is also used to better understand the nonlinear behavior of ultrasound waveforms. It is shown that the analysis of ultrasound nonlinearities is more convenient and informative in the phase-space domain than in the frequency domain. Copyright © 2017 Elsevier B.V. All rights reserved.
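
    The pseudo phase-space construction mentioned above is, in essence, a time-delay embedding; a minimal Python sketch follows (the delay and embedding dimension are illustrative choices, not values from the paper).

        import numpy as np

        def delay_embed(x, delay=5, dim=2):
            # Pseudo phase-space portrait: rows are
            # [x(t), x(t + delay), ..., x(t + (dim - 1) * delay)].
            x = np.asarray(x, float)
            n = len(x) - (dim - 1) * delay
            return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

        def poincare_section(portrait, axis=0, level=0.0):
            # Crude Poincare map: keep points where the chosen coordinate
            # crosses 'level' in the upward direction.
            c = portrait[:, axis]
            idx = np.where((c[:-1] < level) & (c[1:] >= level))[0] + 1
            return portrait[idx]

        # portrait = delay_embed(ultrasound_trace); plotting column 0 against
        # column 1 compares the topographies of different waveforms.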

  6. Adaptive Waveform Design for Cognitive Radar in Multiple Targets Situations

    Directory of Open Access Journals (Sweden)

    Xiaowen Zhang

    2018-02-01

    In this paper, the problem of cognitive radar (CR) waveform design optimization for target detection and estimation in multiple extended target situations is investigated. This problem is analyzed in the presence of signal-dependent interference as well as additive channel noise, for extended targets with unknown target impulse response (TIR). To address this problem, an improved algorithm is employed for target detection by maximizing the detection probability of the received echo on the premise of ensuring the TIR estimation precision. In this algorithm, an additional weight vector is introduced to achieve a trade-off among different targets. Both the estimate of the TIR and the transmit waveform can be updated at each step based on the previous step. Under the same constraints on waveform energy and bandwidth, the information theoretic approach is also considered. In addition, the relationship between the waveforms designed based on the two criteria is discussed. Unlike most existing works that only consider a single target with temporally correlated characteristics, waveform design for multiple extended targets is considered in this method. Simulation results demonstrate that, compared with the linear frequency modulated (LFM) signal, waveforms designed based on the maximum detection probability and maximum mutual information (MI) criteria can make radar echoes contain more multiple-target information and improve radar performance as a result.

  7. Adaptive phase k-means algorithm for waveform classification

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phases as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variation and is a good tool for seismic facies analysis.

  8. Optimal current waveforms for brushless permanent magnet motors

    Science.gov (United States)

    Moehle, Nicholas; Boyd, Stephen

    2015-07-01

    In this paper, we give energy-optimal current waveforms for a permanent magnet synchronous motor that result in a desired average torque. Our formulation generalises previous work by including a general back-electromotive force (EMF) wave shape, voltage and current limits, an arbitrary phase winding connection, a simple eddy current loss model, and a trade-off between power loss and torque ripple. Determining the optimal current waveforms requires solving a small convex optimisation problem. We show how to use the alternating direction method of multipliers to find the optimal current in milliseconds or hundreds of microseconds, depending on the processor used, which allows the possibility of generating optimal waveforms in real time. This allows us to adapt in real time to changes in the operating requirements or in the model, such as a change in resistance with winding temperature, or even gross changes like the failure of one winding. Suboptimal waveforms are available in tens or hundreds of microseconds, allowing for quick response after abrupt changes in the desired torque. We demonstrate our approach on a simple numerical example, in which we give the optimal waveforms for a motor with a sinusoidal back-EMF, and for a motor with a more complicated, nonsinusoidal waveform, in both the constant-torque region and constant-power region.
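
    The simplest instance of this problem can be written down directly. Ignoring voltage and current limits, eddy-current losses, the ripple trade-off and the winding-connection constraint, the loss-minimising phase currents for a desired instantaneous torque are the least-norm solution of the torque equation, and for a sinusoidal back-EMF they reduce to the classical sinusoidal commutation currents. The Python sketch below is an illustration under those simplifications, not the paper's convex formulation.

        import numpy as np

        def min_loss_currents(k_theta, torque_des):
            # minimise R * sum(i**2)  subject to  k(theta) . i = torque_des
            # (least-norm solution; equal phase resistances assumed).
            k = np.asarray(k_theta, float)
            return torque_des * k / np.dot(k, k)

        theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
        K = np.stack([np.cos(theta),
                      np.cos(theta - 2 * np.pi / 3),
                      np.cos(theta + 2 * np.pi / 3)], axis=1)   # sinusoidal back-EMF shape
        waveform = np.array([min_loss_currents(K[j], torque_des=1.0)
                             for j in range(len(theta))])
        # 'waveform' holds the three sinusoidal phase-current waveforms.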

  9. Frequency-domain waveform inversion using the unwrapped phase

    KAUST Repository

    Choi, Yun Seok

    2011-01-01

    Phase wrapping in the frequency-domain (or cycle skipping in the time-domain) is the major cause of the local minima problem in waveform inversion. The unwrapped phase has the potential to provide us with a robust and reliable waveform inversion, with reduced local minima. We propose a waveform inversion algorithm using the unwrapped phase objective function in the frequency-domain. The unwrapped phase, or what we call the instantaneous traveltime, is given by the imaginary part of dividing the derivative of the wavefield with respect to the angular frequency by the wavefield itself. As a result, the objective function becomes a traveltime-like function, which allows us to smooth it and reduce its nonlinearity. The gradient of the objective function is computed using the back-propagation algorithm based on the adjoint-state technique. We apply both our waveform inversion algorithm using the unwrapped phase and the conventional waveform inversion and show that our inversion algorithm gives better convergence to the true model than the conventional waveform inversion. © 2011 Society of Exploration Geophysicists.
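
    Written out, the attribute described above is tau(omega) = Im[(dU/domega) / U] for a frequency-domain trace U(omega). A minimal numerical sketch uses a finite-difference derivative and assumes the exp(+i omega t) Fourier sign convention (the opposite convention flips the sign).

        import numpy as np

        def instantaneous_traveltime(U, omega):
            # tau(omega) = Im[ (dU/domega) / U ], derivative by finite differences.
            dU = np.gradient(U, omega)
            return np.imag(dU / (U + 1e-30))

        # Sanity check: a pure delay U = exp(+i * omega * t0) returns t0 everywhere.
        omega = np.linspace(1.0, 100.0, 200)
        t0 = 0.8
        tau = instantaneous_traveltime(np.exp(1j * omega * t0), omega)   # ~0.8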

  10. 3D Electric Waveforms of Solar Wind Turbulence

    Science.gov (United States)

    Kellogg, P. J.; Goetz, K.; Monson, S. J.

    2018-01-01

    Electric fields provide the major coupling between the turbulence of the solar wind and particles. A large part of the turbulent spectrum of fluctuations in the solar wind is thought to be kinetic Alfvén waves; however, whistlers have recently been found to be important. In this article, we attempt to determine the mode identification of individual waveforms using the three-dimensional antenna system of the SWaves experiments on the STEREO spacecraft. Samples are chosen using waveforms with an apparent periodic structure, selected visually. The short antennas of STEREO respond to density fluctuations and to electric fields. Measurement of four quantities using only three antennas presents a problem. Methods to overcome or to ignore this difficulty are presented. We attempt to decide whether the waveforms correspond to the whistler mode or the Alfvén mode by using the direction of rotation of the signal. Most of the waveforms are so oblique—nearly linearly polarized—that the direction cannot be determined. However, about one third of the waveforms can be identified, and whistlers and Alfvén waves are present in roughly equal numbers. The selected waveforms are very intense but intermittent and are orders of magnitude stronger than the average, yet their accumulated signal accounts for a large fraction of the average. The average, however, is supposed to be the result of a turbulent mixture of many waves, not short coherent events. This presents a puzzle for future work.

  11. Source-independent time-domain waveform inversion using convolved wavefields: Application to the encoded multisource waveform inversion

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2011-01-01

    Full waveform inversion requires a good estimation of the source wavelet to improve our chances of a successful inversion. This is especially true for an encoded multisource time-domain implementation, which, conventionally, requires separate

  12. System and Method for Generating a Frequency Modulated Linear Laser Waveform

    Science.gov (United States)

    Pierrottet, Diego F. (Inventor); Petway, Larry B. (Inventor); Amzajerdian, Farzin (Inventor); Barnes, Bruce W. (Inventor); Lockard, George E. (Inventor); Hines, Glenn D. (Inventor)

    2017-01-01

    A system for generating a frequency modulated linear laser waveform includes a single frequency laser generator to produce a laser output signal. An electro-optical modulator modulates the frequency of the laser output signal to define a linear triangular waveform. An optical circulator passes the linear triangular waveform to a band-pass optical filter to filter out harmonic frequencies created in the waveform during modulation of the laser output signal, to define a pure filtered modulated waveform having a very narrow bandwidth. The optical circulator receives the pure filtered modulated laser waveform and transmits the modulated laser waveform to a target.

  13. Extension of frequency-based dissimilarity for retrieving similar plasma waveforms

    International Nuclear Information System (INIS)

    Hochin, Teruhisa; Koyama, Katsumasa; Nakanishi, Hideya; Kojima, Mamoru

    2008-01-01

    Computer-aided assistance in finding waveforms similar to a given waveform has become indispensable for accelerating data analysis in plasma experiments. For slowly-varying waveforms and those having time-sectional oscillation patterns, methods using the Fourier series coefficients of waveforms in calculating the dissimilarity have successfully improved the performance of similar-waveform retrieval. This paper treats severely-varying waveforms and proposes two extensions to the dissimilarity of waveforms. The first extension captures how the importance of the Fourier series coefficients of waveforms varies with frequency. The second extension considers the outlines of waveforms. The correctness of the extended dissimilarity is experimentally evaluated using the metrics commonly used in information retrieval, i.e. precision and recall. The experimental results show that the extended dissimilarity improves the correctness of similarity retrieval of plasma waveforms.
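
    One way to picture the two extensions is a dissimilarity that weights Fourier-coefficient differences by a frequency-dependent importance factor and adds a term comparing smoothed outlines (envelopes) of the two waveforms. The sketch below is purely illustrative; the weighting function and outline definition are not the paper's formulas, and equal-length, equally sampled waveforms are assumed.

        import numpy as np
        from scipy.ndimage import maximum_filter1d, uniform_filter1d

        def extended_dissimilarity(x, y, n_coef=64, decay=0.05, w_outline=0.5):
            # (1) weighted distance between low-order Fourier coefficients,
            #     with importance decaying with frequency;
            # (2) distance between smoothed upper outlines (envelopes).
            x = np.asarray(x, float)
            y = np.asarray(y, float)
            X, Y = np.fft.rfft(x)[:n_coef], np.fft.rfft(y)[:n_coef]
            w = np.exp(-decay * np.arange(len(X)))
            spectral = np.sqrt(np.sum(w * np.abs(X - Y) ** 2))
            ox = uniform_filter1d(maximum_filter1d(np.abs(x), 25), 25)
            oy = uniform_filter1d(maximum_filter1d(np.abs(y), 25), 25)
            return spectral + w_outline * np.linalg.norm(ox - oy)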

  14. Pseudo LRM waveforms from CryoSat SARin acquisition

    Science.gov (United States)

    Scagliola, Michele; Fornari, Marco; Bouffard, Jerome; Parrinello, Tommaso; Féménias, Pierre

    2016-04-01

    CryoSat was launched on 8 April 2010 and is the first European ice mission dedicated to monitoring precise changes in the thickness of polar ice sheets and floating sea ice. The main payload of CryoSat is a Ku-band pulse-width-limited radar altimeter, called SIRAL (Synthetic interferometric radar altimeter). When commanded in SARin (synthetic aperture radar interferometry) mode, through coherent along-track processing of the returns received from two antennas, the interferometric phase related to the first arrival of the echo is used to retrieve the angle of arrival of the scattering in the across-track direction. When SIRAL operates in SAR or SARin mode, the obtained waveforms have increased along-track resolution and speckle reduction with respect to pulse-limited waveforms. However, in order to analyze the continuity of the retrieved geophysical parameters among different acquisition modes, techniques to transform SARin mode data into pseudo-LRM mode data are desirable. The transformation process is known as SAR reduction, and it is worth recalling that only approximate pseudo-LRM waveforms can be obtained for closed-burst acquisitions, which is how SIRAL operates. A SAR reduction processing scheme has been developed to obtain pseudo-LRM waveforms from CryoSat SARin acquisitions. As a trade-off between the along-track length on the Earth's surface contributing to one SARin pseudo-LRM waveform and the noisiness of the waveform itself, a SAR reduction approach was chosen based on averaging all the SARin echoes received every 20 Hz, resulting in one pseudo-LRM waveform for each SARin burst, given the SARin burst repetition period. SARin pseudo-LRM waveforms have been produced for CryoSat acquisitions over both ice and sea surfaces, aiming at verifying the continuity of the retracked surface height over the ellipsoid between genuine LRM products and pseudo-LRM products. Moreover, the retracked height from the SARin pseudo-LRM has been

  15. Cramer-Rao Lower Bound for Support-Constrained and Pixel-Based Multi-Frame Blind Deconvolution (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Aiim

    2006-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to reconstruct a single high-resolution image of an object from one or more measurement frames that are blurred and noisy realizations of that object...

  16. A technique for the deconvolution of the pulse shape of acoustic emission signals back to the generating defect source

    International Nuclear Information System (INIS)

    Houghton, J.R.; Packman, P.F.; Townsend, M.A.

    1976-01-01

    Acoustic emission signals recorded after passage through the instrumentation system can be deconvoluted to produce signal traces indicative of those at the generating source, and these traces can be used to identify characteristics of the source

  17. Ocular pressure waveform reflects ventricular bigeminy and aortic insufficiency

    Directory of Open Access Journals (Sweden)

    Jean B Kassem

    2015-01-01

    Ocular pulse amplitude (OPA) is defined as the difference between maximum and minimum intraocular pressure (IOP) during a cardiac cycle. Average values of OPA range from 1 to 4 mmHg. The purpose of this investigation is to determine the source of an irregular IOP waveform with elevated OPA in a 48-year-old male. Ocular pressure waveforms had an unusual shape consistent with early ventricular contraction. With a normal IOP, OPA was 9 mmHg, which is extraordinarily high. The subject was examined by a cardiologist and was determined to be in ventricular bigeminy. In addition, he had bounding carotid pulses, and echocardiogram confirmed aortic insufficiency. After replacement of the aortic valve, the bigeminy resolved and the ocular pulse waveform became regular in appearance with an OPA of 1.6-2.0 mmHg. The ocular pressure waveform is a direct reflection of hemodynamics. Evaluating this waveform may provide an additional opportunity for screening subjects for cardiovascular anomalies and arrhythmias.

  18. Selection and generation of waveforms for differential mobility spectrometry.

    Science.gov (United States)

    Krylov, Evgeny V; Coy, Stephen L; Vandermey, John; Schneider, Bradley B; Covey, Thomas R; Nazarov, Erkinjon G

    2010-02-01

    Devices based on differential mobility spectrometry (DMS) are used in a number of ways, including applications as ion prefilters for API-MS systems, as detectors or selectors in hybrid instruments (GC-DMS, DMS-IMS), and in standalone systems for chemical detection and identification. DMS ion separation is based on the relative difference between high field and low field ion mobility known as the alpha dependence, and requires the application of an intense asymmetric electric field known as the DMS separation field, typically in the megahertz frequency range. DMS performance depends on the waveform and on the magnitude of this separation field. In this paper, we analyze the relationship between separation waveform and DMS resolution and consider feasible separation field generators. We examine ideal and practical DMS separation field waveforms and discuss separation field generator circuit types and their implementations. To facilitate optimization of the generator designs, we present a set of relations that connect ion alpha dependence to DMS separation fields. Using these relationships we evaluate the DMS separation power of common generator types as a function of their waveform parameters. Optimal waveforms for the major types of DMS separation generators are determined for ions with various alpha dependences. These calculations are validated by comparison with experimental data.
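
    For orientation, one widely used practical approximation of the ideal rectangular separation field is the two-harmonic (bisinusoidal) waveform with a 2:1 amplitude ratio between the fundamental and its second harmonic; like any valid separation field it averages to zero over a period. The Python snippet below generates this common form and is an illustration, not one of the specific generator circuits analyzed in the paper.

        import numpy as np

        def bisinusoidal_waveform(t, e_max=1.0, f0=1.0e6):
            # Common two-harmonic approximation of the asymmetric DMS field.
            w = 2 * np.pi * f0
            return e_max * ((2.0 / 3.0) * np.sin(w * t)
                            + (1.0 / 3.0) * np.sin(2 * w * t - np.pi / 2))

        t = np.linspace(0.0, 1.0e-6, 1000, endpoint=False)   # one 1 MHz period
        e = bisinusoidal_waveform(t)
        assert abs(e.mean()) < 1e-9   # zero time average over a full period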

  19. Direct current contamination of kilohertz frequency alternating current waveforms.

    Science.gov (United States)

    Franke, Manfred; Bhadra, Niloy; Bhadra, Narendra; Kilgore, Kevin

    2014-07-30

    Kilohertz frequency alternating current (KHFAC) waveforms are being evaluated in a variety of physiological settings because of their potential to modulate neural activity uniquely when compared to frequencies in the sub-kilohertz range. However, the use of waveforms in this frequency range presents some unique challenges regarding the generator output. In this study we explored the possibility of undesirable contamination of the KHFAC waveforms by direct current (DC). We evaluated current- and voltage-controlled KHFAC waveform generators in configurations that included a capacitive coupling between generator and electrode, a resistive coupling and combinations of capacitive with inductive coupling. Our results demonstrate that both voltage- and current-controlled signal generators can unintentionally add DC-contamination to a KHFAC signal, and that capacitive coupling is not always sufficient to eliminate this contamination. We furthermore demonstrated that high value inductors, placed in parallel with the electrode, can be effective in eliminating DC-contamination irrespective of the type of stimulator, reducing the DC contamination to less than 1 μA. This study highlights the importance of carefully designing the electronic setup used in KHFAC studies and suggests specific testing that should be performed and reported in all studies that assess the neural response to KHFAC waveforms. Published by Elsevier B.V.

  20. Selection and generation of waveforms for differential mobility spectrometry

    International Nuclear Information System (INIS)

    Krylov, Evgeny V.; Coy, Stephen L.; Nazarov, Erkinjon G.; Vandermey, John; Schneider, Bradley B.; Covey, Thomas R.

    2010-01-01

    Devices based on differential mobility spectrometry (DMS) are used in a number of ways, including applications as ion prefilters for API-MS systems, as detectors or selectors in hybrid instruments (GC-DMS, DMS-IMS), and in standalone systems for chemical detection and identification. DMS ion separation is based on the relative difference between high field and low field ion mobility known as the alpha dependence, and requires the application of an intense asymmetric electric field known as the DMS separation field, typically in the megahertz frequency range. DMS performance depends on the waveform and on the magnitude of this separation field. In this paper, we analyze the relationship between separation waveform and DMS resolution and consider feasible separation field generators. We examine ideal and practical DMS separation field waveforms and discuss separation field generator circuit types and their implementations. To facilitate optimization of the generator designs, we present a set of relations that connect ion alpha dependence to DMS separation fields. Using these relationships we evaluate the DMS separation power of common generator types as a function of their waveform parameters. Optimal waveforms for the major types of DMS separation generators are determined for ions with various alpha dependences. These calculations are validated by comparison with experimental data.

  1. Accuracy of Binary Black Hole waveforms for Advanced LIGO searches

    Science.gov (United States)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Chu, Tony; Fong, Heather; Brown, Duncan; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela

    2015-04-01

    Coalescing binaries of compact objects are flagship sources for the first direct detection of gravitational waves with LIGO-Virgo observatories. Matched-filtering based detection searches aimed at binaries of black holes will use aligned-spin waveforms as filters, and their efficiency hinges on the accuracy of the underlying waveform models. A number of gravitational waveform models are available in the literature, e.g. the Effective-One-Body, Phenomenological, and traditional post-Newtonian ones. While Numerical Relativity (NR) simulations provide the most accurate modeling of gravitational radiation from compact binaries, their computational cost limits their application in large scale searches. In this talk we assess the accuracy of waveform models in two regions of parameter space which have only been explored cursorily in the past: the high mass-ratio regime and the comparable mass-ratio, high-spin regime. Using the SpEC code, six q = 7 simulations with aligned spins lasting 60 orbits, and tens of q ∈ [1,3] simulations with high black hole spins, were performed. We use them to study the accuracy and intrinsic parameter biases of different waveform families, and assess their viability for Advanced LIGO searches.

  2. A study of doppler waveform using pulsatile flow model

    International Nuclear Information System (INIS)

    Chung, Hye Won; Chung, Myung Jin; Park, Jae Hyung; Chung, Jin Wook; Lee, Dong Hyuk; Min, Byoung Goo

    1997-01-01

    To construct a pulsatile flow model, using an artificial heart pump and a stenosis, that demonstrates the triphasic Doppler waveform under conditions simulating those in vivo, and to evaluate the relationship between the Doppler waveform and vascular compliance. The flow model was constructed using a flowmeter, rubber tube, glass tube with stenosis, and artificial heart pump. Doppler studies were carried out at the prestenotic, poststenotic, and distal segments; compliance was changed by changing the length of the rubber tube. With increasing proximal compliance, Doppler waveforms showed decreasing peak velocity of the first phase and slightly delayed acceleration time, but the waveform itself did not change significantly. Distal compliance influenced the second phase and was important for the formation of pulsus tardus and parvus, which did not develop without poststenotic vascular compliance. The peak velocity of the first phase was inversely proportional to proximal compliance, and those of the second and third phases were directly proportional to distal compliance. After constructing this pulsatile flow model, we were able to explain the relationship between vascular compliance and Doppler waveform, and also better understand the formation of pulsus tardus and parvus.

  3. Resolution improvement of ultrasonic echography methods in non destructive testing by adaptative deconvolution

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    Ultrasonic echography has many advantages that make it attractive for nondestructive testing. However, the high acoustic energy needed to penetrate strongly attenuating materials can only be obtained with resonant transducers, which limits the resolution of the measured echograms. This resolution can be improved by deconvolution, but such processing is problematic for austenitic steel. A time-domain deconvolution method is developed here that takes the characteristics of the wave into account: a first step of phase correction and a second step of spectral equalization that restores the spectral content of the ideal reflectivity. Both steps use fast Kalman filters, which reduce the computational cost of the method.

  4. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
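
    A bare-bones version of the deconvolution the paper walks through can be written in a few lines of Python (illustrative only; the indicator-spectra matrix and the tolerance are placeholders): build the pseudoinverse from the SVD, discarding small singular values instead of inverting them, and apply it to the measured mixture.

        import numpy as np

        def truncated_pinv_solve(A, b, rcond=1e-3):
            # x = V * diag(1/s) * U^T * b, with noise-dominated singular values
            # (s < rcond * s_max) dropped rather than inverted.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            inv_s = np.zeros_like(s)
            keep = s > rcond * s[0]
            inv_s[keep] = 1.0 / s[keep]
            return Vt.T @ (inv_s * (U.T @ b))

        # Example in the spirit of the paper: columns of A are the spectra of
        # three pH indicators; b is a measured mixture spectrum.
        # concentrations = truncated_pinv_solve(A_indicators, b_measured)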

  5. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    Science.gov (United States)

    Zhang, Pengcheng; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Coatrieux, Jean-Louis; Li, Baosheng; Shu, Huazhong

    2013-09-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements.
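
    The flavour of the approach can be sketched as follows, with a simplification: the series expansion of the inverse kernel is not reproduced, and a direct frequency-domain division stands in for it. The Butterworth response tapers exactly the high frequencies that naive inversion of the blurring kernel would amplify, and negative fluence values are clipped to zero as a crude analogue of the field-edge approximation.

        import numpy as np

        def butterworth_regularised_deconvolution(blurred, kernel, cutoff=0.15, order=4):
            # Frequency-domain deconvolution of a 1-D fluence profile, damped by a
            # low-pass Butterworth response; 'kernel' is assumed centred at sample 0
            # and 'cutoff' is given in cycles per sample.
            n = len(blurred)
            B = np.fft.rfft(blurred, n)
            K = np.fft.rfft(kernel, n)
            f = np.fft.rfftfreq(n)
            butter = 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))
            x = np.fft.irfft(B * butter / (K + 1e-12), n)
            return np.clip(x, 0.0, None)   # fluence outside the field set to zero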

  6. A new deconvolution approach to robust fluence for intensity modulation under geometrical uncertainty

    International Nuclear Information System (INIS)

    Zhang Pengcheng; Coatrieux, Jean-Louis; Shu Huazhong; De Crevoisier, Renaud; Simon, Antoine; Haigron, Pascal; Li Baosheng

    2013-01-01

    This work addresses random geometrical uncertainties that are intrinsically observed in radiation therapy by means of a new deconvolution method combining a series expansion and a Butterworth filter. The method efficiently suppresses high-frequency components by discarding the higher order terms of the series expansion and then filtering out deviations on the field edges. An additional approximation is made in order to set the fluence values outside the field to zero in the robust profiles. This method is compared to the deconvolution kernel method for a regular 2D fluence map, a real intensity-modulated radiation therapy field, and a prostate case. The results show that accuracy is improved while fulfilling clinical planning requirements. (paper)

  7. Computerized glow curve deconvolution of thermoluminescent emission from polyminerals of Jamaica Mexican flower

    Science.gov (United States)

    Favalli, A.; Furetta, C.; Zaragoza, E. Cruz; Reyes, A.

    The aim of this work is to study the main thermoluminescence (TL) characteristics of the inorganic polyminerals extracted from dehydrated Jamaica flower or roselle (Hibiscus sabdariffa L.), a member of the Malvaceae family, of Mexican origin. The TL emission properties of the polymineral fraction in powder form were studied using the initial rise (IR) method. The complex structure and kinetic parameters of the glow curves were analysed accurately using computerized glow curve deconvolution (CGCD) assuming an exponential distribution of trapping levels. The extension of the IR method to the case of a continuous, exponential distribution of traps is reported, as is the derivation of the TL glow curve deconvolution functions for a continuous trap distribution. CGCD is performed both for a temperature-independent frequency factor s and for s as a function of temperature.
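
    The initial rise (IR) step referred to above rests on the fact that, on the low-temperature side of a glow peak, the intensity follows I ∝ exp(-E/kT) regardless of the kinetic order, so the activation energy comes from the slope of ln I versus 1/T. A minimal Python sketch follows; the 10 % intensity threshold is a common rule of thumb, not the paper's choice.

        import numpy as np

        K_BOLTZMANN_EV = 8.617e-5   # eV / K

        def initial_rise_energy(temperature_k, intensity, fraction=0.1):
            # Fit ln(I) against 1/T over the initial-rise region (points on the
            # low-temperature side with I below 'fraction' of the peak maximum).
            T = np.asarray(temperature_k, float)
            I = np.asarray(intensity, float)
            mask = (I > 0) & (I < fraction * I.max()) & (T < T[np.argmax(I)])
            slope, _ = np.polyfit(1.0 / T[mask], np.log(I[mask]), 1)
            return -slope * K_BOLTZMANN_EV   # ln I = const - E / (k T)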

  8. Primary variables influencing generation of earthquake motions by a deconvolution process

    International Nuclear Information System (INIS)

    Idriss, I.M.; Akky, M.R.

    1979-01-01

    In many engineering problems, the analysis of the potential earthquake response of a soil deposit, a soil structure or a soil-foundation-structure system requires knowledge of earthquake ground motions at some depth below the level at which the motions are recorded, specified, or estimated. A process by which such motions are commonly calculated is termed a deconvolution process. This paper presents the results of a parametric study which was conducted to examine the accuracy, convergence, and stability of a frequently used deconvolution process and the significant parameters that may influence its output. Parameters studied included: soil profile characteristics, input motion characteristics, level of input motion, and frequency cut-off. (orig.)

  9. Comparison of alternative methods for multiplet deconvolution in the analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Blaauw, Menno; Keyser, Ronald M.; Fazekas, Bela

    1999-01-01

    Three methods for multiplet deconvolution were tested using the 1995 IAEA reference spectra: total area determination, iterative fitting, and the library-oriented approach. It is concluded that, if statistical control (i.e. the ability to report results that agree with the known, true values to within the reported uncertainties) is required, the total area determination method performs best. If high deconvolution power is required and a good, internally consistent library is available, the library-oriented method yields the best results. Neither Erdtmann and Soyka's gamma-ray catalogue nor Browne and Firestone's Table of Radioactive Isotopes was found to be internally consistent enough in this respect. In the absence of a good library, iterative fitting with restricted peak width variation performs best. The ultimate approach, as yet to be implemented, might be library-oriented fitting with allowed peak position variation according to the peak energy uncertainty specified in the library. (author)

  10. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation.

    Directory of Open Access Journals (Sweden)

    Najah Alsubaie

    Stain colour estimation is a prominent factor of the analysis pipeline in most histology image processing algorithms. Providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. This approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments to compare the proposed method to recent state-of-the-art methods and demonstrate the robustness of this approach using three different datasets of scanned slides, prepared in different labs using different scanners.

  11. A MAP blind image deconvolution algorithm with bandwidth over-constrained

    Science.gov (United States)

    Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong

    2018-03-01

    We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with a bandwidth over-constraint and total variation (TV) regularization to recover a clear image from AO-corrected images. The point spread functions (PSFs) are estimated with their bandwidth constrained to be less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.

  12. An l1-TV Algorithm for Deconvolution with Salt and Pepper Noise

    Science.gov (United States)

    2009-04-01

    Brendt Wohlberg, T-7 Mathematical Modeling and Analysis, Los Alamos National Laboratory. Fragmentary abstract: "... salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention. We consider ..." Cited references include "... deblurring in the presence of impulsive noise," Int. J. Comput. Vision, vol. 70, no. 3, pp. 279-298, Dec. 2006, and A. E. Beaton and J. W. Tukey.

  13. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    The ultrasonic echographic technique has specific advantages which make it essential in many Non Destructive Testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which severely limits the resolution of the received echograms. This resolution may be improved with deconvolution methods. But one-dimensional deconvolution methods come up against problems in non destructive testing when the investigated medium is highly anisotropic and inhomogeneous (e.g. austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature. But they often come from other application fields (biomedical engineering, geophysics), and we show that they do not apply well to specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method treats the deconvolution problem as an estimation problem and is performed in two steps: (i) a phase correction step which takes into account the phase of the wavelet and estimates a phase-corrected echogram; the phase of the wavelet is due only to the transducer and is assumed time-invariant during propagation; (ii) a band equalization step which restores the spectral content of the ideal reflectivity. The two steps of the method are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and experimental results are given to show that this is a good approach for resolution improvement in attenuating media. [fr]

  14. A comparison of deconvolution and the Rutland-Patlak plot in parenchymal renal uptake rate.

    Science.gov (United States)

    Al-Shakhrah, Issa A

    2012-07-01

    Deconvolution and the Rutland-Patlak (R-P) plot are two of the most commonly used methods for analyzing dynamic radionuclide renography. Both methods allow estimation of the absolute and relative renal uptake of the radiopharmaceutical and of its rate of transit through the kidney. Seventeen patients (32 kidneys) were referred for further evaluation by renal scanning. All patients were positioned supine with their backs to the scintillation gamma camera, so that the kidneys and the heart were both in the field of view. Approximately 5-7 mCi of (99m)Tc-DTPA (diethylenetriamine penta-acetic acid) in about 0.5 ml of saline was injected intravenously and sequential 20 s frames were acquired; the study of each patient lasted approximately 20 min. The time-activity curves of the parenchymal region of interest of each kidney, as well as of the heart, were obtained for analysis. The data were then analyzed with deconvolution and the R-P plot. A strong positive association (n = 32; r = 0.83; R^2 = 0.68) was found between the values obtained by the two methods. Bland-Altman analysis demonstrated that 31 of the 32 cases (97%) were within the limits of agreement (mean ± 1.96 standard deviations). We believe the R-P analysis method can be expected to be more reproducible than the iterative deconvolution method, because the deconvolution technique relies heavily on the accuracy of the first point analyzed, as any errors are carried forward into the calculations of all subsequent points, whereas the R-P technique is based on an initial analysis of the data by means of the R-P plot and can be considered an alternative technique for finding and calculating the renal uptake rate.
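
    The Rutland-Patlak analysis referred to above reduces to a straight-line fit: R(t)/P(t) plotted against the integral of P divided by P(t) has a slope equal to the uptake rate. The sketch below demonstrates only this textbook form on synthetic time-activity curves; it is not the authors' processing chain, and all curves and constants are hypothetical.

        import numpy as np

        # Synthetic time-activity curves (hypothetical): P = heart/plasma ROI, R = renal parenchyma ROI.
        t = np.arange(0.0, 300.0, 20.0)              # seconds, 20 s frames
        P = 1000.0 * np.exp(-t / 200.0) + 200.0      # plasma curve (counts/frame)
        K_true, V = 0.004, 0.3                       # uptake rate and vascular fraction (assumed)
        cumP = np.concatenate(([0.0], np.cumsum(0.5 * (P[1:] + P[:-1]) * np.diff(t))))
        R = K_true * cumP + V * P                    # renal curve following the R-P model

        # Rutland-Patlak plot: R/P versus (integral of P)/P; the slope estimates the uptake rate.
        xs = cumP / P
        ys = R / P
        slope, intercept = np.polyfit(xs, ys, 1)
        print("estimated uptake rate:", slope, " (true:", K_true, ")")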

  15. Seismic Input Motion Determined from a Surface-Downhole Pair of Sensors: A Constrained Deconvolution Approach

    OpenAIRE

    Dino Bindi; Stefano Parolai; M. Picozzi; A. Ansal

    2010-01-01

    We apply a deconvolution approach to the problem of determining the input motion at the base of an instrumented borehole using only a pair of recordings, one at the borehole surface and the other at its bottom. To stabilize the bottom-to-surface spectral ratio, we apply an iterative regularization algorithm that allows us to constrain the solution to be positively defined and to have a finite time duration. Through the analysis of synthetic data, we show that the method is capab...

  16. Methods for deconvoluting and interpreting complex gamma- and x-ray spectral regions

    International Nuclear Information System (INIS)

    Gunnink, R.

    1983-06-01

    Germanium and silicon detectors are now widely used for the detection and measurement of x and gamma radiation. However, some analysis situations and spectral regions have heretofore been too complex to deconvolute and interpret by techniques in general use. One example is the L x-ray spectrum of an element taken with a Ge or Si detector. This paper describes some new tools and methods that were developed to analyze complex spectral regions; they are illustrated with examples

  17. A Convolution Tree with Deconvolution Branches: Exploiting Geometric Relationships for Single Shot Keypoint Detection

    OpenAIRE

    Kumar, Amit; Chellappa, Rama

    2017-01-01

    Recently, Deep Convolution Networks (DCNNs) have been applied to the task of face alignment and have shown potential for learning improved feature representations. Although deeper layers can capture abstract concepts like pose, it is difficult to capture the geometric relationships among the keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution network for facial keypoint detection. Our model predicts the 2D locations of the keypoints and their individual visibility ...

  18. A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA

    OpenAIRE

    Zhang, Xinyu; Das, Srinjoy; Neopane, Ojash; Kreutz-Delgado, Ken

    2017-01-01

    In recent years deep learning algorithms have shown extremely high performance on machine learning tasks such as image classification and speech recognition. In support of such applications, various FPGA accelerator architectures have been proposed for convolutional neural networks (CNNs) that enable high performance for classification tasks at lower power than CPU and GPU processors. However, to date, there has been little research on the use of FPGA implementations of deconvolutional neural...

  19. Closed-loop waveform control of boost inverter

    DEFF Research Database (Denmark)

    Zhu, Guo Rong; Xiao, Cheng Yuan; Wang, Haoran

    2016-01-01

    The input current of a single-phase inverter typically has an AC ripple component at twice the output frequency, which causes a reduction in both the operating lifetime of its DC source and the efficiency of the system. In this paper, the closed-loop performance of a proposed waveform control method to eliminate such a ripple current in the boost inverter is investigated. The small-signal stability and the dynamic characteristic of the inverter system for input voltage or wide-range load variations under the closed-loop waveform control method are studied. It is validated that with the closed-loop waveform control, not only is stability achieved, but the reference voltage of the boost inverter capacitors can be instantaneously adjusted to match the new load, thereby achieving improved ripple mitigation for a wide load range. Furthermore, with the control and feedback mechanism, there is minimal level of ripple...

  20. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. The resulting waveform was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum, which implied that the waveform tried to excite all frequencies, including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate...

  1. Stimulator with arbitrary waveform for auditory evoked potentials

    International Nuclear Information System (INIS)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J

    2007-01-01

    Technological improvements help many medical areas. Audiometric exams involving auditory evoked potentials can improve the diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a Digital Signal Processor. The stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated signals, pulses, bursts and pips. The waveforms are generated through a graphical interface programmed in C++ in which the user can define the parameters of the waveform. Furthermore, the user can set exam parameters such as the number of stimuli, the time with stimulation (Time ON) and the time without stimulus (Time OFF). In future work, further parts of the system will be implemented, including the acquisition of the electroencephalogram and the signal processing needed to estimate and analyze the evoked potential.

  2. Stimulator with arbitrary waveform for auditory evoked potentials

    Energy Technology Data Exchange (ETDEWEB)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J [Universidade Federal de Minas Gerais (UFMG), Departamento de Engenharia Eletrica (DEE), Nucleo de Estudos e Pesquisa em Engenharia Biomedica NEPEB, Av. Ant. Carlos, 6627, sala 2206, Pampulha, Belo Horizonte, MG, 31.270-901 (Brazil)

    2007-11-15

    Technological improvements help many medical areas. Audiometric exams involving auditory evoked potentials can improve the diagnosis of auditory disorders. This paper proposes the development of a stimulator based on a Digital Signal Processor. The stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated signals, pulses, bursts and pips. The waveforms are generated through a graphical interface programmed in C++ in which the user can define the parameters of the waveform. Furthermore, the user can set exam parameters such as the number of stimuli, the time with stimulation (Time ON) and the time without stimulus (Time OFF). In future work, further parts of the system will be implemented, including the acquisition of the electroencephalogram and the signal processing needed to estimate and analyze the evoked potential.
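
    As a small illustration of the stimulus classes listed in the two records above (sine waves, bursts, pips), the following numpy sketch generates a gated tone burst and a Gaussian-enveloped pip; the sampling rate, carrier frequency and envelope parameters are arbitrary and unrelated to the ADSP-BF533 implementation.

        import numpy as np

        fs = 48000                                   # sampling rate in Hz (assumption)
        f0 = 1000.0                                  # carrier frequency in Hz (assumption)

        def tone_burst(duration_ms, ramp_ms=2.0):
            """Sine burst with linear on/off ramps to limit spectral splatter."""
            n = int(fs * duration_ms / 1000.0)
            tt = np.arange(n) / fs
            env = np.ones(n)
            r = int(fs * ramp_ms / 1000.0)
            env[:r] = np.linspace(0.0, 1.0, r)
            env[-r:] = np.linspace(1.0, 0.0, r)
            return env * np.sin(2.0 * np.pi * f0 * tt)

        def tone_pip(n_cycles=5):
            """Short pip: a few carrier cycles under a Gaussian envelope."""
            n = int(fs * n_cycles / f0)
            tt = np.arange(n) / fs
            width = (tt[-1] - tt[0]) / 6.0           # envelope width chosen for illustration
            env = np.exp(-0.5 * ((tt - tt.mean()) / width) ** 2)
            return env * np.sin(2.0 * np.pi * f0 * tt)

        burst, pip = tone_burst(50.0), tone_pip()
        print(len(burst), len(pip))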

  3. Generating Correlated QPSK Waveforms By Exploiting Real Gaussian Random Variables

    KAUST Repository

    Jardak, Seifallah

    2012-11-01

    The design of waveforms with specified auto- and cross-correlation properties has a number of applications in multiple-input multiple-output (MIMO) radar, one of them is the desired transmit beampattern design. In this work, an algorithm is proposed to generate quadrature phase shift-keying (QPSK) waveforms with required cross-correlation properties using real Gaussian random variables (RVs). This work can be considered as the extension of what was presented in [1] to generate BPSK waveforms. This work will be extended for the generation of correlated higher-order phase shift-keying (PSK) and quadrature amplitude modulation (QAM) schemes that can better approximate the desired beampattern.

  4. Analysis of Gradient Waveform in Magnetic Resonance Imaging

    Directory of Open Access Journals (Sweden)

    OU-YANG Shan-mei

    2017-12-01

    The accuracy of the gradient pulse waveform significantly affects image quality in magnetic resonance imaging (MRI). Recording and analyzing the waveform of the gradient pulse helps to make rapid and accurate diagnoses of the spectrometer gradient hardware and/or the pulse sequence. Using the virtual instrument software LabVIEW to control the high-speed data acquisition card DAQ-2005, a multi-channel acquisition scheme was designed to collect the gradient outputs from a custom-made spectrometer. The collected waveforms were post-processed (histogram statistical analysis, data filtering and difference calculation) to obtain feature points containing time and amplitude information. Experiments were carried out to validate the method, which is an auxiliary test method for the development of spectrometers and pulse sequences.

  5. A complete waveform model for compact binaries on eccentric orbits

    Science.gov (United States)

    George, Daniel; Huerta, Eliu; Kumar, Prayush; Agarwal, Bhanu; Schive, Hsi-Yu; Pfeiffer, Harald; Chu, Tony; Boyle, Michael; Hemberger, Daniel; Kidder, Lawrence; Scheel, Mark; Szilagyi, Bela

    2017-01-01

    We present a time domain waveform model that describes the inspiral, merger and ringdown of compact binary systems whose components are non-spinning, and which evolve on orbits with low to moderate eccentricity. We show that this inspiral-merger-ringdown waveform model reproduces the effective-one-body model for black hole binaries with mass ratios between 1 and 15 in the zero-eccentricity limit over a wide range of the parameter space under consideration. We use this model to show that the gravitational wave transients GW150914 and GW151226 can be effectively recovered with template banks of quasicircular, spin-aligned waveforms if the eccentricity e0 of these systems when they enter the aLIGO band at a gravitational wave frequency of 14 Hz satisfies e0 <= 0.15 for GW150914 and e0 <= 0.1 for GW151226.

  6. Classification of Pulse Waveforms Using Edit Distance with Real Penalty

    Directory of Open Access Journals (Sweden)

    Zhang Dongyu

    2010-01-01

    Advances in sensor and signal processing techniques have provided effective tools for quantitative research in traditional Chinese pulse diagnosis (TCPD). Because of the inevitable intraclass variation of pulse patterns, the automatic classification of pulse waveforms has remained a difficult problem. In this paper, by referring to the edit distance with real penalty (ERP) and the recent progress in k-nearest neighbors (KNN) classifiers, we propose two novel ERP-based KNN classifiers. Taking advantage of the metric property of ERP, we first develop an ERP-induced inner product and a Gaussian ERP kernel, then embed them into difference-weighted KNN classifiers, and finally develop two novel classifiers for pulse waveform classification. The experimental results show that the proposed classifiers are effective for accurate classification of pulse waveforms.
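
    The edit distance with real penalty (ERP) named above follows a standard dynamic-programming recursion with a fixed gap value g; the sketch below implements that recursion and a plain nearest-neighbour vote on toy pulse-like waveforms. It does not reproduce the authors' ERP-induced inner product or Gaussian ERP kernel.

        import numpy as np

        def erp(x, y, g=0.0):
            """Edit distance with real penalty between 1D sequences x and y (gap value g)."""
            n, m = len(x), len(y)
            D = np.zeros((n + 1, m + 1))
            D[1:, 0] = np.cumsum(np.abs(x - g))      # cost of deleting all of x
            D[0, 1:] = np.cumsum(np.abs(y - g))      # cost of deleting all of y
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    D[i, j] = min(D[i - 1, j] + abs(x[i - 1] - g),             # gap in y
                                  D[i, j - 1] + abs(y[j - 1] - g),             # gap in x
                                  D[i - 1, j - 1] + abs(x[i - 1] - y[j - 1]))  # match
            return D[n, m]

        def knn_predict(query, train_seqs, train_labels, k=1):
            """k-nearest-neighbour vote using ERP as the dissimilarity."""
            d = np.array([erp(query, s) for s in train_seqs])
            nearest = np.argsort(d)[:k]
            votes = np.bincount(np.asarray(train_labels)[nearest])
            return int(np.argmax(votes))

        # Toy pulse-like waveforms (hypothetical): class 0 = single bump, class 1 = double bump.
        t = np.linspace(0, 1, 50)
        bump = lambda c: np.exp(-((t - c) / 0.05) ** 2)
        train = [bump(0.3), bump(0.35), bump(0.3) + bump(0.7), bump(0.32) + bump(0.68)]
        labels = [0, 0, 1, 1]
        print(knn_predict(bump(0.31) + bump(0.69), train, labels, k=1))  # expect class 1

    Because ERP is a metric, distances computed this way can also feed kernel constructions, which is the direction the paper takes.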

  7. Generating Correlated QPSK Waveforms By Exploiting Real Gaussian Random Variables

    KAUST Repository

    Jardak, Seifallah; Ahmed, Sajid; Alouini, Mohamed-Slim

    2012-01-01

    The design of waveforms with specified auto- and cross-correlation properties has a number of applications in multiple-input multiple-output (MIMO) radar, one of them is the desired transmit beampattern design. In this work, an algorithm is proposed to generate quadrature phase shift-keying (QPSK) waveforms with required cross-correlation properties using real Gaussian random variables (RVs). This work can be considered as the extension of what was presented in [1] to generate BPSK waveforms. This work will be extended for the generation of correlated higher-order phase shift-keying (PSK) and quadrature amplitude modulation (QAM) schemes that can better approximate the desired beampattern.

  8. Shaping the spectrum of random-phase radar waveforms

    Science.gov (United States)

    Doerry, Armin W.; Marquette, Brandeis

    2017-05-09

    The various technologies presented herein relate to the generation of a desired waveform profile in the form of a spectrum of apparently random noise (e.g., white noise or colored noise), but with precise spectral characteristics. Hence, a waveform profile that could otherwise be readily determined (e.g., by a spoofing system) is effectively obscured. Obscuration is achieved by dividing the waveform into a series of chips, each with an assigned frequency, and the sequence of chips is subsequently randomized. Randomization can be a function of the application of a key to the chip sequence. During processing of the echo pulse, a copy of the randomized transmitted pulse is recovered or regenerated, against which the received echo is correlated. Hence, with the echo energy range-compressed in this manner, it is possible to generate a radar image with a precise impulse response.
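
    The chip-randomization principle described above can be mimicked in a few lines: frequency chips are shuffled with a key-seeded generator, and the receiver regenerates the same sequence from the key and correlates it against the echo for range compression. The sketch is only an illustration of that principle, not the patented waveform design; all rates, chip counts and the key are made up.

        import numpy as np

        fs, chip_len, n_chips = 1.0e6, 64, 32        # sample rate, samples per chip, chip count (assumed)
        key = 2024                                   # shared key seeding the chip permutation

        def keyed_waveform(key):
            """Assign one frequency per chip, then shuffle chip order with a keyed RNG."""
            freqs = np.linspace(1.0e4, 2.0e5, n_chips)          # chip frequencies in Hz
            rng = np.random.default_rng(key)
            order = rng.permutation(n_chips)
            t = np.arange(chip_len) / fs
            chips = [np.sin(2.0 * np.pi * f * t) for f in freqs[order]]
            return np.concatenate(chips)

        tx = keyed_waveform(key)

        # Simulated echo: a delayed, attenuated copy of the transmitted pulse in noise.
        delay = 500
        echo = np.zeros(4096)
        echo[delay:delay + tx.size] += 0.2 * tx
        echo += np.random.default_rng(1).normal(0.0, 0.05, echo.size)

        # Receiver regenerates the reference from the key and range-compresses by correlation.
        ref = keyed_waveform(key)
        compressed = np.correlate(echo, ref, mode="valid")
        print("estimated delay:", int(np.argmax(np.abs(compressed))))   # ~500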

  9. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, with the curve fitting the experimental data being the mathematical function formed by the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code also includes the capability of fitting a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectrum, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked by applying it to the deconvolution and the calculation of the alpha-particle emission probabilities of 239Pu, 241Am and 235U. (author)
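
    The fitting function mentioned above is the convolution of a Gaussian with left-handed (low-energy) exponential tails. The sketch below builds that line shape by brute-force numerical convolution with a single tail, purely to show the resulting asymmetric alpha peak; the energy grid, width and tailing parameter are assumptions, and ALFITeX itself fits the full two-tail form with a Levenberg-Marquardt routine.

        import numpy as np

        # Energy grid (keV) around a hypothetical alpha line.
        E = np.linspace(5400.0, 5600.0, 2001)
        dE = E[1] - E[0]

        mu, sigma = 5500.0, 4.0                      # peak position and Gaussian width (assumed)
        tau = 15.0                                   # low-energy tailing parameter (assumed)

        gauss = np.exp(-0.5 * ((E - mu) / sigma) ** 2)
        gauss /= gauss.sum() * dE

        # Left-handed exponential: non-zero only on the low-energy side, decay constant tau.
        u = E - E.mean()
        tail = np.where(u <= 0.0, np.exp(u / tau), 0.0)
        tail /= tail.sum() * dE

        # Discrete convolution gives the asymmetric alpha-peak shape used for fitting.
        peak = np.convolve(gauss, tail, mode="same") * dE
        print("mode shifted below mu by ~", mu - E[np.argmax(peak)], "keV")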

  10. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra

    2014-10-02

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  11. Chemometric deconvolution of gas chromatographic unresolved conjugated linoleic acid isomers triplet in milk samples.

    Science.gov (United States)

    Blasko, Jaroslav; Kubinec, Róbert; Ostrovský, Ivan; Pavlíková, Eva; Krupcík, Ján; Soják, Ladislav

    2009-04-03

    The generally known problem of the GC separation of the trans-7,cis-9; cis-9,trans-11; and trans-8,cis-10 CLA (conjugated linoleic acid) isomers was studied by GC-MS on a 100 m capillary column coated with a cyanopropyl silicone phase at isothermal column temperatures in the range of 140-170 degrees C. The resolution of these CLA isomers obtained under the given conditions was not high enough for direct quantitative analysis, but it was sufficient for the determination of their peak areas by commercial deconvolution software. Resolution factors of the overlapped CLA isomers, determined by the separation of a model CLA mixture prepared by mixing a commercial CLA mixture and a CLA isomer fraction obtained by HPLC semi-preparative separation of milk fatty acid methyl esters, were used to validate the deconvolution procedure. The developed deconvolution procedure allowed the determination of the content of the studied CLA isomers in ewes' and cows' milk samples, where the dominant isomer cis-9,trans-11 elutes between the two small isomers trans-7,cis-9 and trans-8,cis-10 (in ratios of up to 1:100).

  12. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Science.gov (United States)

    Faber, T. L.; Raghunath, N.; Tudorascu, D.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood estimation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.

  13. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations

    Energy Technology Data Exchange (ETDEWEB)

    Faber, T L; Raghunath, N; Tudorascu, D; Votaw, J R [Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322 (United States)], E-mail: tfaber@emory.edu

    2009-02-07

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. Existing correction methods that use known patient motion obtained from tracking devices either require multi-frame acquisitions, detailed knowledge of the scanner, or specialized reconstruction algorithms. A deconvolution algorithm has been developed that alleviates these drawbacks by using the reconstructed image to estimate the original non-blurred image using maximum likelihood estimation maximization (MLEM) techniques. A high-resolution digital phantom was created by shape-based interpolation of the digital Hoffman brain phantom. Three different sets of 20 movements were applied to the phantom. For each frame of the motion, sinograms with attenuation and three levels of noise were simulated and then reconstructed using filtered backprojection. The average of the 20 frames was considered the motion blurred image, which was restored with the deconvolution algorithm. After correction, contrast increased from a mean of 2.0, 1.8 and 1.4 in the motion blurred images, for the three increasing amounts of movement, to a mean of 2.5, 2.4 and 2.2. Mean error was reduced by an average of 55% with motion correction. In conclusion, deconvolution can be used for correction of motion blur when subject motion is known.
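
    When the motion (and hence the blur kernel) is known, the MLEM deconvolution described in the two records above takes the familiar Richardson-Lucy form: the estimate is repeatedly multiplied by the back-projected ratio of observed to re-blurred data. The sketch below applies that iteration to a 1D profile with a known three-tap motion kernel; it is a schematic of the idea, not the authors' PET implementation.

        import numpy as np

        def blur(x, kernel):
            return np.convolve(x, kernel, mode="same")

        # Hypothetical activity profile and known motion kernel (three dwell positions).
        truth = np.zeros(128)
        truth[40:60] = 1.0
        truth[80:85] = 2.0
        kernel = np.array([0.25, 0.5, 0.25])          # normalized motion blur (assumed known)

        rng = np.random.default_rng(0)
        observed = rng.poisson(200.0 * blur(truth, kernel)) / 200.0   # noisy blurred profile

        # Richardson-Lucy / MLEM iteration: estimate *= K^T (observed / K estimate).
        estimate = np.full_like(observed, observed.mean())
        eps = 1e-12
        for _ in range(50):
            ratio = observed / (blur(estimate, kernel) + eps)
            estimate *= blur(ratio, kernel[::-1])     # flipped kernel plays the role of K^T

        print("max of blurred:", observed.max(), "max after RL:", estimate.max())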

  14. Isotope pattern deconvolution as a tool to study iron metabolism in plants.

    Science.gov (United States)

    Rodríguez-Castrillón, José Angel; Moldovan, Mariella; García Alonso, J Ignacio; Lucena, Juan José; García-Tomé, Maria Luisa; Hernández-Apaolaza, Lourdes

    2008-01-01

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample.

  15. Isotope pattern deconvolution as a tool to study iron metabolism in plants

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez-Castrillon, Jose A.; Moldovan, Mariella; Garcia Alonso, J.I. [University of Oviedo, Department of Physical and Analytical Chemistry, Oviedo (Spain); Lucena, Juan J.; Garcia-Tome, Maria L.; Hernandez-Apaolaza, Lourdes [Autonoma University of Madrid, Department of Agricultural Chemistry, Madrid (Spain)

    2008-01-15

    Isotope pattern deconvolution is a mathematical technique for isolating distinct isotope signatures from mixtures of natural abundance and enriched tracers. In iron metabolism studies measurement of all four isotopes of the element by high-resolution multicollector or collision cell ICP-MS allows the determination of the tracer/tracee ratio with simultaneous internal mass bias correction and lower uncertainties. This technique was applied here for the first time to study iron uptake by cucumber plants using 57Fe-enriched iron chelates of the o,o and o,p isomers of ethylenediaminedi(o-hydroxyphenylacetic) acid (EDDHA) and ethylenediamine tetraacetic acid (EDTA). Samples of root, stem, leaves, and xylem sap, after exposure of the cucumber plants to the mentioned 57Fe chelates, were collected, dried, and digested using nitric acid. The isotopic composition of iron in the samples was measured by ICP-MS using a high-resolution multicollector instrument. Mass bias correction was computed using both a natural abundance iron standard and by internal correction using isotope pattern deconvolution. It was observed that, for plants with low 57Fe enrichment, isotope pattern deconvolution provided lower tracer/tracee ratio uncertainties than the traditional method applying external mass bias correction. The total amount of the element in the plants was determined by isotope dilution analysis, using a collision cell quadrupole ICP-MS instrument, after addition of 57Fe or natural abundance Fe in a known amount which depended on the isotopic composition of the sample. (orig.)
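
    Stripped of the mass-bias correction and uncertainty treatment discussed in the two records above, isotope pattern deconvolution amounts to a small linear least-squares problem: the measured abundance vector is modelled as a mixture of the natural pattern and the enriched-tracer pattern. The sketch below shows that step with nominal natural Fe abundances and a hypothetical 57Fe spike composition.

        import numpy as np

        # Approximate natural iron isotope abundances (54Fe, 56Fe, 57Fe, 58Fe), as fractions.
        nat = np.array([0.0585, 0.9175, 0.0212, 0.0028])
        # Hypothetical composition of the 57Fe-enriched spike (certificate values would be used).
        spike = np.array([0.001, 0.030, 0.965, 0.004])

        # Hypothetical measured pattern in a plant digest: mostly natural Fe plus some tracer.
        true_mix = 0.9 * nat + 0.1 * spike
        measured = true_mix / true_mix.sum()

        # Solve measured ~= x_nat * nat + x_spike * spike in the least-squares sense.
        A = np.column_stack([nat, spike])
        coeffs, residuals, rank, sv = np.linalg.lstsq(A, measured, rcond=None)
        x_nat, x_spike = coeffs
        print("tracer/tracee ratio:", x_spike / x_nat)     # ~0.11 for this synthetic case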

  16. Direct imaging of phase objects enables conventional deconvolution in bright field light microscopy.

    Directory of Open Access Journals (Sweden)

    Carmen Noemí Hernández Candia

    In transmitted optical microscopy, the absorption structure and phase structure of the specimen determine the three-dimensional intensity distribution of the image. The elementary impulse responses of the bright field microscope therefore consist of separate absorptive and phase components, precluding general application of linear, conventional deconvolution processing methods to improve image contrast and resolution. However, conventional deconvolution can be applied in the case of pure phase (or pure absorptive) objects if the corresponding phase (or absorptive) impulse responses of the microscope are known. In this work, we present direct measurements of the phase point- and line-spread functions of a high-aperture microscope operating in transmitted bright field. Polystyrene nanoparticles and microtubules (biological polymer filaments) serve as the pure phase point and line objects, respectively, that are imaged with high contrast and low noise using standard microscopy plus digital image processing. Our experimental results agree with a proposed model for the response functions, and confirm previous theoretical predictions. Finally, we use the measured phase point-spread function to apply conventional deconvolution to the bright field images of living, unstained bacteria, resulting in improved definition of cell boundaries and sub-cellular features. These developments demonstrate the practical application of standard restoration methods to improve the imaging of phase objects such as cells in transmitted light microscopy.

  17. MetaUniDec: High-Throughput Deconvolution of Native Mass Spectra

    Science.gov (United States)

    Reid, Deseree J.; Diesing, Jessica M.; Miller, Matthew A.; Perry, Scott M.; Wales, Jessica A.; Montfort, William R.; Marty, Michael T.

    2018-04-01

    The expansion of native mass spectrometry (MS) methods for both academic and industrial applications has created a substantial need for the analysis of large native MS datasets. Existing software tools are poorly suited for high-throughput deconvolution of native electrospray mass spectra from intact proteins and protein complexes. The UniDec Bayesian deconvolution algorithm is uniquely well suited for high-throughput analysis due to its speed and robustness but was previously tailored towards individual spectra. Here, we optimized UniDec for deconvolution, analysis, and visualization of large data sets. This new module, MetaUniDec, centers around the hierarchical data format 5 (HDF5) for storing datasets, which significantly improves speed, portability, and file size. It also includes code optimizations to improve speed and a new graphical user interface for visualization, interaction, and analysis of data. To demonstrate the utility of MetaUniDec, we applied the software to analyze automated collision voltage ramps with a small bacterial heme protein and large lipoprotein nanodiscs. Upon increasing collisional activation, bacterial heme-nitric oxide/oxygen binding (H-NOX) protein shows a discrete loss of bound heme, and nanodiscs show a continuous loss of lipids and charge. By using MetaUniDec to track changes in peak area or mass as a function of collision voltage, we explore the energetic profile of collisional activation in an ultra-high mass range Orbitrap mass spectrometer.
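
    The HDF5-centred storage that MetaUniDec is said to be built around can be imitated with h5py: one file holding many spectra as separate groups with attached metadata such as the collision voltage. The layout below is a generic illustration, not MetaUniDec's actual schema, and the spectra are random placeholders.

        import numpy as np
        import h5py

        rng = np.random.default_rng(0)

        # Write a batch of synthetic mass spectra into one HDF5 file.
        with h5py.File("collision_ramp.hdf5", "w") as f:
            for i, voltage in enumerate(range(0, 101, 10)):
                mz = np.linspace(2000.0, 12000.0, 5000)
                intensity = rng.random(mz.size)                  # placeholder spectrum
                grp = f.create_group(f"spectrum_{i:03d}")
                grp.create_dataset("mz", data=mz, compression="gzip")
                grp.create_dataset("intensity", data=intensity, compression="gzip")
                grp.attrs["collision_voltage"] = voltage         # metadata travels with the data

        # Read back and track, e.g., total intensity versus collision voltage.
        with h5py.File("collision_ramp.hdf5", "r") as f:
            for name in sorted(f):
                g = f[name]
                print(name, g.attrs["collision_voltage"], float(g["intensity"][...].sum()))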

  18. Deconvolution of the density of states of tip and sample through constant-current tunneling spectroscopy

    Directory of Open Access Journals (Sweden)

    Holger Pfeifer

    2011-09-01

    We introduce a scheme to obtain the deconvolved density of states (DOS) of the tip and sample from scanning tunneling spectra determined in the constant-current mode (z–V spectroscopy). The scheme is based on the validity of the Wentzel–Kramers–Brillouin (WKB) approximation and the trapezoidal approximation of the electron potential within the tunneling barrier. In a numerical treatment of z–V spectroscopy, we first analyze how the position and amplitude of characteristic DOS features change depending on parameters such as the energy position, width, barrier height, and the tip–sample separation. Then it is shown that the deconvolution scheme is capable of recovering the original DOS of tip and sample with an accuracy of better than 97% within the one-dimensional WKB approximation. Application of the deconvolution scheme to experimental data obtained on Nb(110) reveals a convergent behavior, providing separately the DOS of both sample and tip. In detail, however, there are systematic quantitative deviations between the DOS results based on z–V data and those based on I–V data. This points to an inconsistency between the assumed and the actual transmission probability function. Indeed, the experimentally determined differential barrier height still clearly deviates from that derived from the deconvolved DOS. Thus, the present progress in developing a reliable deconvolution scheme shifts the focus towards how to access the actual transmission probability function.

  19. Bayesian Semiparametric Density Deconvolution in the Presence of Conditionally Heteroscedastic Measurement Errors

    KAUST Repository

    Sarkar, Abhra; Mallick, Bani K.; Staudenmayer, John; Pati, Debdeep; Carroll, Raymond J.

    2014-01-01

    We consider the problem of estimating the density of a random variable when precise measurements on the variable are not available, but replicated proxies contaminated with measurement error are available for sufficiently many subjects. Under the assumption of additive measurement errors this reduces to a problem of deconvolution of densities. Deconvolution methods often make restrictive and unrealistic assumptions about the density of interest and the distribution of measurement errors, e.g., normality and homoscedasticity and thus independence from the variable of interest. This article relaxes these assumptions and introduces novel Bayesian semiparametric methodology based on Dirichlet process mixture models for robust deconvolution of densities in the presence of conditionally heteroscedastic measurement errors. In particular, the models can adapt to asymmetry, heavy tails and multimodality. In simulation experiments, we show that our methods vastly outperform a recent Bayesian approach based on estimating the densities via mixtures of splines. We apply our methods to data from nutritional epidemiology. Even in the special case when the measurement errors are homoscedastic, our methodology is novel and dominates other methods that have been proposed previously. Additional simulation results, instructions on getting access to the data set and R programs implementing our methods are included as part of online supplemental materials.

  20. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Fluorine (19F) NMR has emerged as a useful tool for the characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses the Bayesian information criterion (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes model complexity, helping to prevent over-fitting of the data and allowing identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
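
    The decon1d scoring itself lives in the linked repository; the generic recipe it describes — fit models with an increasing number of peaks and keep the one with the lowest Bayesian information criterion — can be sketched as below, using Gaussian peaks, least-squares fits and BIC = N ln(RSS/N) + k ln N for Gaussian residuals. The synthetic spectrum and initial guesses are assumptions, not the package's defaults.

        import numpy as np
        from scipy.optimize import curve_fit

        def gaussians(x, *params):
            """Sum of Gaussian peaks; params = (amp, center, width) repeated."""
            y = np.zeros_like(x)
            for a, c, w in zip(params[0::3], params[1::3], params[2::3]):
                y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
            return y

        # Synthetic 1D spectrum with two overlapping peaks plus noise.
        x = np.linspace(-60.0, -56.0, 400)           # hypothetical 19F chemical-shift axis (ppm)
        rng = np.random.default_rng(0)
        y = gaussians(x, 1.0, -58.6, 0.15, 0.6, -58.0, 0.25) + rng.normal(0.0, 0.03, x.size)

        best = None
        for n_peaks in (1, 2, 3):
            # Spread initial centers across the axis; crude but sufficient for the toy data.
            centers = np.linspace(x.min(), x.max(), n_peaks + 2)[1:-1]
            p0 = []
            for c in centers:
                p0 += [0.5, c, 0.2]
            popt, _ = curve_fit(gaussians, x, y, p0=p0, maxfev=20000)
            rss = np.sum((y - gaussians(x, *popt)) ** 2)
            k = len(popt)
            bic = x.size * np.log(rss / x.size) + k * np.log(x.size)
            print(n_peaks, "peaks -> BIC", round(bic, 1))
            if best is None or bic < best[0]:
                best = (bic, n_peaks, popt)

        print("model selected by BIC:", best[1], "peaks")   # expect 2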

  1. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve large-scale ill-posed inverse problems at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
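
    The paper solves the l1-regularized sparse deconvolution with a primal-dual interior point method; as a much lighter stand-in that minimizes the same kind of objective, ||Af - y||^2 + lambda*||f||_1, the sketch below recovers a sparse spike train from its convolution with a known impulse response using plain iterative soft-thresholding (ISTA). The impulse response, spike pattern and lambda are made up for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        # Known impulse response (hypothetical structural response to a unit impact).
        t = np.arange(80)
        h = np.exp(-t / 15.0) * np.sin(2.0 * np.pi * t / 12.0)

        # Sparse "impact force": a few isolated spikes.
        n = 300
        f_true = np.zeros(n)
        f_true[[50, 120, 121, 210]] = [1.0, 0.6, 0.4, 0.8]

        # Build the convolution operator A and a noisy measurement y = A f + noise.
        A = np.zeros((n + len(h) - 1, n))
        for j in range(n):
            A[j:j + len(h), j] = h
        y = A @ f_true + rng.normal(0.0, 0.01, A.shape[0])

        # ISTA: f <- soft(f - (1/L) A^T (A f - y), lam / L), with L the Lipschitz constant of the gradient.
        L = np.linalg.norm(A, 2) ** 2
        lam = 0.05
        f = np.zeros(n)
        for _ in range(500):
            grad = A.T @ (A @ f - y)
            z = f - grad / L
            f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

        print("recovered spike locations:", np.nonzero(f > 0.1)[0])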

  2. Improving waveform inversion using modified interferometric imaging condition

    Science.gov (United States)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2018-02-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and the time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary-region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we follow a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced the direct correlation used for computing the waveform inversion gradient by modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition, a weighted average of extended imaging gathers, can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  3. On the potential of OFDM enhancements as 5G waveforms

    DEFF Research Database (Denmark)

    Berardinelli, Gilberto; Pajukoski, Kari; Lähetkangas, Eeva

    2014-01-01

    The ideal radio waveform for an upcoming 5th Generation (5G) radio access technology should cope with a set of requirements such as limited complexity, good time/frequency localization and simple extension to multi-antenna technologies. This paper discusses the suitability of Orthogonal Frequency Division Multiplexing (OFDM) and its recently proposed enhancements as 5G waveforms, mainly focusing on their capability to cope with our requirements. Significant focus is given to the novel zero-tail paradigm, which allows boosting the OFDM flexibility while circumventing demerits such as poor spectral...

  4. Waveform Diversity and Design for Interoperating Radar Systems

    Science.gov (United States)

    2013-01-01

    Dipartimento di Ingegneria dell'Informazione (Elettronica, Informatica, Telecomunicazioni), University of Pisa, Via Girolamo Caruso 16, 56122 Pisa, Italy. Report title: Waveform Diversity and Design for Interoperating Radar Systems.

  5. Seismic Broadband Full Waveform Inversion by shot/receiver refocusing

    NARCIS (Netherlands)

    Haffinger, P.R.

    2013-01-01

    Full waveform inversion is a tool to obtain high-resolution property models of the subsurface from seismic data. However, the technique is computationally expensive and so far no multi-dimensional implementation exists to achieve a resolution that can directly be used for seismic interpretation.

  6. Augmented kludge waveforms for detecting extreme-mass-ratio inspirals

    Science.gov (United States)

    Chua, Alvin J. K.; Moore, Christopher J.; Gair, Jonathan R.

    2017-08-01

    The extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes are an important class of source for the future space-based gravitational-wave detector LISA. Detecting signals from EMRIs will require waveform models that are both accurate and computationally efficient. In this paper, we present the latest implementation of an augmented analytic kludge (AAK) model, publicly available at https://github.com/alvincjk/EMRI_Kludge_Suite as part of an EMRI waveform software suite. This version of the AAK model has improved accuracy compared to its predecessors, with two-month waveform overlaps against a more accurate fiducial model exceeding 0.97 for a generic range of sources; it also generates waveforms 5-15 times faster than the fiducial model. The AAK model is well suited for scoping out data analysis issues in the upcoming round of mock LISA data challenges. A simple analytic argument shows that it might even be viable for detecting EMRIs with LISA through a semicoherent template bank method, while the use of the original analytic kludge in the same approach will result in around 90% fewer detections.

  7. Centered Differential Waveform Inversion with Minimum Support Regularization

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Time-lapse full-waveform inversion has two major challenges. The first one is the reconstruction of a reference model (the baseline model for most approaches). The second is inversion for the time-lapse changes in the parameters. The common-model approach utilizes the information contained in all available data sets to build a better reference model for time-lapse inversion. Differential (double-difference) waveform inversion allows us to reduce the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI), which combines these two approaches in order to benefit from both of their features. We apply minimum support regularization, commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on a synthetic dataset with random noise and show that, with minimum support regularization, it provides better resolution of velocity changes than time-lapse full-waveform inversion with total variation or Tikhonov regularization.

  8. Josephson Arbitrary Waveform Synthesis With Multilevel Pulse Biasing

    Science.gov (United States)

    Brevik, Justus A.; Flowers-Jacobs, Nathan E.; Fox, Anna E.; Golden, Evan B.; Dresselhaus, Paul D.; Benz, Samuel P.

    2017-01-01

    We describe the implementation of new commercial pulse-bias electronics that have enabled an improvement in the generation of quantum-accurate waveforms both with and without low-frequency compensation biases. We have used these electronics to apply a multilevel pulse bias to the Josephson arbitrary waveform synthesizer and have generated, for the first time, a quantum-accurate bipolar sinusoidal waveform without the use of a low-frequency compensation bias current. This uncompensated 1 kHz waveform was synthesized with an rms amplitude of 325 mV and maintained its quantum accuracy over a 1.5 mA operating current range. The same technique and equipment were also used to synthesize a quantum-accurate 1 MHz sinusoid with a 1.2 mA operating margin. In addition, we have synthesized a compensated 1 kHz sinusoid with an rms amplitude of 1 V and a 2.7 mA operating margin. (PMID: 28736494)

  9. Synchronous Generator Model Parameter Estimation Based on Noisy Dynamic Waveforms

    Science.gov (United States)

    Berhausen, Sebastian; Paszek, Stefan

    2016-01-01

    In recent years, system failures have occurred in many power systems all over the world. They have resulted in a lack of power supply to a large number of recipients. To minimize the risk of power failures, it is necessary to perform multivariate investigations, including simulations, of power system operating conditions. To conduct reliable simulations, a current base of parameters of the models of generating units, containing the models of synchronous generators, is necessary. In the paper, a method is presented for parameter estimation of a nonlinear synchronous generator model based on the analysis of selected transient waveforms caused by introducing a disturbance (in the form of a pseudorandom signal) in the generator voltage regulation channel. The parameter estimation was performed by minimizing the objective function defined as the mean square error of the deviations between the measured waveforms and the waveforms calculated from the generator mathematical model. A hybrid algorithm was used for the minimization of the objective function. The paper also describes the filter system used for filtering the noisy measurement waveforms. The calculation results for the model of a 44 kW synchronous generator installed on a laboratory stand of the Institute of Electrical Engineering and Computer Science of the Silesian University of Technology are also given. The presented estimation method can be successfully applied to parameter estimation of different models of high-power synchronous generators operating in a power system.

  10. Multisource waveform inversion of marine streamer data using normalized wavefield

    KAUST Repository

    Choi, Yun Seok

    2013-09-01

    Multisource full-waveform inversion based on the L1- and L2-norm objective functions cannot be applied to marine streamer data because it does not take into account the unmatched acquisition geometries between the observed and modeled data. To apply multisource full-waveform inversion to marine streamer data, we construct the L1- and L2-norm objective functions using the normalized wavefield. The new residual seismograms obtained from the L1- and L2-norms using the normalized wavefield mitigate the problem of unmatched acquisition geometries, which enables multisource full-waveform inversion to work with marine streamer data. In the new approaches using the normalized wavefield, we used the back-propagation algorithm based on the adjoint-state technique to efficiently calculate the gradients of the objective functions. Numerical examples showed that multisource full-waveform inversion using the normalized wavefield yields much better convergence for marine streamer data than conventional approaches. © 2013 Society of Exploration Geophysicists.

  11. Centered Differential Waveform Inversion with Minimum Support Regularization

    KAUST Repository

    Kazei, Vladimir; Alkhalifah, Tariq Ali

    2017-01-01

    Time-lapse full-waveform inversion has two major challenges. The first one is the reconstruction of a reference model (baseline model for most of approaches). The second is inversion for the time-lapse changes in the parameters. Common model

  12. Frequency-domain waveform inversion using the phase derivative

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2013-01-01

    Phase wrapping in the frequency domain or cycle skipping in the time domain is the major cause of the local minima problem in the waveform inversion when the starting model is far from the true model. Since the phase derivative does not suffer from

  13. Experimental validation of waveform relaxation technique for power ...

    Indian Academy of Sciences (India)

    ... damping controller drew our attention to a potential convergence problem ... The method was originally proposed as a method of parallelizing the numerical integration of very ... the features of an industrial real-time operating system ... Cited reference: Odeh F and Ruehli A 1985 Waveform relaxation: Theory and practice.

  14. MURI: Adaptive Waveform Design for Full Spectral Dominance

    Science.gov (United States)

    2011-03-11

    ... (perhaps in a similarly-named file in the same directory as the data file) and handled by a Java class with an API for a user to request data without the ... Cited reference: [15] J. Wang and A. Nehorai, "Adaptive polarimetry design for a target in compound-Gaussian clutter," International Waveform Diversity and ...

  15. Multisource waveform inversion of marine streamer data using normalized wavefield

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2013-01-01

    Multisource full-waveform inversion based on the L1- and L2-norm objective functions cannot be applied to marine streamer data because it does not take into account the unmatched acquisition geometries between the observed and modeled data. To apply

  16. Categorisation of full waveform data provided by laser scanning devices

    Science.gov (United States)

    Ullrich, Andreas; Pfennigbauer, Martin

    2011-11-01

    In 2004, a laser scanner device for commercial airborne laser scanning applications, the RIEGL LMS-Q560, was introduced to the market, making use of a radical alternative approach to the traditional analogue signal detection and processing schemes found in LIDAR instruments so far: digitizing the echo signals received by the instrument for every laser pulse and analysing these echo signals off-line in a so-called full waveform analysis in order to retrieve almost all information contained in the echo signal using transparent algorithms adaptable to specific applications. In the field of laser scanning the somewhat unspecific term "full waveform data" has since been established. We attempt a categorisation of the different types of the full waveform data found in the market. We discuss the challenges in echo digitization and waveform analysis from an instrument designer's point of view and we will address the benefits to be gained by using this technique, especially with respect to the so-called multi-target capability of pulsed time-of-flight LIDAR instruments.

  17. A compact, multichannel, and low noise arbitrary waveform generator.

    Science.gov (United States)

    Govorkov, S; Ivanov, B I; Il'ichev, E; Meyer, H-G

    2014-05-01

    A new type of high functionality, fast, compact, and easily programmable arbitrary waveform generator for low noise physical measurements is presented. The generator provides 7 fast differential waveform channels with a maximum bandwidth of up to 200 MHz. There are 6 fast pulse generators on the generator board with 78 ps time resolution in both duration and delay, 3 of them with amplitude control. The arbitrary waveform generator is additionally equipped with two auxiliary slow 16 bit analog-to-digital converters and four 16 bit digital-to-analog converters for low frequency applications. Electromagnetic shields are introduced to the power supply, digital, and analog compartments and, with a proper filter design, provide more than 110 dB digital noise isolation at the output signals. All the output channels of the board have 50 Ω SubMiniature version A termination. The generator board is suitable for use as part of highly sensitive physical equipment, e.g., fast readout and manipulation of nuclear magnetic resonance or superconducting quantum systems, and any other application which requires electromagnetic-interference-free fast pulse and arbitrary waveform generation.

  18. A nonlinear approach of elastic reflection waveform inversion

    KAUST Repository

    Guo, Qiang

    2016-09-06

    Elastic full waveform inversion (EFWI) embodies the original intention of waveform inversion at its inception as it is a better representation of the mostly solid Earth. However, compared with the acoustic P-wave assumption, EFWI for P- and S-wave velocities using multi-component data admitted mixed results. Full waveform inversion (FWI) is a highly nonlinear problem and this nonlinearity only increases under the elastic assumption. Reflection waveform inversion (RWI) can mitigate the nonlinearity by relying on transmissions from reflections focused on inverting low wavenumber components of the model. In our elastic endeavor, we split the P- and S-wave velocities into low wavenumber and perturbation components and propose a nonlinear approach to invert for both of them. The new optimization problem is built on an objective function that depends on both background and perturbation models. We utilize an equivalent stress source based on the model perturbation to generate reflections instead of demigrating from an image, as is done in conventional RWI. Application to a slice of ocean-bottom data shows that our method can efficiently update the low wavenumber parts of the model, but more so, obtain perturbations that can be added to the low wavenumbers for a high resolution output.

  19. A compact, multichannel, and low noise arbitrary waveform generator

    International Nuclear Information System (INIS)

    Govorkov, S.; Ivanov, B. I.; Il'ichev, E.; Meyer, H.-G.

    2014-01-01

    A new type of high functionality, fast, compact, and easily programmable arbitrary waveform generator for low noise physical measurements is presented. The generator provides 7 fast differential waveform channels with a maximum bandwidth of up to 200 MHz. There are 6 fast pulse generators on the generator board with 78 ps time resolution in both duration and delay, 3 of them with amplitude control. The arbitrary waveform generator is additionally equipped with two auxiliary slow 16 bit analog-to-digital converters and four 16 bit digital-to-analog converters for low frequency applications. Electromagnetic shields are introduced to the power supply, digital, and analog compartments and, with a proper filter design, provide more than 110 dB digital noise isolation at the output signals. All the output channels of the board have 50 Ω SubMiniature version A termination. The generator board is suitable for use as part of highly sensitive physical equipment, e.g., fast readout and manipulation of nuclear magnetic resonance or superconducting quantum systems, and any other application which requires electromagnetic-interference-free fast pulse and arbitrary waveform generation.

  20. Programmable optical waveform reshaping on a picosecond timescale

    DEFF Research Database (Denmark)

    Manurkar, Paritosh; Jain, Nitin; Kumar Periyannan Rajeswari, Prem

    2017-01-01

    We experimentally demonstrate the temporal reshaping of optical waveforms in the telecom wavelength band using the principle of quantum frequency conversion. The reshaped optical pulses do not undergo any wavelength translation. The interaction takes place in a nonlinear χ(2) waveguide using … for quantum communications. © 2017 Optical Society of America.

  1. A nonlinear approach of elastic reflection waveform inversion

    KAUST Repository

    Guo, Qiang; Alkhalifah, Tariq Ali

    2016-01-01

    Elastic full waveform inversion (EFWI) embodies the original intention of waveform inversion at its inception as it is a better representation of the mostly solid Earth. However, compared with the acoustic P-wave assumption, EFWI for P- and S-wave velocities using multi-component data admitted mixed results. Full waveform inversion (FWI) is a highly nonlinear problem and this nonlinearity only increases under the elastic assumption. Reflection waveform inversion (RWI) can mitigate the nonlinearity by relying on transmissions from reflections focused on inverting low wavenumber components of the model. In our elastic endeavor, we split the P- and S-wave velocities into low wavenumber and perturbation components and propose a nonlinear approach to invert for both of them. The new optimization problem is built on an objective function that depends on both background and perturbation models. We utilize an equivalent stress source based on the model perturbation to generate reflections instead of demigrating from an image, as is done in conventional RWI. Application to a slice of ocean-bottom data shows that our method can efficiently update the low wavenumber parts of the model, but more so, obtain perturbations that can be added to the low wavenumbers for a high resolution output.

  2. 2D acoustic-elastic coupled waveform inversion in the Laplace domain

    KAUST Repository

    Bae, Hoseuk; Shin, Changsoo; Cha, Youngho; Choi, Yun Seok; Min, Dongjoo

    2010-01-01

    Although waveform inversion has been intensively studied in an effort to properly delineate the Earth's structures since the early 1980s, most of the time- and frequency-domain waveform inversion algorithms still have critical limitations

  3. Full waveform inversion based on scattering angle enrichment with application to real dataset

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2015-01-01

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of the standard full waveform inversion (FWI). However, the drawback of the existing RWI methods is the inability to utilize diving waves and the extra sensitivity

  4. Analysis of LFM-waveform Libraries for Cognitive Tracking Maneuvering Targets

    Directory of Open Access Journals (Sweden)

    Wang Hongyan

    2016-01-01

    Based on the idea of waveform agility in cognitive radars, waveform libraries for maneuvering target tracking are discussed. LFM-waveform libraries are designed according to different combinations of chirp parameters and FrFT rotation angles. By applying the interacting multiple model (IMM) algorithm to tracking maneuvering targets, the transmitted waveform is selected in real time from the LFM-waveform libraries. The waveforms are selected from the library according to the criterion of maximum mutual information between the current state of knowledge of the model and the measurement. Simulation results show that a waveform library containing a certain number of LFM waveforms can improve the performance of cognitive tracking radar.
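
    A minimal sketch of building an LFM (chirp) waveform library over a grid of chirp rates, in the spirit of the library design described above; the sample rate, pulse duration, and rate grid are illustrative assumptions, and the FrFT-rotation-angle and mutual-information selection steps are not implemented here.

    import numpy as np

    def lfm_pulse(fs, duration, f0, chirp_rate):
        """Complex baseband LFM pulse with start frequency f0 [Hz] and
        chirp rate [Hz/s], sampled at fs [Hz]."""
        t = np.arange(0.0, duration, 1.0 / fs)
        phase = 2.0 * np.pi * (f0 * t + 0.5 * chirp_rate * t**2)
        return np.exp(1j * phase)

    def build_library(fs=10e6, duration=50e-6, f0=0.0, rates=None):
        """Waveform library indexed by chirp rate (assumed grid)."""
        if rates is None:
            rates = np.linspace(-2e9, 2e9, 9)
        return {r: lfm_pulse(fs, duration, f0, r) for r in rates}

    library = build_library()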

  5. Ascending-ramp biphasic waveform has a lower defibrillation threshold and releases less troponin I than a truncated exponential biphasic waveform.

    Science.gov (United States)

    Huang, Jian; Walcott, Gregory P; Ruse, Richard B; Bohanan, Scott J; Killingsworth, Cheryl R; Ideker, Raymond E

    2012-09-11

    We tested the hypothesis that the shape of the shock waveform affects not only the defibrillation threshold but also the amount of cardiac damage. Defibrillation thresholds were determined for 11 waveforms (3 ascending-ramp waveforms, 3 descending-ramp waveforms, 3 rectilinear first-phase biphasic waveforms, a Gurvich waveform, and a truncated exponential biphasic waveform) in 6 pigs with electrodes in the right ventricular apex and superior vena cava. The ascending, descending, and rectilinear waveforms had 4-, 8-, and 16-millisecond first phases and a 3.5-millisecond rectilinear second phase that was half the voltage of the first phase. The exponential biphasic waveform had a 60% first-phase and a 50% second-phase tilt. In a second study, we attempted to defibrillate after 10 seconds of ventricular fibrillation with a single ≈30-J shock (6 pigs successfully defibrillated with 8-millisecond ascending, 8-millisecond rectilinear, and truncated exponential biphasic waveforms). Troponin I blood levels were determined before and 2 to 10 hours after the shock. The lowest-energy defibrillation threshold was for the 8-millisecond ascending ramp (14.6±7.3 J [mean±SD]), which was significantly less than for the truncated exponential (19.6±6.3 J). Six hours after shock, troponin I was significantly less for the ascending-ramp waveform (0.80±0.54 ng/mL) than for the truncated exponential (1.92±0.47 ng/mL) or the rectilinear waveform (1.17±0.45 ng/mL). The ascending ramp has a significantly lower defibrillation threshold and at ≈30 J causes 58% less troponin I release than the truncated exponential biphasic shock. Therefore, the shock waveform affects both the defibrillation threshold and the amount of cardiac damage.
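
    The first-phase shapes compared in this study can be sketched as follows: an ascending ramp with an 8 ms first phase followed by a 3.5 ms rectilinear second phase at half the first-phase voltage, and a truncated exponential biphasic pulse with 60%/50% tilts. The peak voltage, sample rate, and the phase durations of the exponential waveform are illustrative assumptions.

    import numpy as np

    FS = 100_000  # samples per second (assumed)

    def ascending_ramp_biphasic(v_peak=1.0, t1=8e-3, t2=3.5e-3):
        """Ascending-ramp first phase (0 -> v_peak over t1), then a
        rectilinear second phase of opposite polarity at half voltage."""
        n1, n2 = int(t1 * FS), int(t2 * FS)
        phase1 = np.linspace(0.0, v_peak, n1)
        phase2 = -0.5 * v_peak * np.ones(n2)
        return np.concatenate([phase1, phase2])

    def truncated_exponential_biphasic(v0=1.0, t1=6e-3, t2=4e-3,
                                       tilt1=0.60, tilt2=0.50):
        """Truncated exponential phases; tilt = (Vstart - Vend) / Vstart."""
        def phase(v_start, dur, tilt):
            tau = -dur / np.log(1.0 - tilt)
            t = np.arange(int(dur * FS)) / FS
            return v_start * np.exp(-t / tau)
        p1 = phase(v0, t1, tilt1)
        p2 = -phase(p1[-1], t2, tilt2)  # second phase starts at the truncation voltage
        return np.concatenate([p1, p2])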

  6. Microseismic event location by master-event waveform stacking

    Science.gov (United States)

    Grigoli, F.; Cesca, S.; Dahm, T.

    2016-12-01

    Waveform stacking location methods are nowadays extensively used to monitor induced seismicity associated with several underground industrial activities such as mining, oil and gas production and geothermal energy exploitation. In the last decade a significant effort has been spent to develop or improve methodologies able to perform automated seismological analysis for weak events at a local scale. This effort was accompanied by the improvement of monitoring systems, resulting in an increasing number of large microseismicity catalogs. The analysis of microseismicity is challenging because of the large number of recorded events, often characterized by a low signal-to-noise ratio. A significant limitation of the traditional location approaches is that automated picking is often done on each seismogram individually, making little or no use of the coherency information between stations. In order to improve the performance of the traditional location methods, alternative approaches have been proposed in recent years. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. The main advantage of these methods lies in their robustness even when the recorded waveforms are very noisy. On the other hand, like any other location method, the location performance strongly depends on the accuracy of the available velocity model. When dealing with inaccurate velocity models, in fact, location results can be affected by large errors. Here we introduce a new automated waveform stacking location method which is less dependent on the knowledge of the velocity model and presents several benefits that improve the location accuracy: 1) it accounts for phase delays due to local site effects, e.g. surface topography or variable sediment thickness; 2) theoretical velocity models are only used to estimate travel times within the source volume, and not along the whole source-sensor path.
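
    Stripped of the master-event and site-correction refinements described above, the core of a waveform-stacking locator is a delay-and-stack of characteristic functions over a grid of trial sources, as in the sketch below; the constant-velocity travel-time model and the choice of characteristic function (e.g. envelopes) are assumptions for illustration.

    import numpy as np

    def stack_locate(cf, stations, grid, velocity, dt):
        """Delay-and-stack source location.
        cf       : [n_stations, n_samples] characteristic functions (e.g. envelopes)
        stations : [n_stations, 3] station coordinates (m)
        grid     : [n_points, 3] trial source coordinates (m)
        Returns the trial point whose moveout-corrected stack is largest."""
        n_sta, n_samp = cf.shape
        best_val, best_point = -np.inf, None
        for src in grid:
            delays = np.linalg.norm(stations - src, axis=1) / velocity
            shifts = np.round(delays / dt).astype(int)
            stack = np.zeros(n_samp)
            for s in range(n_sta):
                if shifts[s] >= n_samp:
                    continue
                # align each trace on the trial origin time and stack
                stack[: n_samp - shifts[s]] += cf[s, shifts[s]:]
            if stack.max() > best_val:
                best_val, best_point = stack.max(), src
        return best_point, best_val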

  7. Computer model analysis of the radial artery pressure waveform.

    Science.gov (United States)

    Schwid, H A; Taylor, L A; Smith, N T

    1987-10-01

    Simultaneous measurements of aortic and radial artery pressures are reviewed, and a model of the cardiovascular system is presented. The model is based on resonant networks for the aorta and axillo-brachial-radial arterial system. The model chosen is a simple one, in order to make interpretation of the observed relationships clear. Despite its simplicity, the model produces realistic aortic and radial artery pressure waveforms. It demonstrates that the resonant properties of the arterial wall significantly alter the pressure waveform as it is propagated from the aorta to the radial artery. Although the mean and end-diastolic radial pressures are usually accurate estimates of the corresponding aortic pressures, the systolic pressure at the radial artery is often much higher than that of the aorta due to overshoot caused by the resonant behavior of the radial artery. The radial artery dicrotic notch is predominantly dependent on the axillo-brachial-radial arterial wall properties, rather than on the aortic valve or peripheral resistance. Hence the use of the radial artery dicrotic notch as an estimate of end systole is unreliable. The rate of systolic upstroke, dP/dt, of the radial artery waveform is a function of many factors, making it difficult to interpret. The radial artery waveform usually provides accurate estimates for mean and diastolic aortic pressures; for all other measurements it is an inadequate substitute for the aortic pressure waveform. In the presence of low forearm peripheral resistance the mean radial artery pressure may significantly underestimate the mean aortic pressure, as explained by a voltage divider model.
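
    The resonant behaviour described above can be mimicked by driving an underdamped second-order system with an aortic input, which reproduces the systolic overshoot at the radial site; the natural frequency, damping ratio, and the toy aortic pulse below are illustrative assumptions, not parameters of the published model.

    import numpy as np

    def radial_from_aortic(p_aortic, dt, f_n=6.0, zeta=0.25):
        """Filter an aortic pressure waveform through an underdamped
        second-order resonance (natural frequency f_n [Hz], damping zeta)
        to mimic the systolic overshoot seen at the radial artery."""
        wn = 2.0 * np.pi * f_n
        p_r = np.zeros_like(p_aortic)
        p_r[0] = p_aortic[0]
        v = 0.0
        for i in range(1, len(p_aortic)):
            a = wn**2 * (p_aortic[i - 1] - p_r[i - 1]) - 2.0 * zeta * wn * v
            v += a * dt
            p_r[i] = p_r[i - 1] + v * dt
        return p_r

    # toy aortic pulse: half-sine systole on top of a diastolic level
    dt = 1e-3
    t = np.arange(0.0, 1.0, dt)
    aorta = 80 + 40 * np.clip(np.sin(2 * np.pi * t / 0.8), 0, None) * (t < 0.4)
    radial = radial_from_aortic(aorta, dt)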

  8. Effects of waveform model systematics on the interpretation of GW150914

    OpenAIRE

    Abbott, B. P.; Abbott, R.; Adhikari, R. X.; Ananyeva, A.; Anderson, S. B.; Appert, S.; Arai, K.; Araya, M. C.; Barayoga, J. C.; Barish, B. C.; Berger, B. K.; Billingsley, G.; Biscans, S; Blackburn, J. K.; Bork, R.

    2017-01-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's e...

  9. A new optimization approach for source-encoding full-waveform inversion

    NARCIS (Netherlands)

    Moghaddam, P.P.; Keers, H.; Herrmann, F.J.; Mulder, W.A.

    2013-01-01

    Waveform inversion is the method of choice for determining a highly heterogeneous subsurface structure. However, conventional waveform inversion requires that the wavefield for each source is computed separately. This makes it very expensive for realistic 3D seismic surveys. Source-encoding waveform

  10. Predicting Electrocardiogram and Arterial Blood Pressure Waveforms with Different Echo State Network Architectures

    Science.gov (United States)

    2014-11-01

    Predicting Electrocardiogram and Arterial Blood Pressure Waveforms with Different Echo State Network Architectures. Allan Fong, Ranjeev … the medical staff in Intensive Care Units. The ability to predict electrocardiogram and arterial blood pressure waveforms can potentially help the … type of neural network for mining, understanding, and predicting electrocardiogram and arterial blood pressure waveforms. Several network …

  11. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    Science.gov (United States)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of forward and inverse problem, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full

  12. Non-parametric PSF estimation from celestial transit solar images using blind deconvolution

    Directory of Open Access Journals (Sweden)

    González Adriana

    2016-01-01

    Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. The measured image in a real optical instrument is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Additionally, the image acquisition process is also contaminated by other sources of noise (read-out, photon-counting). The problem of estimating both the PSF and a denoised image is called blind deconvolution and is ill-posed. Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, our method does not assume a parametric model of the PSF and can thus be applied to any telescope. Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF. We use observations from a celestial transit, where the occulting body can be assumed to be a black disk. These constraints allow us to retain meaningful solutions for the filter and the image, eliminating trivial, translated, and interchanged solutions. Under an additive Gaussian noise assumption, they also enforce noise canceling and avoid reconstruction artifacts by promoting the whiteness of the residual between the blurred observations and the cleaned data. Results: Our method is applied to synthetic and experimental data. The PSF is estimated for the SECCHI/EUVI instrument using the 2007 Lunar transit, and for SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with a similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimations are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.
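
    For orientation only, the sketch below implements a generic alternating (blind) Richardson-Lucy update for the image and the PSF under a Poisson-noise assumption; it is a simple textbook alternative, not the wavelet-regularized, transit-constrained scheme proposed in this record, and all iteration counts are arbitrary.

    import numpy as np

    def conv2_circ(a, b):
        """Circular 2-D convolution via FFT (a and b have the same shape)."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

    def corr2_circ(a, b):
        """Circular cross-correlation (adjoint of convolution with b)."""
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

    def blind_richardson_lucy(data, psf0, n_outer=20, n_inner=5, eps=1e-12):
        """Alternating Richardson-Lucy updates for image and PSF."""
        img = np.full(data.shape, data.mean())
        psf = psf0.copy()
        for _ in range(n_outer):
            for _ in range(n_inner):                 # image step, PSF fixed
                ratio = data / (conv2_circ(img, psf) + eps)
                img = img * corr2_circ(ratio, psf)
            for _ in range(n_inner):                 # PSF step, image fixed
                ratio = data / (conv2_circ(img, psf) + eps)
                psf = psf * corr2_circ(ratio, img)
                psf /= psf.sum()                     # keep the PSF normalized
        return img, psf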

  13. Analysis of soda-lime glasses using non-negative matrix factor deconvolution of Raman spectra

    OpenAIRE

    Woelffel , William; Claireaux , Corinne; Toplis , Michael J.; Burov , Ekaterina; Barthel , Etienne; Shukla , Abhay; Biscaras , Johan; Chopinet , Marie-Hélène; Gouillart , Emmanuelle

    2015-01-01

    Novel statistical analysis and machine learning algorithms are proposed for the deconvolution and interpretation of Raman spectra of silicate glasses in the Na2O-CaO-SiO2 system. Raman spectra are acquired along diffusion profiles of three pairs of glasses centered around an average composition of 69.9 wt.% SiO2, 12.7 wt.% CaO, 16.8 wt.% Na2O. The shape changes of the Raman spectra across the compositional domain are analyzed using a combination of princi…

  14. Chromatic aberration correction and deconvolution for UV sensitive imaging of fluorescent sterols in cytoplasmic lipid droplets

    DEFF Research Database (Denmark)

    Wüstner, Daniel; Faergeman, Nils J

    2008-01-01

    adipocyte differentiation. DHE is targeted to transferrin-positive recycling endosomes in preadipocytes but associates with droplets in mature adipocytes. Only in adipocytes but not in foam cells fluorescent sterol was confined to the droplet-limiting membrane. We developed an approach to visualize … macrophage foam cells and in adipocytes. We used deconvolution microscopy and developed image segmentation techniques to assess the DHE content of lipid droplets in both cell types in an automated manner. Pulse-chase studies and colocalization analysis were performed to monitor the redistribution of DHE upon …

  15. Deconvolution of ferromagnetic resonance in devitrification process of Co-based amorphous alloys

    International Nuclear Information System (INIS)

    Montiel, H.; Alvarez, G.; Betancourt, I.; Zamorano, R.; Valenzuela, R.

    2006-01-01

    Ferromagnetic resonance (FMR) measurements were carried out on soft magnetic amorphous ribbons of composition Co66Fe4B12Si13Nb4Cu prepared by melt spinning. In the as-cast sample, a simple FMR spectrum was apparent. For treatment times of 5-20 min a complex resonant absorption at lower fields was detected; deconvolution calculations were carried out on the FMR spectra and it was possible to separate two contributions. These results can be interpreted as the combination of two different magnetic phases, corresponding to the amorphous matrix and nanocrystallites. The parameters of the resonant absorptions can be associated with the evolution of nanocrystallization during the annealing.
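
    The two-contribution separation mentioned above amounts to fitting the measured absorption with a sum of two resonance lines. The sketch below fits two Lorentzian-derivative lines with scipy.optimize.curve_fit; the lineshape model, field axis, and starting values are illustrative assumptions and may differ from the authors' procedure.

    import numpy as np
    from scipy.optimize import curve_fit

    def dlorentz(H, A, H0, dH):
        """Field derivative of a Lorentzian absorption line (assumed model)."""
        return -2.0 * A * (H - H0) / (((H - H0) ** 2 + dH ** 2) ** 2)

    def two_lines(H, A1, H1, w1, A2, H2, w2):
        return dlorentz(H, A1, H1, w1) + dlorentz(H, A2, H2, w2)

    # synthetic stand-in for an experimental FMR spectrum (field axis in mT)
    H = np.linspace(0.0, 400.0, 801)
    spectrum = two_lines(H, 5e4, 180.0, 25.0, 2e4, 120.0, 40.0)
    spectrum += 0.02 * spectrum.std() * np.random.randn(H.size)

    p0 = [1e4, 170.0, 20.0, 1e4, 110.0, 30.0]      # rough initial guess
    popt, pcov = curve_fit(two_lines, H, spectrum, p0=p0)
    amorphous_line, nanocrystal_line = popt[:3], popt[3:]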

  16. Gabor Deconvolution as Preliminary Method to Reduce Pitfall in Deeper Target Seismic Data

    Science.gov (United States)

    Oktariena, M.; Triyoso, W.

    2018-03-01

    Anelastic attenuation during seismic wave propagation is the cause of the non-stationary character of seismic data. Absorption and scattering of energy cause seismic energy loss with increasing depth. A series of thin reservoir layers found in the study area is located within the Talang Akar Fm. level, showing an indication of an interpretation pitfall due to the attenuation effect that commonly occurs in deeper-level seismic data. The attenuation effect greatly influences the seismic image of the deeper target level, creating pitfalls in several aspects. The seismic amplitude at the deeper target level often cannot represent the real subsurface character because of low amplitude values or chaotic events near the Basement. Frequency-wise, the decay can be seen as diminishing frequency content in the deeper target. Meanwhile, seismic amplitude is a simple tool to point out a Direct Hydrocarbon Indicator (DHI) in a preliminary geophysical study before a more advanced interpretation method is applied. A quick look at the Post-Stack Seismic Data shows the reservoir associated with a bright-spot DHI, while another, bigger bright-spot body is detected in the North East area near the field edge. A horizon slice confirms the possibility that the other bright-spot zone has a smaller delineation; an interpretation pitfall that commonly occurs in deeper levels of seismic data. We evaluate this pitfall by applying Gabor Deconvolution to address the attenuation problem. Gabor Deconvolution forms a Partition of Unity to factorize the trace into smaller convolution windows that can be processed as stationary packets. Gabor Deconvolution estimates the source signature together with its attenuation function. The enhanced seismic shows better imaging in the pitfall area that was previously detected as a vast bright-spot zone. When the enhanced seismic is used for further advanced reprocessing, the Seismic Impedance and Vp/Vs Ratio slices show a better reservoir delineation, in which the
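
    A bare-bones illustration of the windowing step in Gabor deconvolution: Gaussian windows are normalized into a partition of unity, each windowed segment is spectrally whitened by its smoothed amplitude spectrum, and the corrected segments are summed back. The window length, spacing, spectral smoothing, and stabilization constant are assumptions, and no explicit source-signature/attenuation factorization is attempted here.

    import numpy as np

    def gabor_decon(trace, dt, win_len=0.2, step=0.1, stab=1e-3):
        """Crude nonstationary spectral whitening using a Gaussian
        partition of unity along the trace."""
        n = trace.size
        t = np.arange(n) * dt
        centers = np.arange(0.0, t[-1] + step, step)
        # Gaussian windows normalized so they sum to one at every sample
        wins = np.array([np.exp(-0.5 * ((t - c) / (win_len / 2)) ** 2)
                         for c in centers])
        wins /= wins.sum(axis=0, keepdims=True)
        out = np.zeros(n)
        for w in wins:
            seg = trace * w
            spec = np.fft.rfft(seg)
            amp = np.abs(spec)
            smooth = np.convolve(amp, np.ones(11) / 11.0, mode="same")
            out += np.fft.irfft(spec / (smooth + stab * smooth.max()), n=n)
        return out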

  17. Deconvolution of 2D coincident Doppler broadening spectroscopy using the Richardson-Lucy algorithm

    International Nuclear Information System (INIS)

    Zhang, J.D.; Zhou, T.J.; Cheung, C.K.; Beling, C.D.; Fung, S.; Ng, M.K.

    2006-01-01

    Coincident Doppler Broadening Spectroscopy (CDBS) measurements are popular in positron solid-state studies of materials. By utilizing the instrumental resolution function obtained from a gamma line close in energy to the 511 keV annihilation line, it is possible to significantly enhance the quality of the CDBS spectra using deconvolution algorithms. In this paper, we compare two algorithms, namely the Non-Negative Least Squares (NNLS) regularized method and the Richardson-Lucy (RL) algorithm. The latter, which is based on the method of maximum likelihood, is found to give superior results to the regularized least-squares algorithm, with significantly less computer processing time.
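
    One of the two methods compared in this record, regularized non-negative least-squares deconvolution, can be sketched by building an explicit convolution matrix from the measured resolution function and solving with scipy.optimize.nnls; the Tikhonov-style damping used below is an illustrative choice of regularization, not necessarily the one used by the authors.

    import numpy as np
    from scipy.optimize import nnls

    def nnls_deconvolve(data, resolution, lam=0.05):
        """Deconvolve a 1-D spectrum with a measured resolution function
        using non-negative least squares with Tikhonov damping."""
        n = data.size
        # convolution matrix: column j holds the resolution function centred on bin j
        A = np.zeros((n, n))
        half = resolution.size // 2
        for j in range(n):
            lo, hi = max(0, j - half), min(n, j - half + resolution.size)
            A[lo:hi, j] = resolution[lo - (j - half): hi - (j - half)]
        A_aug = np.vstack([A, lam * np.eye(n)])
        b_aug = np.concatenate([data, np.zeros(n)])
        solution, residual_norm = nnls(A_aug, b_aug)
        return solution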

  18. Deconvolution-based resolution enhancement of chemical ice core records obtained by continuous flow analysis

    DEFF Research Database (Denmark)

    Rasmussen, Sune Olander; Andersen, Katrine K.; Johnsen, Sigfus Johann

    2005-01-01

    Continuous flow analysis (CFA) has become a popular measuring technique for obtaining high-resolution chemical ice core records due to an attractive combination of measuring speed and resolution. However, when analyzing the deeper sections of ice cores or cores from low-accumulation areas … of the data for high-resolution studies such as annual layer counting. The presented method uses deconvolution techniques and is robust to the presence of noise in the measurements. If integrated into the data processing, it requires no additional data collection. The method is applied to selected ice core …
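
    A standard way to perform the kind of deconvolution-based resolution enhancement described above is frequency-domain Wiener inversion of an assumed smoothing (mixing) kernel; the zero-phase Gaussian kernel and the noise-to-signal ratio below are placeholders, not values from the cited work.

    import numpy as np

    def wiener_deconvolve(signal, sigma, nsr=0.01):
        """Wiener deconvolution of an assumed zero-phase Gaussian smoothing
        kernel (standard deviation `sigma` in samples) from a CFA depth series.
        nsr is the assumed noise-to-signal power ratio (regularization)."""
        n = signal.size
        x = np.arange(n)
        x = np.minimum(x, n - x)                 # circular distance to sample 0
        kernel = np.exp(-0.5 * (x / sigma) ** 2)
        kernel /= kernel.sum()
        K = np.fft.rfft(kernel)
        S = np.fft.rfft(signal)
        W = np.conj(K) / (np.abs(K) ** 2 + nsr)  # Wiener filter
        return np.fft.irfft(S * W, n)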

  19. Fourier Deconvolution Methods for Resolution Enhancement in Continuous-Wave EPR Spectroscopy.

    Science.gov (United States)

    Reed, George H; Poyner, Russell R

    2015-01-01

    An overview of resolution enhancement of conventional, field-swept, continuous-wave electron paramagnetic resonance spectra using Fourier transform-based deconvolution methods is presented. Basic steps that are involved in resolution enhancement of calculated spectra using an implementation based on complex discrete Fourier transform algorithms are illustrated. Advantages and limitations of the method are discussed. An application to an experimentally obtained spectrum is provided to illustrate the power of the method for resolving overlapped transitions. © 2015 Elsevier Inc. All rights reserved.
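
    A bare-bones numpy version of the Fourier-transform-based enhancement described above: divide the spectrum's Fourier transform by that of a broadening lineshape and apodize to control noise amplification. The cosine apodization, cutoff fraction, and the assumption that the broadening function is supplied as a mid-array-centred array of the same length as the spectrum are illustrative choices.

    import numpy as np

    def fourier_deconvolve(spectrum, broadening, apod_frac=0.3):
        """Resolution enhancement by dividing out a broadening lineshape in
        Fourier space and apodizing to limit noise amplification."""
        n = spectrum.size
        S = np.fft.rfft(spectrum)
        B = np.fft.rfft(np.fft.ifftshift(broadening), n)
        D = S / (B + 1e-12)
        # cosine apodization: keep only the first apod_frac of Fourier coefficients
        m = D.size
        cutoff = int(apod_frac * m)
        window = np.zeros(m)
        window[:cutoff] = 0.5 * (1 + np.cos(np.pi * np.arange(cutoff) / cutoff))
        return np.fft.irfft(D * window, n)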

  20. Analytic family of post-merger template waveforms

    Science.gov (United States)

    Del Pozzo, Walter; Nagar, Alessandro

    2017-06-01

    Building on the analytical description of the post-merger (ringdown) waveform of coalescing, nonprecessing, spinning binary black holes introduced by Damour and Nagar [Phys. Rev. D 90, 024054 (2014), 10.1103/PhysRevD.90.024054], we propose an analytic, closed form, time-domain representation of the ℓ = m = 2 gravitational radiation mode emitted after merger. This expression is given as a function of the component masses and dimensionless spins (m1,2, χ1,2) of the two inspiraling objects, as well as of the mass MBH and (complex) frequency σ1 of the fundamental quasinormal mode of the remnant black hole. Our proposed template is obtained by fitting the post-merger waveform part of several publicly available numerical relativity simulations from the Simulating eXtreme Spacetimes (SXS) catalog and then suitably interpolating over (symmetric) mass ratio and spins. We show that this analytic expression accurately reproduces (~0.01 rad) the phasing of the post-merger data of other data sets not used in its construction. This is notably the case of the spin-aligned run SXS:BBH:0305, whose intrinsic parameters are consistent with the 90% credible intervals reported in the parameter-estimation followup of GW150914 by B.P. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016), 10.1103/PhysRevLett.116.241102]. Using SXS waveforms as "experimental" data, we further show that our template could be used on the actual GW150914 data to perform a new measure of the complex frequency of the fundamental quasinormal mode so as to exploit the complete (high signal-to-noise-ratio) post-merger waveform. We assess the usefulness of our proposed template by analyzing, in a realistic setting, SXS full inspiral-merger-ringdown waveforms and constructing posterior probability distribution functions for the central frequency and damping time of the first overtone of the fundamental quasinormal mode as well as for the physical parameters of the systems. We also briefly explore the possibility
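
    The dominant feature of any such post-merger template is a damped oscillation at the fundamental quasinormal-mode frequency. The closed-form expression of Damour and Nagar is considerably richer, so the sketch below only illustrates this leading ringdown behaviour, with the amplitude, frequency, and damping time chosen as rough GW150914-like placeholders.

    import numpy as np

    def ringdown(t, amplitude, f_qnm, tau, phi0=0.0):
        """Leading-order l = m = 2 ringdown: a damped sinusoid whose complex
        frequency combines the QNM oscillation frequency and damping time."""
        return amplitude * np.exp(-t / tau) * np.cos(2 * np.pi * f_qnm * t + phi0)

    t = np.linspace(0.0, 0.05, 4096)                   # seconds after merger
    h = ringdown(t, 1e-21, f_qnm=250.0, tau=0.004)     # illustrative numbers only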

  1. Deconvolution of Doppler-broadened positron annihilation lineshapes by fast Fourier transformation using a simple automatic filtering technique

    International Nuclear Information System (INIS)

    Britton, D.T.; Bentvelsen, P.; Vries, J. de; Veen, A. van

    1988-01-01

    A deconvolution scheme for digital lineshapes using fast Fourier transforms and a filter based on background subtraction in Fourier space has been developed. In tests on synthetic data this has been shown to give optimum deconvolution without prior inspection of the Fourier spectrum. Although offering significant improvements on the raw data, deconvolution is shown to be limited. The contribution of the resolution function is substantially reduced but not eliminated completely and unphysical oscillations are introduced into the lineshape. The method is further tested on measurements of the lineshape for positron annihilation in single crystal copper at the relatively poor resolution of 1.7 keV at 512 keV. A two-component fit is possible yielding component widths in agreement with previous measurements. (orig.)

  2. Pulsed electric field sensor based on original waveform measurement

    International Nuclear Information System (INIS)

    Ma Liang; Wu Wei; Cheng Yinhui; Zhou Hui; Li Baozhong; Li Jinxi; Zhu Meng

    2010-01-01

    The paper introduces the differential and original waveform measurement principles for pulsed E-fields, and develops a pulsed E-field sensor based on original waveform measurement along with its theoretical correction model. The sensor consists of an antenna, an integrator, an amplifier and driver, an optic-electric/electric-optic conversion module and a transmission module. The time-domain calibration in a TEM cell indicates that its risetime response is shorter than 1.0 ns, and the output pulse width at 90% of the maximum amplitude is wider than 10.0 μs. The output amplitude of the sensor is linear with the electric field intensity over a dynamic range of 20 dB. The measurement capability can be extended to 10 V/m or 50 kV/m by changing the system's antenna and other related modules. (authors)

  3. A novel PMT test system based on waveform sampling

    Science.gov (United States)

    Yin, S.; Ma, L.; Ning, Z.; Qian, S.; Wang, Y.; Jiang, X.; Wang, Z.; Yu, B.; Gao, F.; Zhu, Y.; Wang, Z.

    2018-01-01

    Compared with a traditional test system based on a QDC, a TDC and a scaler, a test system based on waveform sampling is constructed for signal sampling of the 8" R5912 and the 20" R12860 Hamamatsu PMTs in different energy states from single to multiple photoelectrons. In order to achieve high throughput and to reduce the dead time in data processing, the data acquisition software based on LabVIEW is developed and runs with a parallel mechanism. The analysis algorithm is realized in LabVIEW and the spectra of charge, amplitude, signal width and rising time are analyzed offline. The results from the Charge-to-Digital Converter, the Time-to-Digital Converter and waveform sampling are compared in detail.
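
    The offline quantities mentioned above (charge, amplitude, signal width, and rising time) can be extracted from a sampled pulse roughly as in the sketch below; the baseline window, negative pulse polarity, 50 Ω load, and 10-90% rise-time convention are assumptions rather than details of the described system.

    import numpy as np

    def pulse_features(waveform, dt, baseline_samples=50, r_load=50.0):
        """Charge, amplitude, FWHM width and 10-90% rise time of a
        negative-going sampled PMT pulse."""
        base = waveform[:baseline_samples].mean()
        pulse = base - waveform                    # flip to a positive-going pulse
        peak = pulse.argmax()
        amplitude = pulse[peak]
        charge = pulse.sum() * dt / r_load         # integral of V/R over time
        above = np.where(pulse >= 0.5 * amplitude)[0]
        width = (above[-1] - above[0]) * dt        # full width at half maximum
        t10 = np.argmax(pulse[:peak + 1] >= 0.1 * amplitude)
        t90 = np.argmax(pulse[:peak + 1] >= 0.9 * amplitude)
        rise = (t90 - t10) * dt
        return dict(amplitude=amplitude, charge=charge,
                    width=width, rise_time=rise)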

  4. Quantum optical arbitrary waveform manipulation and measurement in real time.

    Science.gov (United States)

    Kowligy, Abijith S; Manurkar, Paritosh; Corzo, Neil V; Velev, Vesselin G; Silver, Michael; Scott, Ryan P; Yoo, S J B; Kumar, Prem; Kanter, Gregory S; Huang, Yu-Ping

    2014-11-17

    We describe a technique for dynamic quantum optical arbitrary-waveform generation and manipulation, which is capable of mode selectively operating on quantum signals without inducing significant loss or decoherence. It is built upon combining the developed tools of quantum frequency conversion and optical arbitrary waveform generation. Considering realistic parameters, we propose and analyze applications such as programmable reshaping of picosecond-scale temporal modes, selective frequency conversion of any one or superposition of those modes, and mode-resolved photon counting. We also report on experimental progress to distinguish two overlapping, orthogonal temporal modes, demonstrating over 8 dB extinction between picosecond-scale time-frequency modes, which agrees well with our theory. Our theoretical and experimental progress, as a whole, points to an enabling optical technique for various applications such as ultradense quantum coding, unity-efficiency cavity-atom quantum memories, and high-speed quantum computing.

  5. Transient waveform acquisition system for the ELMO Bumpy Torus

    International Nuclear Information System (INIS)

    Young, K.G.; Burris, R.D.; Hillis, D.H.; Overbey, D.R.

    1984-10-01

    The transient waveform system described in this report is designed to acquire analog waveforms from the ELMO Bumpy Torus (EBT) diagnostic experiments. Pressure, density, synchrotron radiation, etc., are acquired and digitized with a Kinetic Systems TR812 transient recorder and associated modules located in a CAMAC crate. The system can simultaneously acquire, display, and transmit sets of data consisting of identification parameters and up to 1024 data points for 1 to 64 input signals (frequency range 0.01 pulse/s to 100 kHz) every one or more minutes; thus, it can run continuously without operator intervention. The data are taken on a VAX 11/780 and transmitted to a database on a DECSystem-10. To aid the programmer in making future modifications to the system, detailed documentation using the Yourdon structured methods has been given

  6. Metering error quantification under voltage and current waveform distortion

    Science.gov (United States)

    Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran

    2017-09-01

    With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion results in metering errors in smart meters. Because of the negative effects on metering accuracy and fairness, the combined energy metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a method for quantifying the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a method for quantifying the metering accuracy error is also proposed. By analyzing the mode error and the accuracy error, a comprehensive error analysis method is presented which is suitable for new energy sources and nonlinear loads. The proposed method has been verified by simulation.

  7. Image-domain full waveform inversion: Field data example

    KAUST Repository

    Zhang, Sanzong

    2014-08-05

    The main difficulty with the data-domain full waveform inversion (FWI) is that it tends to get stuck in the local minima associated with the waveform misfit function. This is the result of cycle skipping which degrades the low-wavenumber update in the absence of low-frequencies and long-offset data. An image-domain objective function is defined as the normed difference between the predicted and observed common image gathers (CIGs) in the subsurface offset domain. This new objective function is not constrained by cycle skipping at the far subsurface offsets. To test the effectiveness of this method, we apply it to marine data recorded in the Gulf of Mexico. Results show that image-domain FWI is less sensitive to the initial model and the absence of low-frequency data compared with conventional FWI. The liability, however, is that it is almost an order of magnitude more expensive than standard FWI.

  8. Photonic arbitrary waveform generation applicable to multiband UWB communications.

    Science.gov (United States)

    Bolea, Mario; Mora, José; Ortega, Beatriz; Capmany, José

    2010-12-06

    A novel photonic structure for arbitrary waveform generation (AWG) is proposed, based on the electro-optical intensity modulation of a broadband optical signal that is transmitted through a dispersive element; the optoelectrical processing is realized by combining an interferometric structure with balanced photodetection. The generated waveform can be fully reconfigured through control of the optical source power spectrum and the interferometric structure. The use of balanced photodetection permits removal of the baseband component of the generated signal, which is relevant in certain applications. We have theoretically described and experimentally demonstrated the feasibility of the system by generating different pulse shapes. Specifically, the proposed structure has been applied to generate multiband UWB signaling formats that comply with the FCC requirements, in order to show the flexibility of the system.

  9. Strategies for the characteristic extraction of gravitational waveforms

    International Nuclear Information System (INIS)

    Babiuc, M. C.; Bishop, N. T.; Szilagyi, B.; Winicour, J.

    2009-01-01

    We develop, test, and compare new numerical and geometrical methods for improving the accuracy of extracting waveforms using characteristic evolution. The new numerical method involves use of circular boundaries to the stereographic grid patches which cover the spherical cross sections of the outgoing null cones. We show how an angular version of numerical dissipation can be introduced into the characteristic code to damp the high frequency error arising from the irregular way the circular patch boundary cuts through the grid. The new geometric method involves use of the Weyl tensor component Ψ4 to extract the waveform as opposed to the original approach via the Bondi news function. We develop the necessary analytic and computational formula to compute the O(1/r) radiative part of Ψ4 in terms of a conformally compactified treatment of null infinity. These methods are compared and calibrated in test problems based upon linearized waves.

  10. Image-domain full waveform inversion: Field data example

    KAUST Repository

    Zhang, Sanzong; Schuster, Gerard T.

    2014-01-01

    The main difficulty with the data-domain full waveform inversion (FWI) is that it tends to get stuck in the local minima associated with the waveform misfit function. This is the result of cycle skipping which degrades the low-wavenumber update in the absence of low-frequencies and long-offset data. An image-domain objective function is defined as the normed difference between the predicted and observed common image gathers (CIGs) in the subsurface offset domain. This new objective function is not constrained by cycle skipping at the far subsurface offsets. To test the effectiveness of this method, we apply it to marine data recorded in the Gulf of Mexico. Results show that image-domain FWI is less sensitive to the initial model and the absence of low-frequency data compared with conventional FWI. The liability, however, is that it is almost an order of magnitude more expensive than standard FWI.

  11. Toward Generating More Diagnostic Features from Photoplethysmogram Waveforms

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-03-01

    Full Text Available Photoplethysmogram (PPG signals collected using a pulse oximeter are increasingly being used for screening and diagnosis purposes. Because of the non-invasive, cost-effective, and easy-to-use nature of the pulse oximeter, clinicians and biomedical engineers are investigating how PPG signals can help in the management of many medical conditions, especially for global health application. The study of PPG signal analysis is relatively new compared to research in electrocardiogram signals, for instance; however, we anticipate that in the near future blood pressure, cardiac output, and other clinical parameters will be measured from wearable devices that collect PPG signals, based on the signal’s vast potential. This article attempts to organize and standardize the names of PPG waveforms to ensure consistent terminologies, thereby helping the rapid developments in this research area, decreasing the disconnect within and among different disciplines, and increasing the number of features generated from PPG waveforms.

  12. Toward Generating More Diagnostic Features from Photoplethysmogram Waveforms.

    Science.gov (United States)

    Elgendi, Mohamed; Liang, Yongbo; Ward, Rabab

    2018-03-11

    Photoplethysmogram (PPG) signals collected using a pulse oximeter are increasingly being used for screening and diagnosis purposes. Because of the non-invasive, cost-effective, and easy-to-use nature of the pulse oximeter, clinicians and biomedical engineers are investigating how PPG signals can help in the management of many medical conditions, especially for global health application. The study of PPG signal analysis is relatively new compared to research in electrocardiogram signals, for instance; however, we anticipate that in the near future blood pressure, cardiac output, and other clinical parameters will be measured from wearable devices that collect PPG signals, based on the signal's vast potential. This article attempts to organize and standardize the names of PPG waveforms to ensure consistent terminologies, thereby helping the rapid developments in this research area, decreasing the disconnect within and among different disciplines, and increasing the number of features generated from PPG waveforms.

  13. Temporal changes of the inner core from waveform doublets

    Science.gov (United States)

    Yang, Y.; Song, X.

    2017-12-01

    Temporal changes of the Earth's inner core have been detected from earthquake waveform doublets (repeating sources with similar waveforms at the same station). Using doublets from events up to the present in the South Sandwich Island (SSI) region recorded by station COLA (Alaska), we confirmed systematic temporal variations in the travel time of the inner-core-refracted phase (PKIKP, the DF branch). The DF phase arrives increasingly earlier than the outer core phases (BC and AB) at a rate of approximately 0.07 s per decade since the 1970s. If we assume that the temporal change is caused by a shift of the lateral gradient due to inner core rotation, as in previous studies, we estimate a rotation rate of 0.2-0.4 degrees per year. We also analyzed the topography of the inner core boundary (ICB) using SSI waveform doublets recorded by seismic stations in Eurasia and North America with the reflected phase (PKiKP) and refracted phases. There are clear temporal changes in the waveforms of doublets for PKiKP under Africa and Central America. In addition, for doublets recorded by three nearby stations (AAK, AML, and UCH), we observed a systematic change in the relative travel time of PKiKP and PKIKP. The temporal change of the (PKiKP - PKIKP) differential time is always negative for event pairs in which both events occurred before 2007, while it fluctuates to positive if the later event occurs after 2007. The rapid temporal changes in space and time may indicate localized processes (e.g., freezing and melting) at the ICB under Africa in recent decades. We are exploring 4D models consistent with the temporal changes.

  14. Frequency-Dependent Blanking with Digital Linear Chirp Waveform Synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Andrews, John M. [General Atomics Aeronautical Systems, Inc., San Diego, CA (United States)

    2014-07-01

    Wideband radar systems, especially those that operate at lower frequencies such as VHF and UHF, are often restricted from transmitting within or across specific frequency bands in order to prevent interference to other spectrum users. Herein we describe techniques for notching the transmitted spectrum of a generated and transmitted radar waveform. The notches are fully programmable as to their location, and techniques are given that control the characteristics of the notches.
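
    Spectral notching of a digitally synthesized linear chirp can be illustrated with a simple FFT mask, as below; the notch list, sample rate, and brick-wall masking are assumptions and do not reproduce the specific programmable notching techniques of the report.

    import numpy as np

    def notched_chirp(fs, duration, f_start, f_stop, notches):
        """Linear chirp with the listed frequency bands zeroed in its spectrum.
        notches: list of (f_low, f_high) tuples in Hz."""
        t = np.arange(0.0, duration, 1.0 / fs)
        k = (f_stop - f_start) / duration
        x = np.cos(2 * np.pi * (f_start * t + 0.5 * k * t**2))
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
        for lo, hi in notches:
            spec[(freqs >= lo) & (freqs <= hi)] = 0.0
        return np.fft.irfft(spec, x.size)

    waveform = notched_chirp(fs=200e6, duration=100e-6, f_start=10e6,
                             f_stop=90e6, notches=[(30e6, 32e6), (55e6, 56e6)])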

  15. Arbitrary waveform generator to improve laser diode driver performance

    Science.gov (United States)

    Fulkerson, Jr, Edward Steven

    2015-11-03

    An arbitrary waveform generator modifies the input signal to a laser diode driver circuit in order to reduce the overshoot/undershoot and provide a "flat-top" signal to the laser diode driver circuit. The input signal is modified based on the original received signal and the feedback from the laser diode by measuring the actual current flowing in the laser diode after the original signal is applied to the laser diode.

  16. Acquisition of L2 Japanese Geminates: Training with Waveform Displays

    Science.gov (United States)

    Motohashi-Saigo, Miki; Hardison, Debra M.

    2009-01-01

    The value of waveform displays as visual feedback was explored in a training study involving perception and production of L2 Japanese by beginning-level L1 English learners. A pretest-posttest design compared auditory-visual (AV) and auditory-only (A-only) Web-based training. Stimuli were singleton and geminate /t,k,s/ followed by /a,u/ in two…

  17. Waveform design and diversity for advanced radar systems

    CERN Document Server

    Gini, Fulvio

    2012-01-01

    In recent years, various algorithms for radar signal design that rely heavily upon complicated processing and/or antenna architectures have been suggested. These techniques owe their genesis to several factors, including revolutionary technological advances (new flexible waveform generators, high speed signal processing hardware, digital array radar technology, etc.) and the stressing performance requirements, often imposed by defence applications in areas such as airborne early warning and homeland security. Increasingly complex operating scenarios call for sophisticated algorithms with the

  18. DISECA - A Matlab code for dispersive waveform calculations

    Czech Academy of Sciences Publication Activity Database

    Gaždová, Renata; Vilhelm, J.

    2011-01-01

    Roč. 38, č. 4 (2011), s. 526-531 ISSN 0266-352X R&D Projects: GA AV ČR IAA300460705 Institutional research plan: CEZ:AV0Z30460519 Keywords : velocity dispersion * synthetic waveform * seismic method Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 0.987, year: 2011 http://www.sciencedirect.com/science/article/pii/S0266352X11000425

  19. Faithful effective-one-body waveforms of small-mass-ratio coalescing black hole binaries

    International Nuclear Information System (INIS)

    Damour, Thibault; Nagar, Alessandro

    2007-01-01

    We address the problem of constructing high-accuracy, faithful analytic waveforms describing the gravitational wave signal emitted by inspiralling and coalescing binary black holes. We work within the effective-one-body (EOB) framework and propose a methodology for improving the current (waveform) implementations of this framework based on understanding, element by element, the physics behind each feature of the waveform and on systematically comparing various EOB-based waveforms with exact waveforms obtained by numerical relativity approaches. The present paper focuses on small-mass-ratio nonspinning binary systems, which can be conveniently studied by Regge-Wheeler-Zerilli-type methods. Our results include (i) a resummed, 3 PN-accurate description of the inspiral waveform, (ii) a better description of radiation reaction during the plunge, (iii) a refined analytic expression for the plunge waveform, (iv) an improved treatment of the matching between the plunge and ring-down waveforms. This improved implementation of the EOB approach allows us to construct complete analytic waveforms which exhibit a remarkable agreement with the exact ones in modulus, frequency, and phase. In particular, the analytic and numerical waveforms stay in phase, during the whole process, within ±1.1% of a cycle. We expect that the extension of our methodology to the comparable-mass case will be able to generate comparably accurate analytic waveforms of direct use for the ground-based network of interferometric detectors of gravitational waves

  20. Rapidly reconfigurable high-fidelity optical arbitrary waveform generation in heterogeneous photonic integrated circuits.

    Science.gov (United States)

    Feng, Shaoqi; Qin, Chuan; Shang, Kuanping; Pathak, Shibnath; Lai, Weicheng; Guan, Binbin; Clements, Matthew; Su, Tiehui; Liu, Guangyao; Lu, Hongbo; Scott, Ryan P; Ben Yoo, S J

    2017-04-17

    This paper demonstrates rapidly reconfigurable, high-fidelity optical arbitrary waveform generation (OAWG) in a heterogeneous photonic integrated circuit (PIC). The heterogeneous PIC combines advantages of high-speed indium phosphide (InP) modulators and low-loss, high-contrast silicon nitride (Si3N4) arrayed waveguide gratings (AWGs) so that high-fidelity optical waveform syntheses with rapid waveform updates are possible. The generated optical waveforms spanned a 160 GHz spectral bandwidth starting from an optical frequency comb consisting of eight comb lines separated by 20 GHz channel spacing. The Error Vector Magnitude (EVM) values of the generated waveforms were approximately 16.4%. The OAWG module can rapidly and arbitrarily reconfigure waveforms upon every pulse arriving at 2 ns repetition time. The result of this work indicates the feasibility of truly dynamic optical arbitrary waveform generation where the reconfiguration rate or the modulator bandwidth must exceed the channel spacing of the AWG and the optical frequency comb.

  1. Gravitational Waveforms in the Early Inspiral of Binary Black Hole Systems

    Science.gov (United States)

    Barkett, Kevin; Kumar, Prayush; Bhagwat, Swetha; Brown, Duncan; Scheel, Mark; Szilagyi, Bela; Simulating eXtreme Spacetimes Collaboration

    2015-04-01

    The inspiral, merger and ringdown of compact object binaries are important targets for gravitational wave detection by aLIGO. Detection and parameter estimation will require long, accurate waveforms for comparison. There are a number of analytical models for generating gravitational waveforms for these systems, but the only way to ensure their consistency and correctness is by comparing with numerical relativity simulations that cover many inspiral orbits. We've simulated a number of binary black hole systems with mass ratio 7 and a moderate, aligned spin on the larger black hole. We have attached these numerical waveforms to analytical waveform models to generate long hybrid gravitational waveforms that span the entire aLIGO frequency band. We analyze the robustness of these hybrid waveforms and measure the faithfulness of different hybrids with each other to obtain an estimate on how long future numerical simulations need to be in order to ensure that waveforms are accurate enough for use by aLIGO.

  2. Single-spin precessing gravitational waveform in closed form

    Science.gov (United States)

    Lundgren, Andrew; O'Shaughnessy, R.

    2014-02-01

    In coming years, gravitational-wave detectors should find black hole-neutron star (BH-NS) binaries, potentially coincident with astronomical phenomena like short gamma ray bursts. These binaries are expected to precess. Gravitational-wave science requires a tractable model for precessing binaries, to disentangle precession physics from other phenomena like modified strong field gravity, tidal deformability, or Hubble flow; and to measure compact object masses, spins, and alignments. Moreover, current searches for gravitational waves from compact binaries use templates where the binary does not precess and are ill-suited for detection of generic precessing sources. In this paper we provide a closed-form representation of the single-spin precessing waveform in the frequency domain by reorganizing the signal as a sum over harmonics, each of which resembles a nonprecessing waveform. This form enables simple analytic calculations of the Fisher matrix for use in template bank generation and coincidence metrics, and jump proposals to improve the efficiency of Markov chain Monte Carlo sampling. We have verified that for generic BH-NS binaries, our model agrees with the time-domain waveform to 2%. Straightforward extensions of the derivations outlined here (and provided in full online) allow higher accuracy and error estimates.

  3. Photoplethysmographic signal waveform index for detection of increased arterial stiffness

    International Nuclear Information System (INIS)

    Pilt, K; Meigas, K; Ferenets, R; Temitski, K; Viigimaa, M

    2014-01-01

    The aim of this research was to assess the validity of the photoplethysmographic (PPG) waveform index PPGAI for the estimation of increased arterial stiffness. For this purpose, PPG signals were recorded from 24 healthy subjects and from 20 type II diabetes patients. The recorded PPG signals were processed with the developed analysis algorithm and the waveform index PPGAI, similar to the augmentation index (AIx), was calculated. As a reference, the aortic AIx was assessed and normalized for a heart rate of 75 bpm (AIx@75) by a SphygmoCor device. A strong correlation (r = 0.85) between the PPGAI and the aortic AIx@75 and a positive correlation of both indices with age were found. Age corrections for the indices PPGAI and AIx@75 were constructed as regression models from the signals of healthy subjects. Both indices revealed a significant difference between the groups of diabetes patients and healthy controls. However, the PPGAI provided the best statistical discrimination for the group of subjects with increased arterial stiffness. The waveform index PPGAI, based on inexpensive PPG technology, can be considered a promising measure for the estimation of increased arterial stiffness in clinical screenings. (paper)

  4. Waveform inversion for acoustic VTI media in frequency domain

    KAUST Repository

    Wu, Zedong

    2016-09-06

    Reflected waveform inversion (RWI) provides a method to reduce the nonlinearity of the standard full waveform inversion (FWI) by inverting for the background model using a single scattered wavefield from an inverted perturbation. However, current RWI methods are mostly based on the isotropic media assumption. We extend the idea of combining inversion for the background model and perturbations to transversely isotropic media with a vertical axis of symmetry (VTI), taking into consideration the optimal parameter sensitivity information. As a result, we apply Born modeling corresponding to perturbations only in the variable ε to derive the relative reflected waveform inversion formulation. To reduce the number of parameters, we assume the background part of η = ε and work with a single variable to describe the anisotropic part of the wave propagation. Thus, the optimization variables are the horizontal velocity v, η = ε, and the ε perturbation. Application to the anisotropic version of the Marmousi model with a single frequency of 2.5 Hz shows that this method can converge to an accurate result starting from a linearly increasing isotropic initial velocity. Application to a real dataset demonstrates the versatility of the approach.

  5. Frequency-domain waveform inversion using the phase derivative

    KAUST Repository

    Choi, Yun Seok

    2013-09-26

    Phase wrapping in the frequency domain or cycle skipping in the time domain is the major cause of the local minima problem in the waveform inversion when the starting model is far from the true model. Since the phase derivative does not suffer from the wrapping effect, its inversion has the potential of providing a robust and reliable inversion result. We propose a new waveform inversion algorithm using the phase derivative in the frequency domain along with the exponential damping term to attenuate reflections. We estimate the phase derivative, or what we refer to as the instantaneous traveltime, by taking the derivative of the Fourier-transformed wavefield with respect to the angular frequency, dividing it by the wavefield itself and taking the imaginary part. The objective function is constructed using the phase derivative and the gradient of the objective function is computed using the back-propagation algorithm. Numerical examples show that our inversion algorithm with a strong damping generates a tomographic result even for a high ‘single’ frequency, which can be a good initial model for full waveform inversion and migration.
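
    As a rough illustration of the estimate described above (not the authors' code), the Python sketch below computes Im[(dU/dω)/U] for a single real trace; the function name, the stabilization term and the toy delayed wavelet are invented for the example.

    import numpy as np

    def instantaneous_traveltime(trace, dt):
        """Phase derivative tau(w) = Im[(dU/dw)/U] of a single real trace.

        dU/dw is obtained analytically as -1j times the transform of t*u(t).
        The final minus sign compensates for numpy's e^{-i w t} convention,
        so a pulse delayed by t0 yields tau close to +t0."""
        t = np.arange(len(trace)) * dt
        U = np.fft.rfft(trace)                    # Fourier-transformed wavefield
        dU_dw = -1j * np.fft.rfft(t * trace)      # derivative w.r.t. angular frequency
        eps = 1e-12 * np.max(np.abs(U))           # stabilizes the division by the wavefield
        return -np.imag(dU_dw / (U + eps))

    # toy check: a 15 Hz wavelet delayed by 0.6 s gives traveltimes near 0.6 s
    dt, t0 = 0.002, 0.6
    t = np.arange(1000) * dt
    trace = (1 - 2 * (np.pi * 15 * (t - t0)) ** 2) * np.exp(-(np.pi * 15 * (t - t0)) ** 2)
    print(instantaneous_traveltime(trace, dt)[5:10])   # values close to 0.6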

  6. Nonspinning numerical relativity waveform surrogates: assessing the model

    Science.gov (United States)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  7. Arbitrary waveform modulated pulse EPR at 200 GHz

    Science.gov (United States)

    Kaminker, Ilia; Barnes, Ryan; Han, Songi

    2017-06-01

    We report here on the implementation of arbitrary waveform generation (AWG) capabilities at ∼200 GHz into an Electron Paramagnetic Resonance (EPR) and Dynamic Nuclear Polarization (DNP) instrument platform operating at 7 T. This is achieved with the integration of a 1 GHz, 2-channel, digital-to-analog converter (DAC) board that enables the generation of coherent arbitrary waveforms at Ku-band frequencies with 1 ns resolution into an existing architecture of a solid-state amplifier multiplier chain (AMC). This allows for the generation of arbitrary phase- and amplitude-modulated waveforms at 200 GHz with >150 mW power. We find that the non-linearity of the AMC poses significant difficulties in generating amplitude-modulated pulses at 200 GHz. We demonstrate that, in the power-limited regime of the drive field ω1, shaped phase-modulated pulses enable broadband (>10 MHz) spin manipulation in incoherent (inversion) as well as coherent (echo formation) experiments. Highlights include an improvement of one order of magnitude in inversion bandwidth compared with conventional rectangular pulses, as well as a factor-of-two improvement in the refocused echo intensity at 200 GHz.

  8. Handling of computational in vitro/in vivo correlation problems by Microsoft Excel: III. Convolution and deconvolution.

    Science.gov (United States)

    Langenbucher, Frieder

    2003-11-01

    Convolution and deconvolution are the classical in-vitro-in-vivo correlation tools to describe the relationship between input and weighting/response in a linear system, where the input represents the drug release in vitro and the weighting/response any body response in vivo. While functional treatment, e.g. in terms of polyexponential or Weibull distributions, is more appropriate for general surveys or prediction, numerical algorithms are useful for treating actual experimental data. Deconvolution is not considered an algorithm on its own, but the inversion of a corresponding convolution. MS Excel is shown to be a useful tool for all these applications.
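
    The point can be illustrated with a small numerical sketch (Python rather than Excel, with invented toy profiles): the in-vivo response is the discrete convolution of the in-vitro input with a unit impulse response, and deconvolution is the inversion of that same lower-triangular system.

    import numpy as np

    def convolve_response(input_rate, uir, dt):
        """Predict the in-vivo response: discrete convolution of the in-vitro
        input (release rate) with the unit impulse response (weighting)."""
        return np.convolve(input_rate, uir)[:len(input_rate)] * dt

    def deconvolve_input(response, uir, dt):
        """Recover the input by inverting the convolution: the discrete
        convolution is a lower-triangular Toeplitz system, solved here exactly."""
        n = len(response)
        A = dt * np.array([[uir[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        return np.linalg.solve(A, response)

    dt = 0.5                                  # hours
    t = np.arange(0, 24, dt)
    release = 0.2 * np.exp(-0.2 * t)          # toy in-vitro input (first-order release)
    uir = np.exp(-0.1 * t)                    # toy weighting / unit impulse response
    resp = convolve_response(release, uir, dt)
    back = deconvolve_input(resp, uir, dt)
    print(np.max(np.abs(back - release)))     # ~0 for noise-free data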

  9. Deconvolution of X-ray diffraction profiles using series expansion: a line-broadening study of polycrystalline 9-YSZ

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez-Bajo, F. [Universidad de Extremadura, Badajoz (Spain). Dept. de Electronica e Ingenieria Electromecanica; Ortiz, A.L.; Cumbrera, F.L. [Universidad de Extremadura, Badajoz (Spain). Dept. de Fisica

    2001-07-01

    Deconvolution of X-ray diffraction profiles is a fundamental step in obtaining reliable results in the microstructural characterization (crystallite size, lattice microstrain, etc.) of polycrystalline materials. In this work we have analyzed a powder sample of 9-YSZ using a technique based on the Fourier series expansion of the pure profile. This procedure, which can be combined with regularization methods, is especially powerful for minimizing the effects of the ill-posed nature of the linear integral equation involved in the kinematical theory of X-ray diffraction. Finally, the deconvoluted profiles have been used to obtain microstructural parameters by means of the integral-breadth method. (orig.)

  10. Determining mineralogical variations of aeolian deposits using thermal infrared emissivity and linear deconvolution methods

    Science.gov (United States)

    Hubbard, Bernard E.; Hooper, Donald M.; Solano, Federico; Mars, John C.

    2018-01-01

    We apply linear deconvolution methods to derive mineral and glass proportions for eight field sample training sites at seven dune fields: (1) Algodones, California; (2) Big Dune, Nevada; (3) Bruneau, Idaho; (4) Great Kobuk Sand Dunes, Alaska; (5) Great Sand Dunes National Park and Preserve, Colorado; (6) Sunset Crater, Arizona; and (7) White Sands National Monument, New Mexico. These dune fields were chosen because they represent a wide range of mineral grain mixtures and allow us to gain a better understanding of both compositional and sorting effects within terrestrial and extraterrestrial dune systems. We also use actual ASTER TIR emissivity imagery to map the spatial distribution of these minerals throughout the seven dune fields and evaluate the effects of degraded spectral resolution on the accuracy of the mineral abundances retrieved. Our results show that hyperspectral data convolutions of our laboratory emissivity spectra outperformed multispectral data convolutions of the same data with respect to the mineral, glass and lithic abundances derived. Both the number and the wavelength position of spectral bands greatly impact the accuracy of linear deconvolution retrieval, especially of feldspar proportions (e.g. K-feldspar vs. plagioclase), as well as the detection of certain mafic and carbonate minerals. In particular, the ASTER mapping results show that several of the dune sites display patterns such that less dense minerals typically have higher abundances near the center of the active and most evolved dunes in the field, while more dense minerals and glasses appear to be more abundant along the margins of the active dune fields.
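
    As a generic illustration of the linear deconvolution (spectral unmixing) step — not the authors' processing chain, and with invented Gaussian-shaped "endmember" spectra rather than real mineral libraries — a mixed emissivity spectrum can be unmixed by non-negative least squares:

    import numpy as np
    from scipy.optimize import nnls

    def unmix_emissivity(mixed, endmembers):
        """Linear deconvolution of a measured emissivity spectrum: solve
        min ||E a - m||^2 subject to a >= 0, then renormalize so the
        retrieved abundances sum to one."""
        a, _ = nnls(endmembers, mixed)          # endmembers: (n_bands, n_components)
        return a / a.sum() if a.sum() > 0 else a

    bands = np.linspace(8.0, 12.0, 10)          # TIR wavelengths in micrometres
    E = np.column_stack([                       # invented absorption features
        0.95 - 0.12 * np.exp(-((bands - 9.2) / 0.5) ** 2),
        0.97 - 0.10 * np.exp(-((bands - 10.0) / 0.5) ** 2),
        0.96 - 0.15 * np.exp(-((bands - 11.3) / 0.5) ** 2),
    ])
    true_abundances = np.array([0.6, 0.3, 0.1])
    rng = np.random.default_rng(0)
    measured = E @ true_abundances + rng.normal(0.0, 5e-4, bands.size)
    print(np.round(unmix_emissivity(measured, E), 3))   # approximately [0.6, 0.3, 0.1]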

  12. Obtaining Crustal Properties From the P Coda Without Deconvolution: an Example From the Dakotas

    Science.gov (United States)

    Frederiksen, A. W.; Delaney, C.

    2013-12-01

    Receiver functions are a popular technique for mapping variations in crustal thickness and bulk properties, as the travel times of Ps conversions and multiples from the Moho constrain both Moho depth (h) and the Vp/Vs ratio (k) of the crust. The established approach is to generate a suite of receiver functions, which are then stacked along arrival-time curves for a set of (h,k) values (the h-k stacking approach of Zhu and Kanamori, 2000). However, this approach is sensitive to noise issues with the receiver functions, deconvolution artifacts, and the effects of strong crustal layering (such as in sedimentary basins). In principle, however, the deconvolution is unnecessary; for any given crustal model, we can derive a transfer function allowing us to predict the radial component of the P coda from the vertical, and so determine a misfit value for a particular crustal model. We apply this idea to an Earthscope Transportable Array data set from North and South Dakota and western Minnesota, for which we already have measurements obtained using conventional h-k stacking, and so examine the possibility of crustal thinning and modification by a possible failed branch of the Mid-Continent Rift.

  13. Blind deconvolution of time-of-flight mass spectra from atom probe tomography

    International Nuclear Information System (INIS)

    Johnson, L.J.S.; Thuvander, M.; Stiller, K.; Odén, M.; Hultman, L.

    2013-01-01

    A major source of uncertainty in compositional measurements in atom probe tomography stems from the uncertainties of assigning peaks or parts of peaks in the mass spectrum to their correct identities. In particular, peak overlap is a limiting factor, whereas an ideal mass spectrum would have peaks at their correct positions with zero broadening. Here, we report a method to deconvolute the experimental mass spectrum into such an ideal spectrum and a system function describing the peak broadening introduced by the field evaporation and detection of each ion. By making the assumption of a linear and time-invariant behavior, a system of equations is derived that describes the peak shape and peak intensities. The model is fitted to the observed spectrum by minimizing the squared residuals, regularized by the maximum entropy method. For synthetic data perfectly obeying the assumptions, the method recovered peak intensities to within ±0.33 at.%. The application of this model to experimental APT data is exemplified with Fe–Cr data. Knowledge of the peak shape opens up several new possibilities, not just for better overall compositional determination, but, e.g., for the estimation of errors of ranging due to peak overlap or peak separation constrained by isotope abundances. - Highlights: • A method for the deconvolution of atom probe mass spectra is proposed. • Applied to synthetic randomly generated spectra, the accuracy was ±0.33 at.%. • Application of the method to an experimental Fe–Cr spectrum is demonstrated
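
    A minimal sketch of the linear, time-invariant forward model is given below (Python, with invented toy peaks); a simple ridge penalty stands in for the maximum-entropy regularization used in the paper.

    import numpy as np

    def deconvolve_spectrum(observed, system_fn, lam=1e-3):
        """Recover an 'ideal' stick spectrum s from an observed spectrum y = C s,
        where C applies the peak-broadening system function. Ridge regularization
        replaces the maximum-entropy term of the original method."""
        n = len(observed)
        C = np.array([[system_fn[i - j] if 0 <= i - j < len(system_fn) else 0.0
                       for j in range(n)] for i in range(n)])
        s = np.linalg.solve(C.T @ C + lam * np.eye(n), C.T @ observed)
        return np.clip(s, 0.0, None)            # intensities cannot be negative

    # toy example: two overlapping peaks broadened by an exponential tail
    broadening = np.exp(-np.arange(8) / 2.0)
    broadening /= broadening.sum()
    ideal = np.zeros(60)
    ideal[20], ideal[24] = 1.0, 0.4
    observed = np.convolve(ideal, broadening)[:60]
    recovered = deconvolve_spectrum(observed, broadening)
    print(np.argsort(recovered)[-2:])           # the two largest entries sit at (or next to) 20 and 24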

  14. Application of Glow Curve Deconvolution Method to Evaluate Low Dose TLD LiF

    International Nuclear Information System (INIS)

    Kurnia, E; Oetami, H R; Mutiah

    1996-01-01

    The thermoluminescence dosimeter (TLD), especially of LiF:Mg,Ti material, is one of the most practical personal dosimeters known to date. Dose measurement below 100 uGy using a TLD reader is very difficult at a high precision level, so a software application is used to improve the precision of the TLD reader. The objective of the research is to compare three TL glow-curve analysis methods for detectors irradiated in the range from 5 to 250 uGy. The first method is manual analysis: dose information is obtained from the area under the glow curve between preselected temperature limits, and the background signal is estimated by a second readout following the first readout. The second method is a deconvolution method, separating the glow curve mathematically into four peaks; dose information is obtained from the area of peak 5, and the background signal is eliminated computationally. The third method is also a deconvolution method, but the dose is represented by the sum of the areas of peaks 3, 4 and 5. The results show that the sum of peaks 3, 4 and 5 can improve reproducibility six times over manual analysis at a dose of 20 uGy, and can reduce the MMD to 10 uGy, rather than 60 uGy with manual analysis or 20 uGy with the peak-5-area method. In terms of linearity, the sum of peaks 3, 4 and 5 yields an exactly linear dose-response curve over the entire dose range.

  15. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping.

    Science.gov (United States)

    Suh, D M; Kim, W W; Chung, J G

    1999-01-01

    Bolt degradation has become a major issue in the nuclear industry since the 1980s. If small cracks in stud bolts are not detected early enough, they grow rapidly and cause catastrophic disasters. Their detection, despite its importance, is known to be a very difficult problem due to the complicated structures of the stud bolts. This paper presents a method of detecting and sizing a small crack in the root between two adjacent crests in the threads. The key idea comes from the fact that the mode-converted Rayleigh wave travels slowly down the face of the crack and returns from the intersection of the crack and the root of the thread to the transducer. Thus, when a crack exists, a small delayed pulse due to the Rayleigh wave is detected between the large, regularly spaced pulses from the thread. The delay time is the same as the propagation delay time of the slow Rayleigh wave and is proportional to the size of the crack. To efficiently detect the slow Rayleigh wave, three methods based on digital signal processing are proposed: wave shaping, dynamic predictive deconvolution, and dynamic predictive deconvolution combined with wave shaping.

  16. The measurement of layer thickness by the deconvolution of ultrasonic signals

    International Nuclear Information System (INIS)

    McIntyre, P.J.

    1977-07-01

    An ultrasonic technique for measuring layer thickness, such as that of oxide on corroded steel, is described. A time-domain response function is extracted from an ultrasonic signal reflected from the layered system. This signal is the convolution of the input signal with the response function of the layer. By using a signal reflected from a non-layered surface to represent the input, the response function may be obtained by deconvolution. The advantage of this technique over that described by Haines and Bel (1975) is that the quality of the results obtained using their method depends on the ability of a skilled operator to line up an arbitrary common feature of the received signals. Using deconvolution, no operator manipulations are necessary, and so less highly trained personnel may successfully make the measurements. Results are presented for layers of araldite on aluminium and magnetite on steel. The results agreed satisfactorily with predictions but, in the case of magnetite, its high velocity of sound meant that thicknesses of less than 250 microns were difficult to measure accurately. (author)

  17. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra by using a peak deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47±2) Bq.kg-1 from Ankatso, soil sample spectra with average activities of about (125±2) Bq.kg-1 from Antsirabe, and soil sample spectra with high activities of about (21100±120) Bq.kg-1 from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries makes it possible to deconvolve many multiplet regions: the quartet within 235 keV-242 keV; Pb-214 and Pb-212 within 294 keV-301 keV; Th-232 daughters within 582 keV-584 keV; Ac-228 within 904 keV-911 keV and within 964 keV-970 keV; and Bi-214 within 1401 keV-1408 keV. Those peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV. [fr

  18. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then updated iteratively, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because the information in the gradient domain is better suited to the estimation of the blur kernel, the blur kernel is estimated in the gradient domain; this subproblem can be solved quickly in the frequency domain by the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on an L1/L2 regularization prior in the gradient space obtains a unique and stable solution in the image restoration process, which not only preserves the edges and details of the image but also ensures the accuracy of the results.
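
    Two of the ingredients named above — the L1/L2 sparsity measure on image gradients and the shrinkage (soft-thresholding) step used by the iterative shrinkage-thresholding algorithm — can be sketched as follows (Python, toy images; the alternating blur-kernel and latent-image updates of the full method are not reproduced):

    import numpy as np

    def gradients(img):
        """Forward-difference image gradients (the 'gradient space' of the abstract)."""
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    def l1_over_l2(img):
        """L1/L2 sparsity measure of the gradient image: small for sharp images
        (sparse gradients), larger for blurred ones."""
        gx, gy = gradients(img)
        g = np.hstack([gx.ravel(), gy.ravel()])
        return np.sum(np.abs(g)) / (np.linalg.norm(g) + 1e-12)

    def soft_threshold(x, t):
        """Shrinkage operator applied in each iterative-shrinkage-thresholding step."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    # a sharp random-dot image scores lower (sparser gradients) than its blurred copy
    rng = np.random.default_rng(1)
    sharp = (rng.random((64, 64)) > 0.98).astype(float)
    kernel = np.ones((5, 5)) / 25.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel, s=sharp.shape)))
    print(l1_over_l2(sharp), l1_over_l2(blurred))   # sharp < blurred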

  19. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    Science.gov (United States)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Survey Telescope (LSST) and the Hubble and James Webb space telescopes.

  20. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorber is a single N-shaped wave, and the PA signals of a complicated biological tissue can be considered the combination of individual N-shaped waves. However, the N-shaped wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing that works directly on the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectrum consistency. With our proposed method, the resulting PA images yield more detailed structural information: micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging, as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from the deconvolved signals.
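
    A minimal sketch of the deconvolution stage alone (the EMD re-shaping step is omitted), assuming the system PSF has been measured beforehand; signal shapes, lengths and the Wiener-style regularization constant are invented for the example:

    import numpy as np

    def deconvolve_pa_signal(raw, psf, noise_level=1e-2):
        """Deconvolve a raw photoacoustic A-line with a measured system PSF
        using a regularized (Wiener-style) inverse filter."""
        n = len(raw)
        H = np.fft.rfft(psf, n)
        W = np.conj(H) / (np.abs(H) ** 2 + noise_level)
        return np.fft.irfft(np.fft.rfft(raw) * W, n)

    def n_wave(t, center, width):
        """Idealized N-shaped wave of a point absorber (toy model)."""
        x = (t - center) / width
        return np.where(np.abs(x) < 1, -x, 0.0)

    n = 512
    t = np.arange(n)
    psf = np.exp(-((t - 20) / 6.0) ** 2)              # stand-in for the measured PSF
    clean = n_wave(t, 200, 8) + n_wave(t, 230, 8)     # two closely spaced absorbers
    raw = np.convolve(clean, psf)[:n]                 # what the transducer records
    restored = deconvolve_pa_signal(raw, psf)
    print(np.argmax(np.abs(restored)))                # within a few samples of an absorber (near 200 or 230)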

  1. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    Science.gov (United States)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
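
    The core ADM idea — approximating the unfiltered field by repeated filtering — can be sketched in one dimension as follows (Python; the box filter, deconvolution order and synthetic field are toy choices, not the two-fluid setup of the paper):

    import numpy as np
    from scipy.ndimage import uniform_filter1d

    def G(u, width=9):
        """Top-hat (box) filter with periodic boundaries, standing in for the LES filter."""
        return uniform_filter1d(u, size=width, mode='wrap')

    def approximate_deconvolution(u_bar, order=5, width=9):
        """Approximate deconvolution by repeated filtering (van Cittert series):
        u* = sum_{k=0..N} (I - G)^k u_bar, an approximation of the unfiltered field."""
        u_star = np.zeros_like(u_bar)
        residual = u_bar.copy()
        for _ in range(order + 1):
            u_star += residual
            residual -= G(residual, width)
        return u_star

    # a priori test on a synthetic 1-D field: the deconvolved field is closer to the original
    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    u = 0.3 + 0.1 * np.sin(4 * x) + 0.05 * np.sin(20 * x)
    u_bar = G(u)
    u_star = approximate_deconvolution(u_bar)
    print(np.mean((u_bar - u) ** 2), np.mean((u_star - u) ** 2))   # second value is smaller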

  2. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to experimental points using the least squares Levenberg-Marquardt method. The main advantage of GlowFit is in its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in the so-called pattern files. GlowFit is a Microsoft Windows-operated user-friendly program. Its graphic interface enables easy intuitive manipulation of glow-peaks, at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)
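
    As a much-simplified sketch of what such a fit involves (not GlowFit itself), the snippet below fits two overlapping first-order-kinetics peaks to a synthetic glow curve by non-linear least squares; the Kitis-type peak expression, parameter values and bounds are illustrative assumptions:

    import numpy as np
    from scipy.optimize import curve_fit

    K_B = 8.617e-5   # Boltzmann constant in eV/K

    def first_order_peak(T, Im, E, Tm):
        """Single first-order-kinetics glow peak (Kitis-type approximation) with
        maximum intensity Im at temperature Tm and activation energy E (eV)."""
        a = E / (K_B * T) * (T - Tm) / Tm
        return Im * np.exp(1.0 + a
                           - (T / Tm) ** 2 * np.exp(a) * (1.0 - 2.0 * K_B * T / E)
                           - 2.0 * K_B * Tm / E)

    def two_peaks(T, Im1, E1, Tm1, Im2, E2, Tm2):
        return first_order_peak(T, Im1, E1, Tm1) + first_order_peak(T, Im2, E2, Tm2)

    T = np.linspace(300.0, 550.0, 400)
    true = (1.0e4, 1.2, 440.0, 4.0e3, 1.0, 470.0)          # two strongly overlapping peaks
    rng = np.random.default_rng(0)
    y = two_peaks(T, *true) + rng.normal(0.0, 50.0, T.size)

    p0 = (8e3, 1.1, 430.0, 5e3, 0.9, 480.0)                # rough initial guesses
    lo = (0.0, 0.5, 350.0, 0.0, 0.5, 350.0)
    hi = (1e6, 2.5, 550.0, 1e6, 2.5, 550.0)
    popt, _ = curve_fit(two_peaks, T, y, p0=p0, bounds=(lo, hi), maxfev=20000)
    print(np.round(popt, 2))                               # compare with `true`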

  3. Deconvolution analysis of 99mTc-methylene diphosphonate kinetics in metabolic bone disease

    Energy Technology Data Exchange (ETDEWEB)

    Knop, J.; Kroeger, E.; Stritzke, P.; Schneider, C.; Kruse, H.P.

    1981-02-01

    The kinetics of 99mTc-methylene diphosphonate (MDP) and 47Ca were studied in three patients with osteoporosis, three patients with hyperparathyroidism, and two patients with osteomalacia. The activities of 99mTc-MDP were recorded in the lumbar spine, paravertebral soft tissues, and in venous blood samples for 1 h after injection. The results were submitted to deconvolution analysis to determine regional bone accumulation rates. 47Ca kinetics were analysed by a linear two-compartment model quantitating short-term mineral exchange, exchangeable bone calcium, and calcium accretion. The 99mTc-MDP accumulation rates were small in osteoporosis, greater in hyperparathyroidism, and greatest in osteomalacia. No correlations were obtained between 99mTc-MDP bone accumulation rates and the results of 47Ca kinetics. However, there was a significant relationship between the level of serum alkaline phosphatase and bone accumulation rates (R = 0.71, P < 0.025). As a result, deconvolution analysis of regional 99mTc-MDP kinetics in dynamic bone scans might be useful to quantitate osseous tracer accumulation in metabolic bone disease. The lack of correlation between the results of 99mTc-MDP kinetics and 47Ca kinetics might suggest a preferential binding of 99mTc-MDP to the organic matrix of the bone, as has been suggested by other authors on the basis of experimental and clinical investigations.

  4. Colocated MIMO Radar: Beamforming, Waveform design, and Target Parameter Estimation

    KAUST Repository

    Jardak, Seifallah

    2014-04-01

    Thanks to its improved capabilities, the Multiple Input Multiple Output (MIMO) radar is attracting the attention of researchers and practitioners alike. Because it transmits orthogonal or partially correlated waveforms, this emerging technology outperforms the phased array radar by providing better parametric identifiability, achieving higher spatial resolution, and designing complex beampatterns. To avoid jamming and enhance the signal-to-noise ratio, it is often of interest to maximize the transmitted power in a given region of interest and minimize it elsewhere. This problem is known as transmit beampattern design and is usually tackled as a two-step process: a transmit covariance matrix is first designed by solving a convex optimization problem, and is then used to generate practical waveforms. In this work, we propose simple novel methods to generate correlated waveforms using finite alphabet constant- and non-constant-envelope symbols. To generate finite alphabet waveforms, the proposed method maps easily generated Gaussian random variables onto the phase-shift-keying, pulse-amplitude, and quadrature-amplitude modulation schemes. For such a mapping, the probability density function of the Gaussian random variables is divided into M regions, where M is the number of alphabets in the corresponding modulation scheme. By exploiting the mapping function, the relationship between the cross-correlation of Gaussian and finite alphabet symbols is derived. The second part of this thesis covers the topic of target parameter estimation. To determine the reflection coefficient, spatial location, and Doppler shift of a target, maximum likelihood estimation yields the best performance; however, it requires a two-dimensional search, so its computational complexity is prohibitively high. We therefore propose a reduced-complexity algorithm with optimum performance, which allows the two-dimensional fast Fourier transform to jointly estimate the spatial location
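
    A toy sketch of the finite-alphabet mapping idea (Python; the covariance matrix, array size and QPSK choice are invented, and the thesis' cross-correlation derivation is not reproduced): correlated Gaussian waveforms are drawn with a chosen covariance, and each sample is mapped to a symbol by splitting the Gaussian pdf into M equiprobable regions.

    import numpy as np
    from scipy.stats import norm

    def gaussian_to_mpsk(gauss, M=4):
        """Map zero-mean, unit-variance Gaussian samples onto M-PSK symbols by
        splitting the Gaussian pdf into M equiprobable regions."""
        edges = norm.ppf(np.arange(1, M) / M)            # region boundaries
        idx = np.digitize(gauss, edges)                  # region index 0..M-1
        return np.exp(1j * (2 * np.pi * idx / M + np.pi / M))   # constant-envelope symbols

    # correlated Gaussian waveforms for a 4-element transmit array (toy covariance)
    rng = np.random.default_rng(3)
    R = 0.5 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))   # desired covariance
    L = np.linalg.cholesky(R)
    g = L @ rng.standard_normal((4, 10000))              # Gaussian waveforms with covariance R
    s = gaussian_to_mpsk(g, M=4)                         # finite-alphabet (QPSK) waveforms
    print(np.round(np.cov(g), 2))                        # close to R
    print(np.unique(np.round(np.abs(s), 6)))             # constant envelope: all ones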

  5. Improved Transient Response Estimations in Predicting 40 Hz Auditory Steady-State Response Using Deconvolution Methods

    Directory of Open Access Journals (Sweden)

    Xiaodan Tan

    2017-12-01

    The auditory steady-state response (ASSR) is one of the main clinical approaches for health screening and frequency-specific hearing assessment. However, its generation mechanism is still a matter of much controversy. In the present study, the linear superposition hypothesis for the generation of ASSRs was investigated by comparing the relationships between the classical 40 Hz ASSR and three synthetic ASSRs obtained from three different templates for the transient auditory evoked potential (AEP). These three AEPs are the traditional AEP at 5 Hz and two 40 Hz AEPs derived from two deconvolution algorithms using stimulus sequences, i.e., continuous loop averaging deconvolution (CLAD) and multi-rate steady-state average deconvolution (MSAD). CLAD requires irregular inter-stimulus intervals (ISIs) in the sequence, while MSAD uses the same ISIs but evenly spaced stimulus sequences, which mimics the classical 40 Hz ASSR. It has been reported that these reconstructed templates show similar patterns but significant differences in morphology and distinct frequency characteristics in synthetic ASSRs. The prediction accuracies of the ASSR using these templates show significant differences (p < 0.05) in 45.95, 36.28, and 10.84% of total time points within four cycles of the ASSR for the traditional, CLAD, and MSAD templates, respectively, as compared with the classical 40 Hz ASSR, and the ASSR synthesized from the MSAD transient AEP shows the best similarity. Such a similarity is also demonstrated at the individual level only for MSAD, showing no statistically significant difference (Hotelling's T2 test, T2 = 6.96, F = 0.80, p = 0.592) as compared with the classical 40 Hz ASSR. The present results indicate that both stimulation rate and sequencing factor (ISI variation) affect transient AEP reconstructions from steady-state stimulation protocols. Furthermore, both the auditory brainstem response (ABR) and the middle latency response (MLR) are observed to contribute to the composition of the ASSR …
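
    The linear superposition hypothesis itself is easy to sketch numerically (Python; the damped-oscillation AEP template, sampling rate and duration are invented): a steady-state response is synthesized by convolving a transient AEP template with an evenly spaced 40 Hz stimulus train.

    import numpy as np

    def synthesize_assr(aep_template, rate_hz=40.0, fs=1000.0, duration_s=1.0):
        """Synthesize a steady-state response under the linear superposition
        hypothesis: the transient AEP template is convolved with a periodic
        (40 Hz) stimulus impulse train."""
        n = int(duration_s * fs)
        stimulus = np.zeros(n)
        stimulus[::int(fs / rate_hz)] = 1.0              # evenly spaced stimuli
        return np.convolve(stimulus, aep_template)[:n]

    # toy transient AEP template: a damped oscillation (stand-in for an ABR/MLR complex)
    fs = 1000.0
    t = np.arange(0, 0.1, 1 / fs)
    aep = np.exp(-t / 0.03) * np.sin(2 * np.pi * 30 * t)
    assr = synthesize_assr(aep, rate_hz=40.0, fs=fs)

    # the synthetic ASSR is dominated by the 40 Hz component, as expected
    spec = np.abs(np.fft.rfft(assr))
    freqs = np.fft.rfftfreq(len(assr), 1 / fs)
    print(freqs[np.argmax(spec[1:]) + 1])                # ~40 Hz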

  6. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode/SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo, 669–1337 Japan (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s^−1 and +3.0 km s^−1 at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.

  7. Deconvolution of 238,239,240Pu conversion electron spectra measured with a silicon drift detector

    DEFF Research Database (Denmark)

    Pommé, S.; Marouli, M.; Paepen, J.

    2018-01-01

    Internal conversion electron (ICE) spectra of thin 238,239,240Pu sources, measured with a windowless Peltier-cooled silicon drift detector (SDD), were deconvoluted and relative ICE intensities were derived from the fitted peak areas. Corrections were made for energy dependence of the full...

  8. Facilitating high resolution mass spectrometry data processing for screening of environmental water samples: An evaluation of two deconvolution tools

    NARCIS (Netherlands)

    Bade, R.; Causanilles, A.; Emke, E.; Bijlsma, L.; Sancho, J.V.; Hernandez, F.; de Voogt, P.

    2016-01-01

    A screening approach was applied to influent and effluent wastewater samples. After injection in a LC-LTQ-Orbitrap, data analysis was performed using two deconvolution tools, MsXelerator (modules MPeaks and MS Compare) and Sieve 2.1. The outputs were searched incorporating an in-house database of >

  9. Deconvolution, differentiation and Fourier transformation algorithms for noise-containing data based on splines and global approximation

    NARCIS (Netherlands)

    Wormeester, Herbert; Sasse, A.G.B.M.; van Silfhout, Arend

    1988-01-01

    One of the main problems in the analysis of measured spectra is how to reduce the influence of noise in data processing. We show a deconvolution, a differentiation and a Fourier Transform algorithm that can be run on a small computer (64 K RAM) and suffer less from noise than commonly used routines.

  10. Full-waveform data for building roof step edge localization

    Science.gov (United States)

    Słota, Małgorzata

    2015-08-01

    Airborne laser scanning data perfectly represent flat or gently sloped areas; to date, however, accurate breakline detection is the main drawback of this technique. This issue becomes particularly important in the case of modeling buildings, where accuracy higher than the footprint size is often required. This article covers several issues related to full-waveform data registered on building step edges. First, the full-waveform data simulator was developed and presented in this paper. Second, this article provides a full description of the changes in echo amplitude, echo width and returned power caused by the presence of edges within the laser footprint. Additionally, two important properties of step edge echoes, peak shift and echo asymmetry, were noted and described. It was shown that these properties lead to incorrect echo positioning along the laser center line and can significantly reduce the edge points' accuracy. For these reasons and because all points are aligned with the center of the beam, regardless of the actual target position within the beam footprint, we can state that step edge points require geometric corrections. This article presents a novel algorithm for the refinement of step edge points. The main distinguishing advantage of the developed algorithm is the fact that none of the additional data, such as emitted signal parameters, beam divergence, approximate edge geometry or scanning settings, are required. The proposed algorithm works only on georeferenced profiles of reflected laser energy. Another major advantage is the simplicity of the calculation, allowing for very efficient data processing. Additionally, the developed method of point correction allows for the accurate determination of points lying on edges and edge point densification. For this reason, fully automatic localization of building roof step edges based on LiDAR full-waveform data with higher accuracy than the size of the lidar footprint is feasible.

  11. Development of optoelectronic monitoring system for ear arterial pressure waveforms

    Science.gov (United States)

    Sasayama, Satoshi; Imachi, Yu; Yagi, Tamotsu; Imachi, Kou; Ono, Toshirou; Man-i, Masando

    1994-02-01

    Invasive intra-arterial blood pressure measurement is the most accurate method but is not practical if the subject is in motion. The apparatus developed by Wesseling et al., based on the volume-clamp method of Penaz (Finapres), is able to monitor continuous finger arterial pressure waveforms noninvasively. The limitation of Finapres is the difficulty in measuring the pressure of a subject during work that involves finger or arm action. Because the Finapres detector is attached to the subject's finger, the measurements are affected by the inertia of blood and by hydrostatic effects caused by arm or finger motion. To overcome this problem, the authors made a detector that is attached to the subject's ear and developed an optoelectronic monitoring system for ear arterial pressure waveforms (Earpres). An IR LED, a photodiode, and an air cuff comprised the detector. The detector was attached to a subject's ear, and the space between the air cuff and the rubber plate on which the LED and photodiode were positioned was adjusted. To evaluate the accuracy of Earpres, the following tests were conducted with the participation of 10 healthy male volunteers. The subjects rested for about five minutes, then performed standing and squatting exercises to provide wide ranges of systolic and diastolic arterial pressure. Intra- and inter-individual standard errors were calculated according to the method of van Egmond et al. As a result, the averages of the intra-individual standard errors for Earpres were small (3.7 and 2.7 mmHg for systolic and diastolic pressure, respectively). The inter-individual standard errors for Earpres were about the same as for Finapres for both systolic and diastolic pressure. The results showed that the ear monitor was reliable in measuring arterial blood pressure waveforms and might be applicable to various fields such as sports medicine and ergonomics.

  12. Improved gravitational waveforms from spinning black hole binaries

    International Nuclear Information System (INIS)

    Porter, Edward K.; Sathyaprakash, B.S.

    2005-01-01

    The standard post-Newtonian approximation to gravitational waveforms from nonspinning black hole binaries, called T-approximants, is known not to be sufficiently accurate close to the last stable orbit of the system. A new approximation, called P-approximants, is believed to improve the accuracy of the waveforms, rendering them applicable up to the last stable orbit. In this study we apply P-approximants to the case of a test particle in equatorial orbit around a Kerr black hole parameterized by a spin parameter q that takes values between -1 and 1. In order to assess the performance of the two approximants we measure their effectualness (i.e., larger overlaps with the exact signal) and faithfulness (i.e., smaller biases while measuring the parameters of the signal) with the exact (numerical) waveforms. We find that in the case of prograde orbits, that is, orbits whose angular momentum is in the same sense as the spin angular momentum of the black hole, T-approximant templates obtain an effectualness of ∼0.99 only for the smaller spins, while P-approximants achieve an effectualness of >0.99 for all spins up to q=0.95. The bias in the estimation of parameters is much lower in the case of P-approximants than T-approximants. We find that P-approximants are both effectual and faithful and should be more effective than T-approximants as a detection template family when q>0. For q<0 both T- and P-approximants perform equally well, so that either of them could be used as a detection template family.

  13. Dereplication of Natural Products Using GC-TOF Mass Spectrometry: Improved Metabolite Identification By Spectral Deconvolution Ratio Analysis

    Directory of Open Access Journals (Sweden)

    Fausto Carnevale Neto

    2016-09-01

    Dereplication based on hyphenated techniques has been extensively applied in plant metabolomics, avoiding re-isolation of known natural products. However, due to the complex nature of biological samples and their large concentration range, dereplication requires the use of chemometric tools to comprehensively extract information from the acquired data. In this work we developed a reliable GC-MS-based method for the identification of non-targeted plant metabolites by combining the Ratio Analysis of Mass Spectrometry deconvolution tool (RAMSY) with the Automated Mass Spectral Deconvolution and Identification System software (AMDIS). Plant species from Solanaceae, Chrysobalanaceae and Euphorbiaceae were selected as model systems due to their molecular diversity, ethnopharmacological potential and economic value. The samples were analyzed by GC-MS after methoximation and silylation reactions. Dereplication was initiated with the use of a factorial design of experiments to determine the best AMDIS configuration for each sample, considering linear retention indices and mass spectral data. A heuristic factor (CDF, compound detection factor) was developed and applied to the AMDIS results in order to decrease the false-positive rates. Despite the enhancement in deconvolution and peak identification, the empirical AMDIS method was not able to fully deconvolute all GC peaks, leading to low MF values and/or missing metabolites. RAMSY was applied as a complementary deconvolution method to AMDIS for peaks exhibiting substantial overlap, resulting in recovery of low-intensity co-eluted ions. The results from this combination of optimized AMDIS with RAMSY attested to the ability of this approach as an improved dereplication method for complex biological samples such as plant extracts.

  14. Multi-processor system for real-time deconvolution and flow estimation in medical ultrasound

    DEFF Research Database (Denmark)

    Jensen, Jesper Lomborg; Jensen, Jørgen Arendt; Stetson, Paul F.

    1996-01-01

    … of the algorithms. Many of the algorithms can only be properly evaluated in a clinical setting with real-time processing, which generally cannot be done with conventional equipment. This paper therefore presents a multi-processor system capable of performing 1.2 billion floating point operations per second on RF … filter is used with a second time-reversed recursive estimation step. Here it is necessary to perform about 70 arithmetic operations per RF sample, or about 1 billion operations per second, for real-time deconvolution. Furthermore, these have to be floating point operations due to the adaptive nature … interfaced to our previously-developed real-time sampling system that can acquire RF data at a rate of 20 MHz and simultaneously transmit the data at 20 MHz to the processing system via several parallel channels. These two systems can, thus, perform real-time processing of ultrasound data. The advantage …

  15. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    Science.gov (United States)

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.

  16. The deconvolution of sputter-etching surface concentration measurements to determine impurity depth profiles

    International Nuclear Information System (INIS)

    Carter, G.; Katardjiev, I.V.; Nobes, M.J.

    1989-01-01

    The quasi-linear partial differential continuity equations that describe the evolution of the depth profiles and surface concentrations of marker atoms in kinematically equivalent systems undergoing sputtering, ion collection and atomic mixing are solved using the method of characteristics. It is shown how atomic mixing probabilities can be deduced from measurements of ion collection depth profiles with increasing ion fluence, and how this information can be used to predict surface concentration evolution. Even with this information, however, it is shown that it is not possible to deconvolute directly the surface concentration measurements to provide initial depth profiles, except when only ion collection and sputtering from the surface layer alone occur. It is demonstrated further that optimal recovery of initial concentration depth profiles could be ensured if the concentration-measuring analytical probe preferentially sampled depths near and at the maximum depth of bombardment-induced perturbations. (author)

  17. Analysis of gravity data beneath Endut geothermal prospect using horizontal gradient and Euler deconvolution

    Science.gov (United States)

    Supriyanto, Noor, T.; Suhanto, E.

    2017-07-01

    The Endut geothermal prospect is located in Banten Province, Indonesia. The geological setting of the area is dominated by Quaternary volcanics, Tertiary sediments and Tertiary rock intrusions. The area is in the preliminary study phase, covering geology, geochemistry, and geophysics. As part of the geophysical study, gravity measurements were carried out and analyzed in order to understand the geological conditions, especially the subsurface fault structures that control the geothermal system in the Endut area. After preconditioning of the gravity data, the complete Bouguer anomaly was analyzed using derivative-based methods, namely the Horizontal Gradient (HG) and Euler Deconvolution (ED), to clarify the existence of fault structures. These techniques detected the boundaries of anomalous bodies and fault structures, which were compared with the lithologies in the geological map. The analysis results will be useful for constructing a more realistic conceptual model of the Endut geothermal area.
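
    A minimal sketch of the horizontal-gradient step on a gridded Bouguer anomaly (Python; the grid, spacing and the smooth-step "fault" anomaly are invented, and the Euler-deconvolution step is not reproduced):

    import numpy as np

    def horizontal_gradient(bouguer, dx, dy):
        """Horizontal-gradient magnitude of a gridded Bouguer anomaly:
        HG = sqrt((dg/dx)^2 + (dg/dy)^2); ridges of HG outline density
        boundaries such as faults."""
        gy, gx = np.gradient(bouguer, dy, dx)
        return np.hypot(gx, gy)

    # toy anomaly: a smooth step across a "fault" produces an HG ridge along its trace
    nx = ny = 100
    dx = dy = 100.0                                 # grid spacing in metres
    x = np.arange(nx) * dx
    X, Y = np.meshgrid(x, x)
    anomaly = 10.0 / (1.0 + np.exp(-(X - 5000.0) / 300.0))   # mGal, step centred at x = 5 km
    hg = horizontal_gradient(anomaly, dx, dy)
    edge_column = np.argmax(hg[ny // 2])            # column of maximum gradient
    print(edge_column * dx)                         # ~5000 m, the fault location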

  18. Deconvolution based attenuation correction for time-of-flight positron emission tomography

    Science.gov (United States)

    Lee, Nam-Yong

    2017-10-01

    For an accurate quantitative reconstruction of the radioactive tracer distribution in positron emission tomography (PET), we need to take into account the attenuation of the photons by the tissues. For this purpose, we propose an attenuation correction method for the case when a direct measurement of the attenuation distribution in the tissues is not available. The proposed method can determine the attenuation factor up to a constant multiple by exploiting the consistency condition that the exact deconvolution of a noise-free time-of-flight (TOF) sinogram must satisfy. Simulation studies show that the proposed method corrects attenuation artifacts quite accurately for TOF sinograms of a wide range of temporal resolutions and noise levels, and improves the image reconstruction for TOF sinograms of higher temporal resolutions by providing more accurate attenuation correction.

  19. Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media

    Science.gov (United States)

    Edrei, Eitan; Scarcelli, Giuliano

    2016-09-01

    High-resolution imaging through turbid media is a fundamental challenge of the optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses deconvolution image processing and thus does not require iterative focusing, scanning or phase-retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.

  20. Data matching for free-surface multiple attenuation by multidimensional deconvolution

    Science.gov (United States)

    van der Neut, Joost; Frijlink, Martijn; van Borselen, Roald

    2012-09-01

    A common strategy for surface-related multiple elimination of seismic data is to predict multiples by a convolutional model and subtract these adaptively from the input gathers. Problems can be posed by interfering multiples and primaries. Removing multiples by multidimensional deconvolution (MDD) (inversion) does not suffer from these problems. However, this approach requires data to be consistent, which is often not the case, especially not at interpolated near-offsets. A novel method is proposed to improve data consistency prior to inversion. This is done by backpropagating first-order multiples with a time-gated reference primary event and matching these with early primaries in the input gather. After data matching, multiple elimination by MDD can be applied with a deterministic inversion scheme.

  1. Application of blind deconvolution with crest factor for recovery of original rolling element bearing defect signals

    International Nuclear Information System (INIS)

    Son, J. D.; Yang, B. S.; Tan, A. C. C.; Mathew, J.

    2004-01-01

    Many machine failures are not detected well in advance due to the masking of background noise and the attenuation of the source signal through the transmission media. Advanced signal processing techniques using adaptive filters and higher-order statistics have been attempted to extract the source signal from the data measured at the machine surface. In this paper, blind deconvolution using the Eigenvector Algorithm (EVA) technique is used to recover a damaged bearing signal using only the measured signal at the machine surface. A damaged bearing signal corrupted by noise with varying signal-to-noise (s/n) ratios was used to determine the effectiveness of the technique in detecting an incipient signal and the optimum choice of filter length. The results show that the technique is effective in detecting the source signal with an s/n ratio as low as 0.21, but requires a relatively large filter length.

  2. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.

  3. Deconvolution of H-alpha profiles measured by Thomson scattering collecting optics

    International Nuclear Information System (INIS)

    LeBlanc, B.; Grek, B.

    1986-01-01

    This paper shows that optically fast multichannel Thomson scattering collecting optics can be used for H-alpha emission profile measurement. A technique based on the fact that a particular volume element of the overall field of view is seen by many channels, depending on its location, is discussed. It is applied to measurements made on PDX with the vertically viewing TVTS collecting optics (56 channels). The authors found that for this case about 28 Fourier modes are optimal to represent the spatial behavior of the plasma emissivity. The coefficients for these modes are obtained by a least-squares fit to the data subject to certain constraints. The important constraints are non-negative emissivity, the assumed up-down symmetry and zero emissivity beyond the liners. H-alpha deconvolutions are presented for diverted and circular discharges.

  4. Multichannel deconvolution and source detection using sparse representations: application to Fermi project

    International Nuclear Information System (INIS)

    Schmitt, Jeremy

    2011-01-01

    This thesis presents new methods for spherical Poisson data analysis for the Fermi mission. Fermi's main scientific objectives, the study of the diffuse galactic background and the building of the source catalog, are complicated by the weakness of the photon flux and by the point spread function of the instrument. This thesis proposes a new multi-scale representation for Poisson data on the sphere, the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS), consisting of the combination of a spherical multi-scale transform (wavelets, curvelets) with a variance stabilizing transform (VST). This method is applied to mono- and multichannel Poisson noise removal, missing data interpolation, background extraction and multichannel deconvolution. Finally, this thesis deals with the problem of component separation using sparse representations (template fitting). (author) [fr

  5. Imaging by Electrochemical Scanning Tunneling Microscopy and Deconvolution Resolving More Details of Surfaces Nanomorphology

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    … observed in high-resolution images of metallic nanocrystallites may be effectively deconvoluted, so as to resolve more details of the crystalline morphology (see figure). Images of surface-crystalline metals indicate that more than a single atomic layer is involved in mediating the tunneling current … Upon imaging, electrochemical scanning tunneling microscopy (ESTM), scanning electrochemical microscopy (SECM) and in situ STM resolve information on electronic structures and on surface topography. At very high resolution, image processing is required so as to obtain information that relates … to crystallographic-surface structures. Within the wide range of new technologies that image surface features, the electrochemical scanning tunneling microscope (ESTM) provides a means of atomic resolution where the tip participates actively in the process of imaging. Two metallic surfaces influence ions trapped …

  6. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    Science.gov (United States)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    An efficient algorithm for blind image deconvolution and its high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, from the algorithm structure to the numerical calculation methods. The main optimizations are modularization of the structure for good implementation feasibility, reduction of the data computation and the dependency of the 2D-FFT/IFFT, and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and the data throughput of the image restoration system is over 7.8 Msps. The optimization is proved efficient and feasible, and Fast SeDDaRA is able to support real-time applications.

  7. Measurement and deconvolution of detector response time for short HPM pulses: Part 1, Microwave diodes

    International Nuclear Information System (INIS)

    Bolton, P.R.

    1987-06-01

    A technique is described for measuring and deconvolving the response times of microwave diode detection systems in order to generate corrected input signals typical of an infinite detection rate. The method has been applied to cases of 2.86 GHz ultra-short HPM pulse detection where the pulse rise time is comparable to that of the detector, whereas the duration of a few nanoseconds is significantly longer. Results are specified in terms of the enhancement of equivalent deconvolved input voltages for given observed voltages. The convolution integral imposes the constraint of linear detector response to input power levels. This is physically equivalent to the conservation of integrated pulse energy in the deconvolution process. The applicable dynamic range of a microwave diode is therefore limited to a smaller signal region as determined by its calibration.

  8. Interferometric full-waveform inversion of time-lapse data

    KAUST Repository

    Sinha, Mrinal

    2017-08-17

    One of the key challenges associated with time-lapse surveys is ensuring the repeatability between the baseline and monitor surveys. Non-repeatability between the surveys is caused by varying environmental conditions over the course of different surveys. To overcome this challenge, we propose the use of interferometric full waveform inversion (IFWI) for inverting the velocity model from data recorded by baseline and monitor surveys. A known reflector is used as the reference reflector for IFWI, and the data are naturally redatumed to this reference reflector using natural reflections as the redatuming operator. This natural redatuming mitigates the artifacts introduced by the repeatability errors that originate above the reference reflector.

  9. Optimal control of photoelectron emission by realistic waveforms

    Czech Academy of Sciences Publication Activity Database

    Solanpää, J.; Ciappina, Marcelo F.; Räsänen, J.

    2017-01-01

    Roč. 64, č. 17 (2017), s. 1784-1792 ISSN 0950-0340 R&D Projects: GA MŠk EF15_008/0000162; GA MŠk LQ1606 Grant - others:ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162 Institutional support: RVO:68378271 Keywords : above-threshold ionization * optimal control * waveforms Subject RIV: BL - Plasma and Gas Discharge Physics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 1.328, year: 2016

  10. Ultrafast chirped optical waveform recorder using a time microscope

    Science.gov (United States)

    Bennett, Corey Vincent

    2015-04-21

    A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., a temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.

  11. Plasma density calculation based on the HCN waveform data

    International Nuclear Information System (INIS)

    Chen Liaoyuan; Pan Li; Luo Cuiwen; Zhou Yan; Deng Zhongchao

    2004-01-01

    A method to improve the plasma density calculation using the base voltage and the phase zero points obtained from the HCN interference waveform data is introduced. The method includes improving the signal quality by placing the signal control device and the analog-to-digital converters in the same location and powering them from the same supply, excluding the effect of noise according to the possible changing rate of the signal's phase, and making the base voltage more accurate by dynamic data processing. (authors)

  12. Frequency domain, waveform inversion of laboratory crosswell radar data

    Science.gov (United States)

    Ellefsen, Karl J.; Mazzella, Aldo T.; Horton, Robert J.; McKenna, Jason R.

    2010-01-01

    A new waveform inversion for crosswell radar is formulated in the frequency-domain for a 2.5D model. The inversion simulates radar waves using the vector Helmholtz equation for electromagnetic waves. The objective function is minimized using a backpropagation method suitable for a 2.5D model. The inversion is tested by processing crosswell radar data collected in a laboratory tank. The estimated model is consistent with the known electromagnetic properties of the tank. The formulation for the 2.5D model can be extended to inversions of acoustic and elastic data.

  13. Complete waveform model for compact binaries on eccentric orbits

    Science.gov (United States)

    Huerta, E. A.; Kumar, Prayush; Agarwal, Bhanu; George, Daniel; Schive, Hsi-Yu; Pfeiffer, Harald P.; Haas, Roland; Ren, Wei; Chu, Tony; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela

    2017-01-01

    We present a time domain waveform model that describes the inspiral, merger and ringdown of compact binary systems whose components are nonspinning, and which evolve on orbits with low to moderate eccentricity. The inspiral evolution is described using third-order post-Newtonian equations both for the equations of motion of the binary and its far-zone radiation field. This latter component also includes instantaneous, tails and tails-of-tails contributions, and a contribution due to nonlinear memory. This framework reduces to the post-Newtonian approximant TaylorT4 at third post-Newtonian order in the zero-eccentricity limit. To improve phase accuracy, we also incorporate higher-order post-Newtonian corrections for the energy flux of quasicircular binaries and gravitational self-force corrections to the binding energy of compact binaries. This enhanced prescription for the inspiral evolution is combined with a fully analytical prescription for the merger-ringdown evolution constructed using a catalog of numerical relativity simulations. We show that this inspiral-merger-ringdown waveform model reproduces the effective-one-body model of Ref. [Y. Pan et al., Phys. Rev. D 89, 061501 (2014), 10.1103/PhysRevD.89.061501] for quasicircular black hole binaries with mass ratios between 1 and 15 in the zero-eccentricity limit over a wide range of the parameter space under consideration. Using a set of eccentric numerical relativity simulations, not used during calibration, we show that our new eccentric model reproduces the true features of eccentric compact binary coalescence throughout merger. We use this model to show that the gravitational-wave transients GW150914 and GW151226 can be effectively recovered with template banks of quasicircular, spin-aligned waveforms if the eccentricity e0 of these systems when they enter the aLIGO band at a gravitational-wave frequency of 14 Hz satisfies e0(GW150914) ≤ 0.15 and e0(GW151226) ≤ 0.1. We also find that varying the spin

  14. Sinusoidal oscillators and waveform generators using modern electronic circuit building blocks

    CERN Document Server

    Senani, Raj; Singh, V K; Sharma, R K

    2016-01-01

    This book serves as a single-source reference to sinusoidal oscillators and waveform generators, using classical as well as a variety of modern electronic circuit building blocks. It provides a state-of-the-art review of a large variety of sinusoidal oscillators and waveform generators and includes a catalogue of over 600 configurations of oscillators and waveform generators, describing their relevant design details and salient performance features/limitations. The authors discuss a number of interesting, open research problems and include a comprehensive collection of over 1500 references on oscillators and non-sinusoidal waveform generators/relaxation oscillators. Offers readers a single-source reference to everything connected to sinusoidal oscillators and waveform generators, using classical as well as modern electronic circuit building blocks; Provides a state-of-the-art review of a large variety of sinusoidal oscillators and waveform generators; Includes a catalog of over 600 configurations of oscillato...

  15. A Denoising Method for LiDAR Full-Waveform Data

    Directory of Open Access Journals (Sweden)

    Xudong Lai

    2015-01-01

    Decomposition of LiDAR full-waveform data can not only enhance the density and positioning accuracy of a point cloud, but also provide other useful parameters, such as pulse width, peak amplitude, and peak position, which are important for subsequent processing. Full-waveform data usually contain some random noise. Traditional filtering algorithms often cause distortion in the waveform. The λ/μ filtering algorithm is based on the Mean Shift method; it smooths the signal iteratively and does not cause distortion in the waveform. In this paper, an improved λ/μ filtering algorithm is proposed, and several experiments on both simulated waveform data and real waveform data are implemented to prove the effectiveness of the proposed algorithm.
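
    The record above describes the filter only in general terms; the following is a minimal numerical sketch of a Taubin-style λ/μ smoothing pass on a 1-D waveform, which may differ from the paper's Mean-Shift-based formulation. The function name, default λ/μ values and iteration count are illustrative assumptions.

        import numpy as np

        def lambda_mu_smooth(w, lam=0.5, mu=-0.52, iterations=20):
            """Taubin-style lambda/mu smoothing of a 1-D waveform (sketch).

            Each iteration moves every sample toward (lambda step) and then away
            from (mu step, mu < 0) the mean of its two neighbours, smoothing noise
            while limiting the amplitude shrinkage of real echoes.
            """
            w = np.asarray(w, dtype=float).copy()
            for _ in range(iterations):
                for step in (lam, mu):
                    neighbour_mean = 0.5 * (np.roll(w, 1) + np.roll(w, -1))
                    # hold the end samples fixed to avoid wrap-around artefacts
                    neighbour_mean[0], neighbour_mean[-1] = w[0], w[-1]
                    w = w + step * (neighbour_mean - w)
            return w

        # usage (hypothetical array): denoised = lambda_mu_smooth(raw_waveform)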

  16. Time-domain simulation and waveform reconstruction for shielding effectiveness of materials against electromagnetic pulse

    International Nuclear Information System (INIS)

    Hu, Xiao-feng; Chen, Xiang; Wei, Ming

    2013-01-01

    Shielding effectiveness (SE) testing of materials under current standards is usually carried out with continuous-wave measurements, and an amplitude-frequency characteristic curve is used to characterize the results. However, in-depth study of high-power electromagnetic pulse (EMP) interference has shown that the frequency-domain SE of a material alone cannot completely characterize its shielding performance against time-domain pulsed fields, and there are no uniform testing methods or standards for the SE of materials against EMP. In this paper, a minimum-phase transfer function method is used to reconstruct the shielded time-domain waveform, based on an analysis of waveform reconstruction methods. The transmission of a plane-wave pulse through an infinite planar material is simulated using CST simulation software, and the reconstructed waveform is compared with the simulated waveform. The results show that the waveform reconstruction method based on the minimum phase can estimate the EMP waveform transmitted through an infinite planar material well.
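
    As a rough illustration of the minimum-phase idea mentioned above (not the authors' implementation), the sketch below builds a minimum-phase frequency response from a magnitude-only SE curve via the real cepstrum and applies it to an incident pulse. The function names are assumptions, and the magnitude is assumed to be the linear transmission coefficient sampled on the full FFT grid.

        import numpy as np

        def minimum_phase_response(magnitude):
            # real cepstrum of the (symmetric) magnitude spectrum
            n = len(magnitude)
            cep = np.fft.ifft(np.log(np.maximum(magnitude, 1e-12)))
            # fold the cepstrum to keep only the causal (minimum-phase) part
            win = np.zeros(n)
            win[0] = 1.0
            win[1:(n + 1) // 2] = 2.0
            if n % 2 == 0:
                win[n // 2] = 1.0
            return np.exp(np.fft.fft(win * cep))

        def transmitted_waveform(incident, se_linear_magnitude):
            # se_linear_magnitude = 10 ** (-SE_dB / 20), on the same FFT grid as the pulse
            h_min = minimum_phase_response(se_linear_magnitude)
            return np.real(np.fft.ifft(np.fft.fft(incident) * h_min))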

  17. Waveform efficiency analysis of auditory nerve fiber stimulation for cochlear implants

    International Nuclear Information System (INIS)

    Navaii, Mehdi Lotfi; Sadhedi, Hamed; Jalali, Mohsen

    2013-01-01

    Evaluation of the electrical stimulation efficiency of various stimulating waveforms is an important issue for efficient neural stimulator design. Concerning implantable microdevice design, it is also necessary to consider the feasibility of hardware implementation of the desired waveforms. In this paper, the charge, power and energy efficiency of four waveforms (i.e. square, rising ramp, triangular and rising ramp-decaying exponential) at various durations have been simulated and evaluated based on a computational model of the auditory nerve fibers. Moreover, for a fair comparison of their feasibility, a fully integrated current generator circuit has been developed so that the desired stimulating waveforms can be generated. The simulation results show that stimulation with square waveforms is a proper choice at short and intermediate durations, while the rising ramp-decaying exponential or triangular waveforms can be employed for long durations.

  18. Visualizing Escherichia coli sub-cellular structure using sparse deconvolution Spatial Light Interference Tomography.

    Directory of Open Access Journals (Sweden)

    Mustafa Mir

    Studying the 3D sub-cellular structure of living cells is essential to our understanding of biological function. However, tomographic imaging of live cells is challenging mainly because they are transparent, i.e., weakly scattering structures. Therefore, this type of imaging has been implemented largely using fluorescence techniques. While confocal fluorescence imaging is a common approach to achieve sectioning, it requires fluorescence probes that are often harmful to the living specimen. On the other hand, by using the intrinsic contrast of the structures it is possible to study living cells in a non-invasive manner. One method that provides high-resolution quantitative information about nanoscale structures is a broadband interferometric technique known as Spatial Light Interference Microscopy (SLIM). In addition to rendering quantitative phase information, when combined with a high numerical aperture objective, SLIM also provides excellent depth sectioning capabilities. However, like all linear optical systems, SLIM's resolution is limited by diffraction. Here we present a novel 3D field deconvolution algorithm that exploits the sparsity of phase images and renders images with resolution beyond the diffraction limit. We employ this label-free method, called deconvolution Spatial Light Interference Tomography (dSLIT), to visualize coiled sub-cellular structures in E. coli cells, which are most likely the cytoskeletal MreB protein and the division-site-regulating MinCDE proteins. Previously these structures have only been observed using specialized strains and plasmids and fluorescence techniques. Our results indicate that dSLIT can be employed to study such structures in a practical and non-invasive manner.

  19. Linear MALDI-ToF simultaneous spectrum deconvolution and baseline removal.

    Science.gov (United States)

    Picaud, Vincent; Giovannelli, Jean-Francois; Truntzer, Caroline; Charrier, Jean-Philippe; Giremus, Audrey; Grangeat, Pierre; Mercier, Catherine

    2018-04-05

    Thanks to a reasonable cost and simple sample preparation procedure, linear MALDI-ToF spectrometry is a growing technology for clinical microbiology. With appropriate spectrum databases, this technology can be used for early identification of pathogens in body fluids. However, due to the low resolution of linear MALDI-ToF instruments, robust and accurate peak picking remains a challenging task. In this context we propose a new peak extraction algorithm that works on the raw spectrum. With this method the spectrum baseline and spectrum peaks are processed jointly. The approach relies on an additive model constituted by a smooth baseline part plus a sparse peak list convolved with a known peak shape. The model is then fitted under a Gaussian noise model. The proposed method is well suited to processing low-resolution spectra with an important baseline and unresolved peaks. We developed a new peak deconvolution procedure. The paper describes the method derivation and discusses some of its interpretations. The algorithm is then described in pseudo-code form, where the required optimization procedure is detailed. For synthetic data the method is compared to a more conventional approach. The new method reduces artifacts caused by the usual two-step procedure, baseline removal then peak extraction. Finally some results on real linear MALDI-ToF spectra are provided. We introduced a new method for peak picking, where peak deconvolution and baseline computation are performed jointly. On simulated data we showed that this global approach performs better than a classical one where baseline and peaks are processed sequentially. A dedicated experiment has been conducted on real spectra. In this study a collection of spectra of spiked proteins was acquired and then analyzed. Better performance of the proposed method, in terms of accuracy and reproducibility, has been observed and validated by an extended statistical analysis.
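
    The exact optimization in the paper is only summarized above, so the following is a toy sketch of the additive model (smooth baseline plus a sparse, non-negative peak list convolved with a known peak shape), solved by alternating an ISTA step on the peaks with a second-difference (Whittaker-type) baseline refit. Function names, penalty weights and the step-size bound are assumptions.

        import numpy as np

        def whittaker_baseline(r, lam=1e5):
            """Smooth baseline fit of a residual via a second-difference penalty."""
            n = len(r)
            d2 = np.diff(np.eye(n), n=2, axis=0)        # (n-2) x n second-difference operator
            return np.linalg.solve(np.eye(n) + lam * d2.T @ d2, r)

        def joint_peaks_and_baseline(y, peak_shape, lam_base=1e5, lam_sparse=0.05, n_iter=200):
            """Alternating scheme for y ~ baseline + peak_shape * x + noise (sketch)."""
            x = np.zeros(len(y))
            baseline = np.zeros(len(y))
            conv = lambda v: np.convolve(v, peak_shape, mode="same")
            step = 1.0 / (np.sum(np.abs(peak_shape)) ** 2)   # crude but safe step size
            for _ in range(n_iter):
                resid = y - baseline - conv(x)
                x = x + step * np.correlate(resid, peak_shape, mode="same")  # adjoint of the convolution
                x = np.maximum(x - lam_sparse * step, 0.0)                   # soft threshold, peaks kept >= 0
                baseline = whittaker_baseline(y - conv(x), lam_base)
            return x, baseline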

  20. Thermogravimetric pyrolysis kinetics of bamboo waste via Asymmetric Double Sigmoidal (Asym2sig) function deconvolution.

    Science.gov (United States)

    Chen, Chuihan; Miao, Wei; Zhou, Cheng; Wu, Hongjuan

    2017-02-01

    The thermogravimetric kinetics of bamboo waste (BW) pyrolysis has been studied using Asymmetric Double Sigmoidal (Asym2sig) function deconvolution. Through deconvolution, the BW pyrolytic profile could be well separated into three reactions, each of which corresponded to pseudo-hemicellulose (P-HC), pseudo-cellulose (P-CL), and pseudo-lignin (P-LG) decomposition. Based on the Friedman method, the apparent activation energies of P-HC, P-CL and P-LG were found to be 175.6 kJ/mol, 199.7 kJ/mol, and 158.4 kJ/mol, respectively. The energy compensation effects (ln k0,z vs. Ez) of the pseudo-components showed good linearity, from which the pre-exponential factors (k0) were determined as 6.22E+11 s^-1 (P-HC), 4.50E+14 s^-1 (P-CL) and 1.3E+10 s^-1 (P-LG). Integral master-plots results showed that the pyrolytic mechanisms of P-HC, P-CL, and P-LG were reaction-order models f(α)=(1-α)^2, f(α)=1-α and f(α)=(1-α)^n (n=6-8), respectively. The mechanisms of P-HC and P-CL could be further reconstructed into n-th order Avrami-Erofeyev models f(α)=0.62(1-α)[-ln(1-α)]^-0.61 (n=0.62) and f(α)=1.08(1-α)[-ln(1-α)]^0.074 (n=1.08). A two-step reaction was more suitable for P-LG pyrolysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
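
    For readers unfamiliar with the Asym2sig deconvolution step, the sketch below fits a sum of three asymmetric double sigmoidal peaks to a DTG curve with scipy's curve_fit. The particular Asym2sig parameterization, function names and initial-guess layout are assumptions and may differ from the authors' setup.

        import numpy as np
        from scipy.optimize import curve_fit

        def asym2sig(T, A, xc, w1, w2, w3):
            """One asymmetric double sigmoidal peak (one pseudo-component DTG curve)."""
            rise = 1.0 / (1.0 + np.exp(-(T - xc + w1 / 2.0) / w2))
            fall = 1.0 - 1.0 / (1.0 + np.exp(-(T - xc - w1 / 2.0) / w3))
            return A * rise * fall

        def three_component_model(T, *p):
            """Sum of three Asym2sig peaks: pseudo-HC, pseudo-CL and pseudo-LG."""
            return sum(asym2sig(T, *p[5 * i:5 * i + 5]) for i in range(3))

        # usage sketch: p0 stacks [A, xc, w1, w2, w3] guesses for the three pseudo-components
        # popt, _ = curve_fit(three_component_model, T, dtg, p0=p0, maxfev=20000)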

  1. Application of an improved maximum correlated kurtosis deconvolution method for fault diagnosis of rolling element bearings

    Science.gov (United States)

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Lei, Yaguo

    2017-08-01

    The extraction of periodic impulses, which are important indicators of rolling bearing faults, from vibration signals is of considerable significance for fault diagnosis. Maximum correlated kurtosis deconvolution (MCKD), developed from minimum entropy deconvolution (MED), has been proven to be an efficient tool for enhancing periodic impulses in the diagnosis of rolling element bearings and gearboxes. However, challenges still exist when MCKD is applied to bearings operating under harsh working conditions. The difficulties mainly come from the rigorous requirements on the multiple input parameters and the complicated resampling process. To overcome these limitations, an improved MCKD (IMCKD) is presented in this paper. The new method estimates the iterative period by calculating the autocorrelation of the envelope signal rather than relying on a provided prior period. Moreover, the iterative period gradually approaches the true fault period by updating the iterative period after every iterative step. Since IMCKD is unaffected by impulse signals with high kurtosis values, the new method selects the maximum-kurtosis filtered signal as the final choice from all candidates within the assigned iteration count. Compared with MCKD, IMCKD has three advantages. First, without considering a prior period and the choice of the order of shift, IMCKD is more efficient and has higher robustness. Second, the resampling process is not necessary for IMCKD, which greatly simplifies the subsequent frequency spectrum analysis and envelope spectrum analysis without resetting the sampling rate. Third, IMCKD has a significant performance advantage in diagnosing bearing compound faults, which expands the application range. Finally, the effectiveness and superiority of IMCKD are validated on a number of simulated bearing fault signals and by application to compound-fault and single-fault diagnosis of a locomotive bearing.
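
    The key change in IMCKD, replacing the user-supplied prior period with one estimated from the envelope autocorrelation, can be sketched as follows. The function name, the Hilbert-envelope choice and the minimum-period guard are assumptions made for illustration.

        import numpy as np
        from scipy.signal import hilbert

        def estimate_fault_period(x, fs, min_period=1e-3):
            """Estimate the iterative (fault) period from the autocorrelation of the signal envelope."""
            env = np.abs(hilbert(x))
            env = env - env.mean()
            ac = np.correlate(env, env, mode="full")[len(env) - 1:]
            ac /= ac[0]
            start = int(min_period * fs)      # skip the zero-lag main lobe
            lag = start + np.argmax(ac[start:])
            return lag / fs                   # estimated period in seconds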

  2. Rapid analysis for 567 pesticides and endocrine disrupters by GC/MS using deconvolution reporting software

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, P.; Szelewski, M.; Meng, Chin-Kai [Agilent Technologies, Wilmington, DE (United States)

    2004-09-15

    More than 700 pesticides are approved for use around the world, many of which are suspected endocrine disrupters. Other pesticides, though no longer used, persist in the environment where they bioaccumulate in the flora and fauna. Analytical methods target only a subset of the possible compounds. The analysis of food and environmental samples for pesticides is usually complicated by the presence of co-extracted natural products. Food or tissue extracts can be exceedingly complex matrices that require several stages of sample cleanup prior to analysis. Even then, it can be difficult to detect trace levels of contaminants in the presence of the remaining matrix. For efficiency, multi-residue methods (MRMs) must be used to analyze for most pesticides. Traditionally, these methods have relied upon gas chromatography (GC) with a constellation of element-selective detectors to locate pesticides in the midst of a variable matrix. GC with mass spectral detection (GC/MS) has been widely used for confirmation of hits. Liquid chromatography (LC) has been used for those compounds that are not amenable to GC. Today, more and more pesticide laboratories are relying upon LC with mass spectral detection (LC/MS) and GC/MS as their primary analytical tools. Still, most MRMs are target compound methods that look for a small subset of the possible pesticides. Any compound not on the target list is likely to be missed by these methods. Using the techniques of retention time locking (RTL) and RTL database searching together with spectral deconvolution, a method has been developed to screen for 567 pesticides and suspected endocrine disrupters in a single GC/MS analysis. Spectral deconvolution helps to identify pesticides even when they co-elute with matrix compounds while RTL helps to eliminate false positives and gives greater confidence in the results.

  3. Design and implement of system for browsing remote seismic waveform based on B/S schema

    International Nuclear Information System (INIS)

    Zheng Xuefeng; Shen Junyi; Wang Zhihai; Sun Peng; Jin Ping; Yan Feng

    2006-01-01

    Browsing remote seismic waveforms based on the B/S (browser/server) schema is of significance in modern seismic research and data services, and the technology needs urgent improvement. This paper describes the basic plan, architecture and implementation of a system for browsing remote seismic waveforms based on the B/S schema. The problem of accessing, browsing and editing waveform data on the server from a client using only a browser has been solved. On this basis, the system has been established and put into use. (authors)

  4. Computational Stimulation of the Basal Ganglia Neurons with Cost Effective Delayed Gaussian Waveforms.

    Science.gov (United States)

    Daneshzand, Mohammad; Faezipour, Miad; Barkana, Buket D

    2017-01-01

    Deep brain stimulation (DBS) has compelling results in the desynchronization of basal ganglia neuronal activities and is thus used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase neuronal activity and reduce energy cost, which will prolong battery life and hence avoid device replacement surgeries. This study considers the use of a charge-balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge-balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The results of the proposed Gaussian waveform with delay outperformed those of rectangular DBS waveforms used in in-vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and shorter length of delay compared to numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms dropped by 22% in comparison with charge-balanced Gaussian waveforms without any delay between the cathodic and anodic parts, and was also 60% lower than for a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform. The promising results of GDG waveforms in terms of eliciting action potentials, desynchronization of the basal ganglia neurons and reduction of energy consumption can potentially enhance the performance of DBS
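
    As a rough illustration (not the authors' exact parameterization), a charge-balanced Gaussian-Delay-Gaussian stimulus can be generated as a cathodic Gaussian phase, an inter-phase delay, and an equal-area anodic Gaussian phase; the function name and the example amplitudes and timings below are assumptions.

        import numpy as np

        def gdg_waveform(t, amp, sigma, delay, phase_width):
            """Charge-balanced GDG stimulus sketch: cathodic Gaussian, delay, anodic Gaussian."""
            cathodic_centre = phase_width / 2.0
            anodic_centre = phase_width + delay + phase_width / 2.0
            cathodic = -amp * np.exp(-0.5 * ((t - cathodic_centre) / sigma) ** 2)
            # equal amplitude and width for the anodic phase gives equal charge, i.e. charge balance
            anodic = amp * np.exp(-0.5 * ((t - anodic_centre) / sigma) ** 2)
            return cathodic + anodic

        # usage (illustrative values): t = np.arange(0.0, 0.5e-3, 1e-6)
        # i_stim = gdg_waveform(t, amp=2e-3, sigma=30e-6, delay=100e-6, phase_width=150e-6)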

  5. Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data

    KAUST Repository

    Huang, Yunsong; Schuster, Gerard T.

    2017-01-01

    The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.

  6. Computational Stimulation of the Basal Ganglia Neurons with Cost Effective Delayed Gaussian Waveforms

    Directory of Open Access Journals (Sweden)

    Mohammad Daneshzand

    2017-08-01

    Deep brain stimulation (DBS) has compelling results in the desynchronization of basal ganglia neuronal activities and is thus used in treating the motor symptoms of Parkinson's disease (PD). Accurate definition of DBS waveform parameters could avert tissue or electrode damage, increase neuronal activity and reduce energy cost, which will prolong battery life and hence avoid device replacement surgeries. This study considers the use of a charge-balanced Gaussian waveform pattern as a method to disrupt the firing patterns of neuronal cell activity. A computational model was created to simulate ganglia cells and their interactions with thalamic neurons. From the model, we investigated the effects of modified DBS pulse shapes and proposed a delay period between the cathodic and anodic parts of the charge-balanced Gaussian waveform to desynchronize the firing patterns of the GPe and GPi cells. The results of the proposed Gaussian waveform with delay outperformed those of rectangular DBS waveforms used in in-vivo experiments. The Gaussian Delay Gaussian (GDG) waveforms achieved a lower number of misses in eliciting action potentials while having a lower amplitude and shorter length of delay compared to numerous different pulse shapes. The amount of energy consumed in the basal ganglia network due to GDG waveforms dropped by 22% in comparison with charge-balanced Gaussian waveforms without any delay between the cathodic and anodic parts, and was also 60% lower than for a rectangular charge-balanced pulse with a delay between the cathodic and anodic parts of the waveform. Furthermore, by defining a Synchronization Level metric, we observed that the GDG waveform was able to reduce the synchronization of GPi neurons more effectively than any other waveform. The promising results of GDG waveforms in terms of eliciting action potentials, desynchronization of the basal ganglia neurons and reduction of energy consumption can potentially enhance the

  7. Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data

    KAUST Repository

    Huang, Yunsong

    2017-10-27

    The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion.

  8. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimate, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of the errors in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
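
    The ABIC-based optimization itself is too involved for a short example, but the underlying forward model is a convolution of the magnetization with the sensor response, so a simplified Tikhonov/Wiener-style frequency-domain deconvolution conveys the core idea. The function name and regularization weight are assumptions, and this is not the paper's algorithm.

        import numpy as np

        def regularized_deconvolution(measured, response, alpha=0.05):
            """Simplified frequency-domain deconvolution of a pass-through measurement by the sensor response."""
            n = len(measured)
            r = np.fft.rfft(response, n)
            m = np.fft.rfft(measured, n)
            est = np.conj(r) * m / (np.abs(r) ** 2 + alpha * np.max(np.abs(r)) ** 2)
            return np.fft.irfft(est, n)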

  9. Source-independent time-domain waveform inversion using convolved wavefields: Application to the encoded multisource waveform inversion

    KAUST Repository

    Choi, Yun Seok

    2011-09-01

    Full waveform inversion requires a good estimation of the source wavelet to improve our chances of a successful inversion. This is especially true for an encoded multisource time-domain implementation, which, conventionally, requires separate-source modeling as well as the Fourier transform of wavefields. As an alternative, we have developed a source-independent time-domain waveform inversion using convolved wavefields. Specifically, the misfit function consists of the convolution of the observed wavefields with a reference trace from the modeled wavefield, plus the convolution of the modeled wavefields with a reference trace from the observed wavefield. In this case, the source wavelets of the observed and the modeled wavefields are equally convolved with both terms in the misfit function, and thus the effects of the source wavelets are eliminated. Furthermore, because the modeled wavefields play the role of low-pass filtering the observed wavefields in the misfit function, the frequency-selection strategy from low to high can be easily adopted just by setting the maximum frequency of the source wavelet of the modeled wavefields; thus, no filtering is required. The gradient of the misfit function is computed by back-propagating the new residual seismograms and applying the imaging condition, similar to reverse-time migration. In the synthetic data evaluations, our waveform inversion yields inverted models that are close to the true model, but demonstrates, as predicted, some limitations when random noise is added to the synthetic data. We also realized that an average of traces is a better choice for the reference trace than using a single trace. © 2011 Society of Exploration Geophysicists.
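
    A compact sketch of the convolved-wavefield misfit described above is given below, assuming the standard least-squares difference between the two convolved terms and using the trace average as the reference trace (which the abstract reports works better than a single trace). Array shapes and function names are assumptions.

        import numpy as np

        def convolved_misfit(d_obs, d_syn):
            """Source-independent misfit from convolved wavefields; inputs are (n_traces, n_samples)."""
            ref_obs = d_obs.mean(axis=0)                 # reference trace from the observed data
            ref_syn = d_syn.mean(axis=0)                 # reference trace from the modeled data
            conv = lambda data, ref: np.array([np.convolve(tr, ref) for tr in data])
            resid = conv(d_syn, ref_obs) - conv(d_obs, ref_syn)
            return 0.5 * np.sum(resid ** 2), resid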

  10. A New Waveform Mosaic Algorithm in the Vectorization of Paper Seismograms

    Directory of Open Access Journals (Sweden)

    Maofa Wang

    2014-11-01

    Historical paper seismograms are very important information for earthquake monitoring and prediction, and the vectorization of paper seismograms is a very important problem to be resolved. In this paper, a new waveform mosaic algorithm for the vectorization of paper seismograms is presented. We also describe the technological process of waveform mosaicking, and a waveform mosaic system used to vectorize analog seismic records has been developed independently. Using it, we can accomplish waveform mosaicking precisely and speedily when vectorizing analog seismic records.

  11. GO JUPITER PWS EDITED EDR 10KHZ WAVEFORM RECEIVER V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes wideband waveform measurements from the Galileo plasma wave receiver obtained during Jupiter orbital operations. These data were obtained...

  12. GO JUPITER PWS EDITED EDR 1KHZ WAVEFORM RECEIVER V1.0

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set includes wideband waveform measurements from the Galileo plasma wave receiver obtained during Jupiter orbital operations. These data were obtained...

  13. Development of plasma current waveform adjusting system ZLJ for tokamak device HL-1

    International Nuclear Information System (INIS)

    Wang Shangbing; Hu Haotian; Tang Fangqun; Zhou Yongzheng; Chu Xiuzhong; Cheng Jiashun; Gao Yunxia

    1989-12-01

    The control of some typical tokamak discharge waveforms has been achieved by using the plasma current waveform adjusting system ZLJ in the ohmic heating of HL-1. The discharge waveforms include a series of regular plasma current waveforms with various slow rising rates, such as 80 kA with 450 ms long flat-topping; 100 kA with 200 ms rising and 200 ms flat-topping; and 180 kA with 400 ms slow rising, etc. The design principle of the system and the initial experimental results are described

  14. LPI Radar Waveform Recognition Based on Time-Frequency Distribution

    Directory of Open Access Journals (Sweden)

    Ming Zhang

    2016-10-01

    In this paper, an automatic radar waveform recognition system for high-noise environments is proposed. Signal waveform recognition techniques are widely applied in the fields of cognitive radio, spectrum management and radar applications, etc. We devise a system to classify the modulating signals widely used in low probability of intercept (LPI) radar detection systems. The radar signals are divided into eight classes, including linear frequency modulation (LFM), BPSK (Barker code modulation), Costas codes and polyphase codes (comprising Frank, P1, P2, P3 and P4). The classifier is an Elman neural network (ENN), performing supervised classification based on features extracted by the system. Through the techniques of image filtering, image opening operation, skeleton extraction, principal component analysis (PCA), image binarization and Pseudo-Zernike moments, etc., the features are extracted from the Choi-Williams time-frequency distribution (CWD) image of the received data. In order to reduce redundant features and simplify calculation, a feature selection algorithm based on the mutual information between classes and feature vectors is applied. The superiority of the proposed classification system is demonstrated by simulations and analysis. Simulation results show that the overall ratio of successful recognition (RSR) is 94.7% at a signal-to-noise ratio (SNR) of −2 dB.
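
    As a small illustration of the feature-selection step (not necessarily the authors' exact criterion), features extracted from the CWD images can be ranked by mutual information with the class labels and the top k retained, e.g. with scikit-learn; the function name and k are assumptions.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def select_cwd_features(X, y, k=20):
            """Keep the k features with the highest mutual information with the waveform labels."""
            mi = mutual_info_classif(X, y, random_state=0)
            keep = np.argsort(mi)[::-1][:k]
            return X[:, keep], keep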

  15. Frequency spectrum analysis of finger photoplethysmographic waveform variability during haemodialysis.

    Science.gov (United States)

    Javed, Faizan; Middleton, Paul M; Malouf, Philip; Chan, Gregory S H; Savkin, Andrey V; Lovell, Nigel H; Steel, Elizabeth; Mackie, James

    2010-09-01

    This study investigates the peripheral circulatory and autonomic response to volume withdrawal in haemodialysis based on spectral analysis of photoplethysmographic waveform variability (PPGV). Frequency spectrum analysis was performed on the baseline and pulse amplitude variabilities of the finger infrared photoplethysmographic (PPG) waveform and on heart rate variability extracted from the ECG signal collected from 18 kidney failure patients undergoing haemodialysis. Spectral powers were calculated from the low frequency (LF, 0.04-0.145 Hz) and high frequency (HF, 0.145-0.45 Hz) bands. In eight stable fluid-overloaded patients (fluid removal of >2 L) not on alpha blockers, progressive reduction in relative blood volume during haemodialysis resulted in a significant increase in the LF and HF powers of PPG baseline and amplitude variability. Spectral analysis of finger PPGV may provide valuable information on the autonomic vascular response to blood volume reduction in haemodialysis, and can potentially be utilized as a non-invasive tool for assessing peripheral circulatory control during routine dialysis procedures.
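
    The LF/HF band powers used in the study can be computed, for example, from a Welch periodogram of a uniformly resampled variability series; the sketch below assumes a resampled input series and an illustrative segment length.

        import numpy as np
        from scipy.signal import welch

        def band_powers(x, fs, lf=(0.04, 0.145), hf=(0.145, 0.45)):
            """LF and HF spectral powers of a uniformly resampled PPG variability series."""
            f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
            def power(band):
                mask = (f >= band[0]) & (f < band[1])
                return np.trapz(pxx[mask], f[mask])
            return power(lf), power(hf)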

  16. Elastic reflection based waveform inversion with a nonlinear approach

    KAUST Repository

    Guo, Qiang; Alkhalifah, Tariq Ali

    2017-01-01

    Full waveform inversion (FWI) is a highly nonlinear problem due to the complex reflectivity of the Earth, and this nonlinearity only increases under the more expensive elastic assumption. In elastic media, we need a good initial P-wave velocity and an even better initial S-wave velocity model, with accurate representation of the low model wavenumbers, for FWI to converge. However, inverting for the low-wavenumber components of the P- and S-wave velocities using reflection waveform inversion (RWI), with an objective to fit the reflection shape rather than produce reflections, may mitigate the limitations of FWI, because FWI, acting as a migration operator, favors high-wavenumber updates along reflectors. We propose a nonlinear elastic RWI that inverts for both the low-wavenumber and perturbation components of the P- and S-wave velocities. To generate the full elastic reflection wavefields, we derive an equivalent stress source made up of the inverted model perturbations and the incident wavefields. We update both the perturbation and propagation parts of the velocity models in a nested fashion. Applications to synthetic isotropic models and field data show that our method can efficiently update the low- and high-wavenumber parts of the models.

  17. Multiparameter Elastic Full Waveform Inversion with Facies-based Constraints

    Science.gov (United States)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-03-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to adding such constraints are based on including them as a priori knowledge, mostly valid around the well, or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and prior information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of the estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  18. Elastic reflection based waveform inversion with a nonlinear approach

    KAUST Repository

    Guo, Qiang

    2017-08-16

    Full waveform inversion (FWI) is a highly nonlinear problem due to the complex reflectivity of the Earth, and this nonlinearity only increases under the more expensive elastic assumption. In elastic media, we need a good initial P-wave velocity and an even better initial S-wave velocity model, with accurate representation of the low model wavenumbers, for FWI to converge. However, inverting for the low-wavenumber components of the P- and S-wave velocities using reflection waveform inversion (RWI), with an objective to fit the reflection shape rather than produce reflections, may mitigate the limitations of FWI, because FWI, acting as a migration operator, favors high-wavenumber updates along reflectors. We propose a nonlinear elastic RWI that inverts for both the low-wavenumber and perturbation components of the P- and S-wave velocities. To generate the full elastic reflection wavefields, we derive an equivalent stress source made up of the inverted model perturbations and the incident wavefields. We update both the perturbation and propagation parts of the velocity models in a nested fashion. Applications to synthetic isotropic models and field data show that our method can efficiently update the low- and high-wavenumber parts of the models.

  19. Multiparameter Elastic Full Waveform Inversion With Facies Constraints

    KAUST Repository

    Zhang, Zhendong

    2017-08-17

    Full waveform inversion (FWI) aims to fully benefit from all the data characteristics to estimate the parameters describing the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion as a tool beyond acoustic imaging applications, for example in reservoir analysis, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Adding rock physics constraints does help to mitigate these issues, but current approaches to adding such constraints are based on including them as a priori knowledge, mostly valid around the well, or as a boundary condition for the whole area. Since certain rock formations inside the Earth admit consistent elastic properties and relative values of elastic and anisotropic parameters (facies), utilizing such localized facies information in FWI can improve the resolution of the inverted parameters. We propose a novel confidence-map-based approach to utilize facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such a confidence map using Bayesian theory, in which the confidence map is updated at each iteration of the inversion using both the inverted models and prior information. The numerical examples show that the proposed method can reduce the trade-offs and can also improve the resolution of the inverted elastic and anisotropic properties.

  20. Full waveform inversion using envelope-based global correlation norm

    Science.gov (United States)

    Oh, Ju-Won; Alkhalifah, Tariq

    2018-05-01

    To increase the feasibility of full waveform inversion on real data, we suggest a new objective function, which is defined as the global correlation of the envelopes of modelled and observed data. The envelope-based global correlation norm has the advantage of the envelope inversion that generates artificial low-frequency information, which provides the possibility to recover long-wavelength structure in an early stage. In addition, the envelope-based global correlation norm maintains the advantage of the global correlation norm, which reduces the sensitivity of the misfit to amplitude errors so that the performance of inversion on real data can be enhanced when the exact source wavelet is not available and more complex physics are ignored. Through the synthetic example for 2-D SEG/EAGE overthrust model with inaccurate source wavelet, we compare the performance of four different approaches, which are the least-squares waveform inversion, least-squares envelope inversion, global correlation norm and envelope-based global correlation norm. Finally, we apply the envelope-based global correlation norm on the 3-D Ocean Bottom Cable (OBC) data from the North Sea. The envelope-based global correlation norm captures the strong reflections from the high-velocity caprock and generates artificial low-frequency reflection energy that helps us recover long-wavelength structure of the model domain in the early stages. From this long-wavelength model, the conventional global correlation norm is sequentially applied to invert for higher-resolution features of the model.
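
    A minimal sketch of the envelope-based global correlation objective follows, assuming Hilbert-transform envelopes and trace-wise normalization; the function name and sign convention are assumptions.

        import numpy as np
        from scipy.signal import hilbert

        def envelope_global_correlation(d_obs, d_syn, eps=1e-12):
            """Negative normalized correlation of observed and modelled trace envelopes, summed over traces."""
            e_obs = np.abs(hilbert(d_obs, axis=-1))
            e_syn = np.abs(hilbert(d_syn, axis=-1))
            e_obs = e_obs / (np.linalg.norm(e_obs, axis=-1, keepdims=True) + eps)
            e_syn = e_syn / (np.linalg.norm(e_syn, axis=-1, keepdims=True) + eps)
            return -np.sum(e_obs * e_syn)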

  1. Expanding the frontiers of waveform imaging with Salvus

    Science.gov (United States)

    Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; Fichtner, A.

    2017-12-01

    Mechanical waves are natural harbingers of information. From medical ultrasound to the normal modes of the Sun, wave motion is often our best window into the character of some underlying continuum. For over a century, geophysicists have been using this window to peer deep into the Earth, developing techniques that have gone on to underlie much of the world's energy economy. As computers and numerical techniques have become more powerful over the last several decades, seismologists have begun to scale back classical simplifying approximations of wave propagation physics. As a result, we are now approaching the ideal of `full-waveform inversion': maximizing the aperture of our window by taking the full complexity of wave motion into account. Salvus is a modern high-performance software suite which aims to bring recent developments in geophysical waveform inversion to new and exciting domains. In this short presentation we will look at the connections between these applications, with examples from non-destructive testing, medical imaging, seismic exploration, and (extra-)planetary seismology.

  2. Individual Biometric Identification Using Multi-Cycle Electrocardiographic Waveform Patterns

    Directory of Open Access Journals (Sweden)

    Wonki Lee

    2018-03-01

    The electrocardiogram (ECG) waveform conveys information regarding the electrical property of the heart. The patterns vary depending on the individual heart characteristics. ECG features can potentially be used for biometric recognition. This study presents a new method using the entire ECG waveform pattern for matching and demonstrates that the approach can potentially be employed for individual biometric identification. Multi-cycle ECG signals were assessed using an ECG measuring circuit, and three electrodes can be patched on the wrists or fingers for various measurement configurations. For biometric identification, four-fold cross validation was used in the experiments for assessing how the results of a statistical analysis will generalize to an independent data set. Four different pattern matching algorithms, i.e., cosine similarity, cross correlation, city block distance, and Euclidean distance, were tested to compare the individual identification performances with a single channel of ECG signal (3-wire ECG). To evaluate the pattern matching for biometric identification, the ECG recordings for each subject were partitioned into training and test sets. The suggested method obtained a maximum performance of 89.9% accuracy with two heartbeats of ECG signals measured on the wrist and 93.3% accuracy with three heartbeats for 55 subjects. The performance rate with ECG signals measured on the fingers improved up to 99.3% with two heartbeats and 100% with three heartbeats of signals for 20 subjects.
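
    The four matching scores compared in the study can be written compactly as below for equal-length multi-cycle ECG segments; the z-scoring and the function name are assumptions made for illustration.

        import numpy as np

        def match_scores(template, probe):
            """Cosine similarity, cross correlation, city block and Euclidean distances between two segments."""
            t = (template - template.mean()) / template.std()
            p = (probe - probe.mean()) / probe.std()
            cosine = np.dot(t, p) / (np.linalg.norm(t) * np.linalg.norm(p))
            cross_corr = np.max(np.correlate(t, p, mode="full")) / len(t)
            city_block = np.sum(np.abs(t - p))
            euclidean = np.linalg.norm(t - p)
            return cosine, cross_corr, city_block, euclidean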

  3. Observation of 45 GHz current waveforms using HTS sampler

    International Nuclear Information System (INIS)

    Maruyama, M.; Suzuki, H.; Hato, T.; Wakana, H.; Nakayama, K.; Ishimaru, Y.; Horibe, O.; Adachi, S.; Kamitani, A.; Suzuki, K.; Oshikubo, Y.; Tarutani, Y.; Tanabe, K.

    2005-01-01

    We succeeded in observing high-frequency current waveforms up to 45 GHz using a high-temperature superconducting (HTS) sampler. In this experiment, we used a sampler circuit with a superconducting pickup coil, which magnetically detects current signals flowing through a micro-strip line on a printed board placed outside the cryochamber. This type of measurement enables non-contact current-waveform observation, which seems useful for analyses of EMI, defects in LSI, etc. Computer simulation reveals that one of our latest versions of HTS sampler circuits, having Josephson transmission lines with optimized biases as buffers, has the potential of sampling high-frequency signals with a bandwidth above 100 GHz. To realize the circuit parameters required in the simulations, we developed an HTS circuit fabrication process employing a lower ground plane structure with SrSnO3 insulating layers. We consider that improvement of the circuit fabrication process and optimization of the pickup coil will lead to much higher signal frequencies observable by the sampler

  4. Multiparameter Elastic Full Waveform Inversion with Facies-based Constraints

    KAUST Repository

    Zhang, Zhendong

    2018-03-20

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to adding such constraints are based on including them as a priori knowledge, mostly valid around the well, or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and prior information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of the estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  5. Individual Biometric Identification Using Multi-Cycle Electrocardiographic Waveform Patterns.

    Science.gov (United States)

    Lee, Wonki; Kim, Seulgee; Kim, Daeeun

    2018-03-28

    The electrocardiogram (ECG) waveform conveys information regarding the electrical property of the heart. The patterns vary depending on the individual heart characteristics. ECG features can potentially be used for biometric recognition. This study presents a new method using the entire ECG waveform pattern for matching and demonstrates that the approach can potentially be employed for individual biometric identification. Multi-cycle ECG signals were assessed using an ECG measuring circuit, and three electrodes can be patched on the wrists or fingers for various measurement configurations. For biometric identification, four-fold cross validation was used in the experiments for assessing how the results of a statistical analysis will generalize to an independent data set. Four different pattern matching algorithms, i.e., cosine similarity, cross correlation, city block distance, and Euclidean distance, were tested to compare the individual identification performances with a single channel of ECG signal (3-wire ECG). To evaluate the pattern matching for biometric identification, the ECG recordings for each subject were partitioned into training and test sets. The suggested method obtained a maximum performance of 89.9% accuracy with two heartbeats of ECG signals measured on the wrist and 93.3% accuracy with three heartbeats for 55 subjects. The performance rate with ECG signals measured on the fingers improved up to 99.3% with two heartbeats and 100% with three heartbeats of signals for 20 subjects.

  6. Continuous-waveform constant-current isolated physiological stimulator

    Science.gov (United States)

    Holcomb, Mark R.; Devine, Jack M.; Harder, Rene; Sidorov, Veniamin Y.

    2012-04-01

    We have developed an isolated continuous-waveform constant-current physiological stimulator that is powered and controlled by universal serial bus (USB) interface. The stimulator is composed of a custom printed circuit board (PCB), 16-MHz MSP430F2618 microcontroller with two integrated 12-bit digital to analog converters (DAC0, DAC1), high-speed H-Bridge, voltage-controlled current source (VCCS), isolated USB communication and power circuitry, two isolated transistor-transistor logic (TTL) inputs, and a serial 16 × 2 character liquid crystal display. The stimulators are designed to produce current stimuli in the range of ±15 mA indefinitely using a 20V source and to be used in ex vivo cardiac experiments, but they are suitable for use in a wide variety of research or student experiments that require precision control of continuous waveforms or synchronization with external events. The device was designed with customization in mind and has features that allow it to be integrated into current and future experimental setups. Dual TTL inputs allow replacement by two or more traditional stimulators in common experimental configurations. The MSP430 software is written in C++ and compiled with IAR Embedded Workbench 5.20.2. A control program written in C++ runs on a Windows personal computer and has a graphical user interface that allows the user to control all aspects of the device.

  7. Acquisition of L2 Japanese Geminates: Training with Waveform Displays

    Directory of Open Access Journals (Sweden)

    Miki Motohashi-Saigo

    2009-06-01

    The value of waveform displays as visual feedback was explored in a training study involving perception and production of L2 Japanese by beginning-level L1 English learners. A pretest-posttest design compared auditory-visual (AV) and auditory-only (A-only) Web-based training. Stimuli were singleton and geminate /t,k,s/ followed by /a,u/ in two conditions (isolated words, carrier sentences). Fillers with long vowels were included. Participants completed a forced-choice identification task involving minimal triplets: singletons, geminates, long vowels (e.g., sasu, sassu, saasu). Results revealed (a) a significant improvement in geminate identification following training, especially for AV; (b) a significant effect of geminate type (lowest scores for /s/); (c) no significant effect of condition; and (d) no significant improvement for the control group. Most errors were misperceptions of geminates as long vowels. A test of generalization revealed a 5% decline in accuracy for AV and 14% for A-only. Geminate production improved significantly (especially for AV) based on rater judgments; improvement was greatest for /k/ and smallest for /s/. Most production errors involved substitution of a singleton for a geminate. Post-study interviews produced positive comments on Web-based training. Waveforms increased awareness of durational differences. Results support the effectiveness of auditory-visual input in L2 perception training, with transfer to novel stimuli and improved production.

  8. Changes of brachial arterial Doppler waveform during immersion of the hand of young men in ice-cold water

    International Nuclear Information System (INIS)

    Kim, Young Goo

    1994-01-01

    To evaluate the changes of the brachial arterial Doppler waveform during immersion of the hand of young men in ice-cold water, Doppler waveforms of brachial arteries in 11 young male patients were recorded before and during immersion of the ipsilateral hand in ice-cold water (4-5 °C). The procedure was repeated on separate days. Patterns of the waveform during immersion were compared with the changes of the pulsatility index. Four men showed high impedance waveforms and 5 men showed low impedance waveforms during immersion at both the first and the second study. Two men, however, showed high impedance waveforms at the first study and low impedance waveforms at the second study. The pulsatility index rose and fell in high and low impedance waveforms, respectively. The changes of brachial arterial Doppler waveforms could be classified into high and low impedance patterns, probably reflecting the acute changes in downstream impedance during immersion of the hand in ice-cold water

  9. Hepatic vein Doppler waveform in patients with diffuse fatty infiltration of the liver

    International Nuclear Information System (INIS)

    Oguzkurt, Levent; Yildirim, Tulin; Torun, Dilek; Tercan, Fahri; Kizilkilic, Osman; Niron, E. Alp

    2005-01-01

    Objective: To determine the incidence of abnormal hepatic vein Doppler waveforms in patients with diffuse fatty infiltration of the liver (FIL). Materials and methods: In this prospective study, 40 patients with diffuse FIL and 50 normal healthy adults who served as a control group underwent hepatic vein (HV) Doppler ultrasonography. The patients with the diagnosis of FIL were 23 men (57.5%) and 17 women aged 30-62 years (mean age ± S.D., 42 ± 12 years). Subjects in the control group were 27 men (54%) and 23 women aged 34-65 years (mean age ± S.D., 45 ± 14 years). The diagnosis of FIL was confirmed with computed tomography density measurements. The waveforms of the HV were classified into three groups: regular triphasic waveform, biphasic waveform without a reverse flow, and monophasic or flat waveform. Etiological factors for FIL were diabetes mellitus (DM), hyperlipidemia and obesity (body mass index > 25). A serum lipid profile was obtained from all the patients with FIL. Results: Seventeen of the 40 patients (43%) with FIL had an abnormal HV Doppler waveform, whereas only one of the 50 (2%) healthy subjects had an abnormal waveform. The difference in the distribution of normal Doppler waveform patterns between the patients and the control group was significant (P < 0.05). There was no correlation between the degree of fat infiltration and the hepatic vein waveform pattern (P = 0.60). Conclusion: Patients with fatty liver have a high rate of abnormal hepatic vein Doppler waveform patterns, which can be biphasic or monophasic. We could not find a relation between the etiological factors for FIL and the occurrence of an abnormal HV Doppler waveform

  10. The effect of inlet waveforms on computational hemodynamics of patient-specific intracranial aneurysms.

    Science.gov (United States)

    Xiang, J; Siddiqui, A H; Meng, H

    2014-12-18

    Due to the lack of patient-specific inlet flow waveform measurements, most computational fluid dynamics (CFD) simulations of intracranial aneurysms usually employ waveforms that are not patient-specific as inlet boundary conditions for the computational model. The current study examined how this assumption affects the predicted hemodynamics in patient-specific aneurysm geometries. We examined wall shear stress (WSS) and oscillatory shear index (OSI), the two most widely studied hemodynamic quantities that have been shown to predict aneurysm rupture, as well as maximal WSS (MWSS), energy loss (EL) and pressure loss coefficient (PLc). Sixteen pulsatile CFD simulations were carried out on four typical saccular aneurysms using 4 different waveforms and an identical inflow rate as inlet boundary conditions. Our results demonstrated that under the same mean inflow rate, different waveforms produced almost identical WSS distributions and WSS magnitudes, similar OSI distributions but drastically different OSI magnitudes. The OSI magnitude is correlated with the pulsatility index of the waveform. Furthermore, there is a linear relationship between aneurysm-averaged OSI values calculated from one waveform and those calculated from another waveform. In addition, different waveforms produced similar MWSS, EL and PLc in each aneurysm. In conclusion, inlet waveform has minimal effects on WSS, OSI distribution, MWSS, EL and PLc and a strong effect on OSI magnitude, but aneurysm-averaged OSI from different waveforms has a strong linear correlation with each other across different aneurysms, indicating that for the same aneurysm cohort, different waveforms can consistently stratify (rank) OSI of aneurysms. Copyright © 2014 Elsevier Ltd. All rights reserved.
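
    As a point of reference for the OSI values discussed above, the Python sketch below gives the commonly used definition OSI = 0.5 * (1 - |time-average of the WSS vector| / time-average of |WSS vector|) at a single wall node; the array layout and function name are assumptions for illustration and are not taken from the paper.

      import numpy as np

      def osi_and_tawss(tau):
          """tau: array of shape (n_timesteps, 3), the wall-shear-stress vector at
          one wall node over one cardiac cycle. Returns (OSI, time-averaged WSS)."""
          mean_vec_mag = np.linalg.norm(tau.mean(axis=0))   # |<tau>| over the cycle
          tawss = np.linalg.norm(tau, axis=1).mean()        # <|tau|> over the cycle
          osi = 0.5 * (1.0 - mean_vec_mag / (tawss + 1e-12))
          return osi, tawss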

  11. Depths of Intraplate Indian Ocean Earthquakes from Waveform Modeling

    Science.gov (United States)

    Baca, A. J.; Polet, J.

    2014-12-01

    The Indian Ocean is a region of complex tectonics and anomalous seismicity. The ocean floor in this region exhibits many bathymetric features, most notably the multiple inactive fracture zones within the Wharton Basin and the Ninetyeast Ridge. The 11 April 2012 MW 8.7 and 8.2 strike-slip events that took place in this area are unique because their rupture appears to have extended to a depth where brittle failure, and thus seismic activity, was considered to be impossible. We analyze multiple intraplate earthquakes that have occurred throughout the Indian Ocean to better constrain their focal depths, in order to enhance our understanding of how deep intraplate events occur and, more importantly, to determine whether the ruptures originate within a ductile regime. Selected events are located within the Indian Ocean away from major plate boundaries; a majority are within the deforming Indo-Australian tectonic plate. Events primarily display thrust mechanisms, with some strike-slip or a combination of the two, and all are between MW 5.5 and 6.5. Events were selected in this way to facilitate the analysis of teleseismic waveforms using a point-source approximation. From these criteria we gathered a suite of 15 intraplate events. Synthetic seismograms of direct P-waves and depth phases are computed using a 1-D propagator matrix approach and compared with global teleseismic waveform data to determine a best depth for each event. To generate our synthetic seismograms we utilized CRUST1.0, a global crustal model, to obtain velocity values at the hypocenters of our events. Our waveform analysis reveals that our depths diverge from the Global Centroid Moment Tensor (GCMT) depths, which underestimate the depths of our deeper lithospheric events and overestimate our shallower ones by as much as 17 km. We determined a depth of 45 km for our deepest event. We will show a comparison of our final earthquake depths with the lithospheric thickness based on
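
    The depth-determination workflow described above (synthetics computed for trial depths and compared against observed teleseismic waveforms) amounts to a simple grid search. The Python sketch below is a hypothetical illustration of that step only; synth_waveform stands in for the 1-D propagator-matrix synthetics and is not a real library call.

      import numpy as np

      def best_depth(observed, trial_depths, synth_waveform):
          """Return the trial depth whose synthetic maximizes normalized
          cross-correlation with the observed P waveform."""
          def ncc(a, b):
              a = (a - a.mean()) / (a.std() + 1e-12)
              b = (b - b.mean()) / (b.std() + 1e-12)
              return np.max(np.correlate(a, b, mode="full")) / len(a)
          scores = [ncc(observed, synth_waveform(d)) for d in trial_depths]
          return trial_depths[int(np.argmax(scores))]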

  12. An Algorithm-Independent Analysis of the Quality of Images Produced Using Multi-Frame Blind Deconvolution Algorithms--Conference Proceedings (Postprint)

    National Research Council Canada - National Science Library

    Matson, Charles; Haji, Alim

    2007-01-01

    Multi-frame blind deconvolution (MFBD) algorithms can be used to generate a deblurred image of an object from a sequence of short-exposure and atmospherically-blurred images of the object by jointly estimating the common object...

  13. On the square arc voltage waveform model in magnetic discharge lamp studies

    OpenAIRE

    Molina, Julio; Sainz Sapera, Luis; Mesas García, Juan José

    2011-01-01

    The number of magnetic- and electronic-ballast discharge lamps in power distribution systems is increasing because they perform better than incandescent lamps. This paper studies the modeling of magnetic-ballast discharge lamps. In particular, the arc voltage waveform is analyzed and the limitations of the square waveform model are revealed from experimental measurements.
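
    For context, the square arc-voltage model examined in the paper is usually written as v_arc(t) = V_arc * sign(i(t)), i.e., an arc voltage of constant magnitude whose polarity follows the lamp current. The short Python sketch below illustrates that approximation; the variable names and the 100 V magnitude are arbitrary placeholders, not values from the paper.

      import numpy as np

      def square_arc_voltage(i_lamp, v_arc=100.0):
          """Square-waveform approximation of the discharge-lamp arc voltage."""
          return v_arc * np.sign(i_lamp)

      # Example: one period of a 50 Hz sinusoidal lamp current.
      t = np.linspace(0.0, 0.02, 1000, endpoint=False)
      v = square_arc_voltage(np.sin(2 * np.pi * 50 * t))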

  14. Auto-correlation based intelligent technique for complex waveform presentation and measurement

    International Nuclear Information System (INIS)

    Rana, K P S; Singh, R; Sayann, K S

    2009-01-01

    Waveform acquisition and presentation form the heart of many measurement systems. In particular, data acquisition and presentation of repeating complex signals, such as sine sweeps and frequency-modulated signals, introduce the challenges of waveform time period estimation and live waveform presentation. This paper presents an intelligent technique for waveform period estimation of both complex and simple waveforms, based on the normalized auto-correlation method. The proposed technique is demonstrated using intensive LabVIEW-based simulations on several simple and complex waveforms. Implementation of the technique is successfully demonstrated using LabVIEW-based virtual instrumentation. Sine sweep vibration waveforms generated by an electrodynamic shaker system are successfully presented and measured. The proposed method is also suitable for digital storage oscilloscope (DSO) triggering and for acquisition and presentation of complex signals. This intelligence can be embedded in the DSO, making it an intelligent measurement system that caters to a wide variety of waveforms. The proposed technique, simulation results, robustness study and implementation results are presented in this paper.
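
    The core of the technique described above is locating the first major peak of the normalized auto-correlation to obtain the waveform period. The Python sketch below is a generic illustration of that idea under simple assumptions (uniform sampling, a record longer than one period); it is not the authors' LabVIEW implementation.

      import numpy as np

      def estimate_period(x, fs):
          """Estimate the waveform period (in seconds) from the first major
          auto-correlation peak after zero lag. Returns None if no peak is found."""
          x = x - x.mean()
          ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
          ac = ac / (ac[0] + 1e-12)                           # normalize to 1 at lag 0
          dips = np.where(ac < 0)[0]                          # first drop below zero
          if len(dips) == 0:
              return None
          start = dips[0]
          peak = start + np.argmax(ac[start:])                # strongest later peak
          return peak / fs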

  15. Screening for aortoiliac lesions by visual interpretation of the common femoral Doppler waveform

    DEFF Research Database (Denmark)

    Eiberg, J P; Jensen, F; Grønvall Rasmussen, J B

    2001-01-01

    To study the accuracy of simple visual interpretation of the common femoral artery Doppler waveform for screening the aorto-iliac segment for significant occlusive disease.

  16. Waveform measurement in microwave device characterization: impact on power amplifiers design

    Directory of Open Access Journals (Sweden)

    Roberto Quaglia

    2016-07-01

    Full Text Available This paper describes an example of a measurement setup enabling waveform measurements during the load-pull characterization of a microwave power device. The significance of this measurement capability is highlighted by showing how waveform engineering can be exploited to design high-efficiency microwave power amplifiers.

  17. Use of the Kalman Filter for Aortic Pressure Waveform Noise Reduction.

    Science.gov (United States)

    Lam, Frank; Lu, Hsiang-Wei; Wu, Chung-Che; Aliyazicioglu, Zekeriya; Kang, James S

    2017-01-01

    Clinical applications that require extraction and interpretation of physiological signals or waveforms are susceptible to corruption by noise or artifacts. Real-time hemodynamic monitoring systems are important for clinicians to assess the hemodynamic stability of surgical or intensive care patients by interpreting hemodynamic parameters generated by an analysis of aortic blood pressure (ABP) waveform measurements. Since hemodynamic parameter estimation algorithms often detect events and features from measured ABP waveforms to generate hemodynamic parameters, noise and artifacts integrated into ABP waveforms can severely distort the interpretation of hemodynamic parameters by hemodynamic algorithms. In this article, we propose the use of the Kalman filter and the 4-element Windkessel model with static parameters (arterial compliance C, peripheral resistance R, aortic impedance r, and the inertia of blood L) to represent aortic circulation for generating accurate estimations of ABP waveforms through noise and artifact reduction. Results show the Kalman filter could very effectively eliminate noise and generate a good estimate from the noisy ABP waveform based on the past state history. The power spectra of the measured ABP waveform and the synthesized ABP waveform show two similar harmonic frequencies.
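
    To make the filtering step concrete, the Python sketch below applies a scalar Kalman filter to a noisy ABP sample stream. It uses a plain random-walk state model as a stand-in for the paper's 4-element Windkessel model, so it only illustrates the predict/update structure; the noise parameters q and r are arbitrary assumptions.

      import numpy as np

      def kalman_denoise(abp, q=1e-4, r=1e-1):
          """Scalar Kalman filter: state = true pressure, observation = noisy ABP."""
          x_hat = float(abp[0])   # initial state estimate
          p = 1.0                 # initial estimate covariance
          out = np.empty(len(abp), dtype=float)
          for k, z in enumerate(abp):
              # predict step (random-walk model: x_k = x_{k-1} + process noise)
              p = p + q
              # update step with measurement z
              k_gain = p / (p + r)
              x_hat = x_hat + k_gain * (z - x_hat)
              p = (1.0 - k_gain) * p
              out[k] = x_hat
          return out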

  18. Influence of crystal orientation on magnetostriction waveform in grain orientated electrical steel

    Energy Technology Data Exchange (ETDEWEB)

    Kijima, Gou, E-mail: g-kijima@jfe-steel.co.jp [Steel Research Laboratory, JFE Steel Corporation, Kawasaki, 210-0855 (Japan); Yamaguchi, Hiroi; Senda, Kunihiro; Hayakawa, Yasuyuki [Steel Research Laboratory, JFE Steel Corporation, Kurashiki, 712-8511 (Japan)

    2014-08-01

    Aiming to gain insight into the mechanisms behind magnetostriction waveforms of grain-oriented electrical steel sheets, we investigated the influence of crystal orientation. An increase in the β angle results in an increase in the amplitude of the magnetostriction waveform, but does not affect the waveform itself. By slanting the excitation direction to simulate a change in the α angle, we observed a change in the magnetostriction waveform and a constriction–extension transition point in the steel sheet; the amplitude, however, was not significantly affected. We explained the nature of the constriction–extension transition point in the magnetostriction waveform by considering magnetization rotation. We speculate that the change in waveform resulting from an increase in the coating tensile stress can be attributed to magnetization rotation becoming harder to generate as magnetic anisotropy toward the [001] axis increases. - Highlights: • The β angle is related to the amplitude of the magnetostriction waveform. • The α angle is related to the magnetostriction waveform itself. • The effect of the α angle can be controlled by the effect of coating tensile stress.

  19. Effects of waveform model systematics on the interpretation of GW150914

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K.M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, M.J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Boer, M.; Bogaert, J.G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y; Cheng, H. -P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, A. J. K.; Chua, S. S. Y.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, Laura; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.A.; Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Giovanni, M. Di; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M; Fong, H.; Forsyth, S. S.; Fournier, J. -D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.A.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J.G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.E.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G.F.; Libson, A.; Littenberg, T. 
B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGrath Hoareau, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A. L.; Miller, B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, Brian C J; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P.G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Gutierrez-Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Castro-Perez, J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, D.M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. 
H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, Perminder S; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, António Dias da; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, R. J. E.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson-Moore, P.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifir, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van Den Brand, J. F.J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.M.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J.L.; Wu, D.S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S.J.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; Boyle, M.; Chu, I.W.T.; Hemberger, D.; Hinder, I.; Kidder, L. E.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano-Vinuales, A.

    2017-01-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions,

  20. WaveformECG: A Platform for Visualizing, Annotating, and Analyzing ECG Data.

    Science.gov (United States)

    Winslow, Raimond L; Granite, Stephen; Jurado, Christian

    2016-01-01

    The electrocardiogram (ECG) is the most commonly collected data in cardiovascular research because of the ease with which it can be measured and because changes in ECG waveforms reflect underlying aspects of heart disease. Accessed through a browser, WaveformECG is an open source platform supporting interactive analysis, visualization, and annotation of ECGs.