Interferogram analysis using the Abel inversion technique
International Nuclear Information System (INIS)
Yusof Munajat; Mohamad Kadim Suaidi
2000-01-01
A high-speed, high-resolution optical detection system was used to capture images of acoustic wave propagation. The frozen image, in the form of an interferogram, was analysed to calculate the transient pressure profile of the acoustic waves. The interferogram analysis was based on the fringe shift and the application of the Abel inversion technique. The MathCAD program provided an easier programming approach, yet one powerful enough for the required calculation, plotting and file transfer. (Author)
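The Abel inversion step described above can be sketched numerically. The following is a minimal illustration, not the authors' MathCAD implementation: it assumes a radially symmetric phase object of unit radius and uses a synthetic fringe-shift profile whose analytic Abel projection is known, so the recovered radial profile can be checked.

```python
import math

def abel_invert(F, r, R=1.0, n=2000):
    """Inverse Abel transform f(r) = -(1/pi) * int_r^R F'(y) / sqrt(y^2 - r^2) dy.
    F is the projected (fringe-shift) profile; a midpoint rule avoids the
    integrable singularity at y = r."""
    h = (R - r) / n
    total = 0.0
    for j in range(n):
        y = r + (j + 0.5) * h
        dF = (F(y + 1e-6) - F(y - 1e-6)) / 2e-6   # numerical derivative of F
        total += dF / math.sqrt(y * y - r * r) * h
    return -total / math.pi

# Synthetic check: f(r) = 1 - r^2 has the analytic projection
# F(y) = (4/3) * (1 - y^2)**1.5, so the inversion should return ~0.75 at r = 0.5
F = lambda y: (4.0 / 3.0) * max(0.0, 1.0 - y * y) ** 1.5
f_est = abel_invert(F, 0.5)
```

In practice the projection F would come from measured fringe shifts rather than a formula, and a smoothing step is usually applied before differentiation, since the derivative amplifies noise.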
INVERSE FILTERING TECHNIQUES IN SPEECH ANALYSIS
African Journals Online (AJOL)
Dr Obe
(Fragmentary abstract snippet: inverse filtering in the time domain or in the frequency domain; the application of computers to speech analysis led to important elaborations; a tool for the estimation of formant trajectories; linear prediction, which in effect determines the filter. Radio Res. Lab.)
Thermal measurements and inverse techniques
Orlande, Helcio RB; Maillet, Denis; Cotta, Renato M
2011-01-01
With its uncommon presentation of instructional material regarding mathematical modeling, measurements, and the solution of inverse problems, Thermal Measurements and Inverse Techniques is a one-stop reference for those dealing with various aspects of heat transfer. Progress in the mathematical modeling of complex industrial and environmental systems has enabled numerical simulations of most physical phenomena. In addition, recent advances in thermal instrumentation and heat transfer modeling have improved experimental procedures and indirect measurements for heat transfer research of both natural phenomena…
Source-jerk analysis using a semi-explicit inverse kinetic technique
International Nuclear Information System (INIS)
Spriggs, G.D.; Pederson, R.A.
1985-01-01
A method is proposed for measuring the effective reproduction factor, k, in subcritical systems. The method uses the transient response of a subcritical system to the sudden removal of an extraneous neutron source (i.e., a source jerk). The response is analyzed using an inverse kinetic technique that least-squares fits the exact analytical solution corresponding to a source-jerk transient as derived from the point-reactor model. It has been found that the technique can provide an accurate means of measuring k in systems that are close to critical (i.e., 0.95 < k < 1.0). As a system becomes more subcritical (i.e., k << 1.0) spatial effects can introduce significant biases depending on the source and detector positions. However, methods are available that can correct for these biases and, hence, can allow measuring subcriticality in systems with k as low as 0.5. 12 refs., 3 figs
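The inverse kinetic idea can be illustrated with a one-delayed-group point-kinetics model. This is a simplified sketch, not the paper's semi-explicit technique: synthetic source-jerk data are generated for k = 0.98 and k is then recovered by a least-squares scan over candidate values. All kinetics parameters (beta, lambda, Lambda, S) below are illustrative.

```python
def simulate(k, t_meas, beta=0.0065, lam=0.08, Lam=1e-4, S=1000.0, dt=2e-4):
    """One-delayed-group point kinetics: steady subcritical state with source S,
    then a source jerk (S -> 0) at t = 0; forward-Euler integration."""
    rho = (k - 1.0) / k
    n = -S * Lam / rho              # steady-state neutron level before the jerk
    C = beta * n / (lam * Lam)      # steady-state precursor level
    out, t, i = [], 0.0, 0
    while i < len(t_meas):
        if t >= t_meas[i]:
            out.append(n)
            i += 1
        dn = ((rho - beta) / Lam * n + lam * C) * dt
        dC = (beta / Lam * n - lam * C) * dt
        n += dn
        C += dC
        t += dt
    return out

t_meas = [0.1 * j for j in range(1, 21)]
data = simulate(0.98, t_meas)                      # synthetic "measurement"
grid = [0.970 + 0.001 * i for i in range(21)]      # candidate k values
sse = lambda k: sum((a - b) ** 2 for a, b in zip(simulate(k, t_meas), data))
k_est = min(grid, key=sse)
```

Because the same forward model generates the data, the fit recovers k exactly; with real detector counts one would use a damped least-squares fit of the analytical solution, as the paper does.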
Point-source inversion techniques
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Analog fault diagnosis by inverse problem technique
Ahmed, Rania F.
2011-12-01
A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thereby detect and diagnose a single fault in analog circuits. The algorithm is validated by applying it to a Sallen-Key second-order band-pass filter; the results show that the fault-detection efficiency was 100% and that the maximum error in estimating the parameter values was 0.7%. This technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.
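The inverse-problem idea behind this kind of fault diagnosis can be sketched on a much simpler circuit than the paper's Sallen-Key filter: a hypothetical first-order RC low-pass whose resistor has drifted. Measured gains at a few frequencies are fitted by least squares to estimate the component value, and a fault is flagged when it deviates from nominal. All component values and the 5 % fault threshold are illustrative assumptions.

```python
import math

def gain(R, C, w):
    """Magnitude response of a first-order RC low-pass at angular frequency w."""
    return 1.0 / math.sqrt(1.0 + (w * R * C) ** 2)

C = 100e-9                       # assumed capacitance, 100 nF
R_nom = 10e3                     # nominal resistance, 10 kOhm
R_true = 12e3                    # actual (drifted) value -> a soft fault
ws = [500.0, 1000.0, 2000.0, 4000.0]
meas = [gain(R_true, C, w) for w in ws]       # synthetic measurements

# Inverse problem: least-squares scan over candidate R values (5k..15k Ohm)
cands = [R_nom * (0.5 + 0.01 * i) for i in range(101)]
err = lambda R: sum((gain(R, C, w) - m) ** 2 for w, m in zip(ws, meas))
R_est = min(cands, key=err)
fault = abs(R_est - R_nom) / R_nom > 0.05     # flag deviations above 5 %
```

A real implementation would use gradient-based optimization with sensitivity (derivative) information, as the abstract indicates, rather than a grid scan.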
Inverse Raman effect: applications and detection techniques
International Nuclear Information System (INIS)
Hughes, L.J. Jr.
1980-08-01
The processes underlying the inverse Raman effect are qualitatively described by comparing it to the more familiar phenomena of conventional and stimulated Raman scattering. An expression is derived for the inverse Raman absorption coefficient, and its relationship to the stimulated Raman gain is obtained. The power requirements of the two fields are examined qualitatively and quantitatively. The assumption that the inverse Raman absorption coefficient is constant over the interaction length is examined. Advantages of the technique are discussed and a brief survey of reported studies is presented
Trimming and procrastination as inversion techniques
Backus, George E.
1996-12-01
By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.
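The special case analyzed in this abstract, a linear inverse problem with a prior quadratic bound on the model, can be sketched compactly: the quadratic bound acts like Tikhonov regularization of the least-squares solution. The following is a minimal two-parameter illustration under that assumption, not Backus's full trimming/procrastination formalism; the data and regularization weight are synthetic.

```python
def tikhonov(A, y, alpha):
    """Regularized solution of y = A x for a 2-parameter model:
    x = (A^T A + alpha I)^{-1} A^T y, i.e. least squares subject to a
    quadratic penalty on the model norm (the 'prior quadratic bound')."""
    m00 = sum(a[0] * a[0] for a in A) + alpha    # normal-equation matrix
    m01 = sum(a[0] * a[1] for a in A)
    m11 = sum(a[1] * a[1] for a in A) + alpha
    b0 = sum(a[0] * yi for a, yi in zip(A, y))
    b1 = sum(a[1] * yi for a, yi in zip(A, y))
    det = m00 * m11 - m01 * m01
    return ((m11 * b0 - m01 * b1) / det, (m00 * b1 - m01 * b0) / det)

# Synthetic example: x_true = (2, -1), data perturbed by small fixed "noise"
A = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0), (1.0, 3.0)]
x_true = (2.0, -1.0)
y = [a[0] * x_true[0] + a[1] * x_true[1] + e
     for a, e in zip(A, (0.01, -0.02, 0.015, -0.005))]
x_est = tikhonov(A, y, alpha=1e-3)
```

The error estimate delta_z of the abstract would then follow from the noise covariance propagated through the same regularized inverse.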
International Nuclear Information System (INIS)
Rey Silva, D.V.F.M.; Oliveira, A.P.; Macacini, J.F.; Da Silva, N.C.; Cipriani, M.; Quinelato, A.L.
2005-01-01
Full text of publication follows: The study of the dispersion of radioactive materials in soils and in engineering barriers plays an important role in the safety analysis of nuclear waste repositories. To carry out such a study, the physical properties involved must be determined with precision, including the apparent mass diffusion coefficient, which is defined as the ratio between the effective mass diffusion coefficient and the retardation factor. Many different experimental and estimation techniques are available in the literature for the identification of the diffusion coefficient, and this work describes the implementation of the one developed by Pereira et al [1]. This technique is based on non-intrusive radiation measurements, and the experimental setup consists of a cylindrical column filled with compacted media saturated with water. A radioactive contaminant is mixed with a portion of the media and then placed at the bottom of the column. The contaminant therefore diffuses through the uncontaminated media due to the concentration gradient. A radiation detector is used to measure the number of counts, which is associated with the contaminant concentration, at several positions along the column during the experiment. These measurements are then used to estimate the apparent diffusion coefficient of the contaminant in the porous media by inverse analysis. The inverse problem of parameter estimation is solved with the Levenberg-Marquardt method of minimization of the least-squares norm. The experiment was optimized with respect to the number of measurement locations, the frequency of measurements, and the duration of the experiment through analysis of the sensitivity coefficients and a D-optimum approach. This setup is suitable for studying a great number of combinations of diverse contaminants and porous media varying in composition and compaction, with considerable ease and reliable results…
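The parameter-estimation step can be sketched with a toy version of the problem: a plane-source solution of 1D diffusion plays the role of the forward model, and a Gauss-Newton iteration (the damped variant of which is Levenberg-Marquardt) recovers the diffusion coefficient from synthetic detector readings. The geometry, units and parameter values below are illustrative, not those of the actual column experiment.

```python
import math

def conc(D, x, t, M=1.0):
    """Plane-source solution of 1D diffusion: contaminant initially
    concentrated at x = 0 spreading into a semi-infinite column."""
    return M / math.sqrt(math.pi * D * t) * math.exp(-x * x / (4.0 * D * t))

# Synthetic detector readings along the column at t = 10 (arbitrary units)
D_true = 0.05
xs = [0.2 * i for i in range(1, 8)]
meas = [conc(D_true, x, 10.0) for x in xs]

# Gauss-Newton iteration on the single parameter D
D = 0.02                                   # initial guess
for _ in range(30):
    r = [conc(D, x, 10.0) - m for x, m in zip(xs, meas)]       # residuals
    h = 1e-6 * D
    J = [(conc(D + h, x, 10.0) - conc(D - h, x, 10.0)) / (2 * h) for x in xs]
    D -= sum(j * ri for j, ri in zip(J, r)) / sum(j * j for j in J)
D_est = D
```

Levenberg-Marquardt would add a damping term to the denominator to stabilize early iterations; for this well-behaved one-parameter fit plain Gauss-Newton already converges.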
International Nuclear Information System (INIS)
Zimmerman, D.A.; Gallegos, D.P.
1993-10-01
The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A "Geostatistics Test Problem" is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity (possibly including measurement error) or other information such as drawdowns from pumping wells, and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater flow (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater flow directly. This paper describes the results from Test Case No. 1
Integrated intensities in inverse time-of-flight technique
International Nuclear Information System (INIS)
Dorner, Bruno
2006-01-01
In traditional data analysis a model function, convolved with the resolution, is fitted to the measured data. When the integrated intensities of signals are of main interest, one can use an approach which requires neither a model function for the signal nor detailed knowledge of the resolution. For the inverse TOF technique, this approach consists of two steps: (i) normalisation of the measured spectrum with the help of a monitor, with 1/k sensitivity, positioned in front of the sample; this simultaneously converts the data from time of flight to energy transfer. (ii) A Jacobian [I. Waller, P.O. Froeman, Ark. Phys. 4 (1952) 183] transforms data collected at constant scattering angle into data as if measured at constant momentum transfer Q. This Jacobian works correctly for signals which have a constant width at different Q along the trajectory of constant scattering angle. The approach has been tested on spectra of Compton scattering with neutrons of epithermal energies, obtained on the inverse TOF spectrometer VESUVIO/ISIS. In this case the width of the signal increases proportionally to Q, and in consequence the application of the Jacobian leads to integrated intensities that are slightly too high. The resulting integrated intensities agree very well with results derived in the traditional way. Thus this completely different approach confirms the observation that signals from recoil by H-atoms at large momentum transfers are weaker than expected
Resolution analysis in full waveform inversion
Fichtner, A.; Trampert, J.
2011-01-01
We propose a new method for the quantitative resolution analysis in full seismic waveform inversion that overcomes the limitations of classical synthetic inversions while being computationally more efficient and applicable to any misfit measure. The method rests on (1) the local quadratic
Hedland, D. A.; Degonia, P. K.
1974-01-01
The RAE-1 spacecraft inversion performed on October 31, 1972 is described based upon the in-orbit dynamical data in conjunction with results obtained from previously developed computer simulation models. The computer simulations used are predictive of the satellite dynamics, including boom flexing, and are applicable during boom deployment and retraction, inter-phase coast periods, and post-deployment operations. Attitude data, as well as boom tip data, were analyzed in order to obtain a detailed description of the dynamical behavior of the spacecraft during and after the inversion. Runs were made using the computer model, and the results were analyzed and compared with the real-time data. Close agreement between the actual recorded spacecraft attitude and the computer simulation results was obtained.
A conditioning technique for matrix inversion for Wilson fermions
International Nuclear Information System (INIS)
DeGrand, T.A.
1988-01-01
I report a simple technique for conditioning conjugate gradient or conjugate residue matrix inversion as applied to the lattice gauge theory problem of computing the propagator of Wilson fermions. One form of the technique provides about a factor of three speedup over an unconditioned algorithm while running at the same speed per iteration. I illustrate the method as it is applied to a conjugate residue algorithm. (orig.)
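The general idea of conditioned (preconditioned) iterative inversion can be sketched on a small symmetric positive definite system. This is a generic Jacobi-preconditioned conjugate gradient, not the specific Wilson-fermion conditioning of the paper; the matrix and right-hand side are arbitrary illustrative values.

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for A x = b, with A symmetric
    positive definite. M_inv applies the preconditioner to a residual."""
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = M_inv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for it in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol * tol:
            return x, it + 1
        z = M_inv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x, max_iter

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
jacobi = lambda r: [ri / A[i][i] for i, ri in enumerate(r)]   # diagonal scaling
x, iters = pcg(A, b, jacobi)
```

A good preconditioner clusters the spectrum of the effective operator, which is what cuts the iteration count; for the Wilson-fermion Dirac operator the preconditioner exploits the structure of the hopping term rather than a simple diagonal.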
Directory of Open Access Journals (Sweden)
María Fernanda Garcés
2017-04-01
Conclusions: Inversions of intron 22 and 1 were found in half of this group of patients. These results are reproducible and useful to identify the two most frequent mutations in severe hemophilia A patients.
One-dimensional nonlinear inverse heat conduction technique
International Nuclear Information System (INIS)
Hills, R.G.; Hensel, E.C. Jr.
1986-01-01
The one-dimensional nonlinear problem of heat conduction is considered. A noniterative space-marching finite-difference algorithm is developed to estimate the surface temperature and heat flux from temperature measurements at subsurface locations. The trade-off between resolution and variance of the estimates of the surface conditions is discussed quantitatively. The inverse algorithm is stabilized through the use of digital filters applied recursively. The effect of the filters on the resolution and variance of the surface estimates is quantified. Results are presented which indicate that the technique is capable of handling noisy measurement data
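The stabilization step alone, recursive digital filtering of noisy subsurface temperature data, can be sketched as follows. This is an illustrative first-order filter applied forward and backward (zero phase), not the paper's space-marching algorithm; the signal, noise level and filter constant are assumptions.

```python
import math, random

def smooth(data, a=0.3):
    """First-order recursive filter applied forward and then backward
    (zero phase), a simple stand-in for the stabilizing digital filters."""
    def one_pass(xs):
        out, s = [], xs[0]
        for x in xs:
            s = a * x + (1.0 - a) * s     # exponential smoothing recursion
            out.append(s)
        return out
    return one_pass(one_pass(data)[::-1])[::-1]

# Noisy synthetic sensor record around a smooth trend
random.seed(1)
t = [0.05 * i for i in range(100)]
clean = [math.sin(x) for x in t]
noisy = [c + random.gauss(0.0, 0.1) for c in clean]
filt = smooth(noisy)

rms = lambda xs: math.sqrt(sum(e * e for e in xs) / len(xs))
noise_before = rms([n - c for n, c in zip(noisy, clean)])
noise_after = rms([f - c for f, c in zip(filt, clean)])
```

The trade-off the abstract quantifies appears here too: a larger effective filter window lowers the variance of the estimate but also lowers its resolution, biasing sharp features.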
Inverse Kinematic Analysis Of A Quadruped Robot
Directory of Open Access Journals (Sweden)
Muhammed Arif Sen
2017-09-01
This paper presents an inverse kinematics program for a quadruped robot. Kinematic analysis is a central problem for manipulators and robots, and the dynamic and kinematic structures of quadruped robots are very complex compared to industrial and wheeled robots. In this study, inverse kinematics solutions for a quadruped robot with 3 degrees of freedom on each leg are presented. The Denavit-Hartenberg (D-H) method is used for the forward kinematics. The inverse kinematic equations, obtained by geometrical and mathematical methods, are coded in MATLAB, giving a program that calculates the leg joint angles corresponding to various desired robot orientations and leg endpoints. The program also provides the body orientation of the robot in graphical form. The joint angular positions obtained for different desired robot orientations and leg endpoints are given in this study.
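A geometric inverse kinematics solution for one 3-DOF leg of this kind (hip yaw, hip pitch, knee) can be sketched as below. The frame conventions, link lengths and target point are illustrative assumptions, not taken from the paper; forward kinematics is included to verify the solution.

```python
import math

def leg_ik(x, y, z, L2=0.2, L3=0.2):
    """Geometric IK for a 3-DOF leg: hip yaw t1, hip pitch t2, knee t3.
    Foot target (x, y, z) is given in the hip frame; z points up."""
    t1 = math.atan2(y, x)                 # hip yaw aligns the leg plane with the target
    r = math.hypot(x, y)                  # reach within the leg's vertical plane
    d2 = r * r + z * z
    c3 = (d2 - L2 * L2 - L3 * L3) / (2.0 * L2 * L3)   # law of cosines
    t3 = math.acos(max(-1.0, min(1.0, c3)))           # knee angle (one branch)
    t2 = math.atan2(z, r) - math.atan2(L3 * math.sin(t3),
                                       L2 + L3 * math.cos(t3))
    return t1, t2, t3

def leg_fk(t1, t2, t3, L2=0.2, L3=0.2):
    """Forward kinematics used to verify the IK solution."""
    r = L2 * math.cos(t2) + L3 * math.cos(t2 + t3)
    z = L2 * math.sin(t2) + L3 * math.sin(t2 + t3)
    return r * math.cos(t1), r * math.sin(t1), z

target = (0.25, 0.1, -0.15)
angles = leg_ik(*target)
foot = leg_fk(*angles)
```

Choosing the other sign of the knee angle gives the second (elbow-up) branch; a walking controller typically fixes one branch per leg for continuity.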
SQUIDs and inverse problem techniques in nondestructive evaluation of metals
Bruno, A C
2001-01-01
Superconducting Quantum Interference Devices coupled to gradiometers were used to detect flaws in metals. We detected flaws in aluminium samples carrying current, measuring fields at lift-off distances up to one order of magnitude larger than the size of the flaw. Configured as a susceptometer, we detected surface-breaking flaws in steel samples by measuring the distortion of the applied magnetic field. We also used spatial filtering techniques to enhance the visualization of the magnetic field due to the flaws. To assess flaw severity, we used the generalized inverse method and singular value decomposition to reconstruct small spherical inclusions in steel. In addition, finite elements and optimization techniques were used to image complex-shaped flaws.
International Nuclear Information System (INIS)
Choi, C. Y.
1997-01-01
A geometrical inverse heat conduction problem is solved for infrared scanning cavity detection by the boundary element method using a minimal energy technique. By minimizing the kinetic energy of the temperature field, the boundary element equations are converted into a quadratic programming problem. A hypothetical inner boundary is defined such that the actual cavity is located interior to the domain. Temperatures at the hypothetical inner boundary are determined to meet the constraints of the measurement error of the surface temperature obtained by infrared scanning, and boundary element analysis is then performed for the position of the unknown boundary (cavity). A cavity detection algorithm is provided, and the effects of the minimal energy technique on the inverse solution method are investigated by means of numerical analysis
Utility of natural generalised inverse technique in the interpretation of dyke structures
Digital Repository Service at National Institute of Oceanography (India)
Rao, M.M.M.; Murty, T.V.R.; Rao, P.R.; Lakshminarayana, S.; Subrahmanyam, A.S.; Murthy, K.S.R.
(Fragmentary reference excerpt from this repository record, including: …environs along the central west coast of India: analysis using EOF, J. Geophys. Res., 91 (1986) 8523-8526; Marquardt D.W., An algorithm for least-squares estimation of non-linear parameters, J. Soc. Indust. Appl. Math., 11 (1963) 431-441; …technique in reconstruction of gravity anomalies due to a fault, Indian J. Pure Appl. Math., 34 (2003) 31-47; Ramana Murty T.V., Somayajulu Y.K. & Murty C.S., Reconstruction of sound speed profile through natural generalised inverse technique, Indian J. …)
Reconstruction of sound speed profile through natural generalized inverse technique
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Somayajulu, Y.K.; Murty, C.S.
An acoustic model has been developed for reconstruction of vertical sound speed in a near stable or stratified ocean. Generalized inverse method is utilised in the model development. Numerical experiments have been carried out to account...
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Review on solving the inverse problem in EEG source analysis
Directory of Open Access Journals (Sweden)
Fabri Simon G
2008-11-01
In this primer, we give a review of the inverse problem for EEG source localization. It is intended to give researchers new to the field insight into the state-of-the-art techniques used to find approximate solutions for the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these inverse solutions. The authors also include the results of a Monte-Carlo analysis which they performed to compare four non-parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. The paper starts with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods developed to solve the EEG inverse problem, namely the non-parametric and parametric methods; the main difference between the two is whether a fixed number of dipoles is assumed a priori. Various techniques falling within these categories are described, including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for non-parametric methods, and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher-resolution algorithms such as MUSIC or FINES are preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA, has given superior results. The Monte-Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF
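The simplest of the non-parametric methods listed above, the minimum norm estimate, can be sketched on a toy leadfield. The leadfield matrix, source configuration and regularization value below are made-up illustrative numbers, with only 2 sensors and 4 candidate sources so the required matrix inverse can be written out explicitly.

```python
def minimum_norm(L, y, lam=1e-6):
    """Minimum-norm estimate s = L^T (L L^T + lam I)^{-1} y for a toy
    2-sensor leadfield; the 2x2 inverse is written out explicitly."""
    g00 = sum(a * a for a in L[0]) + lam
    g01 = sum(a * b for a, b in zip(L[0], L[1]))
    g11 = sum(b * b for b in L[1]) + lam
    det = g00 * g11 - g01 * g01
    w0 = (g11 * y[0] - g01 * y[1]) / det
    w1 = (g00 * y[1] - g01 * y[0]) / det
    return [L[0][j] * w0 + L[1][j] * w1 for j in range(len(L[0]))]

# Toy leadfield: 2 scalp sensors, 4 candidate sources
L = [[1.0, 0.5, 0.2, 0.1],
     [0.1, 0.2, 0.5, 1.0]]
s_true = [0.0, 1.0, 0.0, 0.0]          # a single active source
y = [sum(L[i][j] * s_true[j] for j in range(4)) for i in range(2)]
s_est = minimum_norm(L, y)
```

The estimate fits the sensor data but smears activity across sources and biases it toward strongly sensed (superficial) locations, which is exactly the limitation that weighted and standardized variants such as WMN and sLORETA address.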
Inverse Function: Pre-Service Teachers' Techniques and Meanings
Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.
2018-01-01
Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…
Inverse analysis of turbidites by machine learning
Naruse, H.; Nakao, K.
2017-12-01
This study aims to propose a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites using a machine-learning technique. In this method, numerical simulation is repeated under various initial conditions, producing a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial condition of the turbidity current. The weight coefficients of the NN are optimized to reduce the root-mean-square difference between the true conditions and the output values of the NN. The empirical relationship between the numerical results and the initial conditions is explored in this method, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce an NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. The numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for an NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small
Relevance vector machine technique for the inverse scattering problem
International Nuclear Information System (INIS)
Wang Fang-Fang; Zhang Ye-Rong
2012-01-01
A novel method based on the relevance vector machine (RVM) for the inverse scattering problem is presented in this paper. The nonlinearity and the ill-posedness inherent in this problem are considered simultaneously. The nonlinearity is embodied in the relation between the scattered field and the target property, which can be obtained through the RVM training process. Moreover, rather than utilizing regularization, the ill-posed nature of the inversion is naturally accounted for because the RVM produces a probabilistic output. Simulation results reveal that the proposed RVM-based approach provides comparable performance in terms of accuracy, convergence, robustness and generalization, and improved sparsity, in comparison with the support vector machine (SVM) based approach. (general)
Application of a numerical Laplace transform inversion technique to a problem in reactor dynamics
International Nuclear Information System (INIS)
Ganapol, B.D.; Sumini, M.
1990-01-01
A newly developed numerical technique for Laplace transform inversion is applied to a classical time-dependent problem of reactor physics. The dynamic behaviour of a multiplying system is analyzed through a continuous slowing-down model that takes into account a finite slowing-down time and the presence of several groups of neutron precursors, simplifying the spatial analysis by means of the space-asymptotic approximation. The results presented show complete agreement with analytical ones obtained previously and allow a deeper understanding of the model features. (author)
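Numerical Laplace inversion in general can be sketched with the standard Gaver-Stehfest algorithm; the abstract does not specify which technique the authors developed, so this is a generic stand-in, checked against the known pair F(s) = 1/(s+1), f(t) = exp(-t).

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s),
    evaluated at time t. N must be even; N = 12 works well in double
    precision for smooth f(t)."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V *= (-1) ** (k + N // 2)
        total += V * F(k * ln2 / t)      # sample F along the real axis
    return total * ln2 / t

# Known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
f_est = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

Stehfest only needs real-axis samples of F(s), which makes it convenient, but it is numerically delicate for oscillatory f(t); contour-based methods (e.g. Talbot) are then preferred.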
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-01-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most
Inverse Analysis and Modeling for Tunneling Thrust on Shield Machine
Directory of Open Access Journals (Sweden)
Qian Zhang
2013-01-01
With the rapid development of sensor and detection technologies, measured-data analysis plays an increasingly important role in the design and control of heavy engineering equipment. This paper proposes a method for inverse analysis and modeling based on large volumes of on-site measured data, in which dimensional analysis and data mining techniques are combined. The method was applied to the modeling of the tunneling thrust on shield machines, and an explicit expression for thrust prediction was established. Combined with on-site data from a tunneling project in China, the inverse identification of the model coefficients was carried out using the multiple regression method, and the model residual was analyzed by statistical methods. By comparing the on-site data with the model predictions in two other projects with different tunneling conditions, the feasibility of the model is discussed. The work may provide a scientific basis for the rational design and control of shield tunneling machines, and also a new way to analyze mass on-site data from complex engineering systems with nonlinear, multivariable, time-varying characteristics.
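The coefficient-identification step, multiple regression via the normal equations, can be sketched as follows. The predictor variables (a constant, a machine diameter, a burial depth) and the thrust records are entirely hypothetical, generated here from a known linear law so the fitted coefficients can be checked; the paper's actual model terms come from dimensional analysis.

```python
def solve(M, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def fit(X, y):
    """Least-squares coefficients via the normal equations X^T X b = X^T y."""
    n = len(X[0])
    M = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    return solve(M, v)

# Hypothetical records: [1, diameter, burial depth] -> measured thrust,
# generated from thrust = 0.5 + 0.4*diameter + 0.1*depth
X = [[1.0, 6.2, 10.0], [1.0, 6.2, 18.0], [1.0, 8.8, 12.0],
     [1.0, 8.8, 25.0], [1.0, 11.2, 15.0], [1.0, 11.2, 30.0]]
y = [3.98, 4.78, 5.22, 6.52, 6.48, 7.98]
b = fit(X, y)
pred = [sum(c * v for c, v in zip(b, row)) for row in X]
```

With real field data the fit would not be exact, and the residuals would be examined statistically, as the abstract describes, to judge model adequacy.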
Development of high-energy resolution inverse photoemission technique
International Nuclear Information System (INIS)
Asakura, D.; Fujii, Y.; Mizokawa, T.
2005-01-01
We developed a new inverse photoemission spectroscopy (IPES) machine based on a new idea for improving the energy resolution: off-plane Eagle mounting of the optical system combined with dispersion matching between the incoming electron and the outgoing photon. To achieve dispersion matching, we employed a parallel-plate electron source and investigated whether the electron beam behaves as expected. In this paper, we present the principle and design of the new IPES method and report the current status of the high-energy-resolution IPES machine
Solving Inverse Kinematics – A New Approach to the Extended Jacobian Technique
Directory of Open Access Journals (Sweden)
M. Šoch
2005-01-01
This paper presents a brief summary of current numerical algorithms for solving the inverse kinematics problem. A new approach based on the extended Jacobian technique is then compared with the current Jacobian inversion method. The presented method is intended for use in the field of computer graphics for the animation of articulated structures.
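The family of Jacobian-based iterations the paper surveys can be illustrated with the damped least-squares variant on a 2-link planar arm. This is a stand-in, not the extended Jacobian method itself (which augments the Jacobian with extra constraint rows to resolve redundancy); the link lengths, target and damping factor are illustrative assumptions.

```python
import math

def fk(q, L=(1.0, 1.0)):
    """End-effector position of a 2-link planar arm."""
    x = L[0] * math.cos(q[0]) + L[1] * math.cos(q[0] + q[1])
    y = L[0] * math.sin(q[0]) + L[1] * math.sin(q[0] + q[1])
    return x, y

def ik_dls(target, q=(0.3, 0.3), lam=0.1, iters=200, L=(1.0, 1.0)):
    """Damped least-squares Jacobian iteration:
    dq = J^T (J J^T + lam^2 I)^{-1} e, stable even near singularities."""
    q = list(q)
    for _ in range(iters):
        x, y = fk(q, L)
        ex, ey = target[0] - x, target[1] - y     # task-space error
        s01 = math.sin(q[0] + q[1])
        c01 = math.cos(q[0] + q[1])
        # Jacobian of (x, y) with respect to (q0, q1)
        J = [[-L[0] * math.sin(q[0]) - L[1] * s01, -L[1] * s01],
             [ L[0] * math.cos(q[0]) + L[1] * c01,  L[1] * c01]]
        # A = J J^T + lam^2 I  (2x2), then dq = J^T A^{-1} e
        a00 = J[0][0] ** 2 + J[0][1] ** 2 + lam * lam
        a01 = J[0][0] * J[1][0] + J[0][1] * J[1][1]
        a11 = J[1][0] ** 2 + J[1][1] ** 2 + lam * lam
        det = a00 * a11 - a01 * a01
        w0 = (a11 * ex - a01 * ey) / det
        w1 = (a00 * ey - a01 * ex) / det
        q[0] += J[0][0] * w0 + J[1][0] * w1
        q[1] += J[0][1] * w0 + J[1][1] * w1
    return q

q_sol = ik_dls((1.2, 0.8))
```

The damping term lam trades convergence speed for robustness: plain Jacobian inversion takes larger steps but blows up when the arm approaches a singular (fully stretched or folded) pose.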
Recovery of material parameters of soft hyperelastic tissue by an inverse spectral technique
Gou, Kun; Joshi, Sunnie; Walton, Jay R.
2012-01-01
An inverse spectral method is developed for recovering a spatially inhomogeneous shear modulus for soft tissue. The study is motivated by a novel use of the intravascular ultrasound technique to image arteries. The arterial wall is idealized as a
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-05-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full waveform inversion (FWI), the large number of parameters describing orthorhombic media introduces considerable trade-offs and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. By analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted εd, ηd and δd, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters consists of keeping the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters which the data are sensitive to, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of 3 parameters (out of six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε1 which provides scattering mainly near
Inverse Analysis of Cavitation Impact Phenomena on Structures
National Research Council Canada - National Science Library
Lambrakos, S. G.; Tran, N. E.
2007-01-01
A general methodology is presented for in situ detection of cavitation impact phenomena on structures, based on inverse analysis of luminescent emissions resulting from the collapse of bubbles onto surfaces...
Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).
Goldfarb, James W
2010-04-01
The divided inversion recovery technique is an MRI separation method based on tissue T1 relaxation differences. When tissue T1 relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, longitudinal magnetization does not pass through the null point. Prior to additional inversion pulses, longitudinal magnetization may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis divided inversion recovery technique images of the heart. Fat, blood, and fluid signal was suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.
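The polarity effect the technique exploits can be illustrated with the single-inversion recovery formula Mz(t) = M0(1 - 2·exp(-t/T1)); the T1 and T180 values below are illustrative only, not from the paper.

```python
import math

def mz_after_inversion(t1, t):
    """Longitudinal magnetization t ms after a single 180° inversion
    from equilibrium, with M0 normalized to 1."""
    return 1.0 - 2.0 * math.exp(-t / t1)

T180 = 300.0  # illustrative time between inversion pulses (ms)

# Short-T1 tissue recovers through the null before the next pulse;
# long-T1 tissue still has the opposite polarity, which the method
# turns into image separation.
fat = mz_after_inversion(250.0, T180)     # positive again
fluid = mz_after_inversion(2000.0, T180)  # still negative
```

The null crossing occurs at t = T1·ln 2, so any tissue with T1 greater than T180/ln 2 keeps the opposite polarity at the next pulse.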
A Highly Efficient Shannon Wavelet Inverse Fourier Technique for Pricing European Options
Ortiz-Gracia, Luis; Oosterlee, C.W.
2016-01-01
In the search for robust, accurate, and highly efficient financial option valuation techniques, we here present the SWIFT method (Shannon wavelets inverse Fourier technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative error
Berger, B. S.; Duangudom, S.
1973-01-01
A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
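The subinterval idea can be sketched on an oscillatory test problem: a long-time initial value problem is solved as a chain of short IVPs, each restarted from the previous subinterval's end state. The RK4 solver, test equation, and step sizes below are illustrative, not from the paper.

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y), y a list."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def solve_in_subintervals(f, y0, t_end, n_sub, steps_per_sub):
    """Split [0, t_end] into n_sub successive IVPs, chaining the state."""
    y, t = list(y0), 0.0
    h = t_end / (n_sub * steps_per_sub)
    for _ in range(n_sub):              # one short IVP per subinterval,
        for _ in range(steps_per_sub):  # its initial value taken from
            y = rk4_step(f, t, y, h)    # the previous end state
            t += h
    return y

# y'' = -y as a first-order system; exact solution (cos t, -sin t).
f = lambda t, y: [y[1], -y[0]]
yT = solve_in_subintervals(f, [1.0, 0.0], 10 * 2 * math.pi,
                           n_sub=10, steps_per_sub=200)
```

After ten full cycles the chained solution should return to the state (1, 0).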
International Nuclear Information System (INIS)
Castaneda M, V. H.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Leon P, A. A.; Hernandez P, C. F.; Espinoza G, J. G.; Ortiz R, J. M.; Vega C, H. R.; Mendez, R.; Gallego, E.; Sousa L, M. A.
2016-10-01
The Taguchi methodology has proved to be highly efficient for solving inverse problems, in which the values of some parameters of the model must be obtained from the observed data. Certain intrinsic mathematical characteristics make a problem an inverse one, and inverse problems appear in many branches of science, engineering and mathematics. To solve this type of problem, researchers have used different techniques; recently, the use of techniques based on Artificial Intelligence is being explored. This paper presents the use of a software tool based on generalized regression artificial neural networks for the solution of inverse problems with application in high energy physics, specifically the problem of neutron spectrometry. To solve this problem we use a software tool developed in the MATLAB programming environment, which provides a friendly, intuitive and easy-to-use interface. This computational tool solves the inverse problem involved in the reconstruction of the neutron spectrum based on measurements made with a Bonner sphere spectrometric system. Given this information, the neural network is able to reconstruct the neutron spectrum with high performance and generalization capability. The tool does not require great training or technical knowledge in the development and/or use of software, which facilitates its use for the resolution of inverse problems arising in several areas of knowledge. Artificial Intelligence techniques are particularly well suited to solving inverse problems, given the characteristics of artificial neural networks and their network topology; the tool developed has therefore been very useful, since the results generated by the artificial neural network require little time in comparison to other techniques and agree with the actual data of the experiment. (Author)
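The core of a generalized regression neural network is a kernel-weighted average of training targets. The toy sketch below uses one to invert a hypothetical forward model; the real tool maps Bonner sphere count rates to a spectrum, which is not reproduced here.

```python
import math

# Minimal generalized regression neural network (GRNN): predictions
# are a Gaussian-kernel-weighted average of the training targets.
# Forward model and sigma below are hypothetical, for illustration.

def grnn_predict(x, train_x, train_y, sigma=0.05):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2))
               for xi in train_x]
    return sum(w * yi for w, yi in zip(weights, train_y)) / sum(weights)

# Training pairs (measurement, parameter) generated by a toy forward
# model: measurement = parameter ** 2 on a grid of parameters.
params = [i / 50 for i in range(51)]       # 0.00 .. 1.00
measurements = [p ** 2 for p in params]

# Invert a new measurement back to the parameter that produced it:
# the true answer here is sqrt(0.25) = 0.5.
estimate = grnn_predict(0.25, measurements, params)
```

Training the network is just storing the pairs; all the work happens at prediction time, which is why GRNNs need little tuning beyond the kernel width.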
Inverse thermal analysis method to study solidification in cast iron
DEFF Research Database (Denmark)
Dioszegi, Atilla; Hattel, Jesper
2004-01-01
Solidification modelling of cast metals is widely used to predict final properties in cast components. Accurate models necessitate good knowledge of the solidification behaviour. The present study includes a re-examination of the Fourier thermal analysis method. This involves an inverse numerical solution of a 1-dimensional heat transfer problem connected to solidification of cast alloys. In the analysis, the relation between the thermal state and the fraction solid of the metal is evaluated by a numerical method. This method contains an iteration algorithm controlled by an under-relaxation term… The inverse thermal analysis was tested on both experimental and simulated data.
Forensic analysis of explosions: Inverse calculation of the charge mass
Voort, M.M. van der; Wees, R.M.M. van; Brouwer, S.D.; Jagt-Deutekom, M.J. van der; Verreault, J.
2015-01-01
Forensic analysis of explosions consists of determining the point of origin, the explosive substance involved, and the charge mass. Within the EU FP7 project Hyperion, TNO developed the Inverse Explosion Analysis (TNO-IEA) tool to estimate the charge mass and point of origin based on observed damage
Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.
2015-01-01
The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16, located in the West Netherlands Basin. The characterisation was carried out by combining rock-physics modelling and seismic inversion techniques. The
Evaluation of inverse modeling techniques for pinpointing water leakages at building constructions
Schijndel, van A.W.M.
2015-01-01
The location and nature of the moisture leakages are sometimes difficult to detect. Moreover, the relation between observed inside surface moisture patterns and where the moisture enters the construction is often not clear. The objective of this paper is to investigate inverse modeling techniques as
INTERNAL ENVIRONMENT ANALYSIS TECHNIQUES
Directory of Open Access Journals (Sweden)
Caescu Stefan Claudiu
2011-12-01
Full Text Available Theme The situation analysis, as a separate component of strategic planning, involves collecting and analysing relevant types of information on the components of the marketing environment and their evolution on the one hand, and on the organization’s resources and capabilities on the other. Objectives of the Research The main purpose of the study of the analysis techniques of the internal environment is to provide insight on those aspects that are of strategic importance to the organization. Literature Review The marketing environment consists of two distinct components: the internal environment, made up of specific variables within the organization, and the external environment, made up of variables external to the organization. Although analysing the external environment is essential for corporate success, it is not enough unless it is backed by a detailed analysis of the internal environment of the organization. The internal environment includes all elements that are endogenous to the organization, which are influenced to a great extent and totally controlled by it. The study of the internal environment must answer all resource-related questions, solve all resource management issues and represents the first step in drawing up the marketing strategy. Research Methodology The present paper accomplished a documentary study of the main techniques used for the analysis of the internal environment. Results The special literature emphasizes that the differences in performance from one organization to another are primarily dependent not on the differences between the fields of activity, but especially on the differences between the resources and capabilities and the ways these are capitalized on. The main methods of analysing the internal environment addressed in this paper are: the analysis of the organizational resources, the performance analysis, the value chain analysis and the functional analysis. Implications Basically such
A comparative study of surface waves inversion techniques at strong motion recording sites in Greece
Pelekis, Panagiotis C.; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.
2015-01-01
The surface wave method was used for the estimation of Vs versus depth profiles at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν=0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison results showed that the average variation of SIM profiles is 9% and 4.9% compared with the NA and Occam's profiles respectively, whilst the average difference of Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.
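The Vs30 metric used in the comparison is the time-averaged shear-wave velocity over the top 30 m. A small sketch of its computation from a layered profile; the two profiles below are illustrative, not the paper's inversion results.

```python
def vs30(thicknesses, velocities):
    """Vs30 = 30 / sum(h_i / Vs_i) over the layers in the top 30 m."""
    travel_time, depth = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        h_used = min(h, 30.0 - depth)   # clip the last layer at 30 m
        travel_time += h_used / v
        depth += h_used
        if depth >= 30.0:
            break
    return 30.0 / travel_time

# Two illustrative inverted profiles for the same site (m, m/s).
profile_a = vs30([5.0, 10.0, 20.0], [180.0, 300.0, 500.0])
profile_b = vs30([5.0, 10.0, 20.0], [200.0, 320.0, 520.0])
relative_diff = abs(profile_a - profile_b) / profile_a
```

Moderate differences between layer velocities translate into small relative Vs30 differences, which is why distinct inversion schemes can still agree on the EC8 soil category.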
International Nuclear Information System (INIS)
Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang
2009-01-01
Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way
Directory of Open Access Journals (Sweden)
Wenz Frederik
2009-09-01
Full Text Available Abstract Background Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. Methods The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be "translated" to a set of "if-then rules" for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Results Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the "behavior" of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. Conclusion The
Nguyen, Quynh C.; Osypuk, Theresa L.; Schmidt, Nicole M.; Glymour, M. Maria; Tchetgen Tchetgen, Eric J.
2015-01-01
Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship be...
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
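A single-parameter Tikhonov step of the kind such multiparameter methods build on can be sketched as solving the regularized normal equations (KᵀK + λI)x = Kᵀb. The exponential kernel, data, and regularization weight below are toy values, not the 2DUPEN algorithm itself.

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for k in range(c, n + 1):
                m[r][k] -= f * m[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k]
                              for k in range(r + 1, n))) / m[r][r]
    return x

def tikhonov(K, b, lam):
    """Solve the regularized normal equations (K^T K + lam*I) x = K^T b."""
    rows, n = len(K), len(K[0])
    KtK = [[sum(K[r][i] * K[r][j] for r in range(rows))
            for j in range(n)] for i in range(n)]
    for i in range(n):
        KtK[i][i] += lam
    Ktb = [sum(K[r][i] * b[r] for r in range(rows)) for i in range(n)]
    return solve(KtK, Ktb)

# Toy exponential-decay kernel K[i][j] = exp(-t_i / T_j), loosely
# mimicking the relaxation kernels of NMR inversion.
times = [0.1 * (i + 1) for i in range(10)]
T = [0.2, 1.0, 5.0]
K = [[math.exp(-t / Tj) for Tj in T] for t in times]
x_true = [0.5, 1.0, 0.2]
b = [sum(K[i][j] * x_true[j] for j in range(3)) for i in range(10)]
x_est = tikhonov(K, b, lam=1e-8)
```

In the full 2D problem the kernel is a tensor product of two such relaxation kernels, and methods like UPEN choose λ locally rather than using one global value.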
Directory of Open Access Journals (Sweden)
Hammad Dabo Baba
2014-01-01
Full Text Available One of the most significant steps in building structure maintenance decisions is the physical inspection of the facility to be maintained. The physical inspection involves a cursory assessment of the structure and ratings of the identified defects based on expert evaluation. The objective of this paper is to present a novel approach to prioritizing the criticality of physical defects in a residential building system using a multi-criteria decision analysis approach. A residential building constructed in 1985 was considered in this study. Four criteria are considered in the inspection: Physical Condition of the building system (PC), Effect on Asset (EA), Effect on Occupants (EO) and Maintenance Cost (MC). The building was divided into nine systems regarded as alternatives. Expert Choice software was used in comparing the importance of the criteria against the main objective, whereas a structured proforma was used in quantifying the defects observed on all building systems against each criterion. The defect severity score of each building system was identified and later multiplied by the weight of the criteria, and the final hierarchy was derived. The final ranking indicates that the electrical system was considered the most critical system, with a risk value of 0.134, while the ceiling system scored the lowest risk value of 0.066. The technique is often used in prioritizing mechanical equipment for maintenance planning; however, the results of this study indicate that the technique could also be used in prioritizing building systems for maintenance planning
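The weighted-scoring step of such a multi-criteria ranking can be sketched as follows; the criteria weights and severity scores below are illustrative, not the paper's values.

```python
# Each system's severity score per criterion is multiplied by the
# criterion weight and summed; systems are then ranked by total risk.
# All numbers below are illustrative.

criteria_weights = {"PC": 0.40, "EA": 0.25, "EO": 0.20, "MC": 0.15}

severity = {
    "electrical": {"PC": 0.9, "EA": 0.8, "EO": 0.9, "MC": 0.6},
    "roofing":    {"PC": 0.7, "EA": 0.6, "EO": 0.5, "MC": 0.7},
    "ceiling":    {"PC": 0.3, "EA": 0.2, "EO": 0.3, "MC": 0.4},
}

def risk(scores):
    """Weighted sum of a system's per-criterion severity scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranking = sorted(severity, key=lambda sys: risk(severity[sys]),
                 reverse=True)
```

In a full AHP workflow the criteria weights themselves come from pairwise comparison matrices rather than being fixed by hand as here.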
Application of optical deformation analysis system on wedge splitting test and its inverse analysis
DEFF Research Database (Denmark)
Skocek, Jan; Stang, Henrik
2010-01-01
Results of the inverse analysis are compared with traditional inverse analysis based on clip gauge data. Then the optically measured crack profile and crack tip position are compared with predictions made by the non-linear hinge model and a finite element analysis. It is shown that the inverse analysis based on the optically measured data can provide material parameters of the fictitious crack model matching favorably those obtained by classical inverse analysis based on the clip gauge data. Further advantages of using the optical deformation analysis lie in the identification of such effects…
Recovery of material parameters of soft hyperelastic tissue by an inverse spectral technique
Gou, Kun
2012-07-01
An inverse spectral method is developed for recovering a spatially inhomogeneous shear modulus for soft tissue. The study is motivated by a novel use of the intravascular ultrasound technique to image arteries. The arterial wall is idealized as a nonlinear isotropic cylindrical hyperelastic body. A boundary value problem is formulated for the response of the arterial wall within a specific class of quasistatic deformations reflective of the response due to imposed blood pressure. Subsequently, a boundary value problem is developed via an asymptotic construction modeling intravascular ultrasound interrogation which generates small amplitude, high frequency time harmonic vibrations superimposed on the static finite deformation. This leads to a system of second order ordinary Sturm-Liouville boundary value problems that are then employed to reconstruct the shear modulus through a nonlinear inverse spectral technique. Numerical examples are demonstrated to show the viability of the method. © 2012 Elsevier Ltd. All rights reserved.
An approach to quantum-computational hydrologic inverse analysis.
O'Malley, Daniel
2018-05-02
Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
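On an annealer, an inverse problem like this is cast as a QUBO (quadratic unconstrained binary optimization), which the hardware minimizes over binary variables. The snippet below is a classical stand-in that minimizes a tiny QUBO by exhaustive search; the Q matrix is illustrative, not a hydrologic model.

```python
from itertools import product

def qubo_energy(q, x):
    """E(x) = sum_ij Q[i][j] * x_i * x_j for a binary vector x."""
    n = len(x)
    return sum(q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_minimum(q):
    """Exhaustively search all 2^n binary assignments (tiny n only)."""
    n = len(q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(q, x))

# Illustrative upper-triangular Q: diagonal terms reward setting bits,
# off-diagonal terms penalize setting neighbouring bits together.
Q = [[-1.0, 2.0, 0.0],
     [0.0, -1.0, 2.0],
     [0.0, 0.0, -1.0]]

best = brute_force_minimum(Q)
```

An annealer samples low-energy states of the same objective physically, which is what makes problems of this form a natural fit for the hardware.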
Uncertainty analysis techniques
International Nuclear Information System (INIS)
Marivoet, J.; Saltelli, A.; Cadelli, N.
1987-01-01
The origin of the uncertainty affecting performance assessments, as well as its propagation to dose and risk results, is discussed. The analysis focuses essentially on the uncertainties introduced by the input parameters, the values of which may range over some orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves, determining their convergence and confidence limits. Annual doses, expectation values of the doses and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean and the median, as well as their ratios. The report concludes that, provisionally, due to its better robustness, an estimator such as the 90th percentile may be substituted for the arithmetic mean when comparing the estimated doses with acceptance criteria. In any case, the results obtained through uncertainty analyses must be interpreted with caution as long as input data distribution functions are not derived from experiments reasonably reproducing the situation in a well characterized repository and site.
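The Monte Carlo sampling and percentile comparison can be sketched as follows; the lognormal input distribution and the toy transfer model below are illustrative, not the repository model.

```python
import random
import statistics

# Propagate a wide (lognormal) input parameter through a toy transfer
# model and compare mean, median and 90th percentile of the output.
# Distribution parameters and the model are illustrative only.

random.seed(1)
doses = []
for _ in range(20000):
    k = random.lognormvariate(0.0, 1.5)  # spans orders of magnitude
    doses.append(1e-6 * k)               # toy dose transfer model

doses.sort()
mean_dose = statistics.fmean(doses)
median_dose = doses[len(doses) // 2]
p90 = doses[int(0.9 * len(doses))]
```

With such a skewed input, the mean sits well above the median while the 90th percentile is a far more stable summary across repeated runs, which is the robustness argument made above.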
Directory of Open Access Journals (Sweden)
Marcelo Ribeiro dos Santos
2014-01-01
Full Text Available During machining, energy is transformed into heat due to plastic deformation of the workpiece surface and friction between tool and workpiece. High temperatures are generated in the region of the cutting edge, and they have a very important influence on the wear rate of the cutting tool and on tool life. This work proposes the estimation of the heat flux at the chip-tool interface using inverse techniques. Factors which influence the temperature distribution at the rake face of an AISI M32C high speed steel tool during machining of an ABNT 12L14 steel workpiece were also investigated. The temperature distribution was predicted using finite volume elements. A transient 3D numerical code using an irregular and non-staggered mesh was developed to solve the nonlinear heat diffusion equation. To validate the software, experimental tests were performed. The inverse problem was solved using the function specification method. Heat fluxes at the tool-workpiece interface were estimated using inverse problem techniques and experimental temperatures. Tests were performed to study the effect of cutting parameters on cutting edge temperature. The results were compared with those of the tool-work thermocouple technique and a fair agreement was obtained.
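The inverse estimation step can be sketched with a much simpler lumped thermal model and a grid search over candidate fluxes. The paper uses a 3D finite-volume model and the function specification method; everything below is a toy stand-in.

```python
def simulate(q, steps=50, dt=0.1, c=2.0, h=0.5, t_amb=20.0):
    """Lumped capacitance: c*dT/dt = q - h*(T - T_amb), explicit Euler.
    All parameters are illustrative."""
    T, out = t_amb, []
    for _ in range(steps):
        T += dt * (q - h * (T - t_amb)) / c
        out.append(T)
    return out

# Synthetic "measured" temperatures produced by a known flux of 12 W,
# then recovered by least-squares matching over a grid of candidates.
measured = simulate(12.0)
candidates = [i * 0.5 for i in range(1, 61)]  # 0.5 .. 30.0 W
best_q = min(candidates,
             key=lambda q: sum((m - s) ** 2
                               for m, s in zip(measured, simulate(q))))
```

Real inverse heat conduction replaces the grid search with sequential function specification to stabilize the estimate against measurement noise, but the match-simulation-to-measurement structure is the same.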
A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study
Energy Technology Data Exchange (ETDEWEB)
Giantsoudi, D. [Department of Radiological Sciences, University of Texas Health Sciences Center, San Antonio, Texas 78229 (United States); Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114 (United States); Baltas, D. [Department of Medical Physics and Engineering, Strahlenklinik, Klinikum Offenbach GmbH, 63069 Offenbach (Germany); Nuclear and Particle Physics Section, Physics Department, University of Athens, 15701 Athens (Greece); Karabis, A. [Pi-Medical Ltd., Athens 10676 (Greece); Mavroidis, P. [Department of Radiological Sciences, University of Texas Health Sciences Center, San Antonio, Texas 78299 and Department of Medical Radiation Physics, Karolinska Institutet and Stockholm University, 17176 (Sweden); Zamboglou, N.; Tselis, N. [Strahlenklinik, Klinikum Offenbach GmbH, 63069 Offenbach (Germany); Shi, C. [St. Vincent's Medical Center, 2800 Main Street, Bridgeport, Connecticut 06606 (United States); Papanikolaou, N. [Department of Radiological Sciences, University of Texas Health Sciences Center, San Antonio, Texas 78299 (United States)
2013-04-15
Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
A robust spatial filtering technique for multisource localization and geoacoustic inversion.
Stotts, S A
2005-07-01
Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
Inverse kinetics technique for reactor shutdown measurement: an experimental assessment. [AGR]
Energy Technology Data Exchange (ETDEWEB)
Lewis, T. A.; McDonald, D.
1975-09-15
It is proposed to use the Inverse Kinetics Technique to measure the subcritical reactivity as a function of time during the testing of the nitrogen injection systems on AGRs. A description is given of an experimental assessment of the technique by investigating known transients created by control rod movements on a small experimental reactor, (2m high, 1m radius). Spatial effects were observed close to the moving rods but otherwise derived reactivities were independent of detector position and agreed well with the existing calibrations. This prompted the suggestion that data from installed reactor instrumentation could be used to calibrate CAGR control rods.
Lag profile inversion method for EISCAT data analysis
Directory of Open Access Journals (Sweden)
I. I. Virtanen
2008-03-01
Full Text Available The present standard EISCAT incoherent scatter experiments are based on alternating codes that are decoded in the power domain by simple summation and subtraction operations. The signal is first digitised and then different lagged products are calculated and decoded in real time. Only the decoded lagged products are saved for further analysis, so that both the original data samples and the undecoded lagged products are lost. A fit of plasma parameters can later be performed using the recorded lagged products. In this paper we describe a different analysis method, which makes use of statistical inversion in removing range ambiguities from the lag profiles. An analysis program carrying out both the lag profile inversion and the fit of the plasma parameters has been constructed. Because recording the received signal itself instead of the lagged products allows very flexible data analysis, the program is constructed to use raw data, i.e. the IQ-sampled signal recorded from an IF stage of the radar. The program is now capable of analysing standard alternating-coded EISCAT experiments as well as experiments with any other kind of radar modulation if raw data is available. The program calculates the ambiguous lag profiles and is capable of inverting them as such but, for analysis in real time, time integration is needed before inversion. We demonstrate the method using alternating code experiments in the EISCAT UHF radar and specific hardware connected to the second IF stage of the receiver. This method produces a data stream of complex samples, which are stored for later processing. The raw data is analysed with lag profile inversion and the results are compared to those given by the standard method.
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.
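The weighted minimum-norm reading of such inversions can be sketched in a few lines. This is a generic illustration in the spirit of the comparison above, not the renormalization algorithm itself: the sensitivity matrix, the single point release and the diagonal weight choice are all invented for the example.

```python
import numpy as np

# Hypothetical sensitivity matrix A (3 receptors x 5 source grid cells)
rng = np.random.default_rng(0)
A = rng.random((3, 5))
s_true = np.array([0.0, 2.0, 0.0, 0.0, 0.0])   # single point release
mu = A @ s_true                                 # noise-free measurements

# Weight matrix W encodes a priori information visible to the network;
# here a simple diagonal choice proportional to column sensitivity.
w = np.sum(A**2, axis=0)
W = np.diag(w / w.sum())

# Weighted minimum-norm estimate: s = W A^T (A W A^T)^-1 mu
G = A @ W @ A.T                                 # weighted Gram matrix
s_est = W @ A.T @ np.linalg.solve(G, mu)

print(np.allclose(A @ s_est, mu))               # -> True: reproduces the data
```

Any subset of cells can carry the source; the weighted Gram matrix plays the role of the measurement-space covariance, as in the interpretations listed above.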
Exergy analysis for combined regenerative Brayton and inverse Brayton cycles
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zelong; Chen, Lingen; Sun, Fengrui [College of Naval Architecture and Power, Naval University of Engineering, Wuhan 430033 (China)
2012-07-01
This paper presents the study of exergy analysis of combined regenerative Brayton and inverse Brayton cycles. The analytical formulae of exergy loss and exergy efficiency are derived. The largest exergy loss location is determined. By taking the maximum exergy efficiency as the objective, the choice of bottom cycle pressure ratio is optimized by detailed numerical examples, and the corresponding optimal exergy efficiency is obtained. The influences of various parameters on the exergy efficiency and other performances are analyzed by numerical calculations.
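The optimization step described here, choosing the bottom cycle pressure ratio for maximum exergy efficiency, amounts to a one-dimensional parameter search. The sketch below uses a purely illustrative stand-in for the paper's exergy-efficiency formula (which the abstract does not reproduce):

```python
import numpy as np

# Illustrative only: eta(r) is a stand-in for the cycle's exergy-efficiency
# formula, chosen to have an interior maximum in the bottom-cycle pressure
# ratio r (rising recovery benefit minus a linear loss term).
def eta(r):
    return 0.55 * (1.0 - np.exp(-2.0 * (r - 1.0))) - 0.04 * (r - 1.0)

# Scan the pressure ratio and pick the optimum, as in the numerical examples
r_grid = np.linspace(1.0, 6.0, 5001)
r_opt = r_grid[np.argmax(eta(r_grid))]
print(round(r_opt, 3), round(eta(r_opt), 3))
```

With a real cycle model, the same grid scan (or a derivative-based search) returns the optimal bottom cycle pressure ratio and the corresponding maximum exergy efficiency.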
Interpretation and inverse analysis of the wedge splitting test
DEFF Research Database (Denmark)
Østergaard, Lennart; Stang, Henrik
2002-01-01
… to the wedge splitting test, and that it is well suited for the interpretation of test results in terms of s(w). A fine agreement between the hinge and FEM models has been found. It has also been found that the test and the hinge model form a solid basis for inverse analysis. The paper also discusses possible three-dimensional problems in the experiment as well as the influence of specimen size.
Determining the metallicity of the solar envelope using seismic inversion techniques
Buldgen, G.; Salmon, S. J. A. J.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.
2017-11-01
The solar metallicity issue is a long-standing problem of astrophysics, impacting multiple fields and still subject to debate and uncertainties. While spectroscopy has mostly been used to determine the solar heavy-element abundance, helioseismologists have attempted to provide a seismic determination of the metallicity in the solar convective envelope. However, the puzzle remains, since two independent groups provided two radically different values for this crucial astrophysical parameter. We aim to provide an independent seismic measurement of the solar metallicity in the convective envelope. Our main goal is to help provide new information to break the current stalemate amongst seismic determinations of the solar heavy element abundance. We start by presenting the kernels, the inversion technique and the target function of the inversion we have developed. We then test our approach in multiple hare-and-hounds exercises to assess its reliability and accuracy. We then apply our technique to solar data using calibrated solar models and determine an interval of seismic measurements for the solar metallicity. We show that our inversion can indeed be used to estimate the solar metallicity thanks to our hare-and-hounds exercises. However, we also show that further dependencies in the physical ingredients of solar models lead to a low accuracy. Nevertheless, using various physical ingredients for our solar models, we determine metallicity values between 0.008 and 0.014.
International Nuclear Information System (INIS)
Lopez, C.; Koski, J.A.; Razani, A.
2000-01-01
A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to those of a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360 deg, 180 deg, and 90 deg sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the heat flux estimates from SODDIT with the heat flux calculated from the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360 deg, 180 deg, and 90 deg cases, respectively.
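A minimal one-dimensional inverse heat conduction sketch (not SODDIT itself) shows the general structure: a discrete forward operator maps the surface flux history to temperatures, and a regularized least-squares solve inverts it. The lumped-capacitance forward model and all numbers here are illustrative assumptions.

```python
import numpy as np

# 1-D inverse heat conduction sketch: recover a surface heat-flux history
# q(t) from temperatures T = K q, with Tikhonov regularization to stabilize
# the (generally ill-conditioned) inversion.
n, dt = 50, 1.0
t = dt * np.arange(n)

# Toy forward operator: lumped-capacitance response, T(t) = sum of past flux
K = dt * np.tril(np.ones((n, n)))

q_true = np.where((t >= 10) & (t < 30), 1.0, 0.0)   # step heating pulse
T = K @ q_true

lam = 1e-8                                          # regularization parameter
q_est = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ T)

rel_err = np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true)
print(rel_err < 0.01)                               # -> True
```

With noisy data the regularization parameter must be raised, trading resolution for stability; a one-dimensional operator like this one cannot represent the azimuthally localized heating that caused the larger errors above.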
Directory of Open Access Journals (Sweden)
J. S. de Villiers
2014-10-01
This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, the ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GIC in power systems. These GIC may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east–west along given surface positions are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having the magnetic north and down components, and the electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber of elementary geomagnetic fields using the Levenberg–Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the Layered-Earth model is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of simulated values. This technique has applications for modelling the currents of electrojets at the equator and in auroral regions, as well as currents in the magnetosphere.
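The parameter fit described above can be sketched with a small hand-rolled Levenberg–Marquardt loop. The field model below follows the geometry of the abstract (an infinite east–west line current at height h above surface position x0 producing north and down components on the ground), but the normalized units, station layout and damping schedule are all assumptions of the sketch.

```python
import numpy as np

# Hypothetical normalized units: lengths in 100 km, mu0*I/(2*pi) absorbed
# into I. An infinite east-west line current I at height h above surface
# position x0 gives ground-level components (up to sign conventions):
#   B_north = I*h / r^2,  B_down = I*(x - x0) / r^2,  r^2 = (x - x0)^2 + h^2
def model(p, x):
    I, h, x0 = p
    r2 = (x - x0) ** 2 + h ** 2
    return np.concatenate([I * h / r2, I * (x - x0) / r2])

x = np.linspace(-5.0, 5.0, 21)              # magnetometer stations
p_true = np.array([2.0, 1.5, 0.3])          # current strength, height, position
data = model(p_true, x)                     # simulated variation field

def levenberg_marquardt(resid, p, n_iter=50, lam=1e-3, eps=1e-6):
    p = p.astype(float)
    for _ in range(n_iter):
        r = resid(p)
        J = np.empty((r.size, p.size))      # forward-difference Jacobian
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (resid(p + dp) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.linalg.norm(resid(p + step)) < np.linalg.norm(r):
            p, lam = p + step, lam * 0.5    # accept step, relax damping
        else:
            lam *= 10.0                     # reject step, increase damping
    return p

p_fit = levenberg_marquardt(lambda p: model(p, x) - data, np.array([1.0, 1.0, 0.0]))
print(p_fit)
```

The damping parameter interpolates between gradient descent (robust, slow) and Gauss–Newton (fast near the optimum), which is what makes the method suitable for this kind of nonlinear source-parameter fit.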
Inversion methods for analysis of neutron brightness measurements in tokamaks
International Nuclear Information System (INIS)
Gorini, G.; Gottardi, N.
1990-02-01
The problem of determining neutron emissivity from neutron brightness measurements in magnetic fusion plasmas is addressed. In the case of two-dimensional measurements with two orthogonal cameras, a complete, tomographic analysis of the data can in principle be performed. The results depend critically on the accuracy of the measurements, and alternative solutions can be sought under the assumption of a known emissivity topology (Generalized Abel Inversion). In this work, neutron brightness data from the JET tokamak have been studied with both methods. We find that with the present experimental uncertainty (at the 10-20% level) the Abel inversion method works best, while two-dimensional information cannot in general be deduced. This is confirmed by studies of the error propagation in the inversion using artificial data, which are also presented here. An important application of emissivity profile information is the determination of the plasma deuterium temperature profile, T_D(R). Results are presented here from the analysis of JET data and the errors in T_D(R) are discussed in some detail. It is found that, for typical JET plasma conditions, the dominant source of uncertainty arises from the high plasma impurity level and the fact that it is poorly known; these problems can be expected to be remedied, and neutron brightness measurements would then be expected to be very effective (especially in high density plasmas) as a T_D(R) diagnostic. (author)
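Assuming a known (circular) emissivity topology, the Abel-type inversion reduces to "onion peeling" over concentric rings, which can be sketched as follows. This is a generic Abel inversion on synthetic data, not JET's analysis code:

```python
import numpy as np

# Onion-peeling Abel inversion: recover the radial emissivity eps(r) from
# line-integrated brightness b(y) measured along parallel chords.
N, R = 40, 1.0
edges = np.linspace(0.0, R, N + 1)          # ring boundaries
y = 0.5 * (edges[:-1] + edges[1:])          # chord impact parameters

def half_chord(r2, y):
    return np.sqrt(np.maximum(r2 - y ** 2, 0.0))

# Path-length matrix: L[i, j] = length of chord i inside ring j
L = np.zeros((N, N))
for i in range(N):
    L[i] = 2.0 * (half_chord(edges[1:] ** 2, y[i])
                  - half_chord(edges[:-1] ** 2, y[i]))

b = 2.0 * np.sqrt(R ** 2 - y ** 2)          # brightness of a uniform source
eps = np.linalg.solve(L, b)                 # inversion (triangular system)
print(np.allclose(eps, 1.0))                # -> True: recovers eps(r) = 1
```

The triangular structure of L is what makes the inversion direct; with noisy brightness data the same matrix is usually inverted with regularization, since errors amplify from the edge chords inward, consistent with the error-propagation studies mentioned above.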
Analysis and analytical techniques
Energy Technology Data Exchange (ETDEWEB)
Batuecas Rodriguez, T [Department of Chemistry and Isotopes, Junta de Energia Nuclear, Madrid (Spain)
1967-01-01
The technology associated with the use of organic coolants in nuclear reactors depends to a large extent on the determination and control of their physical and chemical properties, and particularly on the viability, speed, sensitivity, precision and accuracy (depending on the intended usage) of the methods employed in detection and analytical determination. This has led to the study and development of numerous techniques, some specially designed for the extreme conditions involved in working with the types of product in question and others adapted from existing techniques. In the specific case of polyphenyl and hydropolyphenyl mixtures, which have been the principal subjects of study to date and offer greatest promise, the analytical problems are broadly as follows: Composition of initial product or virgin coolant: composition of macro components and amounts of organic and inorganic impurities; Coolant during and after operation: determination of gases and organic compounds produced by pyrolysis and radiolysis (degradation and polymerization products); Control of systems for purifying and regenerating the coolant after use: dissolved pressurization gases; Detection of intermediate products during decomposition, which are generally very unstable (free radicals); Degree of fouling and film formation: tests to determine potential formation of films; Corrosion of structural elements and canning materials; Health and safety: toxicity, inflammability and impurities that can be activated. Although some of the above problems are closely interrelated and entail similar techniques, they vary as to degree of difficulty. Another question is the difficulty of distinguishing clearly between techniques for determining physical and physico-chemical properties, on the one hand, and analytical techniques on the other. Any classification is therefore somewhat arbitrary (for example, in the case of dosimetry and techniques for determining mean molecular weights or electrical conductivity).
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple 'truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm² of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr⁻¹ per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr⁻¹) and southeast (-24.2 and -27.9 cm yr⁻¹), with small mass gains (+1.4 to +7.7 cm yr⁻¹) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
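The least-squares basin inversion can be sketched in one dimension: leakage is modelled as a blur, and the basin amplitudes are recovered by fitting the blurred basin indicator functions to the data. Everything here (the blur kernel, basin edges and amplitudes) is invented for illustration and has no relation to the real GRACE processing chain.

```python
import numpy as np

# Toy 1-D "Greenland": leakage modelled as a Gaussian blur of the true mass
# field; basin amplitudes recovered by least-squares inversion.
n = 60
x = np.arange(n)
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2)
kernel /= kernel.sum()
blur = lambda f: np.convolve(f, kernel, mode='same')

basins = [(0, 20), (20, 40), (40, 60)]              # pre-determined basin edges
a_true = np.array([3.0, -1.0, 2.0])                 # mass-rate amplitude per basin
field = np.zeros(n)
for (lo, hi), a in zip(basins, a_true):
    field[lo:hi] = a
obs = blur(field)                                   # leaked, GRACE-like data

# Design matrix: blurred indicator function of each basin
A = np.column_stack([blur((x >= lo) & (x < hi)) for lo, hi in basins])
a_est, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(np.allclose(a_est, a_true))                   # -> True
```

The naive basin average of `obs` would be biased by leakage from the neighbouring basins; fitting the blurred indicators recovers the amplitudes because the forward (leakage) operator is included in the design matrix.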
International Nuclear Information System (INIS)
Kravtsov, Y.A.; Kravtsov, Y.A.; Chrzanowski, J.; Mazon, D.
2011-01-01
A new procedure for plasma polarimetry data inversion is suggested, which fits a two-parameter knowledge-based plasma model to the measured parameters (azimuthal and ellipticity angles) of the polarization ellipse. The knowledge-based model is assumed to use the magnetic field and electron density profiles obtained from magnetic measurements and LIDAR Thomson scattering data. In distinction to traditional polarimetry, polarization evolution along the ray is determined on the basis of the angular variables technique (AVT). The paper contains a few examples of numerical solutions of these equations, which are applicable in conditions where the Faraday and Cotton-Mouton effects are simultaneously strong. (authors)
A new recoil distance technique using low energy coulomb excitation in inverse kinematics
Energy Technology Data Exchange (ETDEWEB)
Rother, W., E-mail: wolfram.rother@googlemail.com [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Dewald, A.; Pascovici, G.; Fransen, C.; Friessner, G.; Hackstein, M. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Ilie, G. [Wright Nuclear Structure Laboratory, Yale University, New Haven, CT 06520 (United States); National Institute of Physics and Nuclear Engineering, P.O. Box MG-6, Bucharest-Magurele (Romania); Iwasaki, H. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Jolie, J. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Melon, B. [Dipartimento di Fisica, Universita di Firenze and INFN Sezione di Firenze, Sesto Fiorentino (Firenze) I-50019 (Italy); Petkov, P. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); INRNE-BAS, Sofia (Bulgaria); Pfeiffer, M. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Pissulla, Th. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Bundesumweltministerium, Robert-Schuman-Platz 3, D - 53175 Bonn (Germany); Zell, K.-O. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Jakobsson, U.; Julin, R.; Jones, P.; Ketelhut, S.; Nieminen, P.; Peura, P. [Department of Physics, University of Jyvaeskylae, P.O. Box 35, FI-40014 (Finland); and others
2011-10-21
We report on the first experiment combining the Recoil Distance Doppler Shift technique and multistep Coulomb excitation in inverse kinematics at beam energies of 3-10 A MeV. The setup involves a standard plunger device equipped with a degrader foil instead of the normally used stopper foil. An array of particle detectors is positioned at forward angles to detect target-like recoil nuclei which are used as a trigger to discriminate against excitations in the degrader foil. The method has been successfully applied to measure lifetimes in ¹²⁸Xe and is suited to be a useful tool for experiments with radioactive ion beams.
International Nuclear Information System (INIS)
Ganapol, B.D.; Sumini, M.
1990-01-01
The time-dependent, space second-order discrete form of the monokinetic transport equation is given an analytical solution within the Laplace transform domain. The A_N dynamic model is presented and the general resolution procedure is worked out. The solution in the time domain is then obtained through the application of a numerical transform inversion technique. The justification for the research lies in the need to produce reliable and physically meaningful transport benchmarks for dynamic calculations. The paper is concluded by a few results followed by some physical comments.
The application of neural network techniques to magnetic and optical inverse problems
International Nuclear Information System (INIS)
Jones, H.V.
2000-12-01
The processing power of the computer has increased at unimaginable rates over the last few decades. However, even today's fastest computers can take several hours to find solutions to some mathematical problems, and there are instances where a high-powered supercomputer may be impractical, with the need for near-instant solutions just as important (such as in an on-line testing system). This led us to believe that such complex problems could be solved using a novel approach, whereby the system has prior knowledge about the expected solutions through a process of learning. One method of approaching this kind of problem is machine learning: just as a human can be trained and is able to learn from past experiences, a machine can do just the same. This is the concept of neural networks. The research conducted involves the investigation of various neural network techniques and their applicability to solving some known complex inverse problems in the field of magnetic and optical recording. In some cases a comparison is also made to more conventional methods of solving the problems, from which it was possible to outline some key advantages of the neural network approach. We initially investigated the application of neural networks to transverse susceptibility data in order to determine anisotropy distributions. This area of research is proving to be very important, as it gives us information about the switching field distribution, which determines the minimum transition width achievable in a medium and affects the overwrite characteristics of the media. Secondly, we investigated a similar situation applied to an optical problem: the determination of important compact disc parameters from the diffraction pattern of a laser from a disc. This technique was then intended for use in an on-line testing system. Finally, we investigated another area of neural networks with the analysis of magnetisation maps and …
IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION
International Nuclear Information System (INIS)
Casini, R.; Lites, B. W.; Ramos, A. Asensio; Ariste, A. López
2013-01-01
We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 4n bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
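The indexing idea can be sketched directly: each database model gets a binary index built from the signs of its leading PCA coefficients, and a query only searches the bucket sharing its index. The synthetic database below is random noise, purely to show the mechanics of the partitioning.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_wav, n_orders = 500, 32, 2      # 4 Stokes params x 2 orders -> 8 bits

# Synthetic database: one profile per Stokes parameter per model
db = rng.standard_normal((n_models, 4, n_wav))

# Leading principal components per Stokes parameter (rows of V^T from the SVD)
pcs = [np.linalg.svd(db[:, s, :], full_matrices=False)[2][:n_orders]
       for s in range(4)]

def index_of(profiles):
    """Binary index from the signs of the leading PCA coefficients."""
    bits = 0
    for s in range(4):
        for c in pcs[s] @ profiles[s]:
            bits = (bits << 1) | int(c >= 0)
    return bits

# The indices partition the database into buckets of "compatible" models
buckets = {}
for m in range(n_models):
    buckets.setdefault(index_of(db[m]), []).append(m)

# An observation is only compared against the models sharing its index
obs = db[123]
print(123 in buckets[index_of(obs)])        # -> True
```

With 4n sign bits the database splits into up to 2^(4n) buckets, which is the source of the ideal acceleration factor quoted above; noise flipping a near-zero coefficient's sign is what degrades this in practice.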
Basis set expansion for inverse problems in plasma diagnostic analysis
Jones, B.; Ruiz, C. L.
2013-07-01
A basis set expansion method [V. Dribinski, A. Ossadtchi, V. A. Mandelshtam, and H. Reisler, Rev. Sci. Instrum. 73, 2634 (2002); doi:10.1063/1.1482156] is applied to recover physical information about plasma radiation sources from instrument data, which has been forward transformed due to the nature of the measurement technique. This method provides a general approach for inverse problems, and we discuss two specific examples relevant to diagnosing fast z pinches on the 20-25 MA Z machine [M. E. Savage, L. F. Bennett, D. E. Bliss, W. T. Clark, R. S. Coats, J. M. Elizondo, K. R. LeChien, H. C. Harjes, J. M. Lehr, J. E. Maenchen, D. H. McDaniel, M. F. Pasik, T. D. Pointon, A. C. Owen, D. B. Seidel, D. L. Smith, B. S. Stoltzfus, K. W. Struve, W. A. Stygar, L. K. Warne, J. R. Woodworth, C. W. Mendel, K. R. Prestwich, R. W. Shoup, D. L. Johnson, J. P. Corley, K. C. Hodge, T. C. Wagoner, and P. E. Wakeland, in Proceedings of the Pulsed Power Plasma Sciences Conference (IEEE, 2007), p. 979]. First, Abel inversion of time-gated, self-emission x-ray images from a wire array implosion is studied. Second, we present an approach for unfolding neutron time-of-flight measurements from a deuterium gas puff z pinch to recover information about emission time history and energy distribution. Through these examples, we discuss how noise in the measured data limits the practical resolution of the inversion, and how the method handles discontinuities in the source function and artifacts in the projected image. We add to the method a propagation of errors calculation for estimating uncertainties in the inverted solution.
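A minimal version of the basis-set expansion is easy to write down: expand the unknown source in a small set of smooth basis functions, push the basis through the (linear) forward transform, and fit the coefficients. The Gaussian basis and the integrating forward operator below are illustrative stand-ins, not the diagnostics described above.

```python
import numpy as np

# Basis-set expansion for a linear inverse problem: expand the unknown
# source f(x) in smooth Gaussian basis functions, fit the coefficients to
# the forward-transformed data, then reconstruct f and its uncertainty.
n = 100
x = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.1, 0.9, 5)
G = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.08) ** 2)  # basis

P = np.tril(np.ones((n, n))) / n            # forward transform (integration)
c_true = np.array([1.0, 0.2, -0.5, 0.8, 0.3])
f_true = G @ c_true
d = P @ f_true                              # instrument data

A = P @ G                                   # basis projected through instrument
c_est, *_ = np.linalg.lstsq(A, d, rcond=None)
f_est = G @ c_est

# Propagation of errors: coefficient covariance for unit data variance
cov_c = np.linalg.inv(A.T @ A)

print(np.allclose(f_est, f_true))           # -> True
```

Because the unknown is expanded in a handful of smooth functions, the inversion is well-posed even when the raw pixel-by-pixel unfolding would be noise-dominated; the covariance of the fitted coefficients propagates directly to an uncertainty band on the reconstructed source.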
International Nuclear Information System (INIS)
Boaga, J; Vignoli, G; Cassiani, G
2011-01-01
Inversion is a critical step in all geophysical techniques, and is generally fraught with ill-posedness. In the case of seismic surface wave studies, the inverse problem can lead to different equivalent subsoil models and consequently to different local seismic response analyses. This can have a large impact on an earthquake engineering design. In this paper, we discuss the consequences of non-uniqueness of surface wave inversion on seismic responses, with both numerical and experimental data. Our goal is to evaluate the consequences on common seismic response analysis in the case of different impedance contrast conditions. We verify the implications of inversion uncertainty, and consequently of data information content, on realistic local site responses. A stochastic process is used to generate a set of 1D shear wave velocity profiles from several specific subsurface models. All these profiles are characterized as being equivalent, i.e. their responses, in terms of a dispersion curve, are compatible with the uncertainty in the same surface wave data. The generated 1D shear velocity models are then subjected to a conventional one-dimensional seismic ground response analysis using a realistic input motion. While recent analyses claim that the consequences of surface wave inversion uncertainties are very limited, our test points out that a relationship exists between inversion confidence and seismic responses in different subsoils. In the case of regular and relatively smooth increase of shear wave velocities with depth, as is usual in sedimentary plains, our results show that the choice of a specific model among equivalent solutions strongly influences the seismic response. On the other hand, when the shallow subsoil is characterized by a strong impedance contrast (thus revealing a characteristic soil resonance period), as is common in the presence of a shallow bedrock, equivalent solutions provide practically the same seismic amplification, especially in the
International Nuclear Information System (INIS)
Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern
2010-01-01
This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue by means of the measured displacement field by considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted according to a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by numerical examples.
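The clustering step can be sketched as a simple quantization of the recovered parameter field into a predefined number of intervals; the interval count and the synthetic stiffness data below are invented for the example.

```python
import numpy as np

# Clustering sketch: quantize a recovered stiffness field into a fixed
# number of intervals, so the inverse problem has a chosen, small number
# of unknowns (and incidentally smooths artificial noise).
rng = np.random.default_rng(3)
mu = 10.0 + 5.0 * np.sin(np.linspace(0, np.pi, 200)) \
     + 0.3 * rng.standard_normal(200)

k = 4                                            # predefined number of intervals
edges = np.linspace(mu.min(), mu.max(), k + 1)
labels = np.clip(np.digitize(mu, edges) - 1, 0, k - 1)
mu_quant = np.array([mu[labels == i].mean() for i in range(k)])[labels]

print(len(np.unique(mu_quant)) <= k)             # -> True
```

After quantization the inverse problem optimizes only k representative stiffness values instead of one value per element, which is the cost reduction the paper's clustering is aiming at.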
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
According to direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit (OMP) is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and the numerical results for a pseudo-sine absorption problem, a two-cube problem and a two-cylinder problem using the compressive sensing-based solver agree well with the reference values.
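Orthogonal matching pursuit itself fits in a dozen lines: greedily pick the column most correlated with the residual, refit on the selected support, repeat. The toy measurement matrix below is random rather than a radiographic projection operator, and the sparsity level is assumed known.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 40, 80, 3                       # underdetermined: 40 measurements, 80 unknowns
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(n)
x_true[[5, 17, 60]] = [2.0, -1.5, 1.0]    # sparse absorption coefficients
y = A @ x_true                            # measured projections

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse reconstruction."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # best-matching column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                      # update residual
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_est = omp(A, y, k)
print(np.allclose(x_est, x_true, atol=1e-6))
```

The least-squares refit on the accumulated support is what distinguishes OMP from plain matching pursuit and lets it recover an exactly sparse signal from far fewer measurements than unknowns.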
A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.
van Dongen, Koen W A; Wright, William M D
2006-10-01
Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound-speed or contrast profiles, which can be related to the temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low-frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotic limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
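The conjugate-gradient inversion step can be sketched on a generic linearized (Born-type) problem: solve the normal equations K^T K c = K^T d iteratively. The operator below is a random stand-in for the discretized acoustic forward model.

```python
import numpy as np

# Conjugate-gradient inversion of a linearized (Born-type) scattering
# problem: solve the normal equations K^T K c = K^T d for the contrast c.
rng = np.random.default_rng(4)
K = rng.standard_normal((60, 40))            # stand-in forward (Born) operator
c_true = rng.standard_normal(40)             # contrast / sound-speed perturbation
d = K @ c_true                               # simulated scattered-field data

def conjugate_gradient(M, b, iters=200, tol=1e-12):
    """Standard CG for a symmetric positive-definite system M x = b."""
    x = np.zeros_like(b)
    r = b - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

c_est = conjugate_gradient(K.T @ K, K.T @ d)
print(np.allclose(c_est, c_true, atol=1e-6))
```

CG needs only matrix-vector products, so in the full imaging problem the explicit matrix K is replaced by calls to the forward model and its adjoint (backpropagation), which is exactly the pairing described above.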
Multivariate analysis techniques
Energy Technology Data Exchange (ETDEWEB)
Bendavid, Josh [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Fisher, Wade C. [Michigan State Univ., East Lansing, MI (United States); Junk, Thomas R. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
2016-01-01
The end products of experimental data analysis are designed to be simple and easy to understand: hypothesis tests and measurements of parameters. But, the experimental data themselves are voluminous and complex. Furthermore, in modern collider experiments, many petabytes of data must be processed in search of rare new processes which occur together with much more copious background processes that are of less interest to the task at hand. The systematic uncertainties on the background may be larger than the expected signal in many cases. The statistical power of an analysis and its sensitivity to systematic uncertainty can therefore usually both be improved by separating signal events from background events with higher efficiency and purity.
Energy Technology Data Exchange (ETDEWEB)
Kawamura, S [Nippon Geophysical Prospecting Co. Ltd., Tokyo (Japan)
1996-10-01
Smoothness-constrained least-squares inversion with ABIC minimization was applied to the phase velocities of surface waves in geophysical exploration, to confirm its usefulness. Since this study aimed mainly at the applicability of the technique, the Love wave was used, which is easier to treat theoretically than the Rayleigh wave. Stable successive-approximation solutions could be obtained by repeated improvement of the S-wave velocity model, and an objective model with high reliability could be determined. By contrast, for inversion by simple minimization of the sum of squared residuals, stable solutions could also be obtained by repeated improvement, but the judgment of convergence was very hard, and without the smoothness constraint the obtained model might be left in an over-fitted state. The applicability of the technique to the Rayleigh wave will be investigated next. 8 refs.
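The smoothness-constrained least-squares step can be sketched as a Tikhonov-style solve with a second-difference roughness operator. Note the ABIC-based choice of the trade-off parameter described in the abstract is replaced here by a caller-supplied value; this is only a structural sketch.

```python
import numpy as np

def smooth_lsq(G, d, lam):
    """Smoothness-constrained least squares:
    minimize ||d - G m||^2 + lam^2 * ||D2 m||^2,
    with D2 a second-difference (roughness) operator.
    (The ABIC minimization of the paper would select lam; here it is given.)"""
    m = G.shape[1]
    D2 = np.diff(np.eye(m), n=2, axis=0)     # (m-2) x m roughness matrix
    lhs = G.T @ G + lam**2 * (D2.T @ D2)
    return np.linalg.solve(lhs, G.T @ d)
```

Larger `lam` trades data misfit for a smoother S-wave velocity model, which is exactly what stabilizes the successive approximations described above.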
Seismic Imaging and Velocity Analysis Using a Pseudo Inverse to the Extended Born Approximation
Alali, Abdullah A.
2018-05-01
Prestack depth migration requires an accurate kinematic velocity model to image the subsurface correctly. Wave equation migration velocity analysis techniques aim to update the background velocity model by minimizing image residuals to achieve the correct model. The most commonly used technique is differential semblance optimization (DSO), which depends on applying an image extension and penalizing the energy in the non-physical extension. However, studies show that the conventional DSO gradient is contaminated with artifact noise and unwanted oscillations which might lead to local minima. To deal with this issue and improve the stability of DSO, recent studies proposed using an inversion formula rather than migration to obtain the image. Migration is defined as the adjoint of Born modeling. Since the inversion is complicated and expensive, a pseudo inverse is used instead. A pseudo inverse formula has been developed recently for the horizontal space-shift extended Born. This formula preserves the true amplitude and reduces the artifact noise even when an incorrect velocity is used. Although the theory for such an inverse is well developed, it has only been derived and tested on laterally homogeneous models. This is because the formula contains a derivative of the image with respect to a vertical extension evaluated at zero offset. Implementing the vertical extension is computationally expensive, which means this derivative needs to be computed without applying the additional extension. For laterally invariant models, the inverse simplifies and this derivative is eliminated. I implement the full asymptotic inverse to the extended Born to account for lateral heterogeneity. I compute the derivative of the image with respect to a vertical extension without performing any additional shift. This is accomplished by applying the derivative to the imaging condition and utilizing the chain rule. The fact that this derivative is evaluated at zero offset vertical
The modified inverse hockey stick technique for adjuvant irradiation after mastectomy
International Nuclear Information System (INIS)
Kukolowicz, P.; Selerski, B.; Kuszewski, T.; Wieczorek, A.
2004-01-01
To present the technique used for the irradiation of post-mastectomy patients at the Holycross Cancer Centre in Kielce. The paper presents a detailed description of the technique, referred to as the 'modified inverse hockey stick' (MIHS) technique. The dosimetric characteristics of the MIHS dose distribution are presented, based on dose distributions calculated for 40 patients. The measures used to evaluate dose distribution included the standard deviation of the dose in the Planning Target Volume (PTV), the percentage of the PTV volume receiving a dose larger than 110% or smaller than 90% of the intended dose, the lung volume receiving at least 20 Gy (LV20), and the heart volume receiving at least 30 Gy (HV30). The distribution of the electron beam energies is also presented. The standard deviation of the dose in the PTV was approx. 10% in the majority of patients. About 12% of the PTV volume received a dose more than 10% smaller than intended, and about 10% of the PTV volume received a dose more than 10% greater than intended. For patients irradiated on the left side of the chest wall the LV20 was always less than 25%, and for patients irradiated on the right side always less than 35%, except for one patient in whom it reached 37%. The HV30 was always below 8%. The MIHS technique is a safe and reliable modality. Its main advantages include convenient and easily repeatable positioning of the patient and small doses to the organs at risk. The individually calculated bolus plays an important role in reducing the dose to the lung and heart. The disadvantages of the technique include poor dose homogeneity within the PTV and long matching lines between the electron and photon beams. (author)
Microlocal analysis of a seismic linearized inverse problem
Stolk, C.C.
1999-01-01
The seismic inverse problem is to determine the wavespeed c(x) in the interior of a medium from measurements at the boundary. In this paper we analyze the linearized inverse problem in general acoustic media. The problem is to find a left inverse of the linearized forward map F, or equivalently, to find the
New analysis indicates no thermal inversion in the atmosphere of HD 209458b
International Nuclear Information System (INIS)
Diamond-Lowe, Hannah; Stevenson, Kevin B.; Bean, Jacob L.; Line, Michael R.; Fortney, Jonathan J.
2014-01-01
An important focus of exoplanet research is the determination of the atmospheric temperature structure of strongly irradiated gas giant planets, or hot Jupiters. HD 209458b is the prototypical exoplanet for atmospheric thermal inversions, but this assertion does not take into account recently obtained data or newer data reduction techniques. We reexamine this claim by investigating all publicly available Spitzer Space Telescope secondary-eclipse photometric data of HD 209458b and performing a self-consistent analysis. We employ data reduction techniques that minimize stellar centroid variations, apply sophisticated models to known Spitzer systematics, and account for time-correlated noise in the data. We derive new secondary-eclipse depths of 0.119% ± 0.007%, 0.123% ± 0.006%, 0.134% ± 0.035%, and 0.215% ± 0.008% in the 3.6, 4.5, 5.8, and 8.0 μm bandpasses, respectively. We feed these results into a Bayesian atmospheric retrieval analysis and determine that it is unnecessary to invoke a thermal inversion to explain our secondary-eclipse depths. The data are well fitted by a temperature model that decreases monotonically between pressure levels of 1 and 0.01 bars. We conclude that there is no evidence for a thermal inversion in the atmosphere of HD 209458b.
Soil analysis. Modern instrumental technique
International Nuclear Information System (INIS)
Smith, K.A.
1993-01-01
This book covers traditional methods of analysis and specialist monographs on individual instrumental techniques, which are usually not written with soil or plant analysis specifically in mind. The principles of the techniques are combined with discussions of sample preparation and matrix problems, and with critical reviews of applications in soil science and related disciplines. Individual chapters are processed separately for inclusion in the appropriate databases.
Inverse odds ratio-weighted estimation for causal mediation analysis.
Tchetgen Tchetgen, Eric J
2013-11-20
An important scientific goal of studies in the health and social sciences is increasingly to determine to what extent the total effect of a point exposure is mediated by an intermediate variable on the causal pathway between the exposure and the outcome. A causal framework has recently been proposed for mediation analysis, which gives rise to new definitions, formal identification results and novel estimators of direct and indirect effects. In the present paper, the author describes a new inverse odds ratio-weighted approach to estimate so-called natural direct and indirect effects. The approach, which uses as a weight the inverse of an estimate of the odds ratio function relating the exposure and the mediator, is universal in that it can be used to decompose total effects in a number of regression models commonly used in practice. Specifically, the approach may be used for effect decomposition in generalized linear models with a nonlinear link function, and in a number of other commonly used models such as the Cox proportional hazards regression for a survival outcome. The approach is simple and can be implemented in standard software provided a weight can be specified for each observation. An additional advantage of the method is that it easily incorporates multiple mediators of a categorical, discrete or continuous nature. Copyright © 2013 John Wiley & Sons, Ltd.
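The weighting scheme can be sketched for the simplest case of a binary exposure and a single continuous mediator with no covariates; the logistic specification and data layout below are illustrative assumptions, not the paper's estimator in full generality.

```python
import numpy as np

def logistic_fit(X, a, n_iter=25):
    """Logistic regression of a binary exposure a on X via Newton/IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += np.linalg.solve((X.T * (p * (1 - p))) @ X, X.T @ (a - p))
    return beta

def iorw_weights(A, M):
    """Inverse odds-ratio weights for binary exposure A and mediator M:
    fit logit P(A=1|M) = b0 + bM*M, so that OR(A, M) = exp(bM*A*M),
    and weight each subject by w = 1/OR(A, M).
    Unexposed subjects (A = 0) get weight 1 by construction."""
    X = np.column_stack([np.ones_like(M), M])
    _, bM = logistic_fit(X, A.astype(float))
    return np.exp(-bM * A * M)
```

A weighted regression of the outcome on the exposure using these weights then targets the natural direct effect; subtracting it from the total effect gives the indirect effect.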
Inverse bifurcation analysis: application to simple gene systems
Directory of Open Access Journals (Sweden)
Schuster Peter
2006-07-01
Background: Bifurcation analysis has proven to be a powerful method for understanding the qualitative behavior of gene regulatory networks. In addition to the more traditional forward problem of determining the mapping from parameter space to the space of model behavior, the inverse problem of determining model parameters to result in certain desired properties of the bifurcation diagram provides an attractive methodology for addressing important biological problems. These include understanding how the robustness of qualitative behavior arises from system design as well as providing a way to engineer biological networks with qualitative properties. Results: We demonstrate that certain inverse bifurcation problems of biological interest may be cast as optimization problems involving minimal distances of reference parameter sets to bifurcation manifolds. This formulation allows for an iterative solution procedure based on performing a sequence of eigen-system computations and one-parameter continuations of solutions, the latter being a standard capability in existing numerical bifurcation software. As applications of the proposed method, we show that the problem of maximizing regions of a given qualitative behavior as well as the reverse engineering of bistable gene switches can be modelled and efficiently solved.
Surface analysis the principal techniques
Vickerman, John C
2009-01-01
This completely updated and revised second edition of Surface Analysis: The Principal Techniques, deals with the characterisation and understanding of the outer layers of substrates, how they react, look and function which are all of interest to surface scientists. Within this comprehensive text, experts in each analysis area introduce the theory and practice of the principal techniques that have shown themselves to be effective in both basic research and in applied surface analysis. Examples of analysis are provided to facilitate the understanding of this topic and to show readers how they c
Directory of Open Access Journals (Sweden)
Xiaochao Tang
2013-03-01
With the movement towards implementation of the mechanistic-empirical pavement design guide (MEPDG), accurate determination of pavement layer moduli is vital for predicting critical pavement mechanistic responses. A backcalculation procedure is commonly used to estimate pavement layer moduli from non-destructive falling weight deflectometer (FWD) tests. Backcalculation of flexible pavement layer properties is an inverse problem with known input and output signals, based upon which the unknown parameters of the pavement system are evaluated. In this study, an inverse analysis procedure that combines finite element analysis with a population-based optimization technique, the genetic algorithm (GA), was developed to determine pavement layer structural properties. A lightweight deflectometer (LWD) was used to infer the moduli of instrumented three-layer scaled flexible pavement models. While common practice in backcalculating pavement layer properties still assumes a static FWD load and uses only the peak values of the load and deflections, here a dynamic analysis was conducted to simulate the impulse LWD load. The recorded time histories of the LWD load were used as the known inputs to the pavement system, while the measured time histories of surface central deflections and subgrade deflections, measured with linear variable differential transformers (LVDTs), were taken as the outputs. As a result, consistent pavement layer moduli can be obtained through this inverse analysis procedure.
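The outer optimization loop of such a backcalculation can be sketched as a tiny real-coded genetic algorithm; the objective below is a placeholder for the finite-element-versus-measured deflection misfit, and all operator choices (truncation selection, blend crossover, Gaussian mutation) are illustrative assumptions.

```python
import numpy as np

def ga_minimize(objective, bounds, pop=40, gens=150, seed=0):
    """Tiny real-coded genetic algorithm of the kind used for
    backcalculating layer moduli; objective(x) -> scalar misfit."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    X = rng.uniform(lo, hi, (pop, len(lo)))
    for _ in range(gens):
        f = np.array([objective(x) for x in X])
        elite = X[np.argsort(f)[: pop // 2]]       # truncation selection
        # Blend crossover between random elite parents, plus Gaussian mutation.
        p1 = elite[rng.integers(len(elite), size=pop)]
        p2 = elite[rng.integers(len(elite), size=pop)]
        w = rng.random((pop, 1))
        X = w * p1 + (1 - w) * p2 + rng.normal(0, 0.02, p1.shape) * (hi - lo)
        X = np.clip(X, lo, hi)
        X[0] = elite[0]                            # elitism: keep the best
    f = np.array([objective(x) for x in X])
    return X[np.argmin(f)]
```

In the paper's setting each objective evaluation would run a dynamic finite element simulation of the LWD impulse and compare computed with measured deflection time histories.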
Inverse Optimization and Forecasting Techniques Applied to Decision-making in Electricity Markets
DEFF Research Database (Denmark)
Saez Gallego, Javier
This thesis deals with the development of new mathematical models that support the decision-making processes of market players. It addresses the problems of demand-side bidding, price-responsive load forecasting and reserve determination. From a methodological point of view, we investigate a novel approach to model the response of aggregate price-responsive load as a constrained optimization model, whose parameters are estimated from data by using inverse optimization techniques. The problems tackled in this dissertation are motivated, on one hand, by the increasing penetration of renewable energy... patterns that the load traditionally exhibited. On the other hand, this thesis is motivated by the decision-making processes of market players. In response to these challenges, this thesis provides mathematical models for decision-making under uncertainty in electricity markets. Demand-side bidding refers...
Sodium ion conducting polymer electrolyte membrane prepared by phase inversion technique
Harshlata; Mishra, Kuldeep; Rai, D. K.
2018-04-01
A mechanically stable porous polymer membrane of poly(vinylidene fluoride-hexafluoropropylene) has been prepared by the phase inversion technique using steam as a non-solvent. The membrane possesses a semicrystalline network with enhanced amorphicity, as observed by X-ray diffraction. The membrane has been soaked in an electrolyte solution of 0.5 M NaPF6 in ethylene carbonate/propylene carbonate (1:1) to obtain the gel polymer electrolyte. The porosity and electrolyte uptake of the membrane have been found to be 67% and 220%, respectively. The room-temperature ionic conductivity of the membrane is approximately 0.3 mS cm-1. The conductivity follows Arrhenius behavior with temperature, with an activation energy of 0.8 eV. The membrane possesses a significantly large electrochemical stability window of 5.0 V.
International Nuclear Information System (INIS)
Desesquelles, P.
1997-01-01
Computer Monte Carlo simulations occupy an increasingly important place between theory and experiment. This paper introduces a global protocol for the comparison of model simulations with experimental results. The correlated distributions of the model parameters are determined using an original recursive inversion procedure. Multivariate analysis techniques are used in order to synthesize the experimental information optimally with a minimum number of variables. This protocol is relevant in all fields of physics dealing with event generators and multi-parametric experiments. (authors)
Data analysis of x-ray fluorescence holography by subtracting normal component from inverse hologram
International Nuclear Information System (INIS)
Happo, Naohisa; Hayashi, Kouichi; Hosokawa, Shinya
2010-01-01
X-ray fluorescence holography (XFH) is a powerful technique for determining three-dimensional local atomic arrangements around a specific fluorescing element. However, the raw experimental hologram is predominantly a mixed hologram, i.e., a mixture of holograms generated in the normal and inverse modes, which produces unreliable atomic images. In this paper, we propose a practical method of subtracting the normal component from inverse XFH data by a Fourier transform of the calculated hologram of a model ZnTe cluster. Many spots originating from the normal component could be properly removed using a mask function, and clear atomic images were reconstructed at the correct positions of the model cluster. The method was successfully applied to the analysis of experimental ZnTe single-crystal XFH data. (author)
Towards the mechanical characterization of abdominal wall by inverse analysis.
Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E
2017-02-01
The aim of this study is to characterize the passive mechanical behaviour of the abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective lies in defining a methodology that provides in vivo information on a specific patient without altering mechanical properties, demonstrated here in a mechanical study of the abdomen for hernia purposes. The mechanical tests consisted of pneumoperitoneum tests performed on New Zealand rabbits, in which the inner pressure was varied from 0 mmHg to 12 mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and a nonlinear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H1). After solving the inverse problem, the distributions of the material parameters were obtained along the abdominal surface. The accuracy of the results was evaluated for the last level of pressure. Results revealed a higher value of the shear modulus in a wide stripe along the cranio-caudal direction, associated with the presence of the linea alba in conjunction with the fasciae and rectus abdominis. The nonlinear parameter distribution was smoother, and the location of the higher values varied with the regularization type. Both regularizations proved to yield an accurate predicted displacement field, but H1 produced a smoother material parameter distribution while TVD included some discontinuities. The methodology presented here was able to characterize in vivo the passive nonlinear mechanical response of the abdominal wall. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Popov, V.P.; Semenov, A.L.
1987-01-01
The calibration technique is described, and the metrological characteristics are analysed for a high-voltage generator of the inverse-quadratic function (HGF), a functional unit of the diagnostic system of an electrodynamic analyser of the ionic component of a laser plasma. The results of HGF testing in the range of function time constants τ = 5-25 μs are given. Analysis of the metrological and experimental characteristics shows that the HGF with automatic calibration has quite accurate parameters. The high accuracy of function generation is ensured by the possibility of performing calibration and adjustment under experimental working conditions. An increase of the generated pulse amplitude to several tens of kilovolts is possible. In addition, the possibility of timely adjustment of the function to the required parameter (τ) substantially extends the functional capabilities of the HGF.
Inverse Opal Photonic Crystals as an Optofluidic Platform for Fast Analysis of Hydrocarbon Mixtures.
Xu, Qiwei; Mahpeykar, Seyed Milad; Burgess, Ian B; Wang, Xihua
2018-06-13
Most of the reported optofluidic devices analyze a liquid by measuring its refractive index. Recently, the wettability of a liquid on various substrates has also been used as a key sensing parameter in optofluidic sensors. However, the above-mentioned techniques face challenges in analyzing the relative concentrations of components in an alkane hydrocarbon mixture, as both the refractive indices and the wettabilities of alkane hydrocarbons are very close. Here, we propose to use the volatility of the liquid as the key sensing parameter, correlate it with the optical properties of the liquid inside inverse opal photonic crystals, and construct powerful optofluidic sensors for alkane hydrocarbon identification and analysis. We have demonstrated that, via evaporation of hydrocarbons inside the periodic structure of inverse opal photonic crystals and observation of their reflection spectra, an inverse opal film can be used as a fast-response optofluidic sensor to accurately differentiate pure hydrocarbon liquids, and the relative concentrations of their binary and ternary mixtures, in tens of seconds. In these 3D photonic crystals, pure chemicals with different volatilities have different evaporation rates and can be easily identified via the total drying time. For multicomponent mixtures, the same strategy is applied to determine the relative concentration of each component simply by measuring the drying time at different temperatures. Using this optofluidic sensing platform, we have determined the relative concentrations of ternary hydrocarbon mixtures whose alkane components differ by only one carbon atom, which is a big step toward detailed hydrocarbon analysis for practical use.
Directory of Open Access Journals (Sweden)
Lei Zhang
2015-01-01
In concrete dam construction, it is essential to strengthen the real-time monitoring and scientific management of concrete temperature control. This paper constructs an analysis and inverse-analysis system for temperature stress simulation, based on the various data collected in real time during concrete construction. The system automatically produces the data files for temperature and stress calculation and then performs remote real-time simulation of the temperature stress using high-performance computing techniques, so that the inverse analysis can be carried out on the basis of the monitoring data in the database. It performs automatic feedback calculation according to the error requirement and generates the corresponding curves and charts after automatic processing and analysis of the results. The system thus automates the complex data analysis and preparation work of the simulation process and the complex data adjustment of the inverse-analysis process, which facilitates real-time tracking simulation and feedback analysis of concrete temperature stress during construction, makes it possible to discover problems and take measures in a timely manner, supports adjustment of the construction scheme, and helps ensure project quality.
Investigation of inversion polymorphisms in the human genome using principal components analysis.
Ma, Jianzhong; Amos, Christopher I
2012-01-01
Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases.
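The local-PCA step can be sketched as follows; the 0/1/2 allele-count coding of the genotype matrix is the usual convention and an assumption here. The key observation from the abstract is that PC1 computed inside a non-recurrent inversion region separates the samples into three clusters: the two homozygote orientations at the extremes and the heterozygotes between them.

```python
import numpy as np

def local_pca_scores(genotypes):
    """First principal component scores of an n_samples x n_snps genotype
    matrix (0/1/2 coded) restricted to a candidate inversion region.
    Inversion polymorphisms show up as three clusters along PC1."""
    G = genotypes - genotypes.mean(axis=0)   # center each SNP column
    # First left-singular vector scaled by its singular value = PC1 scores.
    u, s, vt = np.linalg.svd(G, full_matrices=False)
    return u[:, 0] * s[0]
```

Genotyping the inversion then reduces to assigning each sample to one of the three PC1 clusters, e.g. by one-dimensional k-means.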
Inversion, error analysis, and validation of GPS/MET occultation data
Directory of Open Access Journals (Sweden)
A. K. Steiner
The global positioning system meteorology (GPS/MET) experiment was the first practical demonstration of global navigation satellite system (GNSS)-based active limb sounding employing the radio occultation technique. This method measures, as principal observable and with millimetric accuracy, the excess phase path (relative to propagation in vacuum) of GNSS-transmitted radio waves caused by refraction during passage through the Earth's neutral atmosphere and ionosphere in limb geometry. It shows great potential utility for weather and climate system studies in providing a unique combination of global coverage, high vertical resolution and accuracy, long-term stability, and all-weather capability. We first describe our GPS/MET data processing scheme from excess phases via bending angles to the neutral atmospheric parameters refractivity, density, pressure and temperature. Special emphasis is given to the ionospheric correction methodology and the inversion of bending angles to refractivities, where we introduce a matrix inversion technique (instead of the usual integral inversion). The matrix technique is shown to lead to identical results as integral inversion but is more directly extendable to inversion by optimal estimation. The quality of GPS/MET-derived profiles is analyzed with an error estimation analysis employing a Monte Carlo technique. We consider statistical errors together with systematic errors due to upper-boundary initialization of the retrieval by a priori bending angles. Perfect initialization and properly smoothed statistical errors allow for better than 1 K temperature retrieval accuracy up to the stratopause. No initialization and statistical errors yield better than 1 K accuracy up to 30 km but less than 3 K accuracy above 40 km. Given imperfect initialization, biases >2 K propagate down to below 30 km height in unfavorable realistic cases. Furthermore, results of a statistical validation of GPS/MET profiles through comparison
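The matrix-inversion idea (as opposed to evaluating the inversion integral numerically) can be illustrated on the closely related Abel-type problem: discretize the profile into shells on which it is constant, so the forward transform becomes an exactly triangular matrix and inversion is a single linear solve. The shell geometry below is a toy stand-in for the bending-angle-to-refractivity kernel of the paper.

```python
import numpy as np

def abel_matrix(r_edges):
    """Forward-Abel matrix for a profile that is constant on each radial
    shell [r_j, r_{j+1}]:  F(y_i) = sum_j A[i, j] * f_j, with the lines of
    sight tangent at y_i = r_i (the inner shell edges).  Using the exact
    antiderivative of r/sqrt(r^2 - y^2) makes A exact for shell-constant f."""
    n = len(r_edges) - 1
    A = np.zeros((n, n))
    for i in range(n):
        y2 = r_edges[i] ** 2
        for j in range(i, n):                     # upper-triangular kernel
            A[i, j] = 2.0 * (np.sqrt(r_edges[j + 1] ** 2 - y2)
                             - np.sqrt(max(r_edges[j] ** 2 - y2, 0.0)))
    return A

# Inversion of observed F is then just a triangular solve: f = A^{-1} F,
# and the same matrix framing extends naturally to optimal estimation.
```

As the abstract notes, the advantage of the matrix formulation is that regularization or a priori information can be folded in simply by modifying the linear system.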
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Rao, M.M.M.; Sadhuram, Y.; Sridevi, B.; Maneesha, K.; SujithKumar, S.; Prasanna, P.L.; Murthy, K.S.R.
of Bengal during the south-west monsoon season and explore the possibility of reconstructing the acoustic profile of the eddy by the Stochastic Inverse Technique. A simulation experiment on forward and inverse problems for the observed sound velocity perturbation field has...
Optical coherence tomography signal analysis: LIDAR like equation and inverse methods
International Nuclear Information System (INIS)
Amaral, Marcello Magri
2012-01-01
Optical Coherence Tomography (OCT) exploits the backscattering properties of a medium to obtain tomographic images. In a similar way, the LIDAR (Light Detection and Ranging) technique uses these properties to determine atmospheric characteristics, especially the signal extinction coefficient. Exploring this similarity allowed the application of signal inversion methods to OCT images, making it possible to construct images based on the extinction coefficient, an original result until now. The goal of this work was to study, propose, develop and implement algorithms based on OCT signal inversion methodologies with the aim of determining the extinction coefficient as a function of depth. Three inversion methods were used and implemented in LabVIEW: slope, boundary point and optical depth. The associated errors were studied, and real samples (homogeneous and stratified) were used for two- and three-dimensional analysis. The extinction coefficient images obtained from the optical depth method were capable of differentiating air from the sample. The images were studied by applying PCA and cluster analysis, which established the strength of the methodology in determining the sample's extinction coefficient. Moreover, the optical depth methodology was applied to test the hypothesis that there is a correlation between the signal extinction coefficient and enamel demineralization during a cariogenic process. It was possible to observe the variation of the extinction coefficient as a function of depth and its correlation with microhardness variation, showing that in deeper layers its value tends towards that of a healthy tooth, behaving in the same way as the microhardness. (author)
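The slope method named above reduces, for a homogeneous sample, to fitting the decay rate of the log signal (Beer-Lambert attenuation); the double-pass factor of 2 is a common OCT convention and is taken as an assumption here.

```python
import numpy as np

def slope_extinction(z, intensity, double_pass=True):
    """Slope method: in a homogeneous layer the OCT A-scan decays as
    I(z) ~ exp(-2*mu*z) (light traverses the depth twice), so a linear
    fit to ln I(z) yields the extinction coefficient mu from the slope."""
    slope, _ = np.polyfit(z, np.log(intensity), 1)
    return -slope / (2.0 if double_pass else 1.0)
```

The boundary-point and optical-depth methods mentioned in the abstract are LIDAR-style refinements of this same fit that remain stable for stratified samples.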
Children's strategies to solving additive inverse problems: a preliminary analysis
Ding, Meixia; Auxter, Abbey E.
2017-03-01
Prior studies show that elementary school children generally "lack" formal understanding of inverse relations. This study goes beyond lack to explore what children might "have" in their existing conception. A total of 281 students, kindergarten to third grade, were recruited to respond to a questionnaire that involved both contextual and non-contextual tasks on inverse relations, requiring both computational and explanatory skills. Results showed that children demonstrated better performance in computation than explanation. However, many students' explanations indicated that they did not necessarily utilize inverse relations for computation. Rather, they appeared to possess partial understanding, as evidenced by their use of part-whole structure, which is a key to understanding inverse relations. A close inspection of children's solution strategies further revealed that the sophistication of children's conception of part-whole structure varied in representation use and unknown quantity recognition, which suggests rich opportunities to develop students' understanding of inverse relations in lower elementary classrooms.
Directory of Open Access Journals (Sweden)
K. Verbist
2009-10-01
In arid and semi-arid zones, runoff harvesting techniques are often applied to increase water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts have been made to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with water content monitoring at high spatial and temporal resolution in order to construct a dataset useful for inverse modelling. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved to be a good estimator of Ksat and a proxy for the values measured under simulated rainfall, whereas the pressure and constant-head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested, combining soil water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient for high model accuracy, due to large scatter in the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. The correlation between the observed and simulated soil water content was as high as R2 = 0.93 for the ten observation points selected for model calibration, with the overall correlation for all 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a
Directory of Open Access Journals (Sweden)
S. Fonna
2018-06-01
Full Text Available Evaluation of rebar/reinforcing-steel corrosion for the 2004 tsunami-affected reinforced concrete (RC) buildings in Aceh was conducted using the half-cell potential mapping technique. However, the results carry only qualitative meaning as corrosion risk rather than characterizing the corrosion itself, such as its size and location. In this study, boundary element inverse analysis was proposed to detect rebar corrosion of a 2004 tsunami-affected structure in Aceh, using electrical potential data measured at several points on the concrete surface. One RC structure in Peukan Bada, an area heavily damaged by the tsunami, was selected for the study. In 2004 the structure was submerged more than 5 m by the tsunami. Boundary element inverse analysis was developed by combining the boundary element method (BEM) and particle swarm optimization (PSO). The corrosion was detected by comparing measured and calculated electrical potential data. The measured and calculated electrical potentials on the concrete surface were obtained using a half-cell potential meter and by performing BEM, respectively. The solution candidates were evaluated by employing PSO. Simulation results show that boundary element inverse analysis successfully detected the size and location of corrosion for the case study. Compared with the actual corrosion, the error of the simulation result was less than 5%. Hence, boundary element inverse analysis is very promising for further development to detect rebar corrosion. Keywords: Inverse analysis, Boundary element method, PSO, Corrosion, Reinforced concrete
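The search strategy described above, PSO minimizing the misfit between measured and forward-modeled surface potentials, can be sketched with a toy problem. The point-source potential kernel and the anomaly coordinates below are illustrative stand-ins for a real BEM forward solve:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: surface potential from a point "corrosion" anomaly at (x0, d)
xs = np.linspace(0.0, 1.0, 21)  # surface measurement points

def forward(x0, d):
    # Assumed potential kernel; a BEM solve would replace this in practice
    return -1.0 / np.sqrt((xs - x0) ** 2 + d ** 2)

target = np.array([0.63, 0.15])  # "true" anomaly (position, depth)
observed = forward(*target)

def misfit(p):
    return np.sum((forward(*p) - observed) ** 2)

# Minimal particle swarm optimization over (position, depth)
n_part, n_iter = 30, 200
lo, hi = np.array([0.0, 0.05]), np.array([1.0, 0.5])
pos = rng.uniform(lo, hi, (n_part, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([misfit(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_part, 1))
    # Inertia plus cognitive and social attraction terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```

In the study, each candidate evaluation would require a BEM solution of the potential field on the concrete surface rather than the closed-form kernel used here.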
Bulk analysis using nuclear techniques
International Nuclear Information System (INIS)
Borsaru, M.; Holmes, R.J.; Mathew, P.J.
1983-01-01
Bulk analysis techniques developed for the mining industry are reviewed. Using penetrating neutron and gamma radiations, measurements are obtained directly from a large volume of sample (3-30 kg). Gamma-ray techniques were used to determine the grade of iron ore and to detect shale on conveyor belts. Thermal neutron irradiation was developed for the simultaneous determination of iron and aluminium in iron ore on a conveyor belt. Thermal-neutron activation analysis includes the determination of alumina in bauxite, and manganese and alumina in manganese ore. Fast neutron activation analysis is used to determine silicon in iron ores, and alumina and silica in bauxite. Fast and thermal neutron activation has been used to determine the soil content in shredded sugar cane. (U.K.)
Inverse kinematics technique for the study of fission-fragment isotopic yields at GANIL energies
International Nuclear Information System (INIS)
Delaune, O.
2012-01-01
The characteristics of the fission-product distributions result from dynamical and quantum properties of the deformation process of the fissioning nucleus. These distributions are also of interest for the design of new nuclear power plants and for the transmutation of nuclear waste. Up to now, our understanding of nuclear fission has remained limited by experimental constraints; in particular, yields of the heavy fission products are difficult to obtain with precision. In this work, an innovative experimental technique is presented, based on the use of inverse kinematics coupled with a spectrometer, in which a 238U beam at 6 or 24 A MeV impinges on light targets. Several actinides, from 238U to 250Cf, are produced by transfer or fusion reactions, with excitation energies ranging from about ten to a few hundred MeV depending on the reaction and the beam energy. The fission fragments of these actinides are detected with the VAMOS spectrometer or the LISE separator. The isotopic yields of the fission products are completely measured for different fissioning systems. The neutron excess of the fragments is used to characterize the isotopic distributions; its evolution with excitation energy gives important insights into the mechanisms of compound-nucleus formation and deexcitation. The neutron excess is also used to determine the multiplicity of neutrons evaporated by the fragments. The role of proton and neutron shell effects in the formation of fission fragments is also discussed. (author) [fr
Inverse Analysis to Formability Design in a Deep Drawing Process
Buranathiti, Thaweepat; Cao, Jian
Deep drawing is an important process that adds value to flat sheet metals in many industries. An important concern in the design of a deep drawing process is generally formability. This paper aims to present the connection between formability and inverse analysis (IA), a systematic means of determining an optimal blank configuration for a deep drawing process. In this paper, IA is presented and explored using a commercial finite element software package. A number of numerical studies on the effect of blank configurations on the quality of a part produced by deep drawing were conducted and analyzed. The quality of the drawing processes is numerically analyzed using an explicit incremental nonlinear finite element code. The minimum distance between elemental principal strains and the strain-based forming limit curve (FLC), termed the tearing margin, is defined as the key performance index (KPI) indicating the quality of the part. The initial blank configuration is shown to play a highly important role in the quality of the product of the deep drawing process. In addition, it is observed that if a blank configuration does not deviate greatly from the one obtained from IA, the blank can still result in a good product. The strain history around the bottom fillet of the part is also examined. The paper concludes that IA is an important part of the design methodology for deep drawing processes.
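The tearing-margin KPI, the minimum distance between elemental principal strains and the FLC, can be computed as follows. The FLC shape and the strain values are hypothetical, chosen only to illustrate the distance calculation:

```python
import numpy as np

# Assumed forming limit curve: major-strain limit as a function of minor strain
flc_minor = np.linspace(-0.2, 0.3, 200)
flc_major = 0.30 - 0.5 * flc_minor * (flc_minor < 0) + 0.4 * flc_minor * (flc_minor >= 0)

# Elemental principal strains (minor, major) from a hypothetical FE result
elements = np.array([[0.05, 0.18],
                     [-0.10, 0.22],
                     [0.12, 0.28]])

def tearing_margin(strains):
    """Minimum distance from any element's strain state to the FLC."""
    d = np.sqrt((strains[:, 0, None] - flc_minor) ** 2 +
                (strains[:, 1, None] - flc_major) ** 2)
    return d.min()

margin = tearing_margin(elements)
```

A larger margin means every element's strain state sits safely below the forming limit; a margin approaching zero signals imminent tearing.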
Inverse dynamic analysis of general n-link robot manipulators
International Nuclear Information System (INIS)
Yih, T.C.; Wang, T.Y.; Burks, B.L.; Babcock, S.M.
1996-01-01
In this paper, a generalized matrix approach is derived to analyze the dynamic forces and moments (torques) required by the joint actuators. This method is general enough to solve the problems of any n-link open-chain robot manipulator with joint combinations of R (revolute), P (prismatic), and S (spherical). Moreover, the proposed matrix solution is applicable to both nonredundant and redundant robotic systems. The matrix notation is formulated from the Newton-Euler equations under the condition of quasi-static equilibrium. The 4 x 4 homogeneous cylindrical coordinates-Bryant angles (C-B) notation is applied to model the robotic systems. Displacements, velocities, and accelerations of each joint and link center of gravity (CG) are calculated through kinematic analysis. The resultant external forces and moments exerted on the CG of each link are considered as known inputs. Subsequently, a 6n x 6n displacement coefficient matrix and a 6n x 1 external force/moment vector can be established. Finally, the joint forces and moments needed for the joint actuators to control the robotic system are determined through matrix inversion. Numerical examples are given for the nonredundant industrial robots Bendix AA/CNC (RRP/RRR) and Unimate 2000 spherical (SP/RRR), and for the redundant light duty utility arm (LDUA), modified LDUA, and tank waste retrieval manipulator system.
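For a single configuration, the quasi-static idea reduces to mapping an external end-effector force to joint torques through the transposed Jacobian. This planar two-link sketch uses assumed link lengths and payload, not the paper's full 6n x 6n C-B formulation:

```python
import numpy as np

# Planar 2R arm: link lengths and joint angles (illustrative values)
l1, l2 = 0.5, 0.4
q1, q2 = np.deg2rad(30.0), np.deg2rad(45.0)

# Geometric Jacobian of the end-effector position w.r.t. the joint angles
J = np.array([
    [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
    [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
])

# External force applied at the end effector [N]: e.g. a 2 kg payload under gravity
F = np.array([0.0, -9.81 * 2.0])

# Quasi-static equilibrium: joint torques balance the external wrench
tau = J.T @ F
```

The shoulder joint, which carries the load through both links, requires the larger torque magnitude; the generalized method in the paper assembles the analogous balance for all n links at once and solves it by matrix inversion.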
Objective quantification of perturbations produced with a piecewise PV inversion technique
Directory of Open Access Journals (Sweden)
L. Fita
2007-11-01
Full Text Available PV inversion techniques have been widely used in numerical studies of severe weather cases. These techniques can be applied to study the sensitivity of the responsible meteorological system to changes in the initial conditions of the simulations, and the dynamical effects of a collection of atmospheric features involved in the evolution of the system can be isolated. However, aspects such as the definition of the atmospheric features or the amount of change in the initial conditions are largely case-dependent and/or subjectively defined. An objective way to calculate the modification of the initial fields is proposed to alleviate this problem. The perturbations are quantified as the mean absolute variation of the total energy between the original and modified fields, and a unique energy variation value is fixed for all the perturbations derived from different PV anomalies. Thus, PV features of different dimensions and characteristics introduce the same net modification of the initial conditions from an energetic point of view. The devised quantification method is applied to study the high-impact weather case of 9–11 November 2001 in the Western Mediterranean basin, when a deep and strong cyclone formed. On the Balearic Islands 4 people died, and sustained winds of 30 m s−1 and precipitation higher than 200 mm/24 h were recorded. Moreover, 700 people died in Algiers during the first phase of the event. The sensitivities to perturbations in the initial conditions of a deep upper-level trough, the anticyclonic system related to the North Atlantic high, and the surface thermal anomaly related to the baroclinicity of the environment are determined. Results reveal a high influence of the upper-level trough and the surface thermal anomaly, and a minor role of the North Atlantic high, during the genesis of the cyclone.
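The proposed normalization, scaling each perturbation so that the mean absolute total-energy variation reaches one fixed value, can be sketched on synthetic fields. The energy weighting, grid, and target value below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Original fields and a PV-derived perturbation on a grid (u, v wind; T temperature)
u0, v0, T0 = (rng.normal(0, 10, (50, 50)) for _ in range(3))
du, dv, dT = (rng.normal(0, 1, (50, 50)) for _ in range(3))

cp_over_Tr = 1004.0 / 270.0  # weight giving the temperature term energy units

def energy_variation(a):
    """Mean absolute total-energy change when the perturbation is scaled by a."""
    e0 = 0.5 * (u0**2 + v0**2) + 0.5 * cp_over_Tr * T0**2
    e1 = 0.5 * ((u0 + a * du)**2 + (v0 + a * dv)**2) \
         + 0.5 * cp_over_Tr * (T0 + a * dT)**2
    return np.mean(np.abs(e1 - e0))

# Bisect for the scaling that yields a fixed target energy variation, so that
# perturbations derived from different PV anomalies are energetically comparable
target = 5.0
lo_a, hi_a = 0.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo_a + hi_a)
    if energy_variation(mid) < target:
        lo_a = mid
    else:
        hi_a = mid
scale = 0.5 * (lo_a + hi_a)
```

Applying the same target to every PV anomaly is what makes the resulting sensitivities comparable across features of very different size and amplitude.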
Tuning of Block Copolymer Membrane Morphology through Water Induced Phase Inversion Technique
Madhavan, Poornima
2016-06-01
surface and pore walls of PS-b-P4VP block copolymer membranes and then investigated the biocidal activity of the silver-nanoparticle-grown membranes. Finally, novel photoresponsive nanostructured triblock copolymer membranes were developed by the phase inversion technique. In addition, their photoresponsive behavior on irradiation with light and their membrane flux and retention properties were studied.
Application of stepwise multiple regression techniques to inversion of Nimbus 'IRIS' observations.
Ohring, G.
1972-01-01
Exploratory studies with Nimbus-3 infrared interferometer-spectrometer (IRIS) data indicate that, in addition to temperature, such meteorological parameters as geopotential heights of pressure surfaces, tropopause pressure, and tropopause temperature can be inferred from the observed spectra with the use of simple regression equations. The technique of screening the IRIS spectral data by means of stepwise regression to obtain the best radiation predictors of meteorological parameters is validated. The simplicity of application of the technique and the simplicity of the derived linear regression equations - which contain only a few terms - suggest usefulness for this approach. Based upon the results obtained, suggestions are made for further development and exploitation of the stepwise regression analysis technique.
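The stepwise screening described above can be sketched with synthetic data: forward selection repeatedly adds the "channel" that most reduces the residual sum of squares. The channel indices, effect sizes, and stopping rule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# "Spectral" predictors (e.g. radiances in many channels) and a target parameter
n_obs, n_chan = 200, 30
X = rng.normal(size=(n_obs, n_chan))
y = 2.0 * X[:, 3] - 1.5 * X[:, 17] + 0.5 * X[:, 8] + rng.normal(0, 0.1, n_obs)

def fit_rss(cols):
    """Residual sum of squares of an OLS fit using the given columns."""
    A = np.column_stack([np.ones(n_obs)] + [X[:, c] for c in cols])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return r @ r

# Forward stepwise screening: add the channel that most reduces the RSS
selected, rss = [], fit_rss([])
for _ in range(5):
    cand = [(fit_rss(selected + [c]), c) for c in range(n_chan) if c not in selected]
    best_rss, best_c = min(cand)
    if best_rss > 0.99 * rss:  # stop when the improvement is negligible
        break
    selected.append(best_c)
    rss = best_rss
```

The selected channels form the short linear regression equation; the simplicity the abstract highlights comes precisely from stopping after a few informative predictors.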
Energy Technology Data Exchange (ETDEWEB)
Gupta, S. C.P.; Khan, A. A.; Dass, L. L.; Sahay, P. N.; Jha, G. J.
1985-07-01
Single layer end-to-end inverted and everted techniques of entero-anastomosis were evaluated in sixteen male buffalo calves using silk and catgut sutures. All the animals of the everting group showed gross areas of adhesion, whereas only three animals of the inverting group did. Histological evidence revealed a more uniform healing pattern in the inversion group, and radiography suggested a comparatively greater degree of stenosis than in everting anastomosis, though without functional impairment of the intestinal lumen. Connective tissue proliferation and mononuclear cell infiltration were very minimal with silk suture, whereas these were pronounced with catgut, irrespective of anastomotic technique. Thus the inversion technique of anastomosis, accomplished by single layer suturing with silk thread, was ideal for entero-anastomosis in cattle.
Hamim, Salah Uddin Ahmed
Nanoindentation involves probing a hard diamond tip into a material, while the load and the displacement experienced by the tip are recorded continuously. These load-displacement data are a direct function of the material's innate stress-strain behavior; thus, it is theoretically possible to extract the mechanical properties of a material through nanoindentation. However, due to the various nonlinearities associated with nanoindentation, the process of interpreting load-displacement data into material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior such as nonlinear viscoelasticity is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where nonlinear viscoelastic behavior was incorporated using a user-defined subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis. In this study, a surrogate model-based approach was used for the inverse analysis. The different factors affecting surrogate model performance are analyzed in order to optimize the performance with respect to the computational cost.
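The surrogate-model-based inverse analysis can be sketched as: run the expensive forward model at a few design points, fit a cheap surrogate to the misfit, then calibrate on the surrogate. The closed-form "forward model" and all numbers below are stand-ins for an actual FE simulation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Stand-in for an expensive FE simulation mapping a material parameter to a
# measurable indentation response (purely illustrative forward model)
def forward(E):
    return 2.0 * np.sqrt(E) + 0.3 * E

target = forward(5.0)  # "experimental" observation; the true parameter is 5.0

# Build a cheap surrogate of the misfit from a handful of forward runs
samples = np.linspace(1.0, 10.0, 10)
misfits = [(forward(E) - target) ** 2 for E in samples]
surrogate = np.polynomial.Polynomial.fit(samples, misfits, deg=6)

# Inverse analysis: calibrate the parameter on the surrogate, not the FE model
res = minimize_scalar(surrogate, bounds=(1.0, 10.0), method="bounded")
E_identified = res.x
```

The computational saving comes from the fact that, after the initial design-of-experiments runs, every optimizer iteration evaluates only the polynomial, never the FE model.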
Sun, J.; Li, Y.
2017-12-01
Magnetic data contain important information about the subsurface rocks that were magnetized in the geological history, which provides an important avenue to the study of the crustal heterogeneities associated with magmatic and hydrothermal activities. Interpretation of magnetic data has been widely used in mineral exploration, basement characterization and large scale crustal studies for several decades. However, interpreting magnetic data has been often complicated by the presence of remanent magnetizations with unknown magnetization directions. Researchers have developed different methods to deal with the challenges posed by remanence. We have developed a new and effective approach to inverting magnetic data for magnetization vector distributions characterized by region-wise consistency in the magnetization directions. This approach combines the classical Tikhonov inversion scheme with fuzzy C-means clustering algorithm, and constrains the estimated magnetization vectors to a specified small number of possible directions while fitting the observed magnetic data to within noise level. Our magnetization vector inversion recovers both the magnitudes and the directions of the magnetizations in the subsurface. Magnetization directions reflect the unique geological or hydrothermal processes applied to each geological unit, and therefore, can potentially be used for the purpose of differentiating various geological units. We have developed a practically convenient and effective way of assessing the uncertainty associated with the inverted magnetization directions (Figure 1), and investigated how geological differentiation results might be affected (Figure 2). The algorithm and procedures we have developed for magnetization vector inversion and uncertainty analysis open up new possibilities of extracting useful information from magnetic data affected by remanence. We will use a field data example from exploration of an iron-oxide-copper-gold (IOCG) deposit in Brazil to
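The clustering ingredient of the approach, fuzzy C-means applied to magnetization directions, can be sketched on toy 2-D unit vectors. These are the standard FCM updates with fuzzifier m = 2; the data and cluster count are synthetic assumptions, not the paper's inversion:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy magnetization directions (unit vectors in 2-D) from two geologic units
d1 = rng.normal([1.0, 0.2], 0.05, (40, 2))
d2 = rng.normal([-0.3, 1.0], 0.05, (40, 2))
dirs = np.vstack([d1, d2])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Fuzzy C-means: alternate membership and centroid updates (fuzzifier m = 2)
C, m = 2, 2.0
centers = dirs[rng.choice(len(dirs), C, replace=False)]
for _ in range(100):
    dist = np.linalg.norm(dirs[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    U = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1)), axis=2)
    # c_k = sum_i u_ik^m x_i / sum_i u_ik^m
    centers = (U.T ** m @ dirs) / np.sum(U.T ** m, axis=1, keepdims=True)
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
```

In the inversion described above, a term derived from these soft memberships is added to the Tikhonov objective so that the recovered magnetization vectors are pulled toward a small number of consistent directions while still fitting the data.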
Advanced Techniques of Stress Analysis
Directory of Open Access Journals (Sweden)
Simion TATARU
2013-12-01
Full Text Available This article aims to check the stress analysis technique based on 3D models, making a comparison with the traditional technique which uses a model built directly in the stress analysis program. This comparison of the two methods will be made with reference to the rear fuselage of the IAR-99 aircraft, a structure with a high degree of complexity which allows a meaningful evaluation of both approaches. Three updated databases are envisaged: the database holding the idealized model obtained using ANSYS and working directly on documentation, without automatic generation of nodes and elements (with few exceptions); the rear fuselage database (performed at this stage) obtained with Pro/ENGINEER; and the one obtained by using ANSYS with the second database. Then, each of the three databases will be used according to arising necessities. The main objective is to develop the parameterized model of the rear fuselage using the computer-aided design software Pro/ENGINEER. A review of research regarding the use of virtual reality with interactive analysis performed by the finite element method is made to show the state-of-the-art achieved in this field.
Techniques for Automated Performance Analysis
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-09-02
The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.
Robust 1D inversion and analysis of helicopter electromagnetic (HEM) data
DEFF Research Database (Denmark)
Tølbøll, R.J.; Christensen, N.B.
2006-01-01
but can resolve layer boundary to a depth of more than 100 m. Modeling experiments also show that the effect of altimeter errors on the inversion results is serious. We suggest a new interpretation scheme for HEM data founded solely on full nonlinear 1D inversion and providing layered-earth models...... supported by datamisfit parameters and a quantitative model-parameter analysis. The backbone of the scheme is the removal of cultural coupling effects followed by a multilayer inversion that in turn provides reliable starting models for a subsequent few-layer inversion. A new procedure for correlation...
Complexity analysis of accelerated MCMC methods for Bayesian inversion
International Nuclear Information System (INIS)
Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M
2013-01-01
The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the
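The "plain" MCMC baseline analyzed above can be sketched for a one-parameter toy inverse problem. The forward map G below is a cheap closed-form stand-in for the elliptic PDE solve, and the prior, noise level, and chain settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy inverse problem: infer a scalar coefficient u from noisy indirect data
# delta = G(u) + noise, with a cheap stand-in for the PDE forward solve G
def G(u):
    return np.sin(u) + 0.5 * u

u_true, sigma = 1.2, 0.05
delta = G(u_true) + rng.normal(0, sigma, 20)  # repeated noisy observations

def log_post(u):
    """Gaussian prior N(0,1) times Gaussian likelihood (up to a constant)."""
    return -0.5 * u**2 - 0.5 * np.sum((delta - G(u)) ** 2) / sigma**2

# Random-walk Metropolis: every step costs one forward-model evaluation
u, lp = 0.0, log_post(0.0)
chain = []
for _ in range(20000):
    prop = u + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        u, lp = prop, lp_prop
    chain.append(u)
posterior_mean = np.mean(chain[5000:])
```

The cost structure the paper analyzes is visible even here: each of the 20 000 steps requires a forward solve, which is exactly what the gpc surrogate and multi-level strategies are designed to make cheaper.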
Energy Technology Data Exchange (ETDEWEB)
Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G., E-mail: akosovichev@solar.stanford.edu [Stanford University, HEPL, Stanford, CA 94305 (United States)
2014-04-10
We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.
International Nuclear Information System (INIS)
Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G.
2014-01-01
Wang, Feiyan; Morten, Jan Petter; Spitzer, Klaus
2018-05-01
In this paper, we present a recently developed anisotropic 3-D inversion framework for interpreting controlled-source electromagnetic (CSEM) data in the frequency domain. The framework integrates a high-order finite-element forward operator and a Gauss-Newton inversion algorithm. Conductivity constraints are applied using a parameter transformation. We discretize the continuous forward and inverse problems on unstructured grids for a flexible treatment of arbitrarily complex geometries. Moreover, an unstructured mesh is more desirable in comparison to a single rectilinear mesh for multisource problems because local grid refinement will not significantly influence the mesh density outside the region of interest. The non-uniform spatial discretization facilitates parametrization of the inversion domain at a suitable scale. For a rapid simulation of multisource EM data, we opt to use a parallel direct solver. We further accelerate the inversion process by decomposing the entire data set into subsets with respect to frequencies (and transmitters if memory requirement is affordable). The computational tasks associated with each data subset are distributed to different processes and run in parallel. We validate the scheme using a synthetic marine CSEM model with rough bathymetry, and finally, apply it to an industrial-size 3-D data set from the Troll field oil province in the North Sea acquired in 2008 to examine its robustness and practical applicability.
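The Gauss-Newton core of such a framework can be sketched on a small nonlinear least-squares problem. The damped-exponential forward model, frequencies, and noise level are toy assumptions standing in for the finite-element CSEM forward operator:

```python
import numpy as np

# Toy frequency-domain forward model: d(f) = m1 * exp(-m2 * f)
# with unknowns m = (amplitude, attenuation)
freqs = np.linspace(0.25, 3.0, 12)

def forward(m):
    return m[0] * np.exp(-m[1] * freqs)

def jacobian(m):
    J = np.empty((freqs.size, 2))
    J[:, 0] = np.exp(-m[1] * freqs)           # d(data)/d(m1)
    J[:, 1] = -m[0] * freqs * np.exp(-m[1] * freqs)  # d(data)/d(m2)
    return J

m_true = np.array([2.0, 0.8])
rng = np.random.default_rng(7)
d_obs = forward(m_true) + rng.normal(0, 0.01, freqs.size)

# Gauss-Newton: linearize, solve the normal equations, update, repeat
m = np.array([1.0, 0.3])
for _ in range(10):
    r = d_obs - forward(m)
    J = jacobian(m)
    m = m + np.linalg.solve(J.T @ J, J.T @ r)
```

In the full 3-D scheme each residual evaluation is a multisource finite-element solve, which is why the paper parallelizes over frequency and transmitter subsets and uses a direct solver for the repeated systems.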
Nguyen, Quynh C; Osypuk, Theresa L; Schmidt, Nicole M; Glymour, M Maria; Tchetgen Tchetgen, Eric J
2015-03-01
Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994-2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided.
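The IORW recipe can be sketched on simulated data with a binary exposure and one continuous mediator; this is a minimal Python analogue of the workflow described, with simulated effect sizes chosen purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50000
T = rng.integers(0, 2, n)                    # binary exposure (treatment)
M = 1.0 * T + rng.normal(size=n)             # mediator affected by exposure
Y = 0.5 * T + 0.8 * M + rng.normal(size=n)   # outcome; direct effect 0.5

# Step 1: logistic regression of exposure on the mediator
X = np.column_stack([np.ones(n), M])
def nll(b):
    z = X @ b
    return np.sum(np.logaddexp(0.0, z) - T * z)  # negative log-likelihood
beta = minimize(nll, np.zeros(2), method="BFGS").x

# Step 2: inverse odds ratio weights deactivate the T -> M -> Y pathway
w = np.where(T == 1, np.exp(-beta[1] * M), 1.0)

# Step 3: the weighted outcome contrast estimates the natural direct effect;
# the indirect effect is the total effect minus the direct effect
nde = (np.average(Y[T == 1], weights=w[T == 1])
       - np.average(Y[T == 0], weights=w[T == 0]))
total = Y[T == 1].mean() - Y[T == 0].mean()
nie = total - nde
```

With these simulated coefficients the direct effect should be close to 0.5 and the indirect effect close to 0.8; in practice the exposure model would also condition on covariates, as the report describes.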
Bragov, A. M.; Balandin, Vl. V.; Kotov, V. L.; Balandin, Vl. Vl.
2018-04-01
We present new experimental results on the dynamic properties of sand soil obtained with the inverse experiment technique using a measuring rod with a flat front-end face. The method using the procedure for correcting the shape of the deformation pulse for dispersion during its propagation in the measuring rod is shown to have limited applicability. Estimates of the pulse maximum have been obtained, and the results of comparing numerical calculations with experimental data are given. Sufficient accuracy in determining the drag force during the quasi-stationary stage of penetration has been established. The parameters of dynamic compressibility and resistance to shear of water-saturated sand have been determined in the course of an experimental-theoretical analysis of the maximum values of the drag force and its values at the quasi-stationary stage of penetration. It has been shown that with almost complete water saturation of sand, its shear properties are reduced but remain significant in the practically important range of penetration rates.
Richey, Lauren; Gardner, John; Standing, Michael; Jorgensen, Matthew; Bartl, Michael
2010-10-01
Photonic crystals (PCs) are periodic structures that manipulate electromagnetic waves by defining allowed and forbidden frequency bands known as photonic band gaps. Despite the production of PC structures operating at infrared wavelengths, visible counterparts are difficult to fabricate because the periodicities must satisfy the diffraction criteria. As part of an ongoing search for naturally occurring PCs [1], a three-dimensional array of nanoscopic spheres has been found in the iridescent scales of the Cerambycidae insects A. elegans and G. celestis. Such arrays are similar to opal gemstones and self-assembled colloidal spheres, which can be chemically inverted to create a lattice-like PC. Through a chemical replication process [2], scanning electron microscopy analysis, sequential focused ion beam slicing and three-dimensional modeling, we analyzed the structural arrangement of the nanoscopic spheres. The study of naturally occurring structures and of techniques for inverting them into PCs allows for diversity in optical PC fabrication. [1] J.W. Galusha et al., Phys. Rev. E 77 (2008) 050904. [2] J.W. Galusha et al., J. Mater. Chem. 20 (2010) 1277.
Revil, A.
2015-12-01
Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies, determined by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case, for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation, preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The method is applied to a second synthetic case showing that we can recover the heterogeneities inside the facies, the mean values of the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of
Identifying Isotropic Events using an Improved Regional Moment Tensor Inversion Technique
Energy Technology Data Exchange (ETDEWEB)
Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Ford, Sean R. [Univ. of California, Berkeley, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walter, William R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
Research was carried out investigating the feasibility of using a regional-distance seismic waveform moment tensor inversion procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions and that, when compared with natural seismicity in the context of a Hudson et al. (1989) source-type diagram, they separate from populations of earthquakes and underground cavity collapse seismic sources.
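Because the moment tensor enters the waveforms linearly, the inversion itself reduces to least squares; the isotropic component then drives the source-type separation. The Green's matrix below is a random toy stand-in, and the "explosion-like" source values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

# Linear forward problem: observed waveforms are Green's functions times the
# six independent moment-tensor components (toy random Green's matrix here)
n_samples, n_mt = 600, 6
G = rng.normal(size=(n_samples, n_mt))

# An explosion-like source: dominant isotropic part plus a small deviatoric part
# (ordering assumed: Mxx, Myy, Mzz, Mxy, Mxz, Myz)
m_true = np.array([1.0, 1.0, 1.0, 0.05, -0.03, 0.02])
d = G @ m_true + rng.normal(0, 0.1, n_samples)

# Least-squares moment tensor inversion
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)

# Isotropic moment: mean of the diagonal components; large relative to the
# deviatoric part, it places the source in the explosion region of a
# Hudson-style source-type diagram
m_iso = m_est[:3].mean()
```

With real data, G would hold synthetic seismograms from a regional velocity model, and the estimated tensor would be decomposed into isotropic, double-couple, and CLVD parts before plotting on the source-type diagram.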
An Inverse Kinematic Approach Using Groebner Basis Theory Applied to Gait Cycle Analysis
2013-03-01
AN INVERSE KINEMATIC APPROACH USING GROEBNER BASIS THEORY APPLIED TO GAIT CYCLE ANALYSIS. Thesis, Anum Barki, BS, AFIT-ENP-13-M-02. Approved: Dr. Ronald F. Tuttle (Chairman), Dr. Kimberly Kendricks. The work is not subject to copyright protection in the United States.
Analysis of Inverse Kinematics of an Anthropomorphic Robotic Hand
Directory of Open Access Journals (Sweden)
Pramod Kumar Parida
2013-03-01
Full Text Available In this paper, a new method for solving the inverse kinematics of the fingers of an anthropomorphic hand is proposed. Solving the inverse kinematic equations is a complex problem; the complexity stems from the nonlinearity of the mapping between joint space and Cartesian space and from the existence of multiple solutions. This is a typical problem in robotics that must be solved to control the fingers of an anthropomorphic robotic hand so that it can perform its designated tasks. For more complex structures operating in three-dimensional space, deducing a mathematical solution for the inverse kinematics may prove challenging. In this paper, using the ability of ANFIS (Adaptive Neuro-Fuzzy Inference System) to learn from training data, it is possible to create an ANFIS network, an implementation of a representative fuzzy inference system, with only a limited mathematical representation of the system. The main advantages of this method over other methods are its easy implementation, shorter computation time, and better response with acceptable error.
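Even for a planar two-link finger the closed-form inverse kinematics already branches on an elbow-up/elbow-down choice, which hints at why a learned approximator such as ANFIS becomes attractive for a full 3-D hand. A minimal analytic sketch (link lengths and target point are hypothetical, not from the paper):

```python
import math

def ik_two_link(x, y, l1, l2):
    """Elbow-down inverse kinematics of a planar two-link finger."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))     # clamp against rounding / unreachable targets
    t2 = math.acos(c2)               # the other root, -t2, is the elbow-up branch
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

def fk_two_link(t1, t2, l1, l2):
    """Forward kinematics: fingertip position from joint angles."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# Round trip: the fingertip returns to the requested target
t1, t2 = ik_two_link(1.0, 0.5, 1.0, 1.0)
x, y = fk_two_link(t1, t2, 1.0, 1.0)
```

A data-driven approach such as ANFIS would instead be trained on (fingertip, joint-angle) pairs generated by the forward kinematics, sidestepping the branch selection entirely.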
Source Identification in Structural Acoustics with an Inverse Frequency Response Function Technique
Visser, Rene
2002-01-01
Inverse source identification based on acoustic measurements is essential for the investigation and understanding of sound fields generated by structural vibrations of various devices and machinery. Acoustic pressure measurements performed on a grid in the nearfield of a surface can be used to
An analysis of AVO inversion for postcritical offsets in HTI media
Skopintseva, Lyubov; Alkhalifah, Tariq Ali
2013-01-01
Azimuthal variations of wavefield characteristics, such as traveltime or reflection amplitude, play an important role in the identification of fractured media. A transversely isotropic medium with a horizontal symmetry axis (HTI medium) is the simplest azimuthally anisotropic model typically used to describe one set of vertical fractures. There exist many techniques in industry to recover anisotropic parameters based on moveout equations and linearized reflection coefficients using such a model. However, most of the methods have limitations in defining properties of the fractures due to linearizations and physical approximations used in their development. Thus, azimuthal analysis of traveltimes based on normal moveout ellipses recovers a maximum of three medium parameters instead of the required five. Linearizations made in plane-wave reflection coefficients (PWRCs) limit the amplitude-versus-offset (AVO) analysis to small incident angles and weak-contrast interfaces. Inversion based on azimuthal AVO for small offsets encounters nonuniqueness in the resolving power of the anisotropy parameters. Extending the AVO analysis and inversion to and beyond the critical reflection angle increases the amount of information recovered from the medium. However, well-accepted PWRCs are not valid in the vicinity of the critical angle and beyond it, due to frequency and spherical wave effects. Recently derived spherical and effective reflection coefficient (ERC) methods overcome this problem. We extended the ERC approach to HTI media to analyze the potential of near- and postcritical reflections in azimuthal AVO analysis. From the sensitivity analysis, we found that ERCs are sensitive to different sets of parameters prior to and beyond the critical angle, which is useful in enhancing our resolution of the anisotropy parameters. Additionally, the resolution of the parameters depends on a sufficient azimuthal coverage in the acquisition setup. The most stable AVO results for the
An analysis of AVO inversion for postcritical offsets in HTI media
Skopintseva, Lyubov
2013-04-12
Azimuthal variations of wavefield characteristics, such as traveltime or reflection amplitude, play an important role in the identification of fractured media. A transversely isotropic medium with a horizontal symmetry axis (HTI medium) is the simplest azimuthally anisotropic model typically used to describe one set of vertical fractures. There exist many techniques in industry to recover anisotropic parameters based on moveout equations and linearized reflection coefficients using such a model. However, most of the methods have limitations in defining properties of the fractures due to linearizations and physical approximations used in their development. Thus, azimuthal analysis of traveltimes based on normal moveout ellipses recovers a maximum of three medium parameters instead of the required five. Linearizations made in plane-wave reflection coefficients (PWRCs) limit the amplitude-versus-offset (AVO) analysis to small incident angles and weak-contrast interfaces. Inversion based on azimuthal AVO for small offsets encounters nonuniqueness in the resolving power of the anisotropy parameters. Extending the AVO analysis and inversion to and beyond the critical reflection angle increases the amount of information recovered from the medium. However, well-accepted PWRCs are not valid in the vicinity of the critical angle and beyond it, due to frequency and spherical wave effects. Recently derived spherical and effective reflection coefficient (ERC) methods overcome this problem. We extended the ERC approach to HTI media to analyze the potential of near- and postcritical reflections in azimuthal AVO analysis. From the sensitivity analysis, we found that ERCs are sensitive to different sets of parameters prior to and beyond the critical angle, which is useful in enhancing our resolution of the anisotropy parameters. Additionally, the resolution of the parameters depends on a sufficient azimuthal coverage in the acquisition setup. The most stable AVO results for the
Vicente-Salvador, David; Puig, Marta; Gayà-Vidal, Magdalena; Pacheco, Sarai; Giner-Delgado, Carla; Noguera, Isaac; Izquierdo, David; Martínez-Fundichely, Alexander; Ruiz-Herrera, Aurora; Estivill, Xavier; Aguado, Cristina; Lucas-Lledó, José Ignacio; Cáceres, Mario
2017-02-01
The growing catalogue of structural variants in humans often overlooks inversions as one of the most difficult types of variation to study, even though they affect phenotypic traits in diverse organisms. Here, we have analysed in detail 90 inversions predicted from the comparison of two independently assembled human genomes: the reference genome (NCBI36/HG18) and HuRef. Surprisingly, we found that two thirds of these predictions (62) represent errors either in assembly comparison or in one of the assemblies, including 27 misassembled regions in HG18. Next, we validated 22 of the remaining 28 potential polymorphic inversions using different PCR techniques and characterized their breakpoints and ancestral state. In addition, we determined experimentally the derived allele frequency in Europeans for 17 inversions (DAF = 0.01-0.80), as well as the distribution in 14 worldwide populations for 12 of them based on the 1000 Genomes Project data. Among the validated inversions, nine have inverted repeats (IRs) at their breakpoints, and two show nucleotide variation patterns consistent with a recurrent origin. Conversely, inversions without IRs have a unique origin and almost all of them show deletions or insertions at the breakpoints in the derived allele mediated by microhomology sequences, which highlights the importance of mechanisms like FoSTeS/MMBIR in the generation of complex rearrangements in the human genome. Finally, we found several inversions located within genes and at least one candidate to be positively selected in Africa. Thus, our study emphasizes the importance of careful analysis and validation of large-scale genomic predictions to extract reliable biological conclusions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Maximum entropy technique in the doublet structure analysis
International Nuclear Information System (INIS)
Belashev, B.Z.; Panebrattsev, Yu.A.; Shakhaliev, Eh.I.; Soroko, L.M.
1998-01-01
The Maximum Entropy Technique (MENT) for the solution of inverse problems is explained. An effective computer program for solving the system of nonlinear equations encountered in the MENT has been developed and tested. The capabilities of the MENT have been demonstrated on the example of doublet structure analysis of noisy experimental data. A comparison of the MENT results with the results of the Fourier algorithm technique without regularization is presented. The tolerable noise level is 30% for the MENT, but only 0.1% for the Fourier algorithm.
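A multiplicative algebraic reconstruction (MART) iteration, which for consistent non-negative systems converges to a maximum-entropy solution, gives a flavour of the kind of nonlinear system a MENT solver must handle. The toy 2-equation, 3-unknown system below is illustrative and is not the authors' program:

```python
import numpy as np

# Underdetermined, consistent system A p = d with non-negative unknowns p
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([1.0, 1.0])

p = np.array([0.2, 0.3, 0.4])        # positive starting guess
for _ in range(200):                 # MART: sweep the equations repeatedly
    for j in range(A.shape[0]):
        ratio = d[j] / (A[j] @ p)
        p *= ratio ** A[j]           # multiplicative update keeps p > 0
```

Among the infinitely many non-negative solutions of this underdetermined system, the multiplicative updates single out a maximum-entropy one, which is what makes the approach robust to the noise levels quoted in the abstract.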
Gallovic, Frantisek; Cirella, Antonella; Plicka, Vladimir; Piatanesi, Alessio
2013-04-01
retrieved models, we present a thorough analysis of the performance of the two inversion approaches. In fact, depending on the inversion strategy and the intrinsic 'non-uniqueness' of the inverse problem, the final slip maps and distribution of rupture onset times are generally different, sometimes even incompatible with each other. Great emphasis is devoted to the uncertainty estimate of both techniques. Thus we do not compare only the best fitting models, but their 'compatibility' in terms of the uncertainty limits.
Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization
Oh, Juwon; Alkhalifah, Tariq Ali
2016-01-01
The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as by the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it has the best resolution, obtained from S-wave reflections and converted waves, and little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuthal variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry
Elastic orthorhombic anisotropic parameter inversion: An analysis of parameterization
Oh, Juwon
2016-09-15
The resolution of a multiparameter full-waveform inversion (FWI) is highly influenced by the parameterization used in the inversion algorithm, as well as by the data quality and the sensitivity of the data to the elastic parameters, because the scattering patterns of the partial derivative wavefields (PDWs) vary with parameterization. For this reason, it is important to identify an optimal parameterization for elastic orthorhombic FWI by analyzing the radiation patterns of the PDWs for many reasonable model parameterizations. We have promoted a parameterization that allows for the separation of the anisotropic properties in the radiation patterns. The central parameter of this parameterization is the horizontal P-wave velocity, with an isotropic scattering potential, influencing the data at all scales and directions. This parameterization decouples the influence of the scattering potential given by the P-wave velocity perturbation from the polar changes described by two dimensionless parameter perturbations and from the azimuthal variation given by three additional dimensionless parameter perturbations. In addition, the scattering potentials of the P-wave velocity perturbation are also decoupled from the elastic influences given by one S-wave velocity and two additional dimensionless parameter perturbations. The vertical S-wave velocity is chosen because it has the best resolution, obtained from S-wave reflections and converted waves, and little influence on P-waves in conventional surface seismic acquisition. The influence of the density on observed data can be absorbed by one anisotropic parameter that has a similar radiation pattern. The additional seven dimensionless parameters describe the polar and azimuthal variations in the P- and S-waves that we may acquire, with some of the parameters having distinct influences on the recorded data on the earth's surface. These characteristics of the new parameterization offer the potential for a multistage inversion from high symmetry
Angle-domain Migration Velocity Analysis using Wave-equation Reflection Traveltime Inversion
Zhang, Sanzong; Schuster, Gerard T.; Luo, Yi
2012-01-01
way as wave-equation transmission traveltime inversion. The residual moveout analysis in the angle-domain common image gathers provides a robust estimate of the depth residual, which is converted to the reflection traveltime residual for the velocity
Serum adiponectin levels are inversely correlated with leukemia: A meta-analysis
Directory of Open Access Journals (Sweden)
Jun-Jie Ma
2016-01-01
Conclusion: Our meta-analysis suggested that serum ADPN levels may be inversely correlated with leukemia, and ADPN levels can be used as an effective biologic marker in early diagnosis and therapeutic monitoring of leukemia.
AIDA - from Airborne Data Inversion to In-Depth Analysis
Meyer, U.; Goetze, H.; Schroeder, M.; Boerner, R.; Tezkan, B.; Winsemann, J.; Siemon, B.; Alvers, M.; Stoll, J. B.
2011-12-01
The rising competition in land use, especially between water economy, agriculture, forestry, the building material economy and other industries, often leads to irreversible deterioration of the water and soil system (such as salinization and degradation), which results in long-term damage to natural resources. A sustainable exploitation of the near subsurface by industry, the economy and private households is a fundamental demand of a modern society. To fulfill this demand, sound and comprehensive knowledge of the structures and processes of the near subsurface is an important prerequisite. A spatial survey of the usable underground by aerogeophysical means and a subsequent ground geophysics survey targeted at special locations will deliver essential contributions within a short time that make it possible to gain the needed additional knowledge. The complementary use of airborne and ground geophysics, as well as the validation, assimilation and improvement of current findings by geological and hydrogeological investigations and plausibility tests, leads to the following key questions: a) Which new and/or improved automatic algorithms (joint inversion, data assimilation and the like) are useful to describe the structural setting of the usable subsurface by user-specific characteristics such as water volume, layer thicknesses, porosities, etc.? b) What are the physical relations of the measured parameters (such as electrical conductivities, magnetic susceptibilities, densities, etc.)? c) How can we deduce characteristics or parameters from the observations which describe near-subsurface structures such as groundwater systems, their charge, discharge and recharge, vulnerabilities and other quantities? d) How plausible and realistic are the numerically obtained results in relation to user-specific questions and parameters? e) Is it possible to compile material flux balances that describe spatial and time-dependent impacts of environmental changes on aquifers and soils by repeated airborne surveys? In
Advanced Multivariate Inversion Techniques for High Resolution 3D Geophysical Modeling
2011-09-01
2005). We implemented a method to increase the usefulness of gravity data by filtering the Bouguer anomaly map. To remove the long-wavelength components from the Bouguer gravity map we follow Tessema and Antoine (2004), who use an upward continuation method. [Figure caption residue: inversion of group velocities and gravity; (a) top: group velocities from a representative cell in the model; bottom: filtered Bouguer anomalies; (b)]
Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie
2015-04-01
Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance to constrain the results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows reconstruction of the 5 parameters of the deviatoric paleostress tensors (principal stress orientations and differential stress magnitudes) from monophase and polyphase twin data sets. The improvements concern the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the aid to the user in defining the best stress tensor solution, among others. We perform a systematic exploration of a hypersphere in 4 dimensions by varying different parameters, the Euler angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and a degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution that explains the maximum number of twinned planes and the whole set of untwinned planes is reached. This new inversion procedure is tested on monophase and polyphase numerically-generated as well as natural calcite twin data in order to more accurately define the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence on the samples, to test the impact of strain hardening through the change of the critical resolved shear stress for twinning, as well as to evaluate the
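The systematic search can be pictured as scanning candidate reduced stress tensors parameterized by three Euler angles and the stress ratio. The sketch below generates one such candidate; the Z-X-Z angle convention and normalization (principal values 1, R, 0) are illustrative choices, not the Etchecopar implementation:

```python
import numpy as np

def reduced_stress_tensor(phi, theta, psi, R):
    """Candidate reduced stress tensor: principal values (1, R, 0) with stress
    ratio 0 <= R <= 1, rotated into geographic coordinates by Z-X-Z Euler angles."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    Rz1 = np.array([[cph, -sph, 0.0], [sph, cph, 0.0], [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cth, -sth], [0.0, sth, cth]])
    Rz2 = np.array([[cps, -sps, 0.0], [sps, cps, 0.0], [0.0, 0.0, 1.0]])
    Q = Rz1 @ Rx @ Rz2                       # combined rotation matrix
    return Q @ np.diag([1.0, R, 0.0]) @ Q.T  # rotate the principal-axis tensor

# One point of the 4-D search grid (angles in radians, stress ratio R)
S = reduced_stress_tensor(0.3, 0.7, 1.1, 0.4)
```

An inversion would sweep such tensors over a grid of the four parameters and score each one by how many twinned planes it activates while leaving the untwinned planes inactive.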
Inverse Transient Analysis for Classification of Wall Thickness Variations in Pipelines
Directory of Open Access Journals (Sweden)
Jeffrey Tuck
2013-12-01
Full Text Available Analysis of transient fluid pressure signals has been investigated as an alternative method of fault detection in pipeline systems and has shown promise in both laboratory and field trials. The advantage of the method is that it can potentially provide a fast and cost-effective means of locating faults such as leaks, blockages and pipeline wall degradation within a pipeline while the system remains fully operational. The only requirement is that high-speed pressure sensors are placed in contact with the fluid. Further development of the method requires detailed numerical models and an enhanced understanding of transient flow within a pipeline where variations in pipeline condition and geometry occur. One such variation commonly encountered is the degradation or thinning of pipe walls, which can increase the susceptibility of a pipeline to leak development. This paper aims to improve transient-based fault detection methods by investigating how changes in pipe wall thickness affect the transient behaviour of a system; this is done through the analysis of laboratory experiments. The laboratory experiments are carried out on a stainless steel pipeline of constant outside diameter, into which a pipe section of variable wall thickness is inserted. In order to detect the location and severity of these changes in wall conditions within the laboratory system, an inverse transient analysis procedure is employed which considers independent variations in wavespeed and diameter. Inverse transient analyses are carried out using a genetic algorithm optimisation routine to match the response from a one-dimensional method of characteristics transient model to the experimental time-domain pressure responses. The accuracy of the detection technique is evaluated, and the benefits associated with various simplifying assumptions and simulation run times are investigated. It is found that for the case investigated, changes in the wavespeed and nominal diameter of the
Inverse Transient Analysis for Classification of Wall Thickness Variations in Pipelines
Tuck, Jeffrey; Lee, Pedro
2013-01-01
Analysis of transient fluid pressure signals has been investigated as an alternative method of fault detection in pipeline systems and has shown promise in both laboratory and field trials. The advantage of the method is that it can potentially provide a fast and cost-effective means of locating faults such as leaks, blockages and pipeline wall degradation within a pipeline while the system remains fully operational. The only requirement is that high-speed pressure sensors are placed in contact with the fluid. Further development of the method requires detailed numerical models and an enhanced understanding of transient flow within a pipeline where variations in pipeline condition and geometry occur. One such variation commonly encountered is the degradation or thinning of pipe walls, which can increase the susceptibility of a pipeline to leak development. This paper aims to improve transient-based fault detection methods by investigating how changes in pipe wall thickness affect the transient behaviour of a system; this is done through the analysis of laboratory experiments. The laboratory experiments are carried out on a stainless steel pipeline of constant outside diameter, into which a pipe section of variable wall thickness is inserted. In order to detect the location and severity of these changes in wall conditions within the laboratory system, an inverse transient analysis procedure is employed which considers independent variations in wavespeed and diameter. Inverse transient analyses are carried out using a genetic algorithm optimisation routine to match the response from a one-dimensional method of characteristics transient model to the experimental time-domain pressure responses. The accuracy of the detection technique is evaluated, and the benefits associated with various simplifying assumptions and simulation run times are investigated. It is found that for the case investigated, changes in the wavespeed and nominal diameter of the pipeline are both important
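The genetic-algorithm inversion loop can be sketched with a toy forward model: recover a single wavespeed from a synthetic pressure trace by evolving candidates against the data misfit. The forward model, parameter range, and GA settings below are illustrative stand-ins, not the paper's method-of-characteristics model:

```python
import math
import random

random.seed(1)

def forward(a, times):
    """Toy damped-oscillation 'transient response' controlled by wavespeed a (m/s)."""
    return [math.exp(-2.0 * t) * math.cos(2.0 * math.pi * a * t / 500.0) for t in times]

times = [0.01 * i for i in range(100)]
data = forward(1200.0, times)        # synthetic observations, true wavespeed 1200 m/s

def misfit(a):
    return sum((m - d) ** 2 for m, d in zip(forward(a, times), data))

# Simple GA: tournament selection, blend crossover, Gaussian mutation, elitism
pop = [random.uniform(1000.0, 1400.0) for _ in range(40)]
for _ in range(60):
    pop.sort(key=misfit)
    new = pop[:4]                    # elitism: carry the best candidates forward
    while len(new) < len(pop):
        p1, p2 = (min(random.sample(pop, 3), key=misfit) for _ in range(2))
        child = 0.5 * (p1 + p2) + random.gauss(0.0, 10.0)
        new.append(min(max(child, 1000.0), 1400.0))
    pop = new

best = min(pop, key=misfit)
```

In the actual inverse transient analysis each candidate would carry many unknowns (wavespeed and diameter per pipe section) and the misfit would compare a full transient simulation to the measured pressure trace.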
Castaldo, R.; Tizzani, P.; Lollino, P.; Calò, F.; Ardizzone, F.; Lanari, R.; Guzzetti, F.; Manunta, M.
2015-11-01
The aim of this paper is to propose a methodology to perform inverse numerical modelling of slow landslides that combines the potentialities of both numerical approaches and well-known remote-sensing satellite techniques. In particular, through an optimization procedure based on a genetic algorithm, we minimize, with respect to a proper penalty function, the difference between the modelled displacement field and differential synthetic aperture radar interferometry (DInSAR) deformation time series. The proposed methodology allows us to automatically search for the physical parameters that characterize the landslide behaviour. To validate the presented approach, we focus our analysis on the slow Ivancich landslide (Assisi, central Italy). The kinematical evolution of the unstable slope is investigated via long-term DInSAR analysis, by exploiting about 20 years of ERS-1/2 and ENVISAT satellite acquisitions. The landslide is driven by the presence of a shear band, whose behaviour is simulated through a two-dimensional time-dependent finite element model, in two different physical scenarios, i.e. Newtonian viscous flow and a deviatoric creep model. Comparison between the model results and DInSAR measurements reveals that the deviatoric creep model is more suitable to describe the kinematical evolution of the landslide. This finding is also confirmed by comparing the model results with the available independent inclinometer measurements. Our analysis emphasizes that integration of different data, within inverse numerical models, allows deep investigation of the kinematical behaviour of slow active landslides and discrimination of the driving forces that govern their deformation processes.
Cao, Pei; Qi, Shuai; Tang, J.
2018-03-01
The impedance/admittance measurements of a piezoelectric transducer bonded to or embedded in a host structure can be used as a damage indicator. When a credible model of the healthy structure, such as a finite element model, is available, it is possible to identify both the location and severity of damage using the impedance/admittance change information as input. The inverse analysis, however, may be under-determined, as the number of unknowns in high-frequency analysis is usually large while the available input information is limited. The fundamental challenge thus is how to find a small set of solutions that cover the true damage scenario. In this research we cast the damage identification problem into a multi-objective optimization framework to tackle this challenge. With damage locations and severities as unknown variables, one of the objective functions is the difference between the impedance-based model prediction in the parametric space and the actual measurements. Considering that damage occurrence generally affects only a small number of elements, we choose the sparsity of the unknown variables, measured by the l0 norm, as the other objective function. Subsequently, a multi-objective Dividing RECTangles (DIRECT) algorithm is developed to facilitate the inverse analysis, where the sparsity is further emphasized by a sigmoid transformation. As a deterministic technique, this approach yields results that are repeatable and conclusive. In addition, only one algorithmic parameter, the number of function evaluations, is needed. Numerical and experimental case studies demonstrate that the proposed framework is capable of obtaining high-quality damage identification solutions with limited measurement information.
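The two competing objectives, measurement misfit and the l0 norm of the damage vector, can be made concrete with a brute-force Pareto sketch over a few candidate damage scenarios. The linear sensitivity matrix and element count below are hypothetical stand-ins for the impedance model, and the exhaustive enumeration replaces the multi-objective DIRECT search:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_elem = 6
S = rng.standard_normal((8, n_elem))  # stand-in sensitivity of 8 measurements to damage
true = np.zeros(n_elem)
true[3] = 0.5                         # true scenario: one damaged element
y = S @ true                          # "measured" impedance change

def objectives(x):
    """Return (measurement misfit, l0 norm) for a candidate damage vector."""
    return float(np.sum((S @ x - y) ** 2)), int(np.count_nonzero(x))

# Candidate scenarios: every support of size <= 2, severity fixed at 0.5
cands = [np.zeros(n_elem)]
for k in (1, 2):
    for idx in itertools.combinations(range(n_elem), k):
        x = np.zeros(n_elem)
        x[list(idx)] = 0.5
        cands.append(x)

# Keep the non-dominated candidates: no other candidate is at least as good
# in both objectives and strictly better in one
objs = [objectives(x) for x in cands]
pareto = [i for i, (f, s) in enumerate(objs)
          if not any(f2 <= f and s2 <= s and (f2, s2) != (f, s) for f2, s2 in objs)]
```

The true single-element scenario sits on the Pareto front (zero misfit at l0 = 1), which is the sense in which the multi-objective formulation "covers" the true damage.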
A DSC analysis of inverse salt-pair explosive composition
Energy Technology Data Exchange (ETDEWEB)
Babu, E. Suresh; Kaur, Sukhminder [Central Forensic Science Laboratory, Explosives Division, Ramanthapur, Hyderabad 500013 (India)
2004-02-01
Alkali nitrates are used as an ingredient in low explosive compositions and pyrotechnics. It has been suggested that alkali nitrates can form inverse salt-pair explosives with the addition of ammonium chloride. Therefore, the thermal behavior of low explosive compositions containing potassium nitrate mixed with ammonium chloride has been studied using Differential Scanning Calorimetry (DSC). Results provide information about the ion exchange reaction between these two chemical substances and the temperature region at which the formation of a cloud of salt particles of potassium chloride takes place. Furthermore, the addition of ammonium chloride quenches the flame of deflagrating compositions and causes the mixture to undergo explosive decomposition at relatively low temperatures. (Abstract Copyright [2004], Wiley Periodicals, Inc.)
Note of non-destructive detection of voids by a high frequency inversion technique
International Nuclear Information System (INIS)
Cohen, J.K.; Bleistein, N.
1978-01-01
An inverse method for the nondestructive detection of scatterers of high contrast, such as voids or strongly reflecting inclusions, is described. The phase- and range-normalized far-field scattering amplitude is shown to be directly proportional to the Fourier transform of the characteristic function of the scatterer. The characteristic function is equal to unity inside the region occupied by the scatterer and zero outside; thus, knowledge of this function provides a description of the scatterer. The method is applied to flaws in a sphere.
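The stated proportionality, far-field amplitude ∝ Fourier transform of the scatterer's characteristic function, means the void's shape can be recovered by an inverse transform of the data. A 1-D numerical sketch with idealized, noise-free, full-aperture data:

```python
import numpy as np

# Characteristic function of a 1-D "void": 1 inside the scatterer, 0 outside
n = 256
x = np.linspace(-1.0, 1.0, n, endpoint=False)
chi = ((x > -0.3) & (x < 0.2)).astype(float)

# Idealized far-field data: proportional to the Fourier transform of chi
data = np.fft.fft(chi)

# Inversion: the inverse transform recovers the characteristic function,
# i.e. the location and extent of the void
recovered = np.fft.ifft(data).real
```

In practice the data are band-limited and noisy, so the recovery is a smoothed indicator of the void rather than this exact one.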
128Xe Lifetime Measurement Using the Coulex-Plunger Technique in Inverse Kinematics
International Nuclear Information System (INIS)
Konstantinopoulos, T.; Lagoyannis, A.; Harissopulos, S.; Dewald, A.; Rother, W.; Ilie, G.; Jones, P.; Rakhila, P.; Greenlees, P.; Grahn, T.; Julin, R.; Balabanski, D. L.
2008-01-01
The lifetimes of the lowest collective yrast and non-yrast states in 128Xe were measured in a Coulomb excitation experiment using the recoil distance method (RDM) in inverse kinematics. Hereby, the Cologne plunger apparatus was employed together with the JUROGAM spectrometer. Excited states in 128Xe were populated using a 128Xe beam impinging on a natFe target with E(128Xe) ≈ 525 MeV. Recoils were detected by means of an array of solar cells placed at forward angles. Recoil-gated γ-spectra were measured at different plunger distances.
128Xe Lifetime Measurement Using the Coulex-Plunger Technique in Inverse Kinematics
Konstantinopoulos, T.; Lagoyannis, A.; Harissopulos, S.; Dewald, A.; Rother, W.; Ilie, G.; Jones, P.; Rakhila, P.; Greenlees, P.; Grahn, T.; Julin, R.; Balabanski, D. L.
2008-05-01
The lifetimes of the lowest collective yrast and non-yrast states in 128Xe were measured in a Coulomb excitation experiment using the recoil distance method (RDM) in inverse kinematics. Hereby, the Cologne plunger apparatus was employed together with the JUROGAM spectrometer. Excited states in 128Xe were populated using a 128Xe beam impinging on a natFe target with E(128Xe)~525 MeV. Recoils were detected by means of an array of solar cells placed at forward angles. Recoil-gated γ-spectra were measured at different plunger distances.
International Nuclear Information System (INIS)
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as a most powerful technique since it can measure the occupied electron states almost completely. Inverse photoelectron spectroscopy, by contrast, measures the unoccupied electron states by using the inverse process of photoelectron spectroscopy, and in principle experiments similar to photoelectron spectroscopy become feasible. The experimental technology for inverse photoelectron spectroscopy has been developed energetically by many research groups. At present, work focuses on increasing the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with variable light energy; however, no inverse photoelectron spectrometer for the vacuum ultraviolet region is yet on the market. In this report, the principle of inverse photoelectron spectroscopy and the present state of the instrumentation are described, and directions for future development are explored. As experimental equipment, electron guns, light detectors and so on are explained. As examples of experiments, the inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
International Nuclear Information System (INIS)
Serai, Suraj; Towbin, Alexander J.; Podberesky, Daniel J.
2012-01-01
Abdominal contrast-enhanced MR angiography (CE-MRA) is routinely performed in children. CE-MRA is challenging in children because of patient motion, difficulty in obtaining intravenous access, and the inability of young patients to perform a breath-hold during imaging. The combination of pediatric-specific difficulties in imaging and the safety concerns regarding the risk of gadolinium-based contrast agents in patients with impaired renal function has renewed interest in the use of non-contrast (NC) MRA techniques. At our institution, we have optimized 3-D NC-MRA techniques for abdominal imaging. The purpose of this work is to demonstrate the utility of an inflow-enhanced, inversion recovery balanced steady-state free precession-based (b-SSFP) NC-MRA technique. (orig.)
Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.
2012-12-01
The Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) aircraft program spanned the dry-to-wet and wet-to-dry transition seasons in November 2008 and May 2009, respectively, yielding ~150 vertical profiles covering the Brazilian Amazon Basin (BAB). With these data we attempt to estimate a carbon budget for the BAB, to determine whether regional aircraft experiments can provide strong constraints for such a budget, and to compare inversion frameworks for optimizing flux estimates. We use a Lagrangian particle dispersion model (LPDM) to integrate satellite, aircraft, and surface data with mesoscale meteorological fields, linking bottom-up and top-down models to provide constraints and error bounds for regional fluxes. The Stochastic Time-Inverted Lagrangian Transport (STILT) model, driven by meteorological fields from BRAMS, ECMWF, and WRF, is coupled to a biosphere model, the Vegetation Photosynthesis Respiration Model (VPRM), to determine regional CO2 fluxes for the BAB. The VPRM is a prognostic biosphere model driven by MODIS 8-day EVI and LSWI indices along with shortwave radiation and temperature from tower measurements and mesoscale meteorological data. VPRM parameters are tuned using eddy flux tower data from the Large-Scale Biosphere-Atmosphere experiment. VPRM computes hourly CO2 fluxes by calculating Gross Ecosystem Exchange (GEE) and Respiration (R) for 8 different vegetation types. The VPRM fluxes are scaled up to the BAB using time-averaged drivers (shortwave radiation and temperature) from high-temporal-resolution runs of BRAMS, ECMWF, and WRF, and vegetation maps from SYNMAP and IGBP2007. Shortwave radiation from each mesoscale model is validated using surface data and output from GL 1.2, a global radiation model based on GOES 8 visible imagery. The vegetation maps are updated to 2008 and 2009 using land-use scenarios modeled by Sim Amazonia 2 and Sim Brazil. A priori fluxes modeled by STILT-VPRM are optimized using data from BARCA, eddy covariance sites, and flask measurements.
Karunakaran, Madhavan; Shevate, Rahul; Peinemann, Klaus-Viktor
2016-01-01
In this paper, we demonstrate the formation of nanostructured double hydrophobic poly(styrene-b-methyl methacrylate) (PS-b-PMMA) block copolymer membranes via the state-of-the-art phase inversion technique. The nanostructured membrane morphologies are tuned by different solvent and block copolymer compositions. The membrane morphology has been investigated using FESEM, AFM and TEM. Morphological investigation shows the formation of both cylindrical and lamellar structures on the top surface of the block copolymer membranes. PS-b-PMMA having equal block lengths (PS160K-b-PMMA160K) exhibits both cylindrical and lamellar structures on the top layer of the asymmetric membrane. All membranes fabricated from PS160K-b-PMMA160K show incomplete pore formation in both cylindrical and lamellar morphologies during the phase inversion process. However, a PS-b-PMMA (PS135K-b-PMMA19.5K) block copolymer with a short PMMA block allowed us to produce open pore structures with ordered hexagonal cylindrical pores during the phase inversion process. The resulting nanostructured PS-b-PMMA block copolymer membranes have pure water fluxes of 105-820 l/m2.h.bar and 95% retention of PEG50K.
Directory of Open Access Journals (Sweden)
Qi Hong
2015-01-01
The particle size distribution (PSD) plays an important role in environmental pollution detection and human health protection, for aerosols such as fog, haze and soot. In this study, the Attractive and Repulsive Particle Swarm Optimization (ARPSO) algorithm and the basic PSO were applied to retrieve the PSD. The spectral extinction technique, coupled with the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law, was employed to investigate the retrieval of the PSD. Three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R), normal (N-N) and logarithmic normal (L-N) distributions, were studied in the dependent model. Then, an optimal wavelength selection algorithm was proposed. Characteristic parameters were employed to study the accuracy and robustness of the inverse results. The research revealed that ARPSO converged faster and more accurately than the basic PSO, even with random measurement error. The investigation also demonstrated that inverse results obtained with four incident laser wavelengths were more accurate and robust than those obtained with two, and that widening the interval between the selected incident laser wavelengths further improved accuracy, even in the presence of random error.
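The dependent-model retrieval described in this record can be sketched in a few lines: a basic PSO (not the ARPSO variant, whose repulsion phase is omitted) recovers log-normal PSD parameters from synthetic four-wavelength extinction data built with the ADA kernel. The refractive index, wavelengths, size grid and true PSD parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward model: spectral extinction of a log-normal PSD using the Anomalous
# Diffraction Approximation (ADA) for non-absorbing spheres.
m_rel = 1.33                                    # relative refractive index (assumed)
wavelengths = np.array([0.4, 0.6, 0.8, 1.0])    # four incident wavelengths, um
D = np.linspace(0.1, 10.0, 400)                 # particle diameter grid, um
dD = D[1] - D[0]

def q_ext(D, lam):
    rho = 2.0 * (np.pi * D / lam) * (m_rel - 1.0)   # ADA phase-shift parameter
    return 2.0 - (4.0 / rho) * np.sin(rho) + (4.0 / rho**2) * (1.0 - np.cos(rho))

def extinction(params):
    mu, sigma = params                          # log-normal PSD parameters
    n = np.exp(-(np.log(D) - mu)**2 / (2 * sigma**2)) / (D * sigma * np.sqrt(2 * np.pi))
    return np.array([np.sum(q_ext(D, lam) * D**2 * n) * dD for lam in wavelengths])

measured = extinction((0.5, 0.4))               # synthetic "measurement"

def objective(p):                               # relative squared misfit
    return np.sum(((extinction(p) - measured) / measured)**2)

# Basic PSO over (mu, sigma)
lo, hi = np.array([-1.0, 0.1]), np.array([1.5, 1.0])
n_part, n_iter = 40, 150
x = rng.uniform(lo, hi, (n_part, 2))
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([objective(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

print("retrieved (mu, sigma):", g)   # should approach the true (0.5, 0.4)
```

ARPSO would add a repulsion phase when swarm diversity collapses; with exact synthetic data the basic swarm already converges on this smooth two-parameter problem.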
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential in the natural sciences to extract nonlinear dynamics from time-series data as an inverse problem. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions involving the conjugation of multiple phases, and their dynamics are intrinsically nonlinear owing to the effect of the surface area between the different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial-observation problem in order to simultaneously estimate the time course of the hidden variables and the kinetic parameters underlying the dynamics. The belief propagation step is implemented with a sequential Monte Carlo algorithm to handle the nonlinear dynamical system. Using the proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of the solid reactants and products, can be estimated successfully from the observable temporal changes in the concentration of the dissolved intermediate product alone.
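As a toy version of this estimation problem, the sketch below runs a bootstrap particle filter (the sequential Monte Carlo ingredient of such frameworks, without the belief-propagation/EM machinery) on a first-order dissolution reaction: the solid amount x is hidden, only the dissolved product is observed, and the rate constant k is learned by augmenting the state with a small artificial jitter. All rates and noise levels are illustrative, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "surface reaction": solid reactant x dissolves at rate k;
# only the dissolved product y = 1 - x is observed, noisily.
k_true, dt, T = 0.3, 0.1, 100
x = np.empty(T); x[0] = 1.0
for t in range(1, T):
    x[t] = x[t-1] - k_true * x[t-1] * dt
obs = (1.0 - x) + rng.normal(0, 0.02, T)        # noisy observations

# Bootstrap particle filter with the rate constant k in the state vector.
N = 2000
xs = np.full(N, 1.0)                             # hidden-state particles
ks = rng.uniform(0.0, 1.0, N)                    # prior over the rate constant
for t in range(1, T):
    ks = np.abs(ks + rng.normal(0, 0.005, N))    # parameter jitter keeps diversity
    xs = xs - ks * xs * dt + rng.normal(0, 0.002, N)  # process noise
    w = np.exp(-0.5 * ((obs[t] - (1.0 - xs)) / 0.02)**2) + 1e-300
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                  # multinomial resampling
    xs, ks = xs[idx], ks[idx]

print("estimated rate constant:", ks.mean())     # approaches k_true = 0.3
print("final hidden state:", xs.mean(), "truth:", x[-1])
```

The posterior over k tightens as the decay curve accumulates; the full EM scheme of the record would additionally re-estimate noise parameters between filtering passes.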
Fong, Daniel Tik-Pui; Ha, Sophia Chui-Wai; Mok, Kam-Ming; Chan, Christie Wing-Long; Chan, Kai-Ming
2012-11-01
Ankle ligamentous sprain is common in sports. The most direct way to study the mechanism quantitatively is to study real injury cases; however, it is unethical and impractical to produce an injury in the laboratory. A recently developed, model-based image-matching motion analysis technique allows quantitative analysis of real injury incidents captured in televised events and gives important knowledge for the development of injury prevention protocols and equipment. To date, there have been only 4 reported cases, and there is a need to conduct more studies for a better understanding of the mechanism of ankle ligamentous sprain injury. This study presents 5 cases in tennis and a comparison with 4 previous cases for a better understanding of the mechanism of ankle ligamentous sprain injury. Case series; level of evidence, 4. Five sets of videos showing ankle sprain injuries in televised tennis competition with 2 camera views were collected. The videos were transformed, synchronized, and rendered to a 3-dimensional animation software. The dimensions of the tennis court in each case were obtained to build a virtual environment, and a skeleton model scaled to the injured athlete's height was used for the skeleton matching. Foot strike was determined visually, and the profiles of the ankle joint kinematics were individually presented. There was a pattern of sudden inversion and internal rotation at the ankle joint, with the peak values ranging from 48°-126° and 35°-99°, respectively. In the sagittal plane, the ankle joint fluctuated between plantar flexion and dorsiflexion within the first 0.50 seconds after foot strike. The peak inversion velocity ranged from 509 to 1488 deg/sec. Internal rotation at the ankle joint could be one of the causes of ankle inversion sprain injury, with a slightly inverted ankle joint orientation at landing as the inciting event. To prevent the foot from rolling over the edge to cause a sprain injury, tennis players who do lots of sideward
Subspace-based analysis of the ERT inverse problem
Ben Hadj Miled, Mohamed Khames; Miller, Eric L.
2004-05-01
In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of the equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with the largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of the primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms such as R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
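A toy version of the MUSIC localization step can be sketched as follows: equivalent point sources with an assumed 1/r lead field are excited with distinct amplitude patterns, and candidate positions are scored by how nearly their steering vectors are orthogonal to the estimated noise subspace. The geometry, lead field and noise levels are illustrative, not the ERT forward model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# M boundary sensors on [0, 1]; candidate monopole sources on a 1-D grid
# at a fixed depth, with an assumed 1/distance lead field.
M = 16
sensors = np.linspace(0.0, 1.0, M)
grid = np.linspace(0.05, 0.95, 91)               # candidate source positions
depth = 0.1

def steering(p):                                 # normalized lead-field column
    a = 1.0 / np.hypot(sensors - p, depth)
    return a / np.linalg.norm(a)

true_src = [0.25, 0.70]
A = np.column_stack([steering(p) for p in true_src])

# K excitations with distinct amplitude patterns span the signal subspace.
K = 40
S = rng.normal(size=(2, K))
Y = A @ S + 0.01 * rng.normal(size=(M, K))       # noisy observation matrix

# MUSIC: score each candidate by its projection onto the noise subspace.
U, s, _ = np.linalg.svd(Y)
En = U[:, 2:]                                    # noise subspace (signal rank 2)
music = np.array([1.0 / np.linalg.norm(En.T @ steering(p))**2 for p in grid])

# Pick the two largest local maxima of the pseudospectrum.
ismax = (music[1:-1] > music[:-2]) & (music[1:-1] > music[2:])
cand, vals = grid[1:-1][ismax], music[1:-1][ismax]
peaks = cand[np.argsort(vals)[-2:]]
print("localized sources near:", np.sort(peaks))  # ~0.25 and ~0.70
```

Recursive variants like RAP-MUSIC would instead find one peak, project it out, and repeat, which is more robust when source signatures are strongly correlated.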
Robust 1D inversion and analysis of helicopter electromagnetic (HEM) data
DEFF Research Database (Denmark)
Tølbøll, R.J.; Christensen, N.B.
2006-01-01
A number of test flights were performed using a frequency-domain, helicopter-borne electromagnetic (HEM) system. We perform a theoretical examination of the resolution capabilities of the applied system. Quantitative model-parameter analyses show that the system only weakly resolves conductive, near-surface layers but can resolve layer boundaries to a depth of more than 100 m. Modeling experiments also show that the effect of altimeter errors on the inversion results is serious. We suggest a new interpretation scheme for HEM data founded solely on full nonlinear 1D inversion, providing layered-earth models supported by data-misfit parameters and a quantitative model-parameter analysis. The backbone of the scheme is the removal of cultural coupling effects followed by a multilayer inversion that in turn provides reliable starting models for a subsequent few-layer inversion. A new procedure for correlation...
THE DIDACTIC ANALYSIS OF STUDIES ON THE INVERSE PROBLEMS FOR THE DIFFERENTIAL EQUATIONS
Directory of Open Access Journals (Sweden)
В С Корнилов
2017-12-01
This article discusses the results of a didactic analysis of the organization and delivery of seminar classes on inverse problems for differential equations for students of physics and mathematics programmes at higher educational institutions. The analysis includes a general characterization of the mathematical content of the seminars, an analysis of seminar structure, an analysis of how the developmental and educational goals are realized, the identification of the didactic units and cognitive tools that students must acquire in each section of the course on inverse problems, and other important psychological and pedagogical aspects. Attention is paid to aligning the seminars with the lecture material, to identifying the functions that solving inverse problems performs in the teaching and learning process, and to the need to demonstrate the various mathematical devices and methods for their solution. Such a didactic analysis helps not only to identify inverse problems whose solution can draw students collectively into the creative process of searching for a solution, but also to organize effective assessment of students' knowledge and skills in inverse problems for differential equations.
Reliability analysis techniques in power plant design
International Nuclear Information System (INIS)
Chang, N.E.
1981-01-01
An overview of reliability analysis techniques is presented as applied to power plant design. The key terms, power plant performance, reliability, availability and maintainability are defined. Reliability modeling, methods of analysis and component reliability data are briefly reviewed. Application of reliability analysis techniques from a design engineering approach to improving power plant productivity is discussed. (author)
Boonyasiriwat, Chaiwoot
2010-11-01
A recently developed time-domain multiscale waveform tomography (MWT) method is applied to synthetic and field marine data. Although the MWT method had already been applied to synthetic data, the synthetic-data application here leads to the development of a hybrid method combining waveform tomography with the salt-flooding technique commonly used in subsalt imaging. This hybrid method overcomes a convergence problem encountered when inverting from a traveltime velocity tomogram and successfully provides an accurate and highly resolved velocity tomogram for the 2D SEG/EAGE salt model. In the application of MWT to the field data, the inversion is carried out using a multiscale method with a dynamic early-arrival muting window to mitigate the local-minima problem of waveform tomography and elastic effects. With the modified MWT method, reasonably accurate results were obtained, as verified by comparison of migration images and common image gathers. The hybrid method with the salt-flooding technique is not used in this field-data example because, according to our interpretation, there is no salt in the subsurface; however, we believe it is applicable to other field data. © 2010 Society of Exploration Geophysicists.
Jiang, Yi; Li, Guoyang; Qian, Lin-Xue; Liang, Si; Destrade, Michel; Cao, Yanping
2015-10-01
We use the supersonic shear wave imaging (SSI) technique to measure not only the linear but also the nonlinear elastic properties of brain matter. Here, we tested six porcine brains ex vivo and measured the velocities of the plane shear waves induced by acoustic radiation force at different states of pre-deformation as the ultrasonic probe is pushed into the soft tissue. We relied on an inverse method based on the theory governing the propagation of small-amplitude acoustic waves in deformed solids to interpret the experimental data. We found that, depending on the subject, the resulting initial shear modulus μ0 varies from 1.8 to 3.2 kPa, the stiffening parameter b of the hyperelastic Demiray-Fung model from 0.13 to 0.73, and the third-order (A) and fourth-order (D) constants of weakly nonlinear elasticity from -1.3 to -20.6 kPa and from 3.1 to 8.7 kPa, respectively. A paired t-test performed on the experimental results for the left and right lobes of the brain shows no significant difference. These values are in line with those reported in the literature on brain tissue, indicating that the SSI method, combined with the inverse analysis, is an efficient and powerful tool for the mechanical characterization of brain tissue, which is of great importance for computer simulation of traumatic brain injury and virtual neurosurgery.
On process capability and system availability analysis of the inverse Rayleigh distribution
Directory of Open Access Journals (Sweden)
Sajid Ali
2015-04-01
In this article, process capability and system availability analysis is discussed for the inverse Rayleigh lifetime distribution. A Bayesian approach with a conjugate gamma prior is adopted for the analysis. Different types of loss functions are considered to find Bayes estimates of the process capability and system availability. A simulation study is conducted to compare the different loss functions.
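The conjugate analysis has a simple closed form that can be sketched directly: for inverse Rayleigh data with pdf f(x;θ) = (2θ/x³)exp(−θ/x²), the sufficient statistic is Σ 1/xᵢ², a Gamma(a, b) prior on θ is conjugate, and posterior expectations of availability-type quantities R(t) = 1 − exp(−θ/t²) follow from the Gamma moment generating function. The hyperparameters, sample size and evaluation point t below are illustrative, and only the squared-error loss (posterior mean) is shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse Rayleigh sampling via the inverse CDF: F(x) = exp(-theta/x^2),
# so x = sqrt(-theta / log(u)) for u ~ Uniform(0, 1).
theta_true, n = 2.0, 200
u = rng.random(n)
x = np.sqrt(-theta_true / np.log(u))

# Conjugate update: Gamma(a, b) prior on theta gives the posterior
# Gamma(a + n, b + sum(1/x_i^2)).
a, b = 1.0, 1.0                     # illustrative hyperparameters
a_post = a + n
b_post = b + np.sum(1.0 / x**2)

theta_sel = a_post / b_post         # Bayes estimate under squared-error loss

# Availability-type quantity R(t) = 1 - exp(-theta/t^2): its posterior mean
# is closed form, since E[exp(-s*theta)] = (b_post/(b_post + s))**a_post.
t = 1.5
R_bayes = 1.0 - (b_post / (b_post + 1.0 / t**2))**a_post

print("posterior mean of theta:", theta_sel)     # near theta_true = 2.0
print("Bayes estimate of R(t=1.5):", R_bayes)
```

Other loss functions in the record (e.g. LINEX or precautionary) would replace the posterior mean with the corresponding functional of the same Gamma posterior.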
A structured approach to forensic study of explosions: The TNO Inverse Explosion Analysis tool
Voort, M.M. van der; Wees, R.M.M. van; Brouwer, S.D.; Jagt-Deutekom, M.J. van der; Verreault, J.
2015-01-01
Forensic analysis of explosions consists of determining the point of origin, the explosive substance involved, and the charge mass. Within the EU FP7 project Hyperion, TNO developed the Inverse Explosion Analysis (TNO-IEA) tool to estimate the charge mass and point of origin based on observed damage.
Treating experimental data of inverse kinetic method by unitary linear regression analysis
International Nuclear Information System (INIS)
Zhao Yusen; Chen Xiaoliang
2009-01-01
The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity but also the effective neutron source intensity can be calculated by this method. A computer code was developed based on the inverse kinetic method and unitary linear regression analysis. Data from the zero-power facility BFS-1 in Russia were processed and the results compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data from the inverse kinetic method with unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated from the reactivity. The results also show that the effect of an external neutron source on the reactivity measurement should be taken into account when the reactor power is low and the external source is strong. (authors)
Energy Technology Data Exchange (ETDEWEB)
Sakurai, K; Shima, H [OYO Corp., Tokyo (Japan)
1996-10-01
This paper proposes a modeling method for one-dimensional complex resistivity using a linear filter technique extended to complex resistivity. In addition, a numerical inversion test was conducted using the monitoring results to examine the measured frequency band. The linear filter technique is a method by which the theoretical potential can be calculated for stratified structures, and it is widely used for one-dimensional analysis in dc electrical exploration. The modeling can be carried out using only values of complex resistivity, without using values of potential. In this study, a bipolar electrode configuration was employed. A numerical test of one-dimensional complex resistivity inversion was conducted using the formulated modeling. A three-layered structure was used as the numerical model, and a multi-layer structure with a layer thickness of 5 m was analyzed on the basis of the apparent complex resistivity calculated from the model. The numerical test showed that both the chargeability and the time constant agreed well with those of the original model. A trade-off was observed between the chargeability and the time constant at the stage of convergence. 3 refs., 9 figs., 1 tab.
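The chargeability/time-constant recovery in such a numerical test can be illustrated with a minimal sketch that replaces the paper's linear-filter 1-D modeling with a single homogeneous Cole-Cole complex resistivity (an assumed parameterization, with ρ0 and the frequency exponent c held fixed) and a brute-force grid search over the chargeability m and time constant τ; the misfit surface from such a scan is also how the reported m-τ trade-off can be visualized.

```python
import numpy as np

# Cole-Cole complex resistivity model (assumed parameterization):
# rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))
def cole_cole(freqs, rho0, m, tau, c):
    iwt = (1j * 2.0 * np.pi * freqs * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

freqs = np.logspace(-2, 3, 30)                   # measured frequency band, Hz
obs = cole_cole(freqs, 100.0, 0.25, 0.5, 0.5)    # synthetic "apparent" data

# Brute-force inversion for m and tau with rho0 and c held fixed.
ms = np.linspace(0.05, 0.5, 46)
taus = np.logspace(-2, 1, 61)
misfit = np.array([[np.sum(np.abs(cole_cole(freqs, 100.0, m, t, 0.5) - obs)**2)
                    for t in taus] for m in ms])
im, it = np.unravel_index(np.argmin(misfit), misfit.shape)
print("recovered m, tau:", ms[im], taus[it])     # ~0.25 and ~0.5
```

With noise-free data the minimum falls on the grid node nearest the true parameters; a gradient-based inversion like the paper's would show the same elongated misfit valley between m and τ near convergence.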
Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K
2010-12-01
Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry
Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.
2015-01-01
Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
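A minimal sketch of the SGE idea is to fit a sum of one Gaussian and one exponential T2 decay to synthetic data. The sketch below uses variable projection (a grid over the two decay times with linear least squares for the amplitudes) rather than the authors' regularized inversion; the decay times, amplitudes, Gaussian form and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic T2 decay: rigid (Gaussian) solid signal + exponential fluid signal.
t = np.linspace(0.01, 50.0, 500)                 # acquisition times, ms
T2g_true, T2e_true, Ag_true, Ae_true = 0.8, 10.0, 0.6, 0.4
y = (Ag_true * np.exp(-(t / T2g_true)**2 / 2) +
     Ae_true * np.exp(-t / T2e_true) + rng.normal(0, 0.003, t.size))

# Variable projection: grid over the two decay times, solving the two
# amplitudes by linear least squares at each grid node.
T2gs = np.linspace(0.2, 2.0, 37)
T2es = np.logspace(0, 2, 81)                     # 1 to 100 ms
best = (np.inf, None)
for T2g_try in T2gs:
    gcol = np.exp(-(t / T2g_try)**2 / 2)         # Gaussian decay basis column
    for T2e_try in T2es:
        ecol = np.exp(-t / T2e_try)              # exponential decay basis column
        A = np.column_stack([gcol, ecol])
        amp = np.linalg.lstsq(A, y, rcond=None)[0]
        r = np.sum((A @ amp - y)**2)
        if r < best[0]:
            best = (r, (T2g_try, T2e_try, amp[0], amp[1]))

T2g, T2e, Ag, Ae = best[1]
print(f"Gaussian: A={Ag:.2f}, T2={T2g:.2f} ms; exponential: A={Ae:.2f}, T2={T2e:.1f} ms")
```

A standard multi-exponential inverse Laplace transform forced to fit the Gaussian component would instead place spurious fast-exponential signal at short T2, which is the overcall behavior the SGE inversion avoids.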
Born reflection kernel analysis and wave-equation reflection traveltime inversion in elastic media
Wang, Tengfei
2017-08-17
Elastic reflection waveform inversion (ERWI) utilizes reflections to update the low and intermediate wavenumbers in the deeper part of the model. However, ERWI suffers from the cycle-skipping problem due to its waveform-residual objective function. Since traveltime information relates to the background model more linearly, we use traveltime residuals as the objective function to update the background velocity model using wave-equation reflected traveltime inversion (WERTI). The reflection kernel analysis shows that mode decomposition can suppress artifacts in the gradient calculation. We design a two-step inversion strategy in which PP reflections are first used to invert the P-wave velocity (Vp), followed by S-wave velocity (Vs) inversion with PS reflections. P/S separation of multi-component seismograms and spatial wave-mode decomposition can effectively reduce the nonlinearity of the inversion by selecting suitable P- or S-wave subsets for hierarchical inversion. A numerical example on the Sigsbee2A model validates the effectiveness of the algorithms and strategies for elastic WERTI (E-WERTI).
On an asymptotic technique of solution of the inverse problem of helioseismology
International Nuclear Information System (INIS)
Brodskij, M.A.; Vorontsov, S.V.
1987-01-01
A technique for the solution of the inverse problem for the solar 5-min oscillations is proposed, which provides an independent determination of the sound speed as a function of depth in the solar interior and of the frequency dependence of the effective phase shift for the reflection of the trapped acoustic waves from the outer layers. Preliminary numerical results are presented.
International Nuclear Information System (INIS)
Hey, Jonathan; Malloy, Adam C.; Martinez-Botas, Ricardo; Lamperth, Michael
2015-01-01
Highlights: • Conjugate heat transfer analysis of an electric machine. • Inverse identification method for estimating the model parameters. • Experimentally determined thermal properties and electromagnetic losses. • Coupling of inverse identification method with a numerical model. • Improved modeling accuracy through introduction of interface material. - Abstract: Energy conversion devices undergo thermal loading during operation as a result of inefficiencies in the energy conversion process. This will eventually lead to degradation and possible failure of the device if the heat generated is not properly managed. The ability to accurately predict the thermal behavior of such a device at the initial development stage is therefore an important requirement, but accurate prediction of critical temperatures is challenging because heat transfer parameters vary from one device to another. The ability to determine the model parameters is key to accurately representing the heat transfer in such a device. This paper presents the use of an inverse identification technique to estimate the model parameters of an energy conversion device designed for vehicular applications. To represent the imperfect contact and the presence of insulating materials in the permanent-magnet electric machine, thin materials are introduced at the component interfaces of the numerical model. The proposed inverse identification method is used to estimate the equivalent thermal conductance of these thin materials. In addition, the electromagnetic losses generated in the permanent magnet are derived indirectly from the temperature measurements using the same method. With the thermal properties and input parameters of the numerical model obtained from the inverse identification method, the critical temperature of the device can be predicted more accurately. The deviation between the maximum measured and predicted winding temperatures is less than 2.4%.
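The inverse identification step can be illustrated with a lumped single-node sketch: an interface conductance G is recovered by matching a simulated temperature history to "measured" data in a least-squares scan. The model and all parameter values are illustrative stand-ins for the paper's conjugate heat-transfer model of the machine.

```python
import numpy as np

# Lumped thermal model: C * dT/dt = Q - G * (T - T_amb), explicit Euler.
C, Q, T_amb, dt, n = 50.0, 20.0, 25.0, 1.0, 600   # illustrative values

def simulate(G):
    T = np.empty(n); T[0] = T_amb
    for i in range(1, n):
        T[i] = T[i-1] + dt * (Q - G * (T[i-1] - T_amb)) / C
    return T

rng = np.random.default_rng(5)
measured = simulate(2.0) + rng.normal(0, 0.05, n)  # true conductance G = 2.0 W/K

# Inverse identification: scan candidate conductances, keep the best fit.
Gs = np.linspace(0.5, 5.0, 451)
misfit = [np.sum((simulate(G) - measured)**2) for G in Gs]
G_hat = Gs[int(np.argmin(misfit))]
print("identified conductance G:", G_hat)          # ~2.0 W/K
```

In the paper's setting the forward model is a full conjugate heat-transfer simulation and the scan is replaced by an optimization loop, but the structure (simulate, compare to measured temperatures, update the parameter) is the same.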
International Nuclear Information System (INIS)
Hu, Chengyao; Huang, Pei
2011-01-01
The importance of sugar and sugar-containing materials is well recognized nowadays, owing to their application in industrial processes, particularly in the food, pharmaceutical and cosmetic industries. Because of the large number of compounds involved and the relatively small number of solubility and/or diffusion coefficient data available for each compound, it is highly desirable to measure the solubility and/or diffusion coefficient as efficiently as possible and to improve the accuracy of the methods used. In this work, a new technique was developed for measuring the diffusion coefficient of a stationary solid solute in a stagnant solvent that simultaneously measures solubility, based on an inverse measurement problem algorithm with the real-time dissolved amount profile as a function of time. This study differs from established techniques in both the experimental method and the data analysis. In the experimental method, the dissolved amount of solid solute in the quiescent solvent is followed using a continuous weighing technique. In the data analysis, a hybrid genetic algorithm is used to minimize an objective function comparing the calculated and measured dissolved amounts over time, measured on a cylindrical sample of amorphous glucose in methanol or ethanol. The calculated dissolved amount, which is a function of the unknown physical properties of the solid solute in the solvent, is obtained from the solution of the two-dimensional nonlinear inverse natural convection problem. The estimated solubilities of amorphous glucose in methanol and ethanol at 293 K were 32.1 g/100 g methanol and 1.48 g/100 g ethanol, respectively, in agreement with literature values, supporting the validity of the simultaneously measured diffusion coefficient. These results show the efficiency and stability of the developed technique for simultaneously estimating the solubility and diffusion coefficient.
Inversion kinematics at deep-seated gravity slope deformations revealed by trenching techniques
Pasquaré Mariotto, Federico; Tibaldi, Alessandro
2016-01-01
We compare data from three deep-seated gravitational slope deformations (DSGSDs) where palaeoseismological techniques were applied in artificial trenches. At all trenches, located in metamorphic rocks of the Italian Alps, there is evidence of extensional deformation given by normal movements along slip planes dipping downhill or uphill, and/or fissures, as expected in gravitational failure. However, we document and illustrate – with the aid of trenching – evidenc...
Inverse Dynamic Analysis for Various Drivings in Kinematic Systems
Energy Technology Data Exchange (ETDEWEB)
Lee, Byung Hoon [Pusan Nat’l Univ., Busan (Korea, Republic of)
2017-09-15
Analysis of actuating forces and joint reaction forces is essential to determine the capacity of actuators, to control the mechanical system, and to design its components. This paper presents an algorithm that calculates actuating forces (or torques), depending on the various types of driving constraints, in order to produce a given system motion in the joint coordinate space. The joint coordinates are used as the generalized coordinates of a kinematic system. System equations of motion and constraint acceleration equations are transformed from the Cartesian coordinate space to the joint coordinate space using the velocity transformation method. A numerical example is carried out to verify the proposed algorithm.
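As a minimal illustration of the idea, the sketch below computes the actuating torque required to realize a prescribed joint motion for a single-link pendulum, the simplest kinematic system expressed in joint coordinates. The model, its parameters and the prescribed motion are hypothetical, not taken from the paper:

```python
import math

def inverse_dynamics_torque(theta, theta_ddot, m=1.0, l=0.5, g=9.81):
    """Actuating torque that realizes a prescribed joint motion for a
    single point-mass pendulum (joint coordinate theta, hypothetical
    parameters): tau = m*l^2*theta_ddot + m*g*l*sin(theta)."""
    return m * l ** 2 * theta_ddot + m * g * l * math.sin(theta)

# Prescribed joint motion theta(t) = A*sin(w*t), so theta_ddot = -A*w^2*sin(w*t)
A, w, t = 0.2, 2.0, 0.3
theta = A * math.sin(w * t)
theta_ddot = -A * w ** 2 * math.sin(w * t)
tau = inverse_dynamics_torque(theta, theta_ddot)  # torque the actuator must supply
```

For a multi-body system the same computation is done with the full joint-space mass matrix and constraint forces, but the structure is identical: prescribed motion in, required actuation out.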
Nuclear analysis techniques and environmental sciences
International Nuclear Information System (INIS)
1997-10-01
31 theses are collected in this book. It introduces molecular activation analysis, micro-PIXE and micro-probe analysis, x-ray fluorescence analysis and accelerator mass spectrometry. The applications of these nuclear analysis techniques in the environmental sciences are presented and reviewed
Analysis of archaeological pieces with nuclear techniques
International Nuclear Information System (INIS)
Tenorio, D.
2002-01-01
In this work, nuclear techniques such as neutron activation analysis, PIXE, X-ray fluorescence analysis, metallography, uranium-series dating and Rutherford backscattering are described for use in the analysis of archaeological specimens and materials. Some published works and theses on the analysis of different Mexican and Mesoamerican archaeological sites are also cited. (Author)
Chemical analysis by nuclear techniques
International Nuclear Information System (INIS)
Sohn, S. C.; Kim, W. H.; Park, Y. J.; Song, B. C.; Jeon, Y. S.; Jee, K. Y.; Pyo, H. Y.
2002-01-01
This state-of-the-art report consists of four parts: production of micro-particles, analysis of boron, the alpha tracking method, and development of a neutron-induced prompt gamma-ray spectroscopy (NIPS) system. The various methods for the production of micro-particles, such as the mechanical method, electrolysis method, chemical method and spray method, are described in the first part. The second part covers sample treatment, separation and concentration, analytical methods, and applications of boron analysis. The third part covers the characteristics of alpha tracks, track detectors, pretreatment of samples, neutron irradiation, etching conditions for various detectors, observation of tracks on the detector, etc. The last part covers basic theory, the neutron source, collimator, neutron shields, calibration of the NIPS system, and its applications.
Directory of Open Access Journals (Sweden)
Dayong Ning
2016-03-01
The acoustic signals of internal combustion engines contain valuable information about engine condition and can be used to detect incipient faults. However, these signals are complex, composed of a faulty component and background noise, so engine condition characteristics are difficult to extract through wavelet transformation and acoustic emission techniques. In this study, an instantaneous frequency analysis method is proposed. A new time-frequency model is constructed by using a fixed-amplitude, variable-cycle sine function to fit adjacent points of the time-domain signal gradually, so that the instantaneous frequency corresponds to a single value at any time. An instantaneous frequency calculation based on an inverse trigonometric fitting method is also introduced. The mean of all local maximum values is then used to identify the engine condition automatically. Results reveal that the mean of the local maximum values under faulty conditions differs from the normal mean. An experimental case illustrates the applicability of the proposed method. Using the proposed time-frequency model, engine condition can be identified and the abnormal sound produced by faulty engines determined.
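The inverse trigonometric step can be sketched as follows: for a signal assumed to be a fixed-amplitude sine, the phase of each sample is recovered with arcsin and the instantaneous frequency is obtained from the phase advance between adjacent points. This toy version (hypothetical sampling rate, tone and amplitude, valid only while the phase stays within (-pi/2, pi/2)) illustrates the principle and is not the authors' algorithm:

```python
import math

def instantaneous_frequency(x, fs, A):
    """Estimate instantaneous frequency between adjacent samples of a
    fixed-amplitude sine by recovering each sample's phase with arcsin.
    Only valid while the phase stays within (-pi/2, pi/2)."""
    freqs = []
    for x0, x1 in zip(x, x[1:]):
        p0 = math.asin(max(-1.0, min(1.0, x0 / A)))
        p1 = math.asin(max(-1.0, min(1.0, x1 / A)))
        freqs.append((p1 - p0) * fs / (2.0 * math.pi))
    return freqs

# Synthetic 50 Hz tone sampled at 8 kHz (both values are assumptions);
# 20 samples keep the phase on the rising part of the first quarter-cycle.
fs, f0, A = 8000.0, 50.0, 1.0
x = [A * math.sin(2.0 * math.pi * f0 * n / fs) for n in range(20)]
est = instantaneous_frequency(x, fs, A)  # each entry is close to 50 Hz
```

A practical implementation must also track the sign of the cycle to unwrap the arcsin branch; that bookkeeping is omitted here.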
Sensitivity Analysis of a CPAM Inverse Algorithm for Composite Laminates Characterization
Directory of Open Access Journals (Sweden)
Farshid Masoumi
2017-01-01
Using experimental data and numerical simulations, a new combined technique is presented for the characterization of thin and thick orthotropic composite laminates. Four or five elastic constants, as well as ply orientation angles, are considered as the unknown parameters. The material characterization is first examined for isotropic plates under different boundary conditions to evaluate the method's accuracy. The proposed algorithm, called CPAM (Combined Programs of ABAQUS and MATLAB), utilizes an optimization procedure and makes simultaneous use of vibration test data together with the corresponding numerical solutions. The numerical solutions are based on a commercial finite element package for efficiently identifying the material properties. An inverse method based on a particle swarm optimization algorithm is implemented in MATLAB. The error function to be minimized is the sum of squared differences between experimental and simulated eigenfrequencies. To evaluate the robustness of the model's results in the presence of uncertainty and unwanted noise, a sensitivity analysis employing a Gaussian disorder model is applied directly to the measured frequencies. The highly accurate results confirm the validity and capability of the present method in the simultaneous determination of mechanical constants and fiber orientation angles of composite laminates as compared to prior methods.
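A minimal sketch of the inverse step, with the finite element solver replaced by a toy modal model in which eigenfrequencies scale with the square root of the elastic modulus (an assumption for illustration only, not the paper's FE model), might look like this:

```python
import random

def pso_minimize(objective, lo, hi, n_particles=20, iters=60, seed=1):
    """Minimal 1-D particle swarm optimizer (a sketch, not the CPAM code)."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]  # positions
    vs = [0.0] * n_particles                                # velocities
    pbest = list(xs)                                        # personal bests
    pcost = [objective(x) for x in xs]
    gbest = pbest[pcost.index(min(pcost))]                  # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.7 * vs[i] + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            c = objective(xs[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = xs[i], c
                if c < objective(gbest):
                    gbest = xs[i]
    return gbest

# Hypothetical modal model: eigenfrequencies scale with sqrt(E).
coeffs = [12.0, 31.0, 55.0]                    # invented mode-shape factors
E_true = 70.0                                  # "unknown" elastic modulus (GPa)
measured = [c * E_true ** 0.5 for c in coeffs] # stand-in for vibration test data

def cost(E):
    # sum of squared eigenfrequency residuals, as in the paper's error function
    return sum((c * E ** 0.5 - m) ** 2 for c, m in zip(coeffs, measured))

E_est = pso_minimize(cost, 10.0, 200.0)
```

In CPAM the `cost` evaluation would call the finite element package for each candidate parameter set, which is why the number of swarm evaluations matters.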
International Nuclear Information System (INIS)
Ryu, Jeong Ah; Kim, Bohyun; Kim, Sooah; Yang, Soon Ha; Choi, Moon Hae; Ahn, Hyeong Sik
2003-01-01
To determine the usefulness of tissue harmonic imaging (THI) and pulse-inversion harmonic imaging (PIHI) in the evaluation of normal and abnormal fetuses. Forty-one pregnant women who bore a total of 31 normal and ten abnormal fetuses underwent conventional ultrasonography (CUS), and then THI and PIHI. US images of six organ systems, namely the brain, spine, heart, abdomen, extremities and face were compared between the three techniques in terms of overall conspicuity and the definition of borders and internal structures. For the brain, heart, abdomen and face, overall conspicuity at THI and PIHI was significantly better than at CUS (p < 0.05). There was, though, no significant difference between THI and PIHI. Affected organs in abnormal fetuses were more clearly depicted at THI and PIHI than at CUS. Both THI and PIHI appear to be superior to CUS for the evaluation of normal or abnormal structures, particularly the brain, heart, abdomen and face
Stokes profile analysis and vector magnetic fields. I. Inversion of photospheric lines
International Nuclear Information System (INIS)
Skumanich, A.; Lites, B.W.
1987-01-01
Improvements are proposed for the Auer et al. (1977) method for the analytic inversion of Stokes profiles via nonlinear least squares. The introduction of additional physics into the Mueller absorption matrix (by including damping wings and magnetooptical birefringence, and by decoupling the intensity profile from the three-vector polarization profile in the analysis) is found to result in a more robust inversion method, providing more reliable and accurate estimates of sunspot vector magnetic fields without significant loss of economy. The method is applied to sunspot observations obtained with the High Altitude Observatory polarimeter. 29 references
Analysis of forward and inverse problems in chemical dynamics and spectroscopy
Energy Technology Data Exchange (ETDEWEB)
Rabitz, H. [Princeton Univ., NJ (United States)
1993-12-01
The overall scope of this research concerns the development and application of forward and inverse analysis tools for problems in chemical dynamics and chemical kinetics. The chemical dynamics work is specifically associated with relating features in potential surfaces and resultant dynamical behavior. The analogous inverse research aims to provide stable algorithms for extracting potential surfaces from laboratory data. In the case of chemical kinetics, the focus is on the development of systematic means to reduce the complexity of chemical kinetic models. Recent progress in these directions is summarized below.
Three-dimensional inverse transient heat transfer analysis of thick functionally graded plates
Energy Technology Data Exchange (ETDEWEB)
Haghighi, M.R. Golbahar; Malekzadeh, P. [Department of Mechanical Engineering, School of Engineering, Persian Gulf University, Bushehr 75168 (Iran); Eghtesad, M. [Department of Mechanical Engineering, School of Engineering, Shiraz University, Shiraz 71348-51154 (Iran); Necsulescu, D.S. [Department of Mechanical Engineering, Faculty of Engineering, University of Ottawa, Ottawa, Ontario (Canada)
2009-03-15
In this paper, a three-dimensional transient inverse heat conduction (IHC) procedure is presented to estimate the unknown boundary heat flux of thick functionally graded (FG) plates. For this purpose, the conjugate gradient method (CGM) in conjunction with adjoint problem is used. A recently developed three-dimensional efficient hybrid method is employed to solve variable-coefficient initial-boundary-value differential equations of direct problem as a part of the inverse solution. The accuracy of the inverse analysis is examined by simulating the exact and noisy data for problems with different types of boundary conditions and material properties. In addition to rectangular domain, skew plates are considered. The results obtained show good accuracy for the estimation of boundary heat fluxes. (author)
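The inverse step can be illustrated on a drastically simplified forward model. Here the FG-plate solver is replaced by a hypothetical lumped model in which the measured temperature responds linearly to the boundary flux history, and the flux is recovered from noisy simulated data by conjugate gradient iterations on the normal equations (a sketch of the CGM idea, not the paper's adjoint formulation):

```python
import random

# Hypothetical lumped forward model (not the paper's FG-plate solver):
# the sensor temperature rise after n steps is T_n = sum_{m<=n} S * q_m,
# i.e. T = A q with a lower-triangular sensitivity matrix A.
N, S = 8, 0.05                     # number of time steps, sensitivity (assumed)
A = [[S if m <= n else 0.0 for m in range(N)] for n in range(N)]
q_true = [1000.0 * (1 + 0.1 * m) for m in range(N)]  # exact boundary flux

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

rng = random.Random(0)
Y = [t + rng.gauss(0.0, 0.01) for t in matvec(A, q_true)]  # noisy "measurements"

# Conjugate gradient on the normal equations (A^T A) q = A^T Y.
At = [list(col) for col in zip(*A)]
q = [0.0] * N
r = matvec(At, Y)                  # residual b - A^T A q with q = 0
p = list(r)
rs = sum(ri * ri for ri in r)
for _ in range(N):
    Ap = matvec(At, matvec(A, p))
    alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
    q = [qi + alpha * pi for qi, pi in zip(q, p)]
    r = [ri - alpha * api for ri, api in zip(r, Ap)]
    rs_new = sum(ri * ri for ri in r)
    if rs_new < 1e-30:
        break
    p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
    rs = rs_new
```

Because this toy forward model is linear, N conjugate gradient iterations recover the flux almost exactly; in the actual nonlinear problem the CGM is applied iteratively, with the adjoint problem supplying the gradient at each step.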
Cordeiro, Juliana; De Toni, Daniela Cristina; da Silva, Gisele de Souza; Valente, Vera Lucia da Silva
2014-10-01
Detailed chromosome photomaps are the first step to develop further chromosomal analysis to study the evolution of the genetic architecture in any set of species, considering that chromosomal rearrangements, such as inversions, are common features of genome evolution. In this report, we analyzed inversion polymorphisms in 25 different populations belonging to six neotropical species in the cardini group: Drosophila cardini, D. cardinoides, D. neocardini, D. neomorpha, D. parthenogenetica and D. polymorpha. Furthermore, we present the first reference photomaps for the Neotropical D. cardini and D. parthenogenetica and improved photomaps for D. cardinoides, D. neocardini and D. polymorpha. We found 19 new inversions for these species. An exhaustive pairwise comparison of the polytene chromosomes was conducted for the six species in order to understand evolutionary patterns of their chromosomes.
A fully general and adaptive inverse analysis method for cementitious materials
DEFF Research Database (Denmark)
Jepsen, Michael S.; Damkilde, Lars; Lövgren, Ingemar
2016-01-01
The paper presents an adaptive method for inverse determination of the tensile σ-w relationship, direct tensile strength and Young's modulus of cementitious materials. The method facilitates an inverse analysis with a multi-linear σ-w function. Usually, simple bi- or tri-linear functions...... are applied when modeling the fracture mechanisms in cementitious materials, but the vast development of pseudo-strain-hardening, fiber-reinforced cementitious materials requires inverse methods capable of treating multi-linear σ-w functions. The proposed method is fully general in the sense that it relies...... of notched specimens and simulated data from a nonlinear hinge model. The paper shows that the results obtained by means of the proposed method are independent of the initial shape of the σ-w function and the initial guess of the tensile strength. The method provides very accurate fits, and the increased...
International Nuclear Information System (INIS)
Zheng Gui-Li; Xuan Li; Zhang Hui; Ye Wen-Jiang; Zhang Zhi-Dong; Song Hong-Wei
2016-01-01
Based on the experimental observations of the flexoelectric response at defect sites in nematic inversion walls by Kumar et al., we give a theoretical analysis using Frank elastic theory. When a direct-current electric field normal to the plane of the substrate is applied to a parallel-aligned nematic liquid crystal cell with weak anchoring, the rotation of ±1 defects in the narrow inversion walls can be exhibited. The free energy of the liquid crystal molecules around the +1 and −1 defect sites in the nematic inversion walls under the electric field was formulated, and the electric-field-driven structural changes at the defect sites, characterized by the polar and azimuthal angles of the local director, were simulated. The results reveal that the deviations of the azimuthal angle induced by the flexoelectric effect are consistent with the switching of extinction brushes at the +1 and −1 defects observed in the experiment by Kumar et al. (paper)
Digital Repository Service at National Institute of Oceanography (India)
Rao, M.M.M.; Murty, T.V.R.; SuryaPrakash, S.; Chandramouli, P.; Murthy, K.S.R.
. Indust. Appl. Math, 11 (1963) 431-441. 10. Pedersen L B, Interpretation of potential field data – A generalised inverse approach, Geophy. Prosp. 25 (1977) 199-230. 11. Radhakrishna Murthy I V, Swamy K V & Jagannadha Rao S, Automatic inversion... generalised inverse technique in reconstruction of gravity anomalies due to a fault, Indian J. Pure. Appl. Math., 34 (2003) 31-47. 16. Ramana Murty T V, Somayajulu Y K & Murty C S, Reconstruction of sound speed profile through natural generalised inverse...
Directory of Open Access Journals (Sweden)
Xin-Jia Meng
2015-01-01
Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with a combination of linear approximations (CLA-CO) is proposed. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a most probable failure point (MPP) search problem of inverse reliability, and the search for the MPP is then performed within the CLA-CO framework. The method improves the MPP search through two elements: treating the discipline analyses as equality constraints in the subsystem optimization, and using linear approximations of the subsystem responses to replace the consistency equality constraints in the system optimization. With these two elements, the proposed method realizes parallel analysis of each discipline and achieves higher computational efficiency. Additionally, the method applies without difficulty to problems with non-normally distributed variables. A mathematical test problem and an electronic packaging problem demonstrate the effectiveness of the proposed method.
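To make the MPP-search notion concrete, the following sketch implements a generic Performance Measure Approach iteration in standard normal space: given a target reliability index, it locates the point on the corresponding sphere that minimizes the limit-state function. Both the limit state and the iteration are illustrative, not the CLA-CO algorithm of the paper:

```python
import math

def pma_mpp_search(g, grad, n, beta_t, iters=50):
    """Performance Measure Approach iteration in standard normal space:
    find the point on the sphere ||u|| = beta_t that minimizes the
    limit-state function g (a generic sketch, not the CLA-CO algorithm)."""
    u = [0.0] * n
    for _ in range(iters):
        gr = grad(u)
        norm = math.sqrt(sum(gi * gi for gi in gr))
        u = [-beta_t * gi / norm for gi in gr]  # steepest-descent point on sphere
    return u, g(u)

# Illustrative linear limit state, for which the MPP is known exactly.
def g(u):
    return 5.0 - u[0] - 2.0 * u[1]

def grad(u):
    return [-1.0, -2.0]

u_mpp, g_min = pma_mpp_search(g, grad, 2, beta_t=3.0)
```

For a linear limit state the iteration converges in one step; the multidisciplinary setting adds the coupling constraints between disciplines, which is what the CLA-CO framework handles.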
Advanced computer techniques for inverse modeling of electric current in cardiac tissue
Energy Technology Data Exchange (ETDEWEB)
Hutchinson, S.A.; Romero, L.A.; Diegert, C.F.
1996-08-01
For many years, ECGs and vector cardiograms have been the tools of choice for non-invasive diagnosis of cardiac conduction problems, such as those found in reentrant tachycardia or Wolff-Parkinson-White (WPW) syndrome. Through skillful analysis of these skin-surface measurements of cardiac-generated electric currents, a physician can deduce the general location of heart conduction irregularities. Using a combination of high-fidelity geometry modeling, advanced mathematical algorithms and massively parallel computing, Sandia's approach would provide much more accurate information and thus allow the physician to pinpoint the source of an arrhythmia or abnormal conduction pathway.
DEFF Research Database (Denmark)
Farahani, Saeed Davoudabadi; Andersen, Michael Skipper; de Zee, Mark
2012-01-01
derived from the detailed musculoskeletal analysis. The technique is demonstrated on a human model pedaling a bicycle. We use a physiology-based cost function expressing the mean square of all muscle activities over the cycle to predict a realistic motion pattern. Posture and motion prediction...... on a physics model including dynamic effects and a high level of anatomical realism. First, a musculoskeletal model comprising several hundred muscles is built in AMS. The movement is then parameterized by means of time functions controlling selected degrees of freedom of the model. Subsequently......, the parameters of these functions are optimized to produce an optimum posture or movement according to a user-defined cost function and constraints. The cost function and the constraints typically express performance, comfort, injury risk, fatigue, muscle load, joint forces and other physiological properties...
International Nuclear Information System (INIS)
Kappadath, S. Cheenu; Shaw, Chris C.
2003-01-01
Breast cancer may manifest as microcalcifications in x-ray mammography. Small microcalcifications, essential to the early detection of breast cancer, are often obscured by overlapping tissue structures. Dual-energy imaging, where separate low- and high-energy images are acquired and synthesized to cancel the tissue structures, may improve the ability to detect and visualize microcalcifications. Transmission measurements at two different kVp values were made on breast-tissue-equivalent materials under narrow-beam geometry using an indirect flat-panel mammographic imager. The imaging scenario consisted of variable aluminum thickness (to simulate calcifications) and variable glandular ratio (defined as the ratio of the glandular-tissue thickness to the total tissue thickness) for a fixed total tissue thickness--the clinical situation of microcalcification imaging with varying tissue composition under breast compression. The coefficients of the inverse-mapping functions used to determine material composition from dual-energy measurements were calculated by a least-squares analysis. The linear function poorly modeled both the aluminum thickness and the glandular ratio. The inverse-mapping functions were found to vary as analytic functions of second (conic) or third (cubic) order. By comparing the model predictions with the calibration values, the root-mean-square residuals for both the cubic and the conic functions were ∼50 μm for the aluminum thickness and ∼0.05 for the glandular ratio
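The calibration step described above (fitting a second-order inverse-mapping function by least squares) can be sketched on synthetic data. The conic coefficients below are invented for illustration and the fit simply recovers them; a real calibration would use measured low- and high-kVp log-signals:

```python
def fit_conic_mapping(low, high, target):
    """Least-squares fit of a second-order (conic) inverse-mapping function
    t = c0 + c1*L + c2*H + c3*L^2 + c4*L*H + c5*H^2 via the normal
    equations, solved by Gauss-Jordan elimination with partial pivoting."""
    rows = [[1.0, L, H, L * L, L * H, H * H] for L, H in zip(low, high)]
    n = 6
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * t for r, t in zip(rows, target)) for i in range(n)]
    M = [XtX[i] + [Xty[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(M[k][col]))
        M[col], M[piv] = M[piv], M[col]
        for k in range(n):
            if k != col:
                f = M[k][col] / M[col][col]
                M[k] = [a - f * b for a, b in zip(M[k], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def conic(L, H, c):
    return c[0] + c[1]*L + c[2]*H + c[3]*L*L + c[4]*L*H + c[5]*H*H

# Synthetic calibration grid of low/high-kVp log-signals; the "true"
# coefficients are invented so the fit can be checked against them.
true_c = [0.2, 1.5, -0.8, 0.05, -0.02, 0.01]
low = [0.1 * i for i in range(8) for _ in range(8)]
high = [0.1 * j for _ in range(8) for j in range(8)]
target = [conic(L, H, true_c) for L, H in zip(low, high)]  # e.g. Al thickness
c_fit = fit_conic_mapping(low, high, target)
```

The cubic variant adds the third-order terms to the row vector; the fitting machinery is otherwise identical.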
Smith, G. A.; Meyer, G.; Nordstrom, M.
1986-01-01
A new automatic flight control system concept suitable for aircraft with highly nonlinear aerodynamic and propulsion characteristics, which must operate over a wide flight envelope, was investigated. This exact model follower inverts a complete nonlinear model of the aircraft as part of the feed-forward path. The inversion is accomplished by a Newton-Raphson trim of the model at each digital computer cycle time of 0.05 seconds. The combination of the inverse model and the actual aircraft in the feed-forward path allows the translational and rotational regulators in the feedback path to be easily designed by linear methods. An explanation of the model inversion procedure is presented. An extensive set of simulation data covering essentially the full flight envelope for a vertical attitude takeoff and landing (VATOL) aircraft is presented. These data demonstrate the successful, smooth, and precise control that can be achieved with this concept. The trajectory includes conventional flight from 200 to 900 ft/sec with path accelerations and decelerations, altitude changes of over 6000 ft, and 2g and 3g turns. Vertical attitude maneuvering as a tail-sitter along all axes is demonstrated. A transition trajectory from 200 ft/sec in conventional flight to stationary hover in the vertical attitude includes satisfactory operation through lift-curve slope reversal as attitude goes from horizontal to vertical at constant altitude. A vertical attitude takeoff from stationary hover to conventional flight is also demonstrated.
Event tree analysis using artificial intelligence techniques
International Nuclear Information System (INIS)
Dixon, B.W.; Hinton, M.F.
1985-01-01
Artificial Intelligence (AI) techniques used in Expert Systems and Object Oriented Programming are discussed as they apply to Event Tree Analysis. A SeQUence IMPortance calculator, SQUIMP, is presented to demonstrate the implementation of these techniques. Benefits of using AI methods include ease of programming, efficiency of execution, and flexibility of application. The importance of an appropriate user interface is stressed. 5 figs
Inverse modelling of atmospheric tracers: non-Gaussian methods and second-order sensitivity analysis
Directory of Open Access Journals (Sweden)
M. Bocquet
2008-02-01
To begin, recent techniques devoted to the reconstruction of sources of an atmospheric tracer at continental scale are introduced. A first method is based on the principle of maximum entropy on the mean and is briefly reviewed here. A second approach, which has not yet been applied in this field, is based on an exact Bayesian approach through a maximum a posteriori estimator. The methods share common grounds, and both perform equally well in practice. When specific prior hypotheses on the sources, such as positivity or boundedness, are taken into account, both methods lead to purposefully devised cost functions. These cost functions are not necessarily quadratic because the underlying assumptions are not Gaussian. As a consequence, several mathematical tools developed in data assimilation on the basis of quadratic cost functions in order to establish a posteriori analyses need to be extended to this non-Gaussian framework. Concomitantly, the second-order sensitivity analysis needs to be adapted, as do the computations of the averaging kernels of the source and of the errors obtained in the reconstruction. All of these developments are applied to a real case of tracer dispersion: the European Tracer Experiment (ETEX). Comparisons are made between a least-squares cost function (similar to the so-called 4D-Var approach) and a cost function not based on Gaussian hypotheses. In addition, the information content of the observations used in the reconstruction is computed and studied for the application case, and a connection with the degrees of freedom for signal is established. As a by-product of these methodological developments, conclusions are drawn on the information content of the ETEX dataset as seen from the inverse modelling point of view.
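The contrast between quadratic and non-Gaussian cost functions can be shown on a scalar toy problem: with a Gaussian prior the retrieved source can go negative, while a relative-entropy (maximum-entropy-like) prior keeps it positive by construction. All numbers below are invented for illustration:

```python
import math

# Scalar toy problem: one source x observed as y = H*x + noise (variance s2).
H, y, s2 = 1.0, -0.5, 0.25    # a negative observation, e.g. due to noise
xb, sb2 = 0.2, 1.0            # background (prior) source value and variance

# Gaussian prior: J(x) = (x - xb)^2/(2*sb2) + (H*x - y)^2/(2*s2).
# The minimizer has a closed form and may be negative.
x_gauss = (xb / sb2 + H * y / s2) / (1.0 / sb2 + H * H / s2)

# Relative-entropy prior: J(x) = x*log(x/xb) - x + xb + (H*x - y)^2/(2*s2).
# Minimized by gradient descent in z = log(x), which keeps x positive.
z = math.log(xb)
for _ in range(2000):
    x = math.exp(z)
    dJdx = math.log(x / xb) + H * (H * x - y) / s2
    z -= 0.5 * dJdx * x       # chain rule: dJ/dz = (dJ/dx) * x
x_ent = math.exp(z)
```

For well-behaved positive observations the two estimates broadly agree; the difference appears precisely when the quadratic cost would push the source into unphysical negative values, which is the motivation for the non-Gaussian framework above.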
TV content analysis techniques and applications
Kompatsiaris, Yiannis
2012-01-01
The rapid advancement of digital multimedia technologies has not only revolutionized the production and distribution of audiovisual content, but also created the need to efficiently analyze TV programs to enable applications for content managers and consumers. Leaving no stone unturned, TV Content Analysis: Techniques and Applications provides a detailed exploration of TV program analysis techniques. Leading researchers and academics from around the world supply scientifically sound treatment of recent developments across the related subject areas--including systems, architectures, algorithms,
Statistical evaluation of vibration analysis techniques
Milner, G. Martin; Miller, Patrice S.
1987-01-01
An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.
Constrained principal component analysis and related techniques
Takane, Yoshio
2013-01-01
In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre
Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos
2017-04-01
opportunity of testing and validating, against reliable data, their electromagnetic-modelling, inversion, imaging and processing algorithms. One of the most interesting dataset comes from the IFSTTAR Geophysical Test Site, in Nantes (France): this is an open-air laboratory including a large and deep area, filled with various materials arranged in horizontal compacted slices, separated by vertical interfaces and water-tighted in surface; several objects as pipes, polystyrene hollows, boulders and masonry are embedded in the field. Data were collected by using nine different GPR systems and at different frequencies ranging from 200 MHz to 1 GHz. Moreover, some sections of this test site were modelled by using gprMax and the commercial software CST Microwave Studio. Hence, both experimental and synthetic data are available. Further interesting datasets were collected on roads, bridges, concrete cells, columns - and more. (v) WG3 contributed to the TU1208 Education Pack, an open educational package conceived to teach GPR in University courses. (vi) WG3 was very active in offering training activities. The following courses were successfully organised: Training School (TS) "Microwave Imaging and Diagnostics" (in cooperation with the European School of Antennas; 1st edition: Madonna di Campiglio, Italy, March 2014, 2nd edition: Taormina, Italy, October 2016); TS "Numerical modelling of Ground Penetrating Radar using gprMax" (Thessaloniki, Greece, November 2015); TS "Electromagnetic Modelling Techniques for Ground Penetrating Radar" (Split, Croatia, November 2016). Moreover, WG3 organized a workshop on "Electromagnetic modelling with the Finite-Difference Time-Domain technique" (Nantes, France, February 2014) and a workshop on "Electromagnetic modelling and inversion techniques for GPR" (Davos, Switzerland, April 2016) within the 2016 European Conference on Antennas and Propagation (EuCAP). 
Acknowledgement: The Authors are deeply grateful to COST (European COoperation in Science and
TH-C-12A-06: Feasibility of a MLC-Based Inversely Optimized Multi-Field Grid Therapy Technique
Energy Technology Data Exchange (ETDEWEB)
Jin, J [Georgia Regents University, Augusta, GA (Georgia); Zhao, B; Huang, Y; Kim, J; Qin, Y; Wen, N; Ryu, S; Chetty, I [Henry Ford Health System, Detroit, MI (United States)
2014-06-15
Purpose: Grid therapy (GT), which generates highly spatially modulated dose distributions, can deliver single- or hypo-fractionated radiotherapy for large tumors without causing significant toxicities. GT may be applied in combination with immunotherapy, in light of recent preclinical data showing synergistic interaction between radiotherapy and immunotherapy. However, conventional GT uses only one field and therefore lacks the advantages of the multiple fields used in 3D conformal RT or IMRT. We have proposed a novel MLC-based, inversely planned, multi-field 3D GT technique. This study aims to test its deliverability and dosimetric accuracy. Methods: A lattice of small spheres was created as the boost volume within a large target. A simultaneous-boost IMRT plan with 8 Gy to the target and 20 Gy to the boost volume was generated in the Eclipse treatment planning system (AAA v10) with an HD120 MLC. Nine beams were used, and the gantry and couch angles were selected so that the spheres were perfectly aligned in every beam's-eye view. The plan was mapped to a phantom with the dose scaled. EBT3 films were calibrated and used to measure the delivered dose. Results: The IMRT plan generated a highly spatially modulated dose distribution in the target. D95%, D50% and D5% for the spheres and the target were 18.5, 20.0, 21.4 Gy and 7.9, 9.8, 16.1 Gy, respectively. D50% for a 1 cm ring 1 cm outside the target was 2.9 Gy. Film dosimetry showed good agreement between calculated and delivered dose, with an overall gamma passing rate of 99.6% (3%/1 mm). The point dose differences for different spheres varied from 1-6%. Conclusion: We have demonstrated the deliverability and dose calculation accuracy of the MLC-based, inversely optimized, multi-field GT technique, which achieved a brachytherapy-like dose distribution. A single-fraction high dose can be delivered to the spheres in a large target with minimal dose to the surrounding normal tissue.
Application of homotopy analysis method and inverse solution of a rectangular wet fin
International Nuclear Information System (INIS)
Panda, Srikumar; Bhowmik, Arka; Das, Ranjan; Repaka, Ramjee; Martha, Subash C.
2014-01-01
Highlights: • Solution of a wet fin is obtained by the homotopy analysis method (HAM). • Present HAM results are well validated against literature results. • Inverse analysis is done using a genetic algorithm. • A measurement error of approximately ±10–12% is found to yield satisfactory reconstructions. - Abstract: This paper presents the analytical solution of a rectangular fin under simultaneous heat and mass transfer across the fin surface and the fin tip, and estimates the unknown thermal and geometrical configurations of the fin using inverse heat transfer analysis. The local temperature field is obtained by using the homotopy analysis method for insulated and convective fin tip boundary conditions. Using a genetic algorithm, the thermal and geometrical parameters, viz., the thermal conductivity of the material, the surface heat transfer coefficient and the dimensions of the fin, have been simultaneously estimated for the prescribed temperature field. Earlier inverse studies on wet fins have been restricted to the analysis of the nonlinear governing equation with either an insulated tip condition or a finite tip temperature only. The present study develops a closed-form solution that considers nonlinearity effects in both the governing equation and the boundary condition. The inverse optimization leads to many feasible combinations of fin materials, thermal conditions and fin dimensions, thus allowing the flexibility to design a fin under wet conditions based on multiple combinations of fin materials, fin dimensions and thermal configurations to achieve the required heat transfer duty. It is further determined that the allowable measurement error should be limited to ±10–12% in order to achieve satisfactory reconstruction
Elemental analysis techniques using proton microbeam
International Nuclear Information System (INIS)
Sakai, Takuro; Oikawa, Masakazu; Sato, Takahiro
2005-01-01
Proton microbeams are a powerful tool for two-dimensional elemental analysis. The analysis is based on the Particle Induced X-ray Emission (PIXE) and Particle Induced Gamma-ray Emission (PIGE) techniques. The paper outlines the principles and instruments, and describes the dental applications carried out at JAERI Takasaki. (author)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
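The parameter-identification step described above can be illustrated with a toy particle swarm fitting a Langevin-type anhysteretic magnetization curve, one ingredient of J-A-style models. This is a minimal sketch under simplified assumptions, not the authors' code: the model, the parameter bounds and the PSO coefficients below are illustrative choices.

```python
import math
import random

def anhysteretic(h, ms, a):
    # Langevin-type anhysteretic magnetization used in J-A-style models:
    # M_an(H) = Ms * (coth(H/a) - a/H)
    x = h / a
    return ms * (1.0 / math.tanh(x) - 1.0 / x) if x != 0 else 0.0

def pso_fit(h_data, m_data, bounds, n_particles=30, iters=200, seed=0):
    # Minimal particle swarm: inertia 0.7, cognitive/social weights 1.5
    # (illustrative standard values, not tuned constants from the paper).
    rng = random.Random(seed)
    dim = len(bounds)

    def cost(p):
        # Sum-of-squares misfit between model and "measured" curve
        return sum((anhysteretic(h, *p) - m) ** 2 for h, m in zip(h_data, m_data))

    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_c = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_c[i])
    gbest, gbest_c = pbest[g][:], pbest_c[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to its search bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_c[i]:
                pbest[i], pbest_c[i] = pos[i][:], c
                if c < gbest_c:
                    gbest, gbest_c = pos[i][:], c
    return gbest
```

With synthetic data generated from known parameters, the swarm recovers them closely, which is the same consistency check one would run before fitting measured hysteresis curves.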
Techniques for sensitivity analysis of SYVAC results
International Nuclear Information System (INIS)
Prust, J.O.
1985-05-01
Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing now a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values should be examined. (author)
Flow analysis techniques for phosphorus: an overview.
Estela, José Manuel; Cerdà, Víctor
2005-04-15
A bibliographical review of the implementation and the results obtained in the use of different flow analytical techniques for the determination of phosphorus is carried out. The sources, occurrence and importance of phosphorus, together with several aspects regarding the analysis and terminology used in the determination of this element, are briefly described. A classification as well as a brief description of the basis, advantages and disadvantages of the different existing flow techniques, namely segmented flow analysis (SFA), flow injection analysis (FIA), sequential injection analysis (SIA), all injection analysis (AIA), batch injection analysis (BIA), multicommutated FIA (MCFIA), multisyringe FIA (MSFIA) and multipumped FIA (MPFIA), is also given. The most relevant manuscripts regarding the analysis of phosphorus by means of flow techniques are classified according to the instrumental detection technique used, with the aim of facilitating their study and providing an overall scope. Finally, the analytical characteristics of numerous flow methods reported in the literature are provided in the form of a table, and their applicability to samples with different matrixes, namely water samples (marine, river, estuarine, waste, industrial, drinking, etc.), soil leachates, plant leaves, toothpaste, detergents, foodstuffs (wine, orange juice, milk), biological samples, sugars, fertilizer, hydroponic solutions, soil extracts and cyanobacterial biofilms, is tabulated.
Quality assurance techniques for activation analysis
International Nuclear Information System (INIS)
Becker, D.A.
1984-01-01
The principles and techniques of quality assurance are applied to the measurement method of activation analysis. Quality assurance is defined to include quality control and quality assessment. Plans for quality assurance include consideration of: personnel; facilities; analytical design; sampling and sample preparation; the measurement process; standards; and documentation. Activation analysis concerns include: irradiation; chemical separation; counting/detection; data collection and analysis; and calibration. Types of standards discussed include calibration materials and quality assessment materials.
International Nuclear Information System (INIS)
Pang, A.K.K.; Hughes, T.
2000-01-01
The present limited retrospective study was performed to assess MR imaging of lipomatous tumours of the musculoskeletal system and to evaluate the potential of the T2 short tau inversion-recovery (STIR) technique for differentiating lipomas from liposarcomas. Magnetic resonance images of 12 patients with lipomatous tumours of the musculoskeletal system (eight benign lipomas, three well-differentiated liposarcomas and one myxoid liposarcoma) were reviewed. Benign lipomas were usually superficial and appeared homogeneous on T1- and T2-weighted spin echo sequences; full suppression at T2-STIR was readily demonstrated. In contrast, the liposarcomas in the present series were all deep-seated. Two well-differentiated liposarcomas appeared homogeneous at long and short repetition times (TR) but failed to show complete suppression at T2-STIR. One case of well-differentiated (dedifferentiated) liposarcoma and one of myxoid liposarcoma showed mild and moderate heterogeneity at T1 and T2, respectively, and posed no difficulty in being diagnosed correctly. In conclusion, short and long TR sequences in combination with T2-STIR show promise in differentiating benign from malignant lipomatous tumours of the musculoskeletal system, when taken in combination with the position of the tumour. Copyright (1999) Blackwell Science Pty Ltd
Directory of Open Access Journals (Sweden)
Siti Khadijah Hubadillah
2016-06-01
Full Text Available In this study, low-cost ceramic supports were prepared from kaolin via the phase inversion technique with two kaolin particle sizes, 0.04–0.6 μm (denoted as type A) and 10–15 μm (denoted as type B), at kaolin contents ranging from 14 to 39 wt.%, sintered at 1200 °C. The effects of kaolin particle size and kaolin content on the membrane structure, pore size distribution, porosity, mechanical strength, surface roughness and gas permeation of the support were investigated. The support prepared using kaolin type A had an asymmetric structure combining macroporous voids and a sponge-like structure, with pore sizes of 0.38 μm and 1.05 μm respectively, and exhibited ideal porosity (27.7%), great mechanical strength (98.9 MPa) and excellent gas permeation. A preliminary study shows that the kaolin ceramic support in this work has potential for gas separation applications at lower cost.
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
International Nuclear Information System (INIS)
Shimazu, Y.; Rooijen, W.F.G. van
2014-01-01
Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, the selection of which is described in this paper. • The EKF algorithm is preferred if the signal to noise ratio is low (low flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality with good noise filtering and accuracy. The Inverse Point Kinetics (IPK) method has also been widely used for reactivity estimation. The important parameters for EKF estimation are the process noise covariance and the measurement noise covariance, but their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, and its selection is quite easy. Guidance is therefore needed on which method should be selected and how the required parameters should be chosen. From this point of view, a qualitative performance comparison is carried out
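The IPK side of the comparison is compact enough to sketch: precursor concentrations are integrated from the measured flux, and reactivity follows from the point-kinetics balance rho = beta + (Lambda/n)(dn/dt - lambda*C). This is an illustrative one-delayed-group sketch with typical round-number kinetic constants (not values from the paper), and it omits the first-order delay filter the authors discuss.

```python
def inverse_point_kinetics(times, n, beta=0.0065, lam=0.08, Lambda=2e-5):
    # One-delayed-group inverse point kinetics:
    #   rho(t) = beta + (Lambda / n) * (dn/dt - lam * C)
    # where the precursor density C is integrated from the measured flux n(t).
    # beta: delayed neutron fraction, lam: precursor decay constant (1/s),
    # Lambda: neutron generation time (s) -- all illustrative round numbers.
    C = beta * n[0] / (Lambda * lam)   # assume equilibrium precursors at t=0
    rho = [0.0]                        # rho = 0 at t=0 by the equilibrium assumption
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        # Forward-Euler update of the precursor balance dC/dt = beta/Lambda*n - lam*C
        C += dt * (beta / Lambda * n[k - 1] - lam * C)
        dndt = (n[k] - n[k - 1]) / dt  # backward finite-difference flux derivative
        rho.append(beta + (Lambda / n[k]) * (dndt - lam * C))
    return rho
```

For a constant flux the routine returns rho = 0 exactly, and for an asymptotic exponential rise it approaches the inhour-equation value rho = omega*Lambda + omega*beta/(omega + lam), which is a convenient sanity check.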
International Nuclear Information System (INIS)
Yusa, Noritaka; Machida, Eiji; Janousek, Ladislav; Rebican, Mihai; Chen, Zhenmao; Miya, Kenzo
2005-01-01
This paper evaluates the applicability of eddy current inversion techniques to the sizing of defects in Inconel welds with rough surfaces. For this purpose, a plate Inconel weld specimen, which models the welding of a stub tube in a boiling water nuclear reactor, is fabricated, and artificial notches are machined into the specimen. Eddy current inspections are conducted using six different eddy current probes, and the efficiency of each probe for weld inspection is evaluated. It is revealed that if suitable probes are applied, an Inconel weld does not cause large noise levels during eddy current inspections even though the surface of the weld is rough. Finally, reconstruction of the notches is performed using eddy current signals measured with the uniform eddy current probe that showed the best results among the six probes in this study. A simplified configuration is proposed in order to represent the complicated configuration of the welded specimen in numerical simulations. While the reconstructed profiles of the notches are slightly larger than the true profiles, quite good agreement is obtained in spite of the simple approximation of the configuration, which indicates that eddy current testing would be an efficient non-destructive testing method for the sizing of defects in Inconel welds
International Nuclear Information System (INIS)
Yusa, Noritaka; Janousek, Ladislav; Rebican, Mihai; Chen, Zhenmao; Miya, Kenzo; Machida, Eiji
2004-01-01
This paper evaluates the applicability of eddy current inversion techniques to the sizing of defects in Inconel welds with rough surfaces. For this purpose, a plate Inconel weld specimen, which models the welding of a stub tube in a boiling water nuclear reactor, is fabricated, and artificial notches are machined into the specimen. Eddy current inspections are conducted using six probes, and their performance in weld inspection is evaluated. It is revealed that if suitable probes are applied, an Inconel weld does not produce large noise signals in eddy current inspections even though the surface of the weld is rough. Finally, reconstruction of the notches is performed using eddy current signals measured with the uniform eddy current probe that showed the best results among the six probes in the inspection. A simplified configuration is proposed in order to represent the complicated configuration of the welded specimen in numerical simulations. While the reconstructed profiles of the notches are slightly larger than the true profiles, quite good agreement is obtained in spite of the simple approximation of the configuration, which indicates that eddy current testing would be an efficient non-destructive testing method for the sizing of defects in Inconel welds. (author)
Directory of Open Access Journals (Sweden)
J Swain
2017-12-01
Full Text Available The Indian Space Research Organization launched Oceansat-2 on 23 September 2009; the scatterometer onboard was a space-borne sensor capable of providing ocean surface winds (both speed and direction) over the globe for a mission life of 5 years. Observations of ocean surface winds from such a space-borne sensor are a potential source of data covering the global oceans and are useful for driving state-of-the-art numerical models for simulating ocean state if assimilated/blended with weather prediction model products. In this study, an efficient interpolation technique based on inverse distance and time is demonstrated using the Oceansat-2 wind measurements alone for the selected month of June 2010 to generate gridded outputs. As the data are available only along the satellite tracks, with obvious data gaps due to various other reasons, the Oceansat-2 winds were subjected to spatio-temporal interpolation, and 6-hour wind fields for the global oceans were generated over a 1 × 1 degree grid resolution. Such interpolated wind fields can be used to drive state-of-the-art numerical models to predict/hindcast ocean state, so as to test the utility/performance of satellite measurements alone in the absence of blended fields. The technique can be tested for other satellites that provide wind speed as well as direction data. However, the accuracy of the input winds is obviously expected to have a perceptible influence on the predicted ocean-state parameters. Here, some attempts are also made to compare the interpolated Oceansat-2 winds with available buoy measurements, and it was found that they are in reasonably good agreement, with a correlation coefficient of R > 0.8 and mean deviations of 1.04 m/s and 25° for wind speed and direction, respectively.
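The inverse distance-and-time weighting used to fill track gaps can be sketched as a weighted average in a combined space-time metric. This is a generic illustration, not the operational code; the factor `tau`, which converts a time offset into an equivalent spatial distance, is a hypothetical tuning parameter.

```python
def idw_space_time(obs, x, y, t, p=2.0, tau=3.0):
    # obs: list of (xi, yi, ti, value) samples along satellite tracks.
    # Weights fall off with the combined space-time distance
    #   d^2 = dx^2 + dy^2 + (tau * dt)^2,
    # where tau (an assumed scale, e.g. degrees per hour) trades a time
    # offset against a spatial offset; p is the usual IDW power.
    num = den = 0.0
    for xi, yi, ti, v in obs:
        d2 = (x - xi) ** 2 + (y - yi) ** 2 + (tau * (t - ti)) ** 2
        if d2 == 0.0:
            return v  # exact hit: return the observation itself
        w = 1.0 / d2 ** (p / 2.0)
        num += w * v
        den += w
    return num / den
```

Applied on a regular grid of (x, y) points at 6-hour time steps, this produces the kind of gridded fields described above; the estimate always stays within the range of the observations and leans toward the nearest sample in space-time.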
A numerical technique for reactor subchannel analysis
International Nuclear Information System (INIS)
Fath, Hassan E.S.
1983-01-01
A numerical technique is developed for the solution of the transient boundary layer equations with a moving liquid-vapour interface boundary. The technique uses the finite difference method with the velocity components defined over an Eulerian mesh. A system of interface massless markers is defined where the markers move with the flow field according to a simple kinematic relation between the interface geometry and the fluid velocity. Different applications of nuclear engineering interest are reported with some available results. The present technique is capable of predicting the interface profile near the wall which is important in the reactor subchannel analysis
Energy Technology Data Exchange (ETDEWEB)
Djara, V.; Cherkaoui, K.; Negara, M. A.; Hurley, P. K., E-mail: paul.hurley@tyndall.ie [Tyndall National Institute, University College Cork, Dyke Parade, Cork (Ireland)
2015-11-28
An alternative multi-frequency inversion-charge pumping (MFICP) technique was developed to directly separate the inversion charge density (N{sub inv}) from the trapped charge density in high-k/InGaAs metal-oxide-semiconductor field-effect transistors (MOSFETs). This approach relies on the fitting of the frequency response of border traps, obtained from inversion-charge pumping measurements performed over a wide range of frequencies at room temperature on a single MOSFET, using a modified charge trapping model. The obtained model yielded the capture time constant and density of border traps located at energy levels aligned with the InGaAs conduction band. Moreover, the combination of MFICP and pulsed I{sub d}-V{sub g} measurements enabled an accurate effective mobility vs N{sub inv} extraction and analysis. The data obtained using the MFICP approach are consistent with the most recent reports on high-k/InGaAs.
Directory of Open Access Journals (Sweden)
Shiann-Jong Lee
2010-01-01
Full Text Available Moment tensor inversion is a routine procedure for obtaining information on an earthquake source, namely its moment magnitude and focal mechanism. However, the inversion quality is usually controlled by factors such as knowledge of the earthquake location and the suitability of the 1-D velocity model used. Here we present an improved method to invert the moment tensor solution for local earthquakes. The proposed method differs from the routine centroid-moment-tensor inversion of the Broadband Array in Taiwan for Seismology in three aspects. First, the inversion is repeated in the neighborhood of the earthquake's hypocenter on a grid basis. Second, it utilizes Green's functions based on a true three-dimensional velocity model. And third, it incorporates most of the input waveforms from strong-motion records. The proposed grid-based moment tensor inversion is applied to a local earthquake that occurred near the Taipei basin on 23 October 2004 to demonstrate its effectiveness and superiority over methods used in previous studies. By using the grid-based moment tensor inversion technique and 3-D Green's functions, the earthquake source parameters, including earthquake location, moment magnitude and focal mechanism, are accurately determined and are consistent with regional ground motion observations up to a frequency of 1.0 Hz. This approach can obtain more precise source parameters for other earthquakes in or near a well-modeled basin and crustal structure.
Kruecken, R; Speidel, K; Voulot, D; Neyens, G; Gernhaeuser, R A; Fraile prieto, L M; Leske, J
We propose to measure the sign and magnitude of the g-factors of the first 2$^{+}$ states in radioactive neutron-rich $^{72,74}$Zn applying the transient field (TF) technique in inverse kinematics. The result of this experiment will allow us to probe the $\
Analysis of the variability in ground-motion synthesis and inversion
Spudich, Paul A.; Cirella, Antonella; Scognamiglio, Laura; Tinti, Elisa
2017-12-07
models whose ground motions fit the data within the error bounds given by 2τ, as quantified by using a chi-squared test described below. So, we can ask questions such as, “What are the rupture models with the highest and lowest average rupture speed consistent with the theory errors?” Having found those models, we can then say with confidence that the true rupture speed is somewhere between those values. Although the Bayesian approach gives a complete solution to the inverse problem, it is computationally demanding: Minson and others (2014) needed 10^10 forward kinematic simulations to derive their posterior probability distribution. In our approach, only about 10^7 simulations are needed. Moreover, in practical application, only a small set of rupture models may be needed to answer the relevant questions—for example, determining the maximum likelihood solution (achievable through standard inversion techniques) and the two rupture models bounding some property of interest. The specific property that we wish to investigate is the correlation between various rupture-model parameters, such as peak slip velocity and rupture velocity, in models of real earthquakes. In some simulations of ground motions for hypothetical large earthquakes, such as those by Aagaard and others (2010) and the Southern California Earthquake Center Broadband Simulation Platform (Graves and Pitarka, 2015), rupture speed is assumed to correlate locally with peak slip, although there is evidence that rupture speed should correlate better with peak slip speed, owing to its dependence on local stress drop. We may be able to determine ways to modify Piatanesi and others' (2007) inversion “cost” function to find rupture models with either high or low degrees of correlation between pairs of rupture parameters. We propose a cost function designed to find these two extremal models.
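The chi-squared screening described above, accepting any rupture model whose misfit stays within the theory-error bounds, can be sketched generically. The acceptance band below (expected value dof plus two standard deviations sqrt(2*dof)) is a common heuristic stand-in, not the paper's exact criterion.

```python
def chi_squared(obs, pred, sigma):
    # Misfit of predicted ground motions against observations, with each
    # residual normalized by its theory-error standard deviation (the role
    # the "2 tau" bounds play in the text).
    return sum(((o - p) / s) ** 2 for o, p, s in zip(obs, pred, sigma))

def within_error_bounds(obs, pred, sigma):
    # Accept a candidate rupture model if chi^2 lies within two standard
    # deviations (sqrt(2*dof)) above its expected value dof. This band is
    # a generic heuristic, assumed here for illustration.
    dof = len(obs)
    return chi_squared(obs, pred, sigma) <= dof + 2.0 * (2.0 * dof) ** 0.5
```

Scanning candidate models with such a filter and keeping, say, the fastest and slowest accepted ruptures gives exactly the kind of bounding pair of extremal models the passage describes.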
Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan
2018-02-01
X-ray photon correlation spectroscopy (XPCS) and dynamic light scattering (DLS) both reveal dynamics using coherent scattering, but X-rays permit investigation of dynamics in a much more diverse array of materials. Heterogeneous dynamics occur in many such materials, and we showed how classic tools employed in the analysis of heterogeneous DLS dynamics extend to XPCS, revealing additional information that conventional Kohlrausch exponential fitting obscures. This work presents the software implementation of inverse transform analysis of XPCS data, called CONTIN XPCS, an extension of traditional CONTIN that accommodates dynamics encountered in equilibrium XPCS measurements.
Gold analysis by the gamma absorption technique
International Nuclear Information System (INIS)
Kurtoglu, Arzu; Tugrul, A.B.
2003-01-01
Gold (Au) analyses are generally performed using destructive techniques. In this study, the Gamma Absorption Technique has been employed for gold analysis. A series of different gold alloys of known gold content were analysed and a calibration curve was obtained. This curve was then used for the analysis of unknown samples. Gold analyses can be made non-destructively, easily and quickly by the gamma absorption technique. The mass attenuation coefficients of the alloys were measured around the K-shell absorption edge of Au. Theoretical mass attenuation coefficient values were obtained using the WinXCom program and comparison of the experimental results with the theoretical values showed generally good and acceptable agreement
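The workflow reads as a Beer-Lambert exercise: convert measured transmission to a mass attenuation coefficient, then invert a calibration curve built from alloys of known content. A minimal sketch under that reading; the calibration numbers below are invented for illustration, not data from the paper.

```python
import math

def mass_attenuation(i0, i, areal_density):
    # Beer-Lambert law: I = I0 * exp(-mu_m * rho * x), where rho * x is the
    # areal density in g/cm^2, so mu_m = ln(I0 / I) / (rho * x) in cm^2/g.
    return math.log(i0 / i) / areal_density

def interp_content(calib, mu):
    # calib: list of (gold_fraction, mu_m) calibration points, ordered so
    # that mu_m varies monotonically. Piecewise-linear inverse lookup of
    # the gold content from a measured attenuation coefficient.
    for (f1, m1), (f2, m2) in zip(calib, calib[1:]):
        if min(m1, m2) <= mu <= max(m1, m2):
            return f1 + (f2 - f1) * (mu - m1) / (m2 - m1)
    raise ValueError("mu outside calibration range")
```

An unknown sample's transmission gives its mu_m, and the calibration curve returns the gold fraction; working near the Au K-edge (as in the study) maximizes the contrast between gold and the alloying metals.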
Directory of Open Access Journals (Sweden)
Seyyed Ghoreishi
2017-09-01
Full Text Available Objective(s): In this work, paclitaxel (PX), a promising anticancer drug, was loaded into basil seed mucilage (BSM) aerogels by means of supercritical carbon dioxide (SC-CO2) technology. The effects of operating conditions on the PX mean particle size (MPS), particle size distribution (PSD) and drug loading efficiency (DLE) were then studied. Methods: The SC-CO2 process employed in this research combines the phase inversion technique with the gas antisolvent (GAS) process. The effects of DMSO/water ratio (4 and 6, v/v), pressure (10–20 MPa), CO2 addition rate (1–3 mL/min) and ethanol concentration (5–10%) on MPS, PSD and DLE were studied. Scanning electron microscopy (SEM) and a Zetasizer were used for particle analysis. DLE was determined by high-performance liquid chromatography (HPLC). Results: Paclitaxel nanoparticles (MPS of 82–131 nm, depending on process variables) with narrow PSD were successfully loaded into the BSM aerogel with a DLE of 28–52%. Experimental results indicated that a higher DMSO/water ratio, ethanol concentration, pressure and CO2 addition rate reduced MPS and DLE. Conclusions: A modified semi-batch SC-CO2 process based on the combination of the gas antisolvent process and phase inversion methods, using DMSO as co-solvent and ethanol as a secondary solvent, was developed for the loading of the anticancer drug PX into Ocimum basilicum mucilage aerogel. The experimental results showed that the mean particle size, particle size distribution and drug loading efficiency can be controlled through the operating conditions.
Sensitivity analysis of hybrid thermoelastic techniques
W.A. Samad; J.M. Considine
2017-01-01
Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...
International Nuclear Information System (INIS)
Li, Guo-Qing; Miao, Xing-Yuan; Hu, Yuan-Tai; Wang, Ji
2013-01-01
A comprehensive study of smart beams with piezoelectric elements using an impedance matrix and the inverse Laplace transform is presented. Based on the authors' previous work, the dynamics of several elements in beam-like smart structures are represented by impedance matrix equations, including a piezoelectric stack, a piezoelectric bimorph, an elastic straight beam and a circular curved beam. A further transform is applied to the impedance matrix to obtain a set of implicit transfer function matrices. Apart from the analytical solutions to the matrices of smart beams, a computation procedure is proposed to obtain the impedance matrices and transfer function matrices using FEA. By these means the dynamic solution of the elements in the frequency domain is transformed to the Laplace s-domain and then inversely transformed to the time domain. The connections between the elements and the boundary conditions of the smart structures are investigated in detail, and one integrated system equation is finally obtained using the symbolic operation of TF matrices. A procedure is proposed for dynamic analysis and control analysis of the smart beam system using mode superposition and a numerical inverse Laplace transform. The first example demonstrates building transfer-function-associated impedance matrices using both FEA and analytical solutions. The second example verifies the ability of control analysis using a suspended beam with PZT patches under closed-loop control. The third example is designed for dynamic analysis of beams with a piezoelectric stack and a piezoelectric bimorph under various excitations. The last example, of a smart beam with a PPF controller, shows the applicability of the proposed method to the control analysis of complex systems. All results show good agreement with results in the previous literature. The advantages of the proposed methods are also discussed at the end of this paper. (paper)
International Nuclear Information System (INIS)
Zhang Lin; Wang Chaoyang; Luo Xuan; Du Kai; Tu Haiyan; Fan Hong; Luo Qing; Yuan Guanghui; Huang Lizhen
2003-01-01
Poly(4-methyl-1-pentene) (PMP) foams are successfully prepared by a thermally induced phase-inversion technique; the density and pore size are 3–80 mg/cm³ and 1–20 μm, respectively. Durene/naphthalene (60/40) is confirmed as a suitable solvent/nonsolvent binary system. The thermal properties of PMP are characterized by a TG-DSC system, and it is found that the foams' thermal properties depend on the density. The thermal analysis method is used to measure the gelation of PMP in the binary solvent/nonsolvent system, and the range of the gelation temperature is preliminarily determined. The influence of the mixture system composition and of the cooling rate during foam preparation is discussed. TG-DSC is applied to determine the thermal properties of low-density PMP foams prepared in the laboratory, and the effect of density change on the thermal stability of the foams is studied. The thermal analysis data play a great role in improving the foam quality. (authors)
International Nuclear Information System (INIS)
Xiao Ying; Werner-Wasik, Maria; Michalski, D.; Houser, C.; Bednarz, G.; Curran, W.; Galvin, James
2004-01-01
The purpose of this study is to compare 3 intensity-modulated radiation therapy (IMRT) inverse treatment planning techniques as applied to locally advanced lung cancer. This study evaluates whether sufficient radiotherapy (RT) dose is given for durable control of tumors while sparing a portion of the esophagus, and whether a large number of segments and monitor units is required. We selected 5 cases of locally advanced lung cancer with a large central tumor abutting the esophagus. To ensure that no more than half of the esophagus circumference at any level received the specified dose limit, the esophagus was divided into disk-like sections and dose limits were imposed on each. Two sets of dose objectives were specified for the tumor and other critical structures, for standard-dose RT and for dose-escalation RT. Plans were generated using an aperture-based inverse planning (ABIP) technique with the Cimmino algorithm for optimization. Beamlet-based inverse treatment planning was carried out with a commercial simulated annealing package (CORVUS) and with an in-house system that used the Cimmino projection algorithm (CIMM). For 3 of the 5 cases, the results met all of the constraints from the 3 techniques for the 2 sets of dose objectives. The CORVUS system, without delivery efficiency considerations, required the most segments and monitor units. The CIMM system reduced the number, while the ABIP technique showed a further reduction, although for one of the cases a solution was not readily obtained using the ABIP technique for the dose-escalation objectives.
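The Cimmino projection algorithm at the heart of the ABIP and CIMM planners is a simultaneous-projection scheme: every constraint hyperplane proposes a correction and the iterate moves by their average, which makes it naturally parallel and robust. Below is a minimal sketch for a small consistent linear system; the planning systems apply it to dose constraints (inequalities), which this toy version does not model.

```python
def cimmino(A, b, iters=500, relax=1.0):
    # Cimmino's method: project the current iterate onto every hyperplane
    # a_i . x = b_i simultaneously, then move by the average of the
    # projection steps, scaled by the relaxation factor (0 < relax < 2).
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        step = [0.0] * n
        for ai, bi in zip(A, b):
            norm2 = sum(a * a for a in ai)
            # Signed distance to the hyperplane, divided by ||a_i||^2
            r = (bi - sum(a * xi for a, xi in zip(ai, x))) / norm2
            for j in range(n):
                step[j] += r * ai[j]
        for j in range(n):
            x[j] += relax * step[j] / m
    return x
```

For inequality (dose) constraints, the same loop simply skips rows that are already satisfied; that variant is the feasibility solver used in aperture-based planning.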
Microextraction sample preparation techniques in biomedical analysis.
Szultka, Malgorzata; Pomastowski, Pawel; Railean-Plugaru, Viorica; Buszewski, Boguslaw
2014-11-01
Biologically active compounds are found in biological samples at relatively low concentration levels. The sample preparation of target compounds from biological, pharmaceutical, environmental, and food matrices is one of the most time-consuming steps in the analytical procedure. Among sample preparation methods, microextraction techniques are dominant. Metabolomic studies also require the application of a proper analytical technique for the determination of endogenous metabolites present in a biological matrix at trace concentration levels. Due to the reproducibility of data, precision, the relatively low cost of the appropriate analysis, the simplicity of the determination, and the possibility of directly combining these techniques with other methods (both on-line and off-line), they have become the most widespread in routine determinations. Additionally, sample pretreatment procedures have to be more selective, cheap, quick, and environmentally friendly. This review summarizes the current achievements and applications of microextraction techniques. The main aim is to deal with the utilization of different types of sorbents for microextraction and emphasize the use of newly synthesized sorbents, as well as to bring together studies concerning a systematic approach to method development. This review is dedicated to the description of microextraction techniques and their application in biomedical analysis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Angle-domain Migration Velocity Analysis using Wave-equation Reflection Traveltime Inversion
Zhang, Sanzong
2012-11-04
The main difficulty with iterative waveform inversion is that it tends to get stuck in a local minimum of the waveform misfit function, because the misfit function is highly nonlinear with respect to changes in the velocity model. To reduce this nonlinearity, we present a reflection traveltime tomography method based on the wave equation, which enjoys a more quasi-linear relationship between the model and the data. A local crosscorrelation of the windowed downgoing direct wave and the upgoing reflection wave at the image point yields the lag time that maximizes the correlation. This lag time represents the reflection traveltime residual that is back-projected into the earth model to update the velocity in the same way as wave-equation transmission traveltime inversion. The residual moveout analysis in the angle-domain common image gathers provides a robust estimate of the depth residual, which is converted to the reflection traveltime residual for the velocity inversion. We present numerical examples to demonstrate the method's efficiency in inverting seismic data for complex velocity models.
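The lag-picking step, crosscorrelating the windowed direct wave with the upgoing reflection and taking the lag that maximizes the correlation, can be sketched as an integer-lag scan. This is illustrative only; a production implementation would window, taper, and convert the sample lag to seconds with the trace sampling interval.

```python
def best_lag(ref, trace, max_lag):
    # Scan integer lags and return the one maximizing the zero-mean
    # crosscorrelation of `trace` against `ref`. The winning lag (in
    # samples) plays the role of the traveltime residual in the text.
    best, best_c = 0, float("-inf")
    n = len(ref)
    for lag in range(-max_lag, max_lag + 1):
        c = sum(ref[i] * trace[i + lag]
                for i in range(n)
                if 0 <= i + lag < len(trace))
        if c > best_c:
            best, best_c = lag, c
    return best
```

A sub-sample refinement (e.g. parabolic interpolation around the peak) is the usual next step before the residual is back-projected into the velocity model.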
Gui-Li, Zheng; Hui, Zhang; Wen-Jiang, Ye; Zhi-Dong, Zhang; Hong-Wei, Song; Li, Xuan
2016-03-01
Based on the experimental observations of the flexoelectric response at defect sites in nematic inversion walls reported by Kumar et al., we give a theoretical analysis using Frank elastic theory. When a direct-current electric field normal to the plane of the substrate is applied to a parallel-aligned nematic liquid crystal cell with weak anchoring, rotation of the ±1 defects in the narrow inversion walls can be observed. The free energy of liquid crystal molecules around the +1 and -1 defect sites in the nematic inversion walls under the electric field was formulated, and the electric-field-driven structural changes at the defect sites, characterized by the polar and azimuthal angles of the local director, were simulated. The results reveal that the deviation of the azimuthal angle induced by the flexoelectric effect is consistent with the switching of extinction brushes at the +1 and -1 defects observed in the experiment by Kumar et al. Project supported by the National Natural Science Foundation of China (Grant Nos. 11374087, 11274088, and 11304074), the Natural Science Foundation of Hebei Province, China (Grant Nos. A2014202123 and A2016202282), the Research Project of Hebei Education Department, China (Grant Nos. QN2014130 and QN2015260), and the Key Subject Construction Project of Hebei Province University, China.
CRDM motion analysis using machine learning technique
International Nuclear Information System (INIS)
Nishimura, Takuya; Nakayama, Hiroyuki; Saitoh, Mayumi; Yaguchi, Seiji
2017-01-01
The magnetic jack type Control Rod Drive Mechanism (CRDM) for pressurized water reactor (PWR) plants operates control rods in response to electrical signals from the reactor control system. CRDM operability is evaluated by quantifying the armature's closed/opened response time, i.e. the interval between the coil energizing/de-energizing points and the armature closed/opened points. MHI has already developed an automatic CRDM motion analysis and applied it to actual plants. However, CRDM operational data vary widely depending on characteristics such as plant condition and plant type, so applying a single analysis technique to all conditions and plants raises an issue of analysis accuracy in the existing motion analysis. In this study, MHI investigated motion analysis using machine learning (Random Forests), which flexibly accommodates CRDM operational data with wide variation and improves analysis accuracy. (author)
Dehmoobadsharifabadi, Armita; Singhal, Sonica; Quiñonez, Carlos
2017-03-01
To compare physician and dentist visits nationally and at the provincial/territorial level, and to assess the extent of the "inverse care law" in dental care among different age groups. Publicly available data from the 2007-2008 Canadian Community Health Survey were used to investigate physician and dentist visits in the past 12 months in relation to self-perceived general and oral health, using descriptive statistics and binary logistic regression controlling for age, sex, education, income, and physician/dentist population ratios. Analysis was conducted for all participants and stratified by age group: children (12-17 years), adults (18-64 years) and seniors (65 years and over). Nationally and provincially/territorially, it appears that the "inverse care law" persists for dental care but is not present for physician care. Specifically, compared with those reporting excellent general/oral health, individuals with poor general health were 2.71 (95% confidence interval [CI]: 2.70-2.72) times more likely to visit physicians, while individuals with poor oral health were 2.16 (95% CI: 2.16-2.17) times less likely to visit dentists. Analyses stratified by age showed more variability in the extent of the "inverse care law" among children and seniors than among adults. The "inverse care law" in dental care exists both nationally and provincially/territorially among different age groups. Given this, it is important to assess the government's role in improving access to, and utilization of, dental care in Canada.
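For readers unfamiliar with the statistics quoted above, an odds ratio and its 95% confidence interval can be computed from a 2x2 table as below (an unadjusted textbook sketch with hypothetical counts; the study itself used multivariable logistic regression on survey data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       a = exposed cases,   b = exposed non-cases,
       c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40 of 100 people with poor health visited a
# provider, versus 20 of 100 with excellent health.
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
```

The very narrow intervals quoted in the abstract (e.g. 2.70-2.72) reflect the survey's large sample size, which shrinks the standard error term above.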
PHOTOGRAMMETRIC TECHNIQUES FOR ROAD SURFACE ANALYSIS
Directory of Open Access Journals (Sweden)
V. A. Knyaz
2016-06-01
The quality and condition of a road surface are of great importance for the convenience and safety of driving, so investigations of the behaviour of road materials under laboratory conditions and monitoring of existing roads are widely carried out to control geometric parameters and detect defects in the road surface. Photogrammetry, as an accurate non-contact measuring method, provides powerful means for solving various tasks in road surface reconstruction and analysis. The range of dimensions involved in road surface analysis varies greatly, from tenths of a millimetre to hundreds of metres and more, so a set of techniques is needed to meet all the requirements of road parameter estimation. Two photogrammetric techniques for road surface analysis are presented: one for accurate measurement of road pavement, and one for road surface reconstruction based on imagery obtained from an unmanned aerial vehicle. The first technique uses a photogrammetric system based on structured light for fast and accurate 3D surface reconstruction, allowing the characteristics of road texture to be analysed and pavement behaviour to be monitored. The second technique provides a dense 3D road model suitable for estimating road macro-parameters.
Diffraction analysis of customized illumination technique
Lim, Chang-Moon; Kim, Seo-Min; Eom, Tae-Seung; Moon, Seung Chan; Shin, Ki S.
2004-05-01
Various enhancement techniques, such as alternating PSM, chrome-less phase lithography, and double exposure, have been considered as driving forces to push the production k1 factor below 0.35. Among them, layer-specific optimization of the illumination mode, the so-called customized illumination technique, has recently received considerable attention from lithographers. A new approach to illumination customization based on diffraction spectrum analysis is suggested in this paper. The illumination pupil is divided into various diffraction domains by comparing the similarity of the confined diffraction spectra. The singular imaging property of each diffraction domain makes it easier to build and understand the customized illumination shape. By comparing the goodness of the image in each domain, it was possible to achieve the customized illumination shape. With the help of this technique, it was found that a layout change would not change the shape of the customized illumination mode.
Fault tree analysis: concepts and techniques
International Nuclear Information System (INIS)
Fussell, J.B.
1976-01-01
Concepts and techniques of fault tree analysis have been developed over the past decade, and predictions from this type of analysis are now important considerations in the design of many systems such as aircraft, ships and their electronic systems, missiles, and nuclear reactor systems. Routine, hardware-oriented fault tree construction can be automated; however, considerable effort is still needed to bring the methodology to production status. When that status is achieved, the entire analysis of hardware systems will be automated except for the system definition step. Automated analysis is not undesirable; on the contrary, once verified on adequately complex systems, it could well become routine. It could also provide an excellent starting point for a more in-depth fault tree analysis that includes environmental effects, common-mode failures, and human errors. Automated analysis is extremely fast and frees the analyst from routine hardware-oriented fault tree construction, as well as eliminating logic errors and errors of oversight in this part of the analysis. It thus affords the analyst a powerful tool, allowing his prime efforts to be devoted to unearthing the more subtle aspects of the modes of failure of the system.
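As a minimal illustration of the gate logic a fault tree encodes, a top-event probability can be propagated through AND/OR gates assuming independent basic events (the event names and failure probabilities below are hypothetical, not from the paper):

```python
def or_gate(*p):
    """Probability that at least one independent input event occurs."""
    q = 1.0
    for x in p:
        q *= (1.0 - x)
    return 1.0 - q

def and_gate(*p):
    """Probability that all independent input events occur."""
    q = 1.0
    for x in p:
        q *= x
    return q

# Hypothetical top event: both redundant pumps fail, OR the valve fails.
p_pump1, p_pump2, p_valve = 1e-3, 1e-3, 1e-4
p_top = or_gate(and_gate(p_pump1, p_pump2), p_valve)   # ~1.01e-4
```

Automated fault tree codes of the kind the abstract describes add cut-set generation, dependency handling and common-mode treatment on top of this basic quantification.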
Energy Technology Data Exchange (ETDEWEB)
Shashi, V.; Golden, W.L.; Allinson, P.S. [Univ. of Virginia Health Sciences Center, Charlottesville, VA (United States)] [and others
1996-06-01
It has been demonstrated in animal studies that, in animals heterozygous for pericentric chromosomal inversions, loop formation is greatly reduced during meiosis. This results in an absence of recombination within the inverted segment, with recombination seen only outside the inversion. A recent study in yeast has shown that telomeres, rather than centromeres, lead in chromosome movement just prior to meiosis and may be involved in promoting recombination. Using cytogenetic analysis and DNA polymorphisms, we studied the nature of meiotic recombination in a three-generation family with a large pericentric X chromosome inversion, inv(X)(p21.1q26), in which Duchenne muscular dystrophy (DMD) was cosegregating with the inversion. DNA analysis showed no evidence of meiotic recombination between the inverted and normal X chromosomes within the inverted segment. Recombination was seen at the telomeric regions, Xp22 and Xq27-28. No deletion or point mutation was found on analysis of the DMD gene. On the basis of the FISH results, we believe that the X inversion is the mutation responsible for DMD in this family. Our results indicate that (1) pericentric X chromosome inversions result in reduced recombination between the normal and inverted X chromosomes; (2) meiotic X chromosome pairing in these individuals is likely initiated at the telomeres; and (3) in this family DMD is caused by the pericentric inversion. 50 refs., 7 figs., 1 tab.
Applications of neutron activation analysis technique
International Nuclear Information System (INIS)
Jonah, S. A.
2000-07-01
The technique was developed as far back as 1936 by G. Hevesy and H. Levy for the analysis of Dy using an isotopic source. Approximately 40 elements can be analysed by the instrumental neutron activation analysis (INAA) technique with neutrons from a nuclear reactor; by applying radiochemical separation, the number of elements that can be analysed may be increased to almost 70. Compared with other analytical methods used in environmental and industrial research, NAA has some unique features: multi-element capability, rapidity, reproducibility of results, complementarity to other methods, freedom from analytical blank, and independence of the chemical state of the elements. There are several types of neutron sources, namely nuclear reactors, accelerator-based sources and radioisotope-based sources, but nuclear reactors, with their high fluxes of neutrons from the fission of 235U, give the most intense irradiation and hence the highest sensitivities available for NAA. In this paper, applications of NAA of socio-economic importance are discussed. The benefits of using NAA and related nuclear techniques for on-line applications in industrial process control are highlighted. A brief description of the NAA set-ups at CERT is given. Finally, NAA is compared with other leading analytical techniques.
Chromatographic Techniques for Rare Earth Elements Analysis
Chen, Beibei; He, Man; Zhang, Huashan; Jiang, Zucheng; Hu, Bin
2017-04-01
The present capability of rare earth element (REE) analysis has been achieved through the development of two instrumental techniques. The efficiency of spectroscopic methods has been extraordinarily improved for the detection and determination of REE traces in various materials. On the other hand, the determination of REEs very often depends on their preconcentration and separation, and chromatographic techniques are very powerful tools for separating REEs. Coupled with sensitive detectors, they can fulfil many ambitious analytical tasks. Liquid chromatography is the most widely used technique; different combinations of stationary and mobile phases can be used in ion exchange chromatography, ion chromatography, ion-pair reverse-phase chromatography and other techniques. The application of gas chromatography is limited because only volatile compounds of REEs can be separated. Thin-layer and paper chromatography cannot be directly coupled with suitable detectors, which limits their applications. For special demands, separations can be performed by capillary electrophoresis, which has very high separation efficiency.
International Nuclear Information System (INIS)
Kubo, S; Ioka, S; Onchi, S; Matsumoto, Y
2010-01-01
When slug flow runs through a pipe, non-uniform and time-varying thermal stresses develop, and thermal fatigue may occur. It is therefore necessary to know the temperature and stress distributions in the pipe for its integrity assessment. It is, however, difficult to measure the inner surface temperature directly, so a method for estimating the temperature history on the inner surface of the pipe is needed. As a basic study on such estimation for a pipe with slug flow, this paper presents a method for estimating the temperature on the inner surface of a plate from the temperature on the outer surface. The relationship between the temperature histories on the outer and inner surfaces is obtained analytically. Using the results of the mathematical analysis, an inverse analysis method is proposed for estimating the inner surface temperature history from the outer surface temperature history. It is found that the inner surface temperature history can be estimated from the outer surface temperature history by applying the inverse analysis method, even when the history contains multiple frequency components.
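The kind of inverse relationship described above can be sketched in the frequency domain: for 1-D conduction through a plate with an insulated outer face, each frequency component of the inner-surface temperature is attenuated by a factor 1/cosh(sqrt(iw/alpha)L) at the outer surface, so an inverse estimate multiplies that factor back, with a frequency cut-off to regularize the ill-posed amplification (the geometry, boundary condition and all parameter values below are assumptions for illustration, not the paper's configuration):

```python
import numpy as np

def estimate_inner_temperature(t_outer, dt, L, alpha, f_cut):
    """Estimate the inner-surface temperature history of a plate from the
    measured outer-surface history (1-D conduction, insulated outer face)
    by undoing the per-frequency attenuation 1/cosh(sqrt(i*w/alpha)*L).
    Frequencies above f_cut are discarded to regularize the inversion."""
    n = len(t_outer)
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(t_outer)
    k = np.sqrt(1j * 2 * np.pi * freqs / alpha)   # thermal wavenumber
    gain = np.cosh(k * L)                          # inverse of attenuation
    gain[freqs > f_cut] = 0.0                      # drop amplified noise
    return np.fft.irfft(spec * gain, n)

# Round trip on a synthetic 0.1 Hz inner-surface oscillation.
dt, n, L, alpha = 0.1, 1000, 0.01, 1e-5           # s, samples, m, m^2/s
t = np.arange(n) * dt
inner = 5.0 * np.sin(2 * np.pi * 0.1 * t)
freqs = np.fft.rfftfreq(n, dt)
k = np.sqrt(1j * 2 * np.pi * freqs / alpha)
outer = np.fft.irfft(np.fft.rfft(inner) / np.cosh(k * L), n)  # forward model
recovered = estimate_inner_temperature(outer, dt, L, alpha, f_cut=1.0)
```

The cut-off is essential in practice: cosh grows exponentially with frequency, so measurement noise at high frequencies would otherwise be amplified without bound.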
Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan
2018-02-01
X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) into the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS can access. The penetrating power of X-rays enables XPCS to probe dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based on inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted for the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
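A CONTIN-style inversion of the kind proposed above can be sketched as regularized nonnegative least squares on a grid of relaxation times (the kernel, regularization form and all parameter values are assumptions for illustration; the paper's actual scheme may differ):

```python
import numpy as np
from scipy.optimize import nnls

def relaxation_spectrum(t, g, taus, reg=1e-2):
    """Recover a nonnegative distribution of relaxation times from a
    measured decay g(t) = sum_i w_i * exp(-t / tau_i) via Tikhonov-
    regularized nonnegative least squares (a CONTIN-style inversion)."""
    K = np.exp(-np.outer(t, 1.0 / taus))           # discrete Laplace kernel
    A = np.vstack([K, reg * np.eye(len(taus))])    # damp wild solutions
    b = np.concatenate([g, np.zeros(len(taus))])
    w, _ = nnls(A, b)
    return w

# Synthetic bimodal decay: relaxation times of 1 s and 50 s.
t = np.linspace(0.01, 200.0, 400)
g = 0.6 * np.exp(-t / 1.0) + 0.4 * np.exp(-t / 50.0)
taus = np.logspace(-1, 3, 60)
w = relaxation_spectrum(t, g, taus)
# w should concentrate its weight near tau = 1 s and tau = 50 s.
```

A single stretched-exponential fit to this bimodal decay would blur the two processes into one effective time scale; the inverted spectrum keeps them separate, which is the point the abstract makes for XPCS data.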
Artificial Intelligence techniques for big data analysis
Aditya Khatri
2017-01-01
During my stay in Salamanca (Spain), I was fortunate enough to participate in the BISITE Research Group of the University of Salamanca. The University of Salamanca is the oldest university in Spain, and in 2018 it celebrates its 8th centenary. As a computer science researcher, I participated in one of the many active international projects of the research group, particularly in big data analysis using Artificial Intelligence (AI) techniques. AI is one of BISITE's main lines of rese...
International Nuclear Information System (INIS)
Chen Jun; Di Yujin; Bu Chunqing; Zhang Yanfeng; Li Shuhua
2012-01-01
Objective: To analyze the characteristics of double inversion recovery (DIR) turbo field echo (TFE) and turbo spin echo (TSE) sequences and explore the value of the DIR TFE sequence in carotid artery wall imaging. Patients and methods: 56 patients (32 males and 24 females, aged 31-76 years with a mean age of 53 years) underwent DIR TFE and DIR TSE T1-weighted imaging (T1WI) of the carotid artery bifurcations. The image quality obtained with each technique was evaluated and scored by two physicians. Statistical significance was assessed with SPSS 11.0 software, using a paired-samples t test. Results: There was no significant difference in the image quality scores between the two sequences (t = 0.880, P = 0.383 > 0.05). Conclusions: The DIR TFE sequence has a short scanning time and high spatial resolution, and can be preferred over the DIR TSE sequence for screening carotid atherosclerotic plaque.
Applications Of Binary Image Analysis Techniques
Tropf, H.; Enderle, E.; Kammerer, H. P.
1983-10-01
After discussing the conditions under which binary image analysis techniques can be used, three new applications of the fast binary image analysis system S.A.M. (Sensorsystem for Automation and Measurement) are reported: (1) the human view direction is measured at TV frame rate while the subject's head remains freely movable; (2) industrial parts hanging on a moving conveyor are classified prior to spray painting by a robot; (3) in automotive wheel assembly, the eccentricity of the wheel is minimized by turning the tyre relative to the rim in order to balance the eccentricity of the components.
Infusing Reliability Techniques into Software Safety Analysis
Shi, Ying
2015-01-01
Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.
The development of human behavior analysis techniques
International Nuclear Information System (INIS)
Lee, Jung Woon; Lee, Yong Hee; Park, Geun Ok; Cheon, Se Woo; Suh, Sang Moon; Oh, In Suk; Lee, Hyun Chul; Park, Jae Chang.
1997-07-01
In this project, a study of man-machine interaction in Korean nuclear power plants, we developed SACOM (Simulation Analyzer with a Cognitive Operator Model), a tool for assessing task performance in control rooms using software simulation, and also developed human error analysis and application techniques. SACOM was developed to assess the operator's physical workload, workload in information navigation at VDU workstations, and cognitive workload in procedural tasks. We developed a trip analysis system, including a procedure based on man-machine interaction analysis and a classification system. We analyzed a total of 277 trips that occurred from 1978 to 1994 to produce trip summary information and, for 79 cases induced by human errors, time-lined the man-machine interactions. INSTEC, a database system for our analysis results, was developed. MARSTEC, a multimedia authoring and representation system for trip information, was also developed, and techniques for human error detection in human factors experiments were established. (author). 121 refs., 38 tabs., 52 figs
An Inverse Analysis Approach to the Characterization of Chemical Transport in Paints
Willis, Matthew P.; Stevenson, Shawn M.; Pearl, Thomas P.; Mantooth, Brent A.
2014-01-01
The ability to directly characterize chemical transport and interactions that occur within a material (i.e., subsurface dynamics) is a vital component in understanding contaminant mass transport and the ability to decontaminate materials. If a material is contaminated, over time, the transport of highly toxic chemicals (such as chemical warfare agent species) out of the material can result in vapor exposure or transfer to the skin, which can result in percutaneous exposure to personnel who interact with the material. Due to the high toxicity of chemical warfare agents, the release of trace chemical quantities is of significant concern. Mapping subsurface concentration distribution and transport characteristics of absorbed agents enables exposure hazards to be assessed in untested conditions. Furthermore, these tools can be used to characterize subsurface reaction dynamics to ultimately design improved decontaminants or decontamination procedures. To achieve this goal, an inverse analysis mass transport modeling approach was developed that utilizes time-resolved mass spectrometry measurements of vapor emission from contaminated paint coatings as the input parameter for calculation of subsurface concentration profiles. Details are provided on sample preparation, including contaminant and material handling, the application of mass spectrometry for the measurement of emitted contaminant vapor, and the implementation of inverse analysis using a physics-based diffusion model to determine transport properties of live chemical warfare agents including distilled mustard (HD) and the nerve agent VX. PMID:25226346
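The inverse analysis described above, recovering transport properties from measured vapor emission, can be sketched as fitting a 1-D diffusion model to an emission-flux time series (the boundary conditions, geometry and all numerical values below are hypothetical stand-ins, not the paper's live-agent data or exact model):

```python
import numpy as np

def surface_flux(D, L, c0, times, nx=20):
    """Vapor emission flux from a paint layer of thickness L with uniform
    initial concentration c0: explicit 1-D diffusion with an instantly
    releasing surface (c = 0) and a sealed substrate side (zero flux)."""
    dx = L / nx
    dt = 0.2 * dx * dx / D                 # stable explicit time step
    c = np.full(nx, c0)
    t_now, out = 0.0, []
    for t_target in times:
        while t_now < t_target:
            c_new = c.copy()
            c_new[1:-1] = c[1:-1] + D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c_new[0] = 0.0                 # emitting surface
            c_new[-1] = c_new[-2]          # sealed back face
            c = c_new
            t_now += dt
        out.append(D * (c[1] - c[0]) / dx) # Fick's first law at the surface
    return np.array(out)

# Inverse step: grid-search the diffusivity that best reproduces a
# synthetic "measured" emission curve (all values hypothetical).
L, c0 = 1e-4, 1.0                          # 100 um layer, unit concentration
times = np.linspace(10.0, 200.0, 20)
measured = surface_flux(2e-10, L, c0, times)
candidates = np.array([5e-11, 1e-10, 2e-10, 4e-10, 8e-10])
errors = [np.sum((surface_flux(D, L, c0, times) - measured) ** 2)
          for D in candidates]
best = candidates[int(np.argmin(errors))]  # recovers 2e-10
```

A production inversion would use a gradient-based optimizer and richer boundary physics, but the structure is the same: propose transport parameters, run the forward diffusion model, and compare the predicted emission curve to the mass spectrometry data.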
A new analysis technique for microsamples
International Nuclear Information System (INIS)
Boyer, R.; Journoux, J.P.; Duval, C.
1989-01-01
For many decades, isotopic analysis of uranium and plutonium has been performed by mass spectrometry. The most recent analytical techniques, using the counting method or a plasma torch combined with a mass spectrometer (ICP-MS), have yet to reach a greater degree of precision than the older methods in this field. The two means of ionization for isotopic analysis, electron bombardment of atoms or molecules (gas ion source) and thermal ionization (thermoionic source), are compared, revealing an inconsistency between the quantity of sample necessary for analysis and the luminosity. In fact, the quantity of sample necessary for the gas-source mass spectrometer is 10 to 20 times greater than that for the thermoionization spectrometer, while the sample consumption is between 10^5 and 10^6 times greater. This shows that almost the entire sample is not needed for the measurement itself; it is only required by the introduction system of the gas spectrometer. The new analysis technique, referred to as "microfluorination", corrects this anomaly and exploits the advantages of the electron bombardment method of ionization.
Analysis of factor VIII gene inversions in 164 unrelated hemophilia A families
Energy Technology Data Exchange (ETDEWEB)
Vnencak-Jones, L.; Phillips, J.A. III; Janco, R.L. [Vanderbilt Univ. School of Medicine, Nashville, TN (United States)] [and others
1994-09-01
Hemophilia A is an X-linked recessive disease with a variable phenotype and heterogeneous, widespread mutations in the factor VIII (F8) gene. As a result, diagnostic carrier or prenatal testing often relies upon laborious DNA linkage analysis. Recently, inversion mutations resulting from an intrachromosomal recombination between DNA sequences in one of two A genes approximately 500 kb upstream of the F8 gene and a homologous A gene in intron 22 of the F8 gene were identified and found in 45% of severe hemophiliacs. We have analyzed banked DNA collected since 1986 from affected males or obligate carrier females representing 164 unrelated hemophilia A families. The disease was sporadic in 37% and familial in 54% of families; for 10%, incomplete information was given. A unique deletion was identified in 1/164, a normal pattern was observed in 110/164 (67%), and 53/164 (32%) families had inversion mutations, with 43/53 (81%) involving the distal A gene (R3 pattern) and 10/53 (19%) involving the proximal A gene (R2 pattern). While 19% of all rearrangements were R2, in 35 families with severe disease (<1% VIII:C activity) all 16 rearrangements seen were R3. In 18 families with the R3 pattern and known activities, 16 (89%) had levels <1%, with the remaining 2 families having ≤2.4% activity. Further, 18 referrals specifically noted the production of inhibitors, and 8/18 (45%) had the R3 pattern. Our findings demonstrate that the R3 inversion mutation pattern is (1) only seen with VIII:C activity levels of ≤2.4%, (2) seen in 46% of families with severe hemophilia, (3) seen in 45% of hemophiliacs known to have inhibitors, (4) not correlated with sporadic or familial disease, and (5) not in disequilibrium with the Bcl I or Taq I intron 18 or ST14 polymorphisms. Finally, in families positive for an inversion mutation, direct testing offers a highly accurate and less expensive alternative to DNA linkage analysis.
Flash Infrared Thermography Contrast Data Analysis Technique
Koshti, Ajay
2014-01-01
This paper provides information on an IR Contrast technique that involves extracting normalized contrast versus time evolutions from the flash thermography inspection infrared video data. The analysis calculates thermal measurement features from the contrast evolution. In addition, simulation of the contrast evolution is achieved through calibration on measured contrast evolutions from many flat-bottom holes in the subject material. The measurement features and the contrast simulation are used to evaluate flash thermography data in order to characterize delamination-like anomalies. The thermal measurement features relate to the anomaly characteristics. The contrast evolution simulation is matched to the measured contrast evolution over an anomaly to provide an assessment of the anomaly depth and width which correspond to the depth and diameter of the equivalent flat-bottom hole (EFBH) similar to that used as input to the simulation. A similar analysis, in terms of diameter and depth of an equivalent uniform gap (EUG) providing a best match with the measured contrast evolution, is also provided. An edge detection technique called the half-max is used to measure width and length of the anomaly. Results of the half-max width and the EFBH/EUG diameter are compared to evaluate the anomaly. The information provided here is geared towards explaining the IR Contrast technique. Results from a limited amount of validation data on reinforced carbon-carbon (RCC) hardware are included in this paper.
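The normalized contrast evolution at the heart of the technique can be sketched as follows (one common definition of contrast, with a synthetic post-flash cooling curve; the paper's exact normalization and calibration are not reproduced here):

```python
import numpy as np

def normalized_contrast(defect_px, reference_px):
    """Normalized contrast evolution: excess brightness of a suspect pixel
    over a sound reference region, frame by frame."""
    defect_px = np.asarray(defect_px, dtype=float)
    reference_px = np.asarray(reference_px, dtype=float)
    return (defect_px - reference_px) / reference_px

# Synthetic post-flash cooling: a shallow delamination traps heat, so its
# contrast rises to a peak and then decays back toward zero.
t = np.linspace(0.05, 5.0, 100)                   # s after the flash
sound = 1.0 / np.sqrt(t)                          # ~t^-1/2 1-D cooling
defect = sound * (1 + 0.3 * (1 - np.exp(-t / 0.5)) * np.exp(-t / 3.0))
c = normalized_contrast(defect, sound)
peak_time = t[np.argmax(c)]                       # a thermal measurement feature
```

Features such as the peak contrast and its time, extracted from evolutions like this one, are what the paper matches against calibrated flat-bottom-hole simulations to estimate anomaly depth and width.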
Williams, Charles A.; Richardson, Randall M.
1988-01-01
A nonlinear weighted least-squares analysis was performed for a synthetic elastic layer over a viscoelastic half-space model of strike-slip faulting. Also, an inversion of strain rate data was attempted for the locked portions of the San Andreas fault in California. Based on an eigenvector analysis of synthetic data, it is found that the only parameter which can be resolved is the average shear modulus of the elastic layer and viscoelastic half-space. The other parameters were obtained by performing a suite of inversions for the fault. The inversions on data from the northern San Andreas resulted in predicted parameter ranges similar to those produced by inversions on data from the whole fault.
Boonyasiriwat, Chaiwoot; Schuster, Gerard T.; Valasek, Paul A.; Cao, Weiping
2010-01-01
an accurate and highly resolved velocity tomogram for the 2D SEG/EAGE salt model. In the application of MWT to the field data, the inversion process is carried out using a multiscale method with a dynamic early-arrival muting window to mitigate the local
Kendall, Bradley J; Rubenstein, Joel H; Cook, Michael B; Vaughan, Thomas L; Anderson, Lesley A; Murray, Liam J; Shaheen, Nicholas J; Corley, Douglas A; Chandar, Apoorva K; Li, Li; Greer, Katarina B; Chak, Amitabh; El-Serag, Hashem B; Whiteman, David C; Thrift, Aaron P
2016-10-01
Gluteofemoral obesity (determined by measurement of subcutaneous fat in the hip and thigh regions) could reduce risks of cardiovascular and diabetic disorders associated with abdominal obesity. We evaluated whether gluteofemoral obesity also reduces the risk of Barrett's esophagus (BE), a premalignant lesion associated with abdominal obesity. We collected data from non-Hispanic white participants in 8 studies in the Barrett's and Esophageal Adenocarcinoma Consortium. We compared measures of hip circumference (as a proxy for gluteofemoral obesity) from cases of BE (n = 1559) separately with 2 control groups: 2557 population-based controls and 2064 individuals with gastroesophageal reflux disease (GERD controls). Study-specific odds ratios (ORs) and 95% confidence intervals (95% CIs) were estimated using individual participant data and multivariable logistic regression and combined using a random-effects meta-analysis. We found an inverse relationship between hip circumference and BE (OR per 5-cm increase, 0.88; 95% CI, 0.81-0.96), compared with population-based controls in a multivariable model that included waist circumference. This association was not observed in models that did not include waist circumference. Similar results were observed in analyses stratified by frequency of GERD symptoms. The inverse association with hip circumference was statistically significant only among men (vs population-based controls: OR, 0.85; 95% CI, 0.76-0.96 for men; OR, 0.93; 95% CI, 0.74-1.16 for women). For men, within each category of waist circumference, a larger hip circumference was associated with a decreased risk of BE. Increasing waist circumference was associated with an increased risk of BE in the mutually adjusted population-based and GERD control models. Although abdominal obesity is associated with an increased risk of BE, there is an inverse association between gluteofemoral obesity and BE, particularly among men. Copyright © 2016 AGA Institute. Published by
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2017-12-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. Inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and their correlations can be accessed. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed, and as the acquisition range decreases, the correlations increase and become non-linear. It is further investigated how the waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C: as C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the value of the time constant, τ, must lie within the acquisition range for the parameters to be well resolved, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not influence the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties of poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from time-domain field measurements.
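An MCMC inversion of the kind described can be sketched with a simple Metropolis sampler over the Cole-Cole parameters (shown here on a frequency-domain Cole-Cole response for brevity, whereas the paper inverts time-domain decays; all parameter values and the fixed noise level are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def cole_cole(omega, sigma0, m, tau, c):
    """Complex Cole-Cole conductivity (Pelton form)."""
    return sigma0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

# Synthetic "observed" spectrum with noise (all values hypothetical).
omega = np.logspace(-2, 3, 30)
true = np.array([0.5, 0.1, 0.5])                  # m, tau [s], C
obs = cole_cole(omega, 0.01, *true)
obs = obs + 1e-5 * (rng.standard_normal(30) + 1j * rng.standard_normal(30))

def log_like(p):
    m, tau, c = p
    if not (0.0 < m < 1.0 and tau > 0.0 and 0.0 < c <= 1.0):
        return -np.inf                             # flat prior with bounds
    r = cole_cole(omega, 0.01, m, tau, c) - obs
    return -0.5 * np.sum(np.abs(r) ** 2) / 1e-10   # noise variance (1e-5)^2

# Metropolis random walk over (m, tau, C): accept/reject, collect the chain.
chain, p, lp = [], true.copy(), log_like(true)
for _ in range(4000):
    q = p + rng.normal(0.0, [0.01, 0.005, 0.01])
    lq = log_like(q)
    if np.log(rng.random()) < lq - lp:
        p, lp = q, lq
    chain.append(p.copy())
chain = np.array(chain)
# Posterior means/stds of chain[burn_in:] give the uncertainty analysis.
```

The histograms and scatter plots of such a chain are what reveal the bell-shaped marginals and the linear-to-non-linear parameter correlations discussed in the abstract.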
Reliability analysis techniques for the design engineer
International Nuclear Information System (INIS)
Corran, E.R.; Witt, H.H.
1982-01-01
This paper describes a fault tree analysis package that eliminates most of the housekeeping tasks involved in proceeding from the initial construction of a fault tree to the final stage of presenting a reliability analysis in a safety report. It is suitable for designers with relatively little training in reliability analysis and computer operation. Users can rapidly investigate the reliability implications of various options at the design stage and evolve a system which meets specified reliability objectives. Later independent review is thus unlikely to reveal major shortcomings necessitating modification and project delays. The package operates interactively, allowing the user to concentrate on the creative task of developing the system fault tree, which may be modified and displayed graphically. For preliminary analysis, system data can be derived automatically from a generic data bank. As the analysis proceeds, improved estimates of critical failure rates and test and maintenance schedules can be inserted. The technique is applied to the reliability analysis of the recently upgraded HIFAR Containment Isolation System. (author)
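As a sketch of the computation such a package automates, the top-event probability of a small fault tree with independent basic events can be evaluated recursively; the tree and failure probabilities below are hypothetical, not taken from the HIFAR analysis.

```python
# Minimal fault-tree evaluator: top-event probability from independent
# basic-event probabilities, with AND and OR gates.
def evaluate(node, probs):
    """node is either a basic-event name or a tuple (gate, [children])."""
    if isinstance(node, str):
        return probs[node]
    gate, children = node
    p = [evaluate(c, probs) for c in children]
    if gate == "AND":                       # all inputs must fail
        out = 1.0
        for x in p:
            out *= x
        return out
    if gate == "OR":                        # any single input failing suffices
        out = 1.0
        for x in p:
            out *= (1.0 - x)
        return 1.0 - out
    raise ValueError(gate)

# Hypothetical isolation-system fragment: two redundant valves in
# parallel (AND), combined with a sensor channel under an OR top gate.
tree = ("OR", [("AND", ["valve_A_fails", "valve_B_fails"]), "sensor_fails"])
probs = {"valve_A_fails": 1e-2, "valve_B_fails": 1e-2, "sensor_fails": 1e-3}
print(evaluate(tree, probs))   # 1e-4 + 1e-3 - 1e-7 = 1.0999e-03
```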
Inverse analysis of non-uniform temperature distributions using multispectral pyrometry
Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling
2016-05-01
Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
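The linear core of the inversion can be illustrated in a few lines. Assuming the measured channel radiances are area-fraction-weighted sums of Planck radiances over a handful of assumed temperature bins, the fractions follow from ordinary least squares in the noise-free case (the noisy, ill-posed case is what calls for the improved Levenberg-Marquardt algorithm the abstract describes); all values here are illustrative.

```python
import numpy as np

# Planck radiance constants (units chosen for wavelength in um, T in K)
C1, C2 = 1.191e8, 1.4388e4

def planck(lam_um, T):
    """Blackbody spectral radiance at wavelength lam_um (um), temperature T (K)."""
    return C1 / lam_um**5 / np.expm1(C2 / (lam_um * T))

lams = np.linspace(8.0, 13.0, 26)            # spectral channels (um)
Ts = np.array([500.0, 600.0, 700.0, 800.0])  # assumed temperature bins (K)
f_true = np.array([0.1, 0.4, 0.3, 0.2])      # true area fractions (sum to 1)

B = planck(lams[:, None], Ts[None, :])       # (channel x bin) radiance matrix
meas = B @ f_true                            # simulated multispectral signal

# Signal is linear in the area fractions: solve least squares with the
# sum-to-one constraint appended as an extra equation.
A = np.vstack([B, np.ones((1, len(Ts)))])
b = np.concatenate([meas, [1.0]])
f_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f_est)                                 # recovered area fractions
```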
Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan
2015-04-01
This work aims at presenting the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modeling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral domain Cylindrical-Wave Approach (CWA). One of the well-known freeware and versatile FDTD simulators is GprMax, which enables an improved, realistic representation of the soil/material hosting the sought structures and of the GPR antennas. Here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, the interaction of multiple scattered fields within the medium hosting the sought targets is taken into account. Recently, the method has been extended to deal with through-the-wall scenarios. One of the
Directory of Open Access Journals (Sweden)
Vladimir eKozunov
2015-04-01
Full Text Available Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA), a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity, partially overlap, and have correlated timecourses. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm that solves the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves the accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of the spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, is preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects. Similar results were obtained for a real MEG dataset of face
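The standard min-norm baseline that GALA is compared against can be sketched with a toy leadfield; the dimensions, regularization parameter and noise level below are arbitrary illustrative choices, not a real MEG forward model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 64, 500
L = rng.normal(size=(n_sensors, n_sources))      # toy leadfield (forward model)

s_true = np.zeros(n_sources)
s_true[100] = 1.0                                # one active cortical source
b = L @ s_true + 0.05 * rng.normal(size=n_sensors)

# Tikhonov-regularized minimum-norm estimate:
#   s_hat = L^T (L L^T + lam I)^{-1} b
lam = 1.0
G = L @ L.T + lam * np.eye(n_sensors)
s_hat = L.T @ np.linalg.solve(G, b)
print(int(np.argmax(np.abs(s_hat))))             # index of strongest estimated source
```

The estimate is blurred across many sources (the resolution matrix is far from identity), which is the kind of per-subject ambiguity the joint GALA formulation is designed to reduce.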
Low energy analysis techniques for CUORE
Energy Technology Data Exchange (ETDEWEB)
Alduino, C.; Avignone, F.T.; Chott, N.; Creswick, R.J.; Rosenfeld, C.; Wilson, J. [University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); Alfonso, K.; Huang, H.Z.; Sakai, M.; Schmidt, J. [University of California, Department of Physics and Astronomy, Los Angeles, CA (United States); Artusa, D.R.; Rusconi, C. [University of South Carolina, Department of Physics and Astronomy, Columbia, SC (United States); INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Azzolini, O.; Camacho, A.; Keppel, G.; Palmieri, V.; Pira, C. [INFN-Laboratori Nazionali di Legnaro, Padua (Italy); Bari, G.; Deninno, M.M. [INFN-Sezione di Bologna, Bologna (Italy); Beeman, J.W. [Lawrence Berkeley National Laboratory, Materials Science Division, Berkeley, CA (United States); Bellini, F.; Cosmelli, C.; Ferroni, F.; Piperno, G. [Sapienza Universita di Roma, Dipartimento di Fisica, Rome (Italy); INFN-Sezione di Roma, Rome (Italy); Benato, G.; Singh, V. [University of California, Department of Physics, Berkeley, CA (United States); Bersani, A.; Caminata, A. [INFN-Sezione di Genova, Genoa (Italy); Biassoni, M.; Brofferio, C.; Capelli, S.; Carniti, P.; Cassina, L.; Chiesa, D.; Clemenza, M.; Faverzani, M.; Fiorini, E.; Gironi, L.; Gotti, C.; Maino, M.; Nastasi, M.; Nucciotti, A.; Pavan, M.; Pozzi, S.; Sisti, M.; Terranova, F.; Zanotti, L. [Universita di Milano-Bicocca, Dipartimento di Fisica, Milan (Italy); INFN-Sezione di Milano Bicocca, Milan (Italy); Branca, A.; Taffarello, L. [INFN-Sezione di Padova, Padua (Italy); Bucci, C.; Cappelli, L.; D' Addabbo, A.; Gorla, P.; Pattavina, L.; Pirro, S. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Canonica, L. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Massachusetts Institute of Technology, Cambridge, MA (United States); Cao, X.G.; Fang, D.Q.; Ma, Y.G.; Wang, H.W.; Zhang, G.Q. 
[Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai (China); Cardani, L.; Casali, N.; Dafinei, I.; Morganti, S.; Mosteiro, P.J.; Tomei, C.; Vignati, M. [INFN-Sezione di Roma, Rome (Italy); Copello, S.; Di Domizio, S.; Marini, L.; Pallavicini, M. [INFN-Sezione di Genova, Genoa (Italy); Universita di Genova, Dipartimento di Fisica, Genoa (Italy); Cremonesi, O.; Ferri, E.; Giachero, A.; Pessina, G.; Previtali, E. [INFN-Sezione di Milano Bicocca, Milan (Italy); Cushman, J.S.; Davis, C.J.; Heeger, K.M.; Lim, K.E.; Maruyama, R.H. [Yale University, Department of Physics, New Haven, CT (United States); D' Aguanno, D.; Pagliarone, C.E. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Universita degli Studi di Cassino e del Lazio Meridionale, Dipartimento di Ingegneria Civile e Meccanica, Cassino (Italy); Dell' Oro, S. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); INFN-Gran Sasso Science Institute, L' Aquila (Italy); Di Vacri, M.L.; Santone, D. [INFN-Laboratori Nazionali del Gran Sasso, L' Aquila (Italy); Universita dell' Aquila, Dipartimento di Scienze Fisiche e Chimiche, L' Aquila (Italy); Drobizhev, A.; Hennings-Yeomans, R.; Kolomensky, Yu.G.; Wagaarachchi, S.L. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Franceschi, M.A.; Ligi, C.; Napolitano, T. [INFN-Laboratori Nazionali di Frascati, Rome (Italy); Freedman, S.J. [University of California, Department of Physics, Berkeley, CA (United States); Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Fujikawa, B.K.; Mei, Y.; Schmidt, B.; Smith, A.R.; Welliver, B. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Giuliani, A.; Novati, V. [Universite Paris-Saclay, CSNSM, Univ. 
Paris-Sud, CNRS/IN2P3, Orsay (France); Gladstone, L.; Leder, A.; Ouellet, J.L.; Winslow, L.A. [Massachusetts Institute of Technology, Cambridge, MA (United States); Gutierrez, T.D. [California Polytechnic State University, Physics Department, San Luis Obispo, CA (United States); Haller, E.E. [Lawrence Berkeley National Laboratory, Materials Science Division, Berkeley, CA (United States); University of California, Department of Materials Science and Engineering, Berkeley, CA (United States); Han, K. [Shanghai Jiao Tong University, Department of Physics and Astronomy, Shanghai (China); Hansen, E. [University of California, Department of Physics and Astronomy, Los Angeles, CA (United States); Massachusetts Institute of Technology, Cambridge, MA (United States); Kadel, R. [Lawrence Berkeley National Laboratory, Physics Division, Berkeley, CA (United States); Martinez, M. [Sapienza Universita di Roma, Dipartimento di Fisica, Rome (Italy); INFN-Sezione di Roma, Rome (Italy); Universidad de Zaragoza, Laboratorio de Fisica Nuclear y Astroparticulas, Saragossa (Spain); Moggi, N.; Zucchelli, S. [INFN-Sezione di Bologna, Bologna (Italy); Universita di Bologna - Alma Mater Studiorum, Dipartimento di Fisica e Astronomia, Bologna (IT); Nones, C. [CEA/Saclay, Service de Physique des Particules, Gif-sur-Yvette (FR); Norman, E.B.; Wang, B.S. [Lawrence Livermore National Laboratory, Livermore, CA (US); University of California, Department of Nuclear Engineering, Berkeley, CA (US); O' Donnell, T. [Virginia Polytechnic Institute and State University, Center for Neutrino Physics, Blacksburg, VA (US); Sangiorgio, S.; Scielzo, N.D. [Lawrence Livermore National Laboratory, Livermore, CA (US); Wise, T. [Yale University, Department of Physics, New Haven, CT (US); University of Wisconsin, Department of Physics, Madison, WI (US); Woodcraft, A. [University of Edinburgh, SUPA, Institute for Astronomy, Edinburgh (GB); Zimmermann, S. 
[Lawrence Berkeley National Laboratory, Engineering Division, Berkeley, CA (US)
2017-12-15
CUORE is a tonne-scale cryogenic detector operating at the Laboratori Nazionali del Gran Sasso (LNGS) that uses tellurium dioxide bolometers to search for neutrinoless double-beta decay of {sup 130}Te. CUORE is also suitable to search for low energy rare events such as solar axions or WIMP scattering, thanks to its ultra-low background and large target mass. However, conducting such sensitive searches requires improving the energy threshold to 10 keV. In this paper, we describe the analysis techniques developed for the low energy analysis of CUORE-like detectors, using the data acquired from November 2013 to March 2015 by CUORE-0, a single-tower prototype designed to validate the assembly procedure and new cleaning techniques of CUORE. We explain in detail the energy threshold optimization, continuous monitoring of the trigger efficiency, data and event selection, and energy calibration at low energies. We also present the low energy background spectrum of CUORE-0 below 60 keV. Finally, we report the sensitivity of CUORE to WIMP annual modulation using the CUORE-0 energy threshold and background, as well as an estimate of the uncertainty on the nuclear quenching factor from nuclear recoils in CUORE-0. (orig.)
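Trigger-efficiency monitoring of the kind described can be illustrated with a toy pulser study: inject pulses of known energy into baseline noise and count how often a fixed amplitude threshold fires. The threshold and noise values below are illustrative assumptions, not CUORE-0 parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
threshold, sigma_noise = 10.0, 1.5        # trigger threshold and noise RMS (keV)
energies = np.linspace(4, 16, 13)         # injected pulse energies (keV)
n_inj = 2000                              # pulses injected per energy point

eff = []
for E in energies:
    amp = E + rng.normal(0, sigma_noise, n_inj)   # injected pulse + baseline noise
    eff.append(np.mean(amp > threshold))          # fraction that fire the trigger
eff = np.array(eff)

# For Gaussian noise the curve follows 0.5*(1 + erf((E - thr)/(sqrt(2)*sigma))),
# rising from ~0 well below threshold to ~1 well above it.
print(eff[0], eff[len(eff) // 2], eff[-1])
```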
Machine monitoring via current signature analysis techniques
International Nuclear Information System (INIS)
Smith, S.F.; Castleberry, K.N.; Nowlin, C.H.
1992-01-01
A significant need in the effort to provide increased production quality is to provide improved plant equipment monitoring capabilities. Unfortunately, in today's tight economy, even such monitoring instrumentation must be implemented in a recognizably cost-effective manner. By analyzing the electric current drawn by motors, actuators, and other line-powered industrial equipment, significant insights into the operation of the movers, the driven equipment, and even the power source can be obtained. The generic term 'current signature analysis' (CSA) has been coined to describe several techniques for extracting useful equipment or process monitoring information from the electrical power feed system. A patented method developed at Oak Ridge National Laboratory is described which recognizes the presence of line-current modulation produced by motors and actuators driving varying loads. The in-situ application of applicable linear demodulation techniques to the analysis of numerous motor-driven systems is also discussed. The use of high-quality amplitude and angle-demodulation circuitry has permitted remote status monitoring of several types of medium- and high-power gas compressors in US DOE facilities driven by 3-phase induction motors rated from 100 to 3,500 hp, both with and without intervening speed increasers. Flow characteristics of the compressors, including various forms of abnormal behavior such as surging and rotating stall, produce at the output of the specialized detectors specific time and frequency signatures which can be easily identified for monitoring, control, and fault-prevention purposes. The resultant data are similar in form to information obtained via standard vibration-sensing techniques and can be analyzed using essentially identical methods. In addition, other machinery such as refrigeration compressors, brine pumps, vacuum pumps, fans, and electric motors have been characterized
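The amplitude-demodulation step at the heart of CSA can be sketched with a synthetic line current whose load imposes a slow modulation on the mains carrier; the frequencies and modulation depth below are illustrative, not values from the patented method.

```python
import numpy as np

fs = 5000.0                          # sample rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
f_line, f_mod = 60.0, 7.0            # mains frequency; load-modulation frequency
# Line current: 60 Hz carrier amplitude-modulated at 7 Hz by the varying load
i_t = (1 + 0.1 * np.sin(2 * np.pi * f_mod * t)) * np.sin(2 * np.pi * f_line * t)

# Analytic signal via FFT (the one-sided spectrum trick behind the
# Hilbert transform); its magnitude is the demodulated envelope.
N = len(i_t)
spec = np.fft.fft(i_t)
h = np.zeros(N)
h[0] = 1
h[1:N // 2] = 2
h[N // 2] = 1
envelope = np.abs(np.fft.ifft(spec * h))

# The envelope spectrum reveals the load signature at f_mod
E = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(N, 1 / fs)
print(freqs[np.argmax(E)])           # peak near f_mod
```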
International Nuclear Information System (INIS)
El-Morshedy, Salah El-Din
2010-01-01
Research reactors of power greater than 20 MW are usually designed to be cooled with an upward coolant flow direction inside the reactor core, mainly to prevent flow inversion problems following a pump coast-down. However, in some designs and under certain operating conditions, the flow inversion phenomenon is predicted. In the present work, the best-estimate Material Testing Reactors Thermal-Hydraulic Analysis program (MTRTHA) is used to simulate a typical MTR reactor behavior with upward cooling under a hypothetical case of loss of off-site power. The flow inversion phenomenon is predicted under certain decay heat and/or pool temperature values below the design values. The reactor simulation under loss of off-site power is performed for two cases, namely: both flap valves opening, and one flap valve failing to open. The model results for the flow inversion phenomenon prediction are analyzed and a solution to the problem is suggested. (orig.)
Directory of Open Access Journals (Sweden)
S. Ars
2017-12-01
Full Text Available This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping
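A minimal version of the Gaussian-plume emission inversion described above: since the modelled concentration is linear in the source rate, a least-squares estimate of the rate Q has a closed form once the transport parameters are fixed (here fixed by assumption rather than by the tracer-calibration step the study describes; all values are illustrative).

```python
import numpy as np

def gaussian_plume(Q, y, z, u, sigma_y, sigma_z, H=0.0):
    """Ground-level concentration from a point source of rate Q (g/s).
    u is wind speed (m/s); sigma_y, sigma_z are the dispersion widths
    already evaluated at the downwind distance of the transect (m)."""
    return (Q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - H)**2 / (2 * sigma_z**2))
               + np.exp(-(z + H)**2 / (2 * sigma_z**2))))

rng = np.random.default_rng(2)
y = np.linspace(-50, 50, 21)                # crosswind transect positions (m)
u, sy, sz = 3.0, 12.0, 8.0                  # illustrative transport parameters
Q_true = 0.5                                # true emission rate (g/s)
obs = gaussian_plume(Q_true, y, 0, u, sy, sz)
obs = obs * (1 + 0.05 * rng.normal(size=y.size))   # 5% transport/measurement error

# Concentration is linear in Q, so the least-squares rate is a projection
model = gaussian_plume(1.0, y, 0, u, sy, sz)       # unit-rate plume shape
Q_hat = (model @ obs) / (model @ model)
print(Q_hat)                                        # estimated rate, ~ Q_true
```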
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je [Sejong Univ., Seoul (Korea, Republic of); Alkhatee, Sari; Roh, Gyuhong; Lee, Byungchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
Dose absorption and energy absorption buildup factors are widely used in shielding analysis. The dose rate of the medium is the main concern for the dose buildup factor, whereas energy absorption is the key parameter in the energy buildup factors. The ANSI/ANS-6.4.3-1991 standard data are widely used, based on interpolation and extrapolation by means of an approximation method. Recently, Yoshida's geometric progression (GP) formulae have also become popular, and they are already implemented in the QAD code. In the QAD code, the two buildup factors are denoted DOSE, for the standard air exposure response, and ENG, for the response of the energy absorbed in the material itself. In this paper, a new least squares fitting method is suggested to obtain reliable fits to the buildup factors proposed since 1991. A total of 4 datasets of air exposure buildup factors are used for evaluation, including the ANSI/ANS-6.4.3-1991, Taylor, Berger, and GP data. The standard deviation of the fitted data is analyzed based on the results. An inverse least squares fitting method is proposed in this study in order to reduce the fitting uncertainties. It adopts an inverse function rather than the original function, chosen according to the slope of the dataset distribution. Some quantitative comparisons are provided for concrete and lead in this paper, too. This study is focused on the least squares fitting of existing buildup factors to be utilized in point-kernel codes for radiation shielding analysis. The inverse least squares fitting method is suggested to obtain more reliable results for concave-shaped datasets such as concrete. In the concrete case, the variance and residue are decreased significantly, too. However, the convex-shaped case of lead can be handled by the usual least squares fitting method. In the future, more datasets will be tested using least squares fitting, and the fitted data could be implemented in existing point-kernel codes.
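The inverse-fitting idea can be demonstrated on a concave, buildup-factor-like dataset: fitting depth as a function of the buildup factor and then inverting the fit can beat fitting the buildup factor directly with the same polynomial order. The dataset below is synthetic and illustrative, not one of the four evaluated sets.

```python
import numpy as np

# Concave "buildup-factor-like" dataset: B grows slower than linearly
# with penetration depth in mean free paths (values illustrative only).
mfp = np.array([1, 2, 3, 4, 5, 8, 10, 15, 20, 25, 30, 35, 40], float)
B = 1 + 2.5 * np.sqrt(mfp)

# Ordinary least squares: quadratic fit of B as a function of depth
c_fwd = np.polyfit(mfp, B, 2)
res_fwd = B - np.polyval(c_fwd, mfp)

# "Inverse" least squares: fit depth as a function of B, then measure the
# misfit back in B by numerically inverting the fitted quadratic.
c_inv = np.polyfit(B, mfp, 2)
B_grid = np.linspace(B.min(), B.max(), 2001)
x_grid = np.polyval(c_inv, B_grid)          # monotonic over this range
B_pred = np.interp(mfp, x_grid, B_grid)     # invert x(B) back to B(x)
res_inv = B - B_pred

ss_fwd, ss_inv = np.sum(res_fwd**2), np.sum(res_inv**2)
print(ss_fwd, ss_inv)                       # inverse fit wins on concave data
```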
Dybus, W.; Benoit, M. H.; Ebinger, C. J.
2011-12-01
The crustal thickness beneath much of the eastern half of the US is largely unconstrained. Though there have been several controlled source seismic surveys of the region, many of these studies suffer from rays that turn in the crust above the Moho, resulting in somewhat ambiguous crustal thickness values. Furthermore, the broadband seismic station coverage east of the Mississippi has been limited, and most of the region remains largely understudied. In this study, we estimated the depth to the Moho using both spectral analysis and inversion of Bouguer gravity anomalies. We systematically estimated depths to lithospheric density contrasts from radial power spectra of Bouguer gravity within 100 km X 100 km windows eastward from the Mississippi River to the Atlantic Coast, and northward from North Carolina to Maine. The slopes and slope breaks in the radial power spectra were computed using an automated algorithm. The slope values for each window were visually inspected and then used to estimate the depth to the Moho and other lithospheric density contrasts beneath each windowed region. Additionally, we performed a standard Oldenburg-Parker inversion for lithospheric density contrasts using various reference depths and density contrasts that are realistic for the different physiographic provinces in the Eastern US. Our preliminary results suggest that the gravity-derived Moho depths are similar to those found using seismic data, and that the crust beneath the Piedmont region is thinner (~28-33 km) than expected (~35-40 km). Given the relative paucity of seismic data in the eastern US, analysis of onshore gravity data is a valuable tool for interpolating between seismic stations.
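The spectral depth estimate rests on the fact that a density contrast at depth z attenuates the anomaly spectrum like exp(-kz), so ln P(k) falls with slope -2z. A synthetic sketch follows; the window size, depth and wavenumber band are illustrative choices, not the study's processing parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dx = 128, 1.0          # 128 km x 128 km window, 1 km grid spacing
z_moho = 33.0             # density-contrast depth to recover (km)

# Synthetic Bouguer anomaly: a white source spectrum at depth z,
# attenuated by upward continuation exp(-k z)
kx = 2 * np.pi * np.fft.fftfreq(n, dx)        # wavenumbers (rad/km)
KX, KY = np.meshgrid(kx, kx)
K = np.hypot(KX, KY)
src = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
grav = np.real(np.fft.ifft2(src * np.exp(-K * z_moho)))

# Power spectrum; for P(k) ~ exp(-2 k z) the slope of ln P vs k is -2 z
P = np.abs(np.fft.fft2(grav)) ** 2
mask = (K > 0.05) & (K < 0.35)                # band dominated by the deep contrast
slope = np.polyfit(K[mask], np.log(P[mask]), 1)[0]
z_est = -slope / 2
print(z_est)                                   # depth estimate, ~ z_moho
```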
Cost analysis and estimating tools and techniques
Nussbaum, Daniel
1990-01-01
Changes in production processes reflect the technological advances permeating our products and services. U.S. industry is modernizing and automating. In parallel, direct labor is fading as the primary cost driver while engineering and technology related cost elements loom ever larger. Traditional, labor-based approaches to estimating costs are losing their relevance. Old methods require augmentation with new estimating tools and techniques that capture the emerging environment. This volume represents one of many responses to this challenge by the cost analysis profession. The Institute of Cost Analysis (ICA) is dedicated to improving the effectiveness of cost and price analysis and enhancing the professional competence of its members. We encourage and promote exchange of research findings and applications between the academic community and cost professionals in industry and government. The 1990 National Meeting in Los Angeles, jointly sponsored by ICA and the National Estimating Society (NES),...
Population estimation techniques for routing analysis
International Nuclear Information System (INIS)
Sathisan, S.K.; Chagari, A.K.
1994-01-01
A number of on-site and off-site factors affect the potential siting of a radioactive materials repository at Yucca Mountain, Nevada. Transportation related issues such as route selection and design are among them. These involve evaluation of potential risks and impacts, including those related to population. Population characteristics (total population and density) are critical factors in the risk assessment, emergency preparedness and response planning, and ultimately in route designation. This paper presents an application of Geographic Information System (GIS) technology to facilitate such analyses. Specifically, techniques to estimate critical population information are presented. A case study using the highway network in Nevada is used to illustrate the analyses. TIGER coverages are used as the basis for population information at a block level. The data are then synthesized at tract, county and state levels of aggregation. Of particular interest are population estimates for various corridor widths along transport corridors -- ranging from 0.5 miles to 20 miles in this paper. A sensitivity analysis based on the level of data aggregation is also presented. The results of these analyses indicate that specific characteristics of the area and its population could be used as indicators to aggregate data appropriately for the analysis
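The corridor aggregation step can be sketched without GIS software: sum the population of census blocks whose centroids fall within a given half-width of the route polyline. This is a centroid approximation of the TIGER-block overlay; the route and blocks below are hypothetical.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment ab (all (x, y) tuples, in miles)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def corridor_population(route, blocks, half_width):
    """Sum population of blocks whose centroid lies within half_width
    miles of the route polyline (centroid approximation)."""
    total = 0
    for (x, y), pop in blocks:
        d = min(point_segment_dist((x, y), a, b)
                for a, b in zip(route, route[1:]))
        if d <= half_width:
            total += pop
    return total

# Hypothetical route and block centroids (coordinates in miles)
route = [(0, 0), (10, 0), (20, 5)]
blocks = [((5, 0.3), 1200), ((12, 4.0), 800), ((15, 2.0), 500), ((18, 9.0), 300)]
for w in (0.5, 2.5, 5.0, 10.0):
    print(w, corridor_population(route, blocks, w))
```

Sweeping the half-width reproduces the sensitivity of corridor population totals to corridor width noted in the abstract.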
Müller, David; Cattaneo, Stefano; Meier, Florian; Welz, Roland; de Vries, Tjerk; Portugal-Cohen, Meital; Antonio, Diana C; Cascio, Claudia; Calzolai, Luigi; Gilliland, Douglas; de Mello, Andrew
2016-04-01
We demonstrate the use of inverse supercritical carbon dioxide (scCO2) extraction as a novel method of sample preparation for the analysis of complex nanoparticle-containing samples, in our case a model sunscreen agent with titanium dioxide nanoparticles. The sample was prepared for analysis in a simplified process using a lab scale supercritical fluid extraction system. The residual material was easily dispersed in an aqueous solution and analyzed by Asymmetrical Flow Field-Flow Fractionation (AF4) hyphenated with UV- and Multi-Angle Light Scattering detection. The obtained results allowed an unambiguous determination of the presence of nanoparticles within the sample, with almost no background from the matrix itself, and showed that the size distribution of the nanoparticles is essentially maintained. These results are especially relevant in view of recently introduced regulatory requirements concerning the labeling of nanoparticle-containing products. The novel sample preparation method is potentially applicable to commercial sunscreens or other emulsion-based cosmetic products and has important ecological advantages over currently used sample preparation techniques involving organic solvents.
Techniques for Analysis of Plant Phenolic Compounds
Directory of Open Access Journals (Sweden)
Thomas H. Roberts
2013-02-01
Full Text Available Phenolic compounds are well-known phytochemicals found in all plants. They consist of simple phenols, benzoic and cinnamic acid, coumarins, tannins, lignins, lignans and flavonoids. Substantial developments in research focused on the extraction, identification and quantification of phenolic compounds as medicinal and/or dietary molecules have occurred over the last 25 years. Organic solvent extraction is the main method used to extract phenolics. Chemical procedures are used to detect the presence of total phenolics, while spectrophotometric and chromatographic techniques are utilized to identify and quantify individual phenolic compounds. This review addresses the application of different methodologies utilized in the analysis of phenolic compounds in plant-based products, including recent technical developments in the quantification of phenolics.
Radio-analysis. Definitions and techniques
International Nuclear Information System (INIS)
Bourrel, F.; Courriere, Ph.
2003-01-01
This paper presents the different steps of the radio-labelling of a molecule for two purposes: radio-immuno-analysis and auto-radiography: 1 - definitions, radiations and radioprotection: activity of a radioactive source; half-life; radioactivity (alpha-, beta- and gamma-radioactivity, internal conversion); radioprotection (irradiation, contamination); 2 - radionuclides used in medical biology and obtention of labelled molecules: gamma emitters ({sup 125}I, {sup 57}Co); beta emitters; obtention of labelled molecules (general principles, high specific activity and choice of the tracer, molecule to be labelled); main labelling techniques (iodination, tritium); purification of the labelled compound (dialysis, gel filtration or molecular exclusion chromatography, high performance liquid chromatography); quality estimation of the labelled compound (labelling efficiency calculation, immuno-reactivity conservation, stability and preservation). (J.S.)
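The activity and half-life definitions in step 1 reduce to the decay law A(t) = A0 · 2^(−t/T½). A small sketch using nominal half-lives for the two gamma emitters mentioned (the values are assumptions for illustration; consult a nuclide chart for authoritative data):

```python
def activity(a0_bq, t_days, half_life_days):
    """Remaining activity after t days: A = A0 * 2^(-t / T_half)."""
    return a0_bq * 2 ** (-t_days / half_life_days)

# Nominal half-lives in days (assumed values for the gamma emitters above)
HALF_LIFE = {"I-125": 59.4, "Co-57": 271.8}

a0 = 3.7e4                    # 1 uCi expressed in Bq
for nuclide, t_half in HALF_LIFE.items():
    print(nuclide, activity(a0, 30.0, t_half))   # activity after 30 days
```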
Bortolozo, Cassiano Antonio; Bokhonok, Oleg; Porsani, Jorge Luís; Monteiro dos Santos, Fernando Acácio; Diogo, Liliana Alcazar; Slob, Evert
2017-11-01
Ambiguities in geophysical inversion results are always present. How these ambiguities appear is in most cases open to interpretation. It is interesting to investigate ambiguities with regard to the parameters of the models under study. The Residual Function Dispersion Map (RFDM) can be used to differentiate between global ambiguities and local minima in the objective function. We apply RFDM to Vertical Electrical Sounding (VES) and TEM sounding inversion results. Through topographic analysis of the objective function we evaluate the advantages and limitations of electrical sounding data compared with TEM sounding data, and the benefits of joint inversion in comparison with the individual methods. The RFDM analysis proved to be a very interesting tool for understanding the joint VES/TEM inversion method. The applicability of RFDM to real data is also explored, to demonstrate not only how the objective function of real data behaves but also how the RFDM approach performs in real cases. With the analysis of the results, it is possible to understand how joint inversion can reduce the ambiguity of the methods.
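The topographic analysis of the objective function can be illustrated with a toy two-parameter model exhibiting a VES-style equivalence; counting grid cells in a thin band above the minimum is a crude stand-in for the dispersion that RFDM maps. The forward model and parameter ranges are illustrative, not the paper's.

```python
import numpy as np

# Toy forward model: the data depend mostly on the product
# thickness * resistivity, so the objective function shows an elongated
# low-misfit valley (equivalence) rather than one sharp minimum.
def forward(h, rho):
    # first datum: transverse resistance h*rho; second: weak extra constraint
    return np.stack([h * rho, h * rho + 0.1 * h])

h_true, rho_true = 10.0, 50.0
d_obs = forward(h_true, rho_true)

H, R = np.meshgrid(np.linspace(5, 20, 201), np.linspace(25, 100, 201))
misfit = np.sum((forward(H, R) - d_obs.reshape(2, 1, 1)) ** 2, axis=0)

# Dispersion of the near-minimum region: many cells within a small band
# above the minimum objective value indicate a genuine ambiguity.
thresh = misfit.min() + 0.01 * (misfit.max() - misfit.min())
n_low = int((misfit <= thresh).sum())
print(n_low, float(misfit.min()))
```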
Analysis of Program Obfuscation Schemes with Variable Encoding Technique
Fukushima, Kazuhide; Kiyomoto, Shinsaku; Tanaka, Toshiaki; Sakurai, Kouichi
Program analysis techniques have improved steadily over the past several decades, and software obfuscation schemes have come to be used in many commercial programs. A software obfuscation scheme transforms an original program or a binary file into an obfuscated program that is more complicated and difficult to analyze, while preserving its functionality. However, the security of obfuscation schemes has not been properly evaluated. In this paper, we analyze obfuscation schemes in order to clarify the advantages of our scheme, the XOR-encoding scheme. First, we more clearly define five types of attack models that we defined previously, and define quantitative resistance to these attacks. Then, we compare the security, functionality and efficiency of three obfuscation schemes with encoding variables: (1) Sato et al.'s scheme with linear transformation, (2) our previous scheme with affine transformation, and (3) the XOR-encoding scheme. We show that the XOR-encoding scheme is superior with regard to the following two points: (1) the XOR-encoding scheme is more secure against a data-dependency attack and a brute force attack than our previous scheme, and is as secure against an information-collecting attack and an inverse transformation attack as our previous scheme, (2) the XOR-encoding scheme does not restrict the calculable ranges of programs and the loss of efficiency is less than in our previous scheme.
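The flavour of variable encoding can be illustrated with a minimal XOR-encoding sketch. The key value, variable names and the `add_encoded` helper below are hypothetical illustrations of the general idea, not the authors' actual scheme:

```python
# Minimal sketch of XOR variable encoding (illustrative only).
# A variable x is never stored in the clear: the program keeps x' = x ^ K for a
# fixed key K, and operations are rewritten to work on the encoded value.

K = 0xA5  # encoding key (hypothetical; a real scheme would hide or derive this)

def encode(x):
    return x ^ K

def decode(x_enc):
    return x_enc ^ K

# Addition on encoded values: decode, add, re-encode.
# (A real obfuscator inlines and blends these steps so K is not exposed this cleanly.)
def add_encoded(a_enc, b_enc):
    return encode(decode(a_enc) + decode(b_enc))

a_enc, b_enc = encode(10), encode(32)
result = decode(add_encoded(a_enc, b_enc))
print(result)  # 42
```

Because XOR is its own inverse and does not change the representable value range, encoded arithmetic of this kind avoids the calculable-range restrictions that affine encodings can impose, which is one of the advantages the abstract claims for the XOR-encoding scheme.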
International Nuclear Information System (INIS)
Samani, Abbas; Zubovits, Judit; Plewes, Donald
2007-01-01
Understanding and quantifying the mechanical properties of breast tissues has been a subject of interest for the past two decades. This has been motivated in part by interest in modelling soft tissue response for surgery planning and virtual-reality-based surgical training. Interpreting elastography images for diagnostic purposes also requires a sound understanding of normal and pathological tissue mechanical properties. Reliable data on tissue elastic properties are very limited, and those which are available tend to be inconsistent, in part as a result of measurement methodology. We have developed specialized techniques to measure the tissue elasticity of normal breast tissues and tumour specimens and applied them to 169 fresh ex vivo breast tissue samples, including fat and fibroglandular tissue as well as a range of benign and malignant breast tumour types. Results show that, under small deformation conditions, the elastic moduli of normal breast fat and fibroglandular tissues are similar, while fibroadenomas were approximately twice as stiff. Fibrocystic disease and malignant tumours exhibited a 3-6-fold increase in stiffness, with high-grade invasive ductal carcinoma exhibiting up to a 13-fold increase in stiffness compared to fibroglandular tissue. A statistical analysis showed that the differences between the elastic moduli of the majority of these tissues were statistically significant. Implications for the specificity advantages of elastography are reviewed.
Energy Technology Data Exchange (ETDEWEB)
Samani, Abbas [Department of Medical Biophysics/Electrical and Computer Engineering, University of Western Ontario, Medical Sciences Building, London, Ontario, N6A 5C1 (Canada); Zubovits, Judit [Department of Anatomic Pathology, Sunnybrook Health Sciences Centre, 2075 Bayview Avenue, Toronto, Ontario, M4N 3M5 (Canada); Plewes, Donald [Department of Medical Biophysics, University of Toronto, 2075 Bayview Avenue, Toronto, Ontario, M4N 3M5 (Canada)
2007-03-21
Understanding and quantifying the mechanical properties of breast tissues has been a subject of interest for the past two decades. This has been motivated in part by interest in modelling soft tissue response for surgery planning and virtual-reality-based surgical training. Interpreting elastography images for diagnostic purposes also requires a sound understanding of normal and pathological tissue mechanical properties. Reliable data on tissue elastic properties are very limited, and those which are available tend to be inconsistent, in part as a result of measurement methodology. We have developed specialized techniques to measure the tissue elasticity of normal breast tissues and tumour specimens and applied them to 169 fresh ex vivo breast tissue samples, including fat and fibroglandular tissue as well as a range of benign and malignant breast tumour types. Results show that, under small deformation conditions, the elastic moduli of normal breast fat and fibroglandular tissues are similar, while fibroadenomas were approximately twice as stiff. Fibrocystic disease and malignant tumours exhibited a 3-6-fold increase in stiffness, with high-grade invasive ductal carcinoma exhibiting up to a 13-fold increase in stiffness compared to fibroglandular tissue. A statistical analysis showed that the differences between the elastic moduli of the majority of these tissues were statistically significant. Implications for the specificity advantages of elastography are reviewed.
Hvilshøj, S.; Jensen, K. H.; Barlebo, H. C.; Madsen, B.
1999-08-01
Inverse numerical modeling was applied to analyze pumping tests of partially penetrating wells carried out in three wells established in an unconfined aquifer in Vejen, Denmark, where extensive field investigations had previously been carried out, including tracer tests, mini-slug tests, and other hydraulic tests. Drawdown data from multiple piezometers located at various horizontal and vertical distances from the pumping well were included in the optimization. Horizontal and vertical hydraulic conductivities, specific storage, and specific yield were estimated, assuming that the aquifer was either a homogeneous system with vertical anisotropy or composed of two or three layers of different hydraulic properties. In two out of three cases, a more accurate interpretation was obtained for a multi-layer model defined on the basis of lithostratigraphic information obtained from geological descriptions of sediment samples, gammalogs, and flow-meter tests. Analysis of the pumping tests resulted in values for horizontal hydraulic conductivities that are in good accordance with those obtained from slug tests and mini-slug tests. Besides the horizontal hydraulic conductivity, it is possible to determine the vertical hydraulic conductivity, specific yield, and specific storage based on a pumping test of a partially penetrating well. The study demonstrates that pumping tests of partially penetrating wells can be analyzed using inverse numerical models. The model used in the study was a finite-element flow model combined with a non-linear regression model. Such a model can accommodate more geological information and complex boundary conditions, and the parameter-estimation procedure can be formalized to obtain optimum estimates of hydraulic parameters and their standard deviations.
Martins, Evandro; Poncelet, Denis; Rodrigues, Ramila Cristiane; Renard, Denis
2017-09-01
In the first part of this article, an innovative method of oil encapsulation by dripping-inverse gelation using water-in-oil (W/O) emulsions was described. It was noticed that the oil encapsulation method differed considerably depending on the emulsion type (W/O or oil-in-water (O/W)) used, and that the emulsion structure had a strong impact on the dripping technique and the capsule characteristics. The objective of this article is to elucidate the differences between the dripping techniques using both emulsion types and to compare the capsule properties (mechanical resistance and release of actives). Oil encapsulation using O/W emulsions was easier to perform and did not require the use of emulsion destabilisers. However, capsules produced from W/O emulsions were more resistant to compression and showed a slower release of actives over time. The findings detailed here widen the knowledge of inverse gelation and open opportunities to develop new oil encapsulation techniques.
Flame analysis using image processing techniques
Her Jie, Albert Chang; Zamli, Ahmad Faizal Ahmad; Zulazlan Shah Zulkifli, Ahmad; Yee, Joanne Lim Mun; Lim, Mooktzeng
2018-04-01
This paper presents image processing techniques combined with fuzzy logic and a neural network approach to perform flame analysis. Flame diagnostics is important in industry for extracting relevant information from flame images. Experimental tests were carried out in a model industrial burner at different flow rates. Flame features such as luminous and spectral parameters were extracted using image processing and the Fast Fourier Transform (FFT). Flame images were acquired using a FLIR infrared camera. Non-linearities such as thermo-acoustic oscillations and background noise affect the stability of the flame. Flame velocity is one of the important characteristics that determine flame stability, and an image processing method is proposed here to determine it. A power spectral density (PSD) graph is a good tool for vibration analysis, from which flame stability can be approximated; however, a more intelligent diagnostic system is needed to determine flame stability automatically. In this paper, flame features at different flow rates are compared and analyzed. The selected flame features are used as inputs to the proposed fuzzy inference system to determine flame stability. A neural network is used to test the performance of the fuzzy inference system.
Analysis of obsidians by PIXE technique
International Nuclear Information System (INIS)
Nuncio Q, A.E.
1998-01-01
This work presents the characterization of obsidian samples from different mineral sites in Mexico, undertaken by an ion beam analysis (IBA) technique: PIXE (Proton Induced X-ray Emission). As part of an intensive investigation of obsidian in Mesoamerica by anthropologists from the Mexico National Institute of Anthropology and History, 818 samples were collected from different volcanic sources in central Mexico for the purpose of establishing a data bank of element concentrations for each source. Part of this collection was analyzed by neutron activation analysis, and most of the important element concentrations were reported. In this work, the non-destructive IBA technique PIXE is used to analyze obsidian samples. The analyses were carried out at the laboratories of the ININ Nuclear Center facilities. The samples consisted of obsidians from ten different volcanic sources. The pieces were mounted on a sample holder designed for the purpose of exposing each sample to the proton beam. The PIXE analysis was carried out with an ET Tandem Accelerator at the ININ. X-ray spectrometry was performed with an external beam facility employing a Si(Li) detector set at 52.5 degrees with respect to the target normal (parallel to the beam direction) and 4.2 cm away from the target center. A filter was set in front of the detector to determine the best attenuation conditions for detecting most of the elements, taking into account that X-ray spectra from obsidians are dominated by intense major-element lines. Thus, a 28 μm-thick aluminium foil absorber was selected and used to reduce the intensity of the major lines as well as pile-up effects. The mean proton energy was 2.62 MeV, and the beam profile was about 4 mm in diameter. As a result, elemental concentrations were obtained for a set of samples from ten different sources: Altotonga (Veracruz), Penjamo (Guanajuato), Otumba (Mexico), Zinapecuaro (Michoacan), Ucareo (Michoacan), Tres Cabezas (Puebla), Sierra Navajas (Hidalgo), Zaragoza
Handbook of Qualitative Research Techniques and Analysis in Entrepreneurship
DEFF Research Database (Denmark)
One of the most challenging tasks in the research design process is choosing the most appropriate data collection and analysis techniques. This Handbook provides a detailed introduction to five qualitative data collection and analysis techniques pertinent to exploring entrepreneurial phenomena.
Energy Technology Data Exchange (ETDEWEB)
Amaral, Marcello Magri
2012-07-01
Optical Coherence Tomography (OCT) is based on the backscattering properties of the medium to obtain tomographic images. In a similar way, the LIDAR (Light Detection and Ranging) technique uses these properties to determine atmospheric characteristics, especially the signal extinction coefficient. Exploring this similarity allowed the application of signal inversion methods to OCT images, making it possible to construct images based on the extinction coefficient, an original result until now. The goal of this work was to study, propose, develop and implement algorithms based on OCT signal inversion methodologies with the aim of determining the extinction coefficient as a function of depth. Three inversion methods were used and implemented in LabVIEW: slope, boundary point and optical depth. The associated errors were studied, and real samples (homogeneous and stratified) were used for two- and three-dimensional analysis. The extinction coefficient images obtained from the optical depth method were able to differentiate air from the sample. The images were studied applying PCA and cluster analysis, which established the strength of the methodology in determining the sample's extinction coefficient. Moreover, the optical depth methodology was applied to test the hypothesis that there is some correlation between the signal extinction coefficient and enamel demineralization during a cariogenic process. By applying this methodology, it was possible to observe the variation of the extinction coefficient as a function of depth and its correlation with the microhardness variation, showing that in deeper layers its values tend toward those of a healthy tooth, behaving in the same way as the microhardness. (author)
1D inversion and analysis of marine controlled-source EM data
DEFF Research Database (Denmark)
Christensen, N.B.; Dodds, Kevin; Bulley, Ian
2006-01-01
been displaced by resistive oil or gas. We present preliminary results from an investigation of the applicability of one-dimensional inversion of the data. A noise model for the data set is developed and inversion is carried out with multi-layer models and 4-layer models. For the data set in question...
Directory of Open Access Journals (Sweden)
Moslem Moradi
2015-06-01
Here, an application of a new seismic inversion algorithm in one of Iran's oilfields is described. Stochastic (geostatistical) seismic inversion, as a complementary method to deterministic inversion, combines geostatistics with a seismic inversion algorithm. The method integrates information from different data sources at different scales as prior information in Bayesian statistics. Data integration leads to a probability density function (the posterior probability) from which a model of the subsurface can be derived. The Markov Chain Monte Carlo (MCMC) method is used to sample the posterior probability distribution, and the subsurface model characteristics can be extracted by analyzing a set of the samples. In this study, the theory of stochastic seismic inversion in a Bayesian framework is described and applied to infer P-impedance and porosity models. The comparison between stochastic seismic inversion and deterministic model-based seismic inversion indicates that the stochastic approach can provide more detailed information about the subsurface. Since multiple realizations are generated by this method, an estimate of the pore volume and the uncertainty in that estimate were also analyzed.
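The MCMC sampling step described above can be sketched for a toy scalar problem. The linear forward operator, noise level and proposal width below are hypothetical; the actual algorithm samples full P-impedance and porosity models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bayesian inversion: infer a scalar model parameter m from noisy data
# d = g(m) + noise, sampling the posterior with a Metropolis random walk.
def g(m):              # hypothetical linear forward operator
    return 2.0 * m

m_true, sigma = 3.0, 0.1
d_obs = g(m_true) + rng.normal(0.0, sigma, size=20)

def log_posterior(m):  # Gaussian likelihood, flat prior
    return -0.5 * np.sum((d_obs - g(m)) ** 2) / sigma ** 2

samples, m = [], 0.0
for _ in range(5000):
    m_prop = m + rng.normal(0.0, 0.05)        # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(m_prop) - log_posterior(m):
        m = m_prop                            # Metropolis accept/reject
    samples.append(m)

posterior = np.array(samples[1000:])          # discard burn-in
print(posterior.mean(), posterior.std())      # estimate and its uncertainty
```

Analyzing the set of retained samples gives both a best estimate (the posterior mean) and an uncertainty (the posterior spread), which is the property the abstract exploits to quantify uncertainty in the pore-volume estimate.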
The OSCAR experiment: using full-waveform inversion in the analysis of young oceanic crust
Silverton, Akela; Morgan, Joanna; Wilson, Dean; Hobbs, Richard
2017-04-01
The OSCAR experiment aims to derive an integrated model to better explain the effects of heat loss and alteration by hydrothermal fluids, associated with the cooling of young oceanic crust at an axial ridge. High-resolution seismic imaging of the sediments and basaltic basement can be used to map fluid flow pathways between the oceanic crust and the surrounding ocean. To obtain these high-resolution images, we undertake full-waveform inversion (FWI), an advanced seismic imaging technique capable of resolving velocity heterogeneities at a wide range of length scales, from background trends to fine-scale geological/crustal detail, in a fully data-driven automated manner. This technology is widely used within the petroleum sector due to its potential to obtain high-resolution P-wave velocity models that lead to improvements in migrated seismic images of the subsurface. Here, we use the P-wave velocity model obtained from travel-time tomography as the starting model in the application of acoustic, time-domain FWI to a multichannel streamer field dataset acquired in the east Pacific along a profile between the Costa Rica spreading centre and the Ocean Drilling Program (ODP) borehole 504B, where the crust is approximately six million years old. FWI iteratively improves the velocity model by minimizing the misfit between the predicted data and the field data. It seeks to find a high-fidelity velocity model that is capable of matching individual seismic waveforms of the original raw field dataset, with an initial focus on matching the low-frequency components of the early arriving energy. Quality assurance methods adopted during the inversion ensure convergence in the direction of the global minimum. We demonstrate that FWI is able to recover fine-scale, high-resolution velocity heterogeneities within the young oceanic crust along the profile. The highly resolved FWI velocity model is useful in the identification of the layer 2A/2B interface and low-velocity layers that
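The core FWI loop, iteratively updating a velocity model to reduce the misfit between predicted and field waveforms, can be sketched for a single-parameter toy problem. The Gaussian arrival pulse, offset and step size below are hypothetical stand-ins; real FWI solves the wave equation and computes gradients with the adjoint-state method:

```python
import numpy as np

# Toy FWI: one scalar velocity, one trace; the "waveform" is a Gaussian pulse
# centred at the travel time distance / v.
t = np.linspace(0.0, 4.0, 400)
distance = 3.0                       # km, hypothetical source-receiver offset

def predict(v):
    return np.exp(-((t - distance / v) ** 2) / (2 * 0.05 ** 2))

v_true = 2.0                         # km/s
d_field = predict(v_true)            # synthetic "field" data

def misfit(v):                       # L2 waveform misfit
    return 0.5 * np.sum((predict(v) - d_field) ** 2)

v = 1.8                              # starting model (e.g. from travel-time tomography)
for _ in range(200):                 # gradient descent, finite-difference gradient
    h = 1e-4
    grad = (misfit(v + h) - misfit(v - h)) / (2 * h)
    v -= 0.001 * grad
print(v)                             # converges toward v_true
```

The starting model must already predict arrivals within roughly half a period of the observed ones, otherwise the local descent converges to a cycle-skipped minimum; this is why the abstract starts from a travel-time tomography model and first matches the low-frequency components of the early arrivals.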
Parlangeau, Camille; Lacombe, Olivier; Schueller, Sylvie; Daniel, Jean-Marc
2018-01-01
The inversion of calcite twin data is a powerful tool to reconstruct paleostresses sustained by carbonate rocks during their geological history. Following Etchecopar's (1984) pioneering work, this study presents a new technique for the inversion of calcite twin data that reconstructs the 5 parameters of the deviatoric stress tensors from both monophase and polyphase twin datasets. The uncertainties in the parameters of the stress tensors reconstructed by this new technique are evaluated on numerically-generated datasets. The technique not only reliably defines the 5 parameters of the deviatoric stress tensor, but also reliably separates very close superimposed stress tensors (30° of difference in maximum principal stress orientation or switch between σ3 and σ2 axes). The technique is further shown to be robust to sampling bias and to slight variability in the critical resolved shear stress. Due to our still incomplete knowledge of the evolution of the critical resolved shear stress with grain size, our results show that it is recommended to analyze twin data subsets of homogeneous grain size to minimize possible errors, mainly those concerning differential stress values. The methodological uncertainty in principal stress orientations is about ± 10°; it is about ± 0.1 for the stress ratio. For differential stresses, the uncertainty is lower than ± 30%. Applying the technique to vein samples within Mesozoic limestones from the Monte Nero anticline (northern Apennines, Italy) demonstrates its ability to reliably detect and separate tectonically significant paleostress orientations and magnitudes from naturally deformed polyphase samples, hence to fingerprint the regional paleostresses of interest in tectonic studies.
Energy Technology Data Exchange (ETDEWEB)
Moy, Charles K.S. [School of Civil Engineering, University of Sydney, Sydney NSW 2006 (Australia); Australian Centre for Microscopy and Microanalysis, University of Sydney, Sydney NSW 2006 (Australia); ARC Centre of Excellence for Design in Light Metals, University of Sydney, Sydney NSW 2006 (Australia); Bocciarelli, Massimiliano, E-mail: massimiliano.bocciarelli@polimi.it [Department of Structural Engineering, Technical University of Milan (Politecnico di Milano), 20133 Milan (Italy); Ringer, Simon P. [Australian Centre for Microscopy and Microanalysis, University of Sydney, Sydney NSW 2006 (Australia); ARC Centre of Excellence for Design in Light Metals, University of Sydney, Sydney NSW 2006 (Australia); Ranzi, Gianluca [School of Civil Engineering, University of Sydney, Sydney NSW 2006 (Australia); Australian Centre for Microscopy and Microanalysis, University of Sydney, Sydney NSW 2006 (Australia); ARC Centre of Excellence for Design in Light Metals, University of Sydney, Sydney NSW 2006 (Australia)
2011-11-25
Highlights: → Identification of mechanical properties by indentation testing and inverse analysis. → Pile-up height is also considered as experimental information. → The inverse problem proves to be well posed also in the case of mystical materials. → 2024 Al alloy samples prepared using different age-hardening treatments are studied. - Abstract: This paper outlines an inverse analysis approach aimed at identifying the mechanical properties of metallic materials from the experimental results of indentation tests. Previous work has shown the ill-posed nature of the inverse problem based on the load-penetration curve alone when dealing with mystical materials, which exhibit identical indentation curves even though they possess different yield and strain-hardening properties. For this reason, an additional measurement is used in the present study as input for the inverse analysis: the maximum pile-up height measured after the indentation test. This approach lends itself to practical applications, as the load-penetration curve can easily be obtained from commonly available micro-indenters, while the pile-up remaining at the end of the test can be measured by different instruments depending on the size of the indented area, for example by an atomic force microscope or a laser profilometer. The inverse analysis procedure consists of a batch deterministic approach, and conventional optimization algorithms are employed for the minimization of the discrepancy norm. The first part of the paper shows how including both the maximum pile-up height and the indentation curve in the input data of the inverse analysis leads to a well-posed inverse problem even for the parameters of mystical materials. The approach is then applied to real experimental data obtained from three sets of 2024 Al alloy samples prepared using different age-hardening treatments. The accuracy of the identification process is validated
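The role of the pile-up measurement in restoring well-posedness can be illustrated with a toy discrepancy norm. The forward expressions below are invented stand-ins, chosen only so that two parameter sets share one indentation curve, as mystical materials do; they are not a real contact-mechanics model:

```python
import numpy as np

h = np.linspace(0.0, 1.0, 50)   # penetration depth samples

def load_curve(e, y):           # toy indentation curve: depends only on e + y
    return (e + y) * h ** 2

def pile_up(e, y):              # toy pile-up measure: depends on the ratio e / y
    return 0.1 * e / y

e_true, y_true = 70.0, 30.0
curve_obs, pile_obs = load_curve(e_true, y_true), pile_up(e_true, y_true)

def discrepancy(e, y, use_pile_up):
    r = np.sum((load_curve(e, y) - curve_obs) ** 2)
    if use_pile_up:
        r += (pile_up(e, y) - pile_obs) ** 2
    return r

# A "mystical" pair (same sum): identical curve, so the curve-only norm is zero...
print(discrepancy(60.0, 40.0, use_pile_up=False))    # 0.0 -> ill posed
# ...but adding the pile-up term separates it from the true pair:
print(discrepancy(60.0, 40.0, use_pile_up=True))     # > 0
print(discrepancy(e_true, y_true, use_pile_up=True))  # 0.0 only at the true pair
```

Minimizing the augmented discrepancy norm with a conventional optimizer then has a unique minimum, which is the well-posedness argument of the first part of the paper.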
Techniques and Applications of Urban Data Analysis
AlHalawani, Sawsan N.
2016-05-26
Digitization and characterization of urban spaces are essential components as we move to an ever-growing ’always connected’ world. Accurate analysis of such digital urban spaces has become more important as we continue to get spatial and social context-aware feedback and recommendations in our daily activities. Modeling and reconstruction of urban environments have thus gained unprecedented importance in the last few years. Such analysis typically spans multiple disciplines, such as computer graphics, and computer vision as well as architecture, geoscience, and remote sensing. Reconstructing an urban environment usually requires an entire pipeline consisting of different tasks. In such a pipeline, data analysis plays a strong role in acquiring meaningful insights from the raw data. This dissertation primarily focuses on the analysis of various forms of urban data and proposes a set of techniques to extract useful information, which is then used for different applications. The first part of this dissertation presents a semi-automatic framework to analyze facade images to recover individual windows along with their functional configurations such as open or (partially) closed states. The main advantage of recovering both the repetition patterns of windows and their individual deformation parameters is to produce a factored facade representation. Such a factored representation enables a range of applications including interactive facade images, improved multi-view stereo reconstruction, facade-level change detection, and novel image editing possibilities. The second part of this dissertation demonstrates the importance of a layout configuration on its performance. As a specific application scenario, I investigate the interior layout of warehouses wherein the goal is to assign items to their storage locations while reducing flow congestion and enhancing the speed of order picking processes. The third part of the dissertation proposes a method to classify cities
Numerical modeling techniques for flood analysis
Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.
2016-12-01
Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies through their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models employing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. The shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grids, were identified, and these can be improved through a 3D model. A 3D model was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the understanding of the causes and effects of flooding.
International Nuclear Information System (INIS)
Alumbaugh, D.L.
1997-01-01
'It is the objective of this proposed study to develop and field test a new, integrated Hybrid Hydrologic-Geophysical Inverse Technique (HHGIT) for characterization of the vadose zone at contaminated sites. This fundamentally new approach to site characterization and monitoring will provide detailed knowledge about hydrological properties, geological heterogeneity and the extent and movement of contamination. HHGIT combines electrical resistivity tomography (ERT) to geophysically sense a 3D volume, statistical information about fabric of geological formations, and sparse data on moisture and contaminant distributions. Combining these three types of information into a single inversion process will provide much better estimates of spatially varied hydraulic properties and three-dimensional contaminant distributions than could be obtained from interpreting the data types individually. Furthermore, HHGIT will be a geostatistically based estimation technique; the estimates represent conditional mean hydraulic property fields and contaminant distributions. Thus, this method will also quantify the uncertainty of the estimates as well as the estimates themselves. The knowledge of this uncertainty is necessary to determine the likelihood of success of remediation efforts and the risk posed by hazardous materials. Controlled field experiments will be conducted to provide critical data sets for evaluation of these methodologies, for better understanding of mechanisms controlling contaminant movement in the vadose zone, and for evaluation of the HHGIT method as a long term monitoring strategy.'
Directory of Open Access Journals (Sweden)
Ardhia Wishnuprakasa
2016-12-01
In this study, the IEEE 519 standard serves as a benchmarking basis for voltage (THDv) and current (THDi) harmonic distortion in the drive performance. A comparative study of three techniques for a 2-Level Converter (2LC) driving a star-connected induction motor (Y-CIM) in an Extra Low Voltage (ELV) configuration is presented. In detail, a primary inverter operates as the direct inverter using PWMdirect (PWM degrees) and a secondary inverter as the inverse inverter using PWMinverse (PWM + PI degrees). A modified algorithm is presented for SPWM with six rules and for FHIPWM with 5th-harmonic injection in standard modulation, extended for the open-ends pre-dual inverter to decoupled SPWM with twelve rules and decoupled FHIPWM with 5th-harmonic injection combining two standard modulations. These techniques combine the two inverters into what is named the Equal Direct-Inverse (EDI) algorithm, a product of prototyping their similarities. The observations are restricted to voltages, compared between simulation using the Power Simulator (PSIM) and implementation on an ARM STM32F4 Discovery microcontroller.
Directory of Open Access Journals (Sweden)
M.A. Nwachukwu
2017-01-01
Full Text Available The use of trial pits as a first step in quarry site development causes land degradation and results in more failure than success for potential quarry investors in some parts of the world. In this paper, resistivity, depth and distance values derived from 26 Vertical Electric Soundings (VES) and 2 profiling inversion sections were successfully used to evaluate a quarry site prior to development. The target rock, diabase (dolerite), was observed with a resistivity range of 3.0 × 10⁴–7.8 × 10⁶ Ω-m, and was clearly distinguishable from associated rocks by its bright red colour code in the AGI 1D inversion software. This target rock was overlain by quartzite, indurated shale and mudstone as overburden materials. The quartzite, with its off-red colour, has a resistivity range of 2.0 × 10³–2.9 × 10⁵ Ω-m, while the indurated shale, with a yellowish-brown colour, showed resistivity values ranging from 6.1 × 10²–2.8 × 10⁵ Ω-m. Topsoil was clayey, with a resistivity range from 8–8.6 × 10² Ω-m and depths of 0.3–1.8 m, often weathered and replaced by associated rock outcrops. The diabase rock, in the three prospective pits mapped, showed thicknesses of between 40 and 76 m across the site. The prospective pits were identified to accommodate an estimated 2,569,450 tonnes of diabase with an average quarry pit depth of 50 m. This figure was supported by physical observations made at a nearby quarry pit and from test holes. Communities were able to prepare a geophysical appraisal of the intrusive body in their domain for economic planning and sustainability of the natural resource.
Inferring time‐varying recharge from inverse analysis of long‐term water levels
Dickinson, Jesse; Hanson, R.T.; Ferré, T.P.A.; Leake, S.A.
2004-01-01
Water levels in aquifers typically vary in response to time‐varying rates of recharge, suggesting the possibility of inferring time‐varying recharge rates on the basis of long‐term water level records. Presumably, in the southwestern United States (Arizona, Nevada, New Mexico, southern California, and southern Utah), rates of mountain front recharge to alluvial aquifers depend on variations in precipitation rates due to known climate cycles such as the El Niño‐Southern Oscillation index and the Pacific Decadal Oscillation. This investigation examined the inverse application of a one‐dimensional analytical model for periodic flow described by Lloyd R. Townley in 1995 to estimate periodic recharge variations on the basis of variations in long‐term water level records using southwest aquifers as the case study. Time‐varying water level records at various locations along the flow line were obtained by simulation of forward models of synthetic basins with applied sinusoidal recharge of either a single period or composite of multiple periods of length similar to known climate cycles. Periodic water level components, reconstructed using singular spectrum analysis (SSA), were used to calibrate the analytical model to estimate each recharge component. The results demonstrated that periodic recharge estimates were most accurate in basins with nearly uniform transmissivity and the accuracy of the recharge estimates depends on monitoring well location. A case study of the San Pedro Basin, Arizona, is presented as an example of calibrating the analytical model to real data.
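The core step described above, estimating the amplitude of periodic recharge-driven components at known climate periods from a long water-level record, can be sketched with simple harmonic regression (a stand-in for the SSA reconstruction and analytical-model calibration of the paper; the periods, amplitudes, and noise level below are hypothetical):

```python
import numpy as np

# Recover periodic components of a synthetic water-level record by
# least-squares harmonic regression at two assumed climate periods
# (ENSO-like and PDO-like). All values are illustrative.
rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 1.0 / 12.0)          # 60 years, monthly samples
periods = np.array([3.5, 22.0])               # candidate periods (years)
true_amp = np.array([0.8, 1.5])               # component amplitudes (m)

h = sum(a * np.sin(2 * np.pi * t / p) for a, p in zip(true_amp, periods))
h = h + 0.1 * rng.standard_normal(t.size)     # measurement noise

# Design matrix with a sine and cosine column for each candidate period
cols = []
for p in periods:
    cols.append(np.sin(2 * np.pi * t / p))
    cols.append(np.cos(2 * np.pi * t / p))
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, h, rcond=None)

# Amplitude of each periodic component from its sine/cosine coefficient pair
est_amp = np.hypot(coef[0::2], coef[1::2])
print(est_amp)   # should be close to true_amp
```

As the abstract notes, accuracy in a real basin also depends on transmissivity structure and monitoring-well location, which this purely statistical sketch does not capture.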
Quantitative blood flow analysis with digital techniques
International Nuclear Information System (INIS)
Forbes, G.
1984-01-01
The general principles of digital techniques in quantitating absolute blood flow during arteriography are described. Results are presented for a phantom constructed to correlate digitally calculated absolute flow with direct flow measurements. The clinical use of digital techniques in cerebrovascular angiography is briefly described. (U.K.)
Quantitative analysis of SMEX'02 AIRSAR data for soil moisture inversion
Zyl, J. J. van; Njoku, E.; Jackson, T.
2003-01-01
This paper discusses in detail the characteristics of the AIRSAR data acquired, and provides an initial quantitative assessment of the accuracy of the radar inversion algorithms under these vegetated conditions.
Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method
Djebbi, Ramzi
2017-01-01
the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic
Born reflection kernel analysis and wave-equation reflection traveltime inversion in elastic media
Wang, Tengfei; Cheng, Jiubing
2017-01-01
Elastic reflection waveform inversion (ERWI) utilize the reflections to update the low and intermediate wavenumbers in the deeper part of model. However, ERWI suffers from the cycle-skipping problem due to the objective function of waveform residual
Wang, James S.; Kawa, S. Randolph; Collatz, G. James; Baker, David F.; Ott, Lesley
2015-01-01
About one-half of the global CO2 emissions from fossil fuel combustion and deforestation accumulates in the atmosphere, where it contributes to global warming. The rest is taken up by vegetation and the ocean. The precise contribution of the two sinks, and their location and year-to-year variability are, however, not well understood. We use two different approaches, batch Bayesian synthesis inversion and variational data assimilation, to deduce the global spatiotemporal distributions of CO2 fluxes during 2009-2010. One of our objectives is to assess different sources of uncertainties in inferred fluxes, including uncertainties in prior flux estimates and observations, and differences in inversion techniques. For prior constraints, we utilize fluxes and uncertainties from the CASA-GFED model of the terrestrial biosphere and biomass burning driven by satellite observations and interannually varying meteorology. We also use measurement-based ocean flux estimates and two sets of fixed fossil CO2 emissions. Here, our inversions incorporate column CO2 measurements from the GOSAT satellite (ACOS retrieval, filtered and bias-corrected) and in situ observations (individual flask and afternoon-average continuous observations) to estimate fluxes in 108 regions over 8-day intervals for the batch inversion and at 3 x 3.75 weekly for the variational system. Relationships between fluxes and atmospheric concentrations are derived consistently for the two inversion systems using the PCTM atmospheric transport model driven by meteorology from the MERRA reanalysis. We compare the posterior fluxes and uncertainties derived using different data sets and the two inversion approaches, and evaluate the posterior atmospheric concentrations against independent data including aircraft measurements. The optimized fluxes generally resemble those from other studies. For example, the results indicate that the terrestrial biosphere is a net CO2 sink, and a GOSAT-only inversion suggests a shift in
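The batch Bayesian synthesis inversion mentioned above amounts to a linear Gaussian update of prior fluxes given observations through a transport operator. A minimal sketch follows; the matrix sizes, covariances, and random "transport" Jacobian are hypothetical stand-ins for the 108-region, PCTM-based system of the paper:

```python
import numpy as np

# Batch Bayesian synthesis inversion: Gaussian prior on fluxes x,
# linear transport operator H (concentrations = H @ x), noisy obs y.
rng = np.random.default_rng(1)
n_flux, n_obs = 4, 20

x_true = np.array([1.0, -2.0, 0.5, 3.0])     # "true" regional fluxes
H = rng.standard_normal((n_obs, n_flux))     # transport (Jacobian) matrix
R = 0.05 * np.eye(n_obs)                     # observation-error covariance
P = 4.0 * np.eye(n_flux)                     # prior flux-error covariance
x_prior = np.zeros(n_flux)

y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Batch update: all observations assimilated at once
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)               # gain matrix
x_post = x_prior + K @ (y - H @ x_prior)     # posterior (optimized) fluxes
P_post = P - K @ H @ P                       # posterior covariance

print(x_post)                                # close to x_true
```

The variational system in the abstract minimizes the same cost function iteratively rather than forming these matrices explicitly, which is what makes the higher-resolution (weekly, gridded) estimation tractable.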
Energy Technology Data Exchange (ETDEWEB)
2018-03-19
R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.
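The inverse-prediction setting described above (the original analysis is in R) can be illustrated with the simplest classical calibration approach: fit a forward model of response versus characteristic on training data, then invert it for a new specimen. The linear model and numbers below are hypothetical, not the paper's data set:

```python
import numpy as np

# Classical (frequentist) inverse prediction via linear calibration:
# fit y = a + b*x on training pairs, then solve for x given a new y.
rng = np.random.default_rng(2)
x_train = np.linspace(1.0, 10.0, 30)               # known characteristic
y_train = 2.0 + 0.7 * x_train + 0.05 * rng.standard_normal(30)

b, a = np.polyfit(x_train, y_train, 1)             # forward fit y = a + b*x

y_new = 6.2                                        # new measured response
x_hat = (y_new - a) / b                            # inverse prediction
print(x_hat)   # near (6.2 - 2.0) / 0.7 = 6.0
```

The paper compares several such methods (and their uncertainty statements); this sketch shows only the point estimate from the most basic one.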
Analysis of inversions in the factor VIII gene in Spanish hemophilia A patients and families
Energy Technology Data Exchange (ETDEWEB)
Domenech, M.; Tizzano, E.; Baiget, M. [Hospital de Sant Pau, Barcelona (Spain); Altisent, C. [Hospital Vall d'Hebron, Barcelona (Spain)]
1994-09-01
Intron 22 is the largest intron of the factor VIII gene and contains a CpG island from which two additional transcripts originate. One of these transcripts corresponds to the F8A gene, which has telomeric extragenic copies on the X chromosome. An inversion involving homologous recombination between the intragenic and the distal or proximal copies of the F8A gene has recently been described as a common cause of severe hemophilia A (HA). We analyzed intron 22 rearrangements in 195 HA patients (123 familial and 72 sporadic cases). According to factor VIII levels, our sample was classified as severe in 114 cases, moderate in 29 cases and mild in 52 cases. An intron 22 (F8A) probe was hybridized to Southern blots of BclI-digested DNA obtained from peripheral blood. A clear pattern of altered bands identifies distal or proximal inversions. We detected an abnormal pattern identifying an inversion in 49 (25%) of the analyzed cases; 43% of severe HA patients (49 cases) showed an inversion. As expected, no inversion was found in the moderate and mild groups of patients. We found a high proportion (78%) of the distal rearrangement. Of the 49 identified inversions, 33 were found in familial cases (27%), while the remaining 15 were detected in sporadic patients (22%), supporting that this mutational event occurs with a similar frequency in familial and sporadic cases. In addition, we detected a significant tendency of the distal inversion to occur more frequently in familial cases than in sporadic cases. Inhibitor development to factor VIII was documented in approximately 1/3 of the patients with inversion. The identification of such a frequent molecular event in severe hemophilia A patients has been applied in our families to carrier and prenatal diagnosis, to determine the origin of the mutation in sporadic cases and to detect the presence of germinal mosaicism.
Molina-Aguilera, A.; Mancilla, F. D. L.; Julià, J.; Morales, J.
2017-12-01
Joint inversion techniques of P-receiver functions and wave dispersion data implicitly assume an isotropic, radially stratified earth. The conventional approach inverts stacked radial-component receiver functions from different back-azimuths to obtain a laterally homogeneous single-velocity model. However, in the presence of strong lateral heterogeneities such as anisotropic layers and/or dipping interfaces, receiver functions are considerably perturbed and both the radial and transverse components exhibit back-azimuthal dependences. Harmonic analysis methods exploit these azimuthal periodicities to separate the effects of the isotropic flat-layered structure from those caused by lateral heterogeneities. We implement a harmonic analysis method based on the radial and transverse receiver function components and carry out a synthetic study to illuminate the capabilities of the method in isolating the isotropic flat-layered part of the receiver functions and constraining the geometry and strength of lateral heterogeneities. The back-azimuth-independent P receiver functions are jointly inverted with phase and group dispersion curves using a linearized inversion procedure. We apply this approach to densely spaced seismic profiles (≈2 km inter-station distance) located in the central Betics (western Mediterranean region), a region which has experienced complex geodynamic processes and exhibits strong variations in Moho topography. The technique presented here is robust and can be applied systematically to construct a 3-D model of the crust and uppermost mantle across large networks.
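The harmonic-analysis step can be sketched at a single delay time: model the receiver-function amplitude as a truncated Fourier series in back-azimuth, where the constant term is the isotropic flat-layered part and the first/second harmonics capture dipping-interface and anisotropy effects. The amplitudes and noise below are synthetic, not from the Betics data:

```python
import numpy as np

# Harmonic decomposition of a receiver-function amplitude versus
# back-azimuth (baz): amp(baz) = A0 + A1*cos(baz) + B1*sin(baz)
#                              + A2*cos(2*baz) + B2*sin(2*baz).
rng = np.random.default_rng(3)
baz = np.deg2rad(np.arange(0.0, 360.0, 10.0))      # event back-azimuths

A0_true, A1_true, B2_true = 0.30, 0.12, 0.08       # hypothetical terms
amp = (A0_true + A1_true * np.cos(baz) + B2_true * np.sin(2 * baz)
       + 0.01 * rng.standard_normal(baz.size))

# Least-squares fit of the five harmonic coefficients
G = np.column_stack([np.ones_like(baz),
                     np.cos(baz), np.sin(baz),
                     np.cos(2 * baz), np.sin(2 * baz)])
m, *_ = np.linalg.lstsq(G, amp, rcond=None)

A0_est = m[0]      # baz-independent term, passed on to the joint inversion
print(A0_est)      # close to 0.30
```

Repeating this fit at every delay time yields the baz-independent receiver function that is then jointly inverted with the dispersion curves.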
Ceylan, Halil; Gopalakrishnan, Kasthurirangan; Birkan Bayrak, Mustafa; Guclu, Alper
2013-09-01
The need to rapidly and cost-effectively evaluate the present condition of pavement infrastructure is a critical issue concerning the deterioration of ageing transportation infrastructure all around the world. Nondestructive testing (NDT) and evaluation methods are well-suited for characterising materials and determining structural integrity of pavement systems. The falling weight deflectometer (FWD) is a NDT equipment used to assess the structural condition of highway and airfield pavement systems and to determine the moduli of pavement layers. This involves static or dynamic inverse analysis (referred to as backcalculation) of FWD deflection profiles in the pavement surface under a simulated truck load. The main objective of this study was to employ biologically inspired computational systems to develop robust pavement layer moduli backcalculation algorithms that can tolerate noise or inaccuracies in the FWD deflection data collected in the field. Artificial neural systems, also known as artificial neural networks (ANNs), are valuable computational intelligence tools that are increasingly being used to solve resource-intensive complex engineering problems. Unlike the linear elastic layered theory commonly used in pavement layer backcalculation, non-linear unbound aggregate base and subgrade soil response models were used in an axisymmetric finite element structural analysis programme to generate synthetic database for training and testing the ANN models. In order to develop more robust networks that can tolerate the noisy or inaccurate pavement deflection patterns in the NDT data, several network architectures were trained with varying levels of noise in them. The trained ANN models were capable of rapidly predicting the pavement layer moduli and critical pavement responses (tensile strains at the bottom of the asphalt concrete layer, compressive strains on top of the subgrade layer and the deviator stresses on top of the subgrade layer), and also pavement
Real analysis modern techniques and their applications
Folland, Gerald B
1999-01-01
An in-depth look at real analysis and its applications-now expanded and revised.This new edition of the widely used analysis book continues to cover real analysis in greater detail and at a more advanced level than most books on the subject. Encompassing several subjects that underlie much of modern analysis, the book focuses on measure and integration theory, point set topology, and the basics of functional analysis. It illustrates the use of the general theories and introduces readers to other branches of analysis such as Fourier analysis, distribution theory, and probability theory.This edi
International Nuclear Information System (INIS)
Bunshah, R.F.
1976-01-01
A number of different techniques which range over several different aspects of materials research are covered in this volume. They are concerned with property evaluation at 4.0 K and below, surface characterization, coating techniques, techniques for the fabrication of composite materials, computer methods, data evaluation and analysis, statistical design of experiments and non-destructive test techniques. Topics covered in this part include internal friction measurements; nondestructive testing techniques; statistical design of experiments and regression analysis in metallurgical research; and measurement of surfaces of engineering materials
Directory of Open Access Journals (Sweden)
C. B. Alden
2018-03-01
Full Text Available Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model–data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10⁻⁵ kg s⁻¹ of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km² in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when
Alden, Caroline B.; Ghosh, Subhomoy; Coburn, Sean; Sweeney, Colm; Karion, Anna; Wright, Robert; Coddington, Ian; Rieker, Gregory B.; Prasad, Kuldeep
2018-03-01
Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model-data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10⁻⁵ kg s⁻¹ of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km² in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min. The
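The core NZMB idea, building an empirical distribution of possible source strengths and flagging a leak only when that distribution excludes zero, can be sketched for a single source with a simple residual bootstrap. The footprint values, noise level, and leak strength below are hypothetical:

```python
import numpy as np

# Simplified non-zero minimum bootstrap (NZMB) sketch for one source:
# observations y = h * s + noise, where h is the transport footprint
# (ppb per unit flux) and s is the unknown source strength.
rng = np.random.default_rng(4)
n = 200
h = rng.uniform(0.5, 2.0, n)         # hypothetical footprint values
s_true = 0.8                         # true leak strength
y = h * s_true + 1.0 * rng.standard_normal(n)

s_hat = np.dot(h, y) / np.dot(h, h)  # least-squares strength estimate
resid = y - h * s_hat

# Residual bootstrap: empirical distribution of the strength estimate
boot = np.empty(2000)
for b in range(boot.size):
    y_b = h * s_hat + rng.choice(resid, size=n, replace=True)
    boot[b] = np.dot(h, y_b) / np.dot(h, h)

lo, hi = np.percentile(boot, [2.5, 97.5])
leaking = lo > 0.0 or hi < 0.0       # zero excluded -> reject "not leaking"
print(s_hat, leaking)
```

The paper's method extends this to many candidate source locations simultaneously, with the transport model supplying the footprints and the null hypothesis tested per location.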
Application of functional analysis techniques to supervisory systems
International Nuclear Information System (INIS)
Lambert, Manuel; Riera, Bernard; Martel, Gregory
1999-01-01
The aim of this paper is to apply firstly two interesting functional analysis techniques for the design of supervisory systems for complex processes, and secondly to discuss the strength and the weaknesses of each of them. Two functional analysis techniques have been applied, SADT (Structured Analysis and Design Technique) and FAST (Functional Analysis System Technique) on a process, an example of a Water Supply Process Control (WSPC) system. These techniques allow a functional description of industrial processes. The paper briefly discusses the functions of a supervisory system and some advantages of the application of functional analysis for the design of a 'human' centered supervisory system. Then the basic principles of the two techniques applied on the WSPC system are presented. Finally, the different results obtained from the two techniques are discussed
IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES
Institute of Scientific and Technical Information of China (English)
纳瑟; 刘重庆
2002-01-01
A method that incorporates an edge detection technique, Markov random fields (MRF), and watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. Edge detection is first applied to obtain a Difference In Strength (DIS) map. An initial segmentation is obtained using K-means clustering and the minimum-distance rule. The region process is then modeled by an MRF to obtain an image containing regions of different intensity. Gradient values are calculated and the watershed technique is applied. The DIS value is computed for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge about likely region boundaries for the next step (MRF), which produces an image containing all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors applied to the MRF-segmented image are used for comparison against these results. The segmentation and edge detection result is one closed boundary per actual region in the image.
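The first stage above, computing a strength map from local gradients and thresholding it to mark edge pixels, can be sketched as follows. The synthetic two-region image and the threshold are illustrative; the MRF labelling and watershed merging stages are not shown:

```python
import numpy as np

# DIS-like strength map: gradient magnitude of the image, then a
# threshold to mark strong-edge pixels.
img = np.zeros((32, 32))
img[:, 16:] = 1.0                       # two regions with a vertical edge

# Central-difference gradients (a simple stand-in for a Sobel operator)
gy, gx = np.gradient(img)
dis = np.hypot(gx, gy)                  # gradient-magnitude "strength" map

edges = dis > 0.25                      # strong-edge pixels
print(int(edges.sum()))                 # edge pixels cluster at the boundary
```

In the full method, this map guides the MRF region process, and the watershed and merge steps close the detected boundaries into one contour per region.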
Walker, Joseph F; Zanis, Michael J; Emery, Nancy C
2014-04-01
Complete chloroplast genome studies can help resolve relationships among large, complex plant lineages such as Asteraceae. We present the first whole plastome from the Madieae tribe and compare its sequence variation to other chloroplast genomes in Asteraceae. We used high throughput sequencing to obtain the Lasthenia burkei chloroplast genome. We compared sequence structure and rates of molecular evolution in the small single copy (SSC), large single copy (LSC), and inverted repeat (IR) regions to those for eight Asteraceae accessions and one Solanaceae accession. The chloroplast sequence of L. burkei is 150 746 bp and contains 81 unique protein coding genes and 4 coding ribosomal RNA sequences. We identified three major inversions in the L. burkei chloroplast, all of which have been found in other Asteraceae lineages, and a previously unreported inversion in Lactuca sativa. Regions flanking inversions contained tRNA sequences, but did not have particularly high G + C content. Substitution rates varied among the SSC, LSC, and IR regions, and rates of evolution within each region varied among species. Some observed differences in rates of molecular evolution may be explained by the relative proportion of coding to noncoding sequence within regions. Rates of molecular evolution vary substantially within and among chloroplast genomes, and major inversion events may be promoted by the presence of tRNAs. Collectively, these results provide insight into different mechanisms that may promote intramolecular recombination and the inversion of large genomic regions in the plastome.
MCNP perturbation technique for criticality analysis
International Nuclear Information System (INIS)
McKinney, G.W.; Iverson, J.L.
1995-01-01
The differential operator perturbation technique has been incorporated into the Monte Carlo N-Particle transport code MCNP and will become a standard feature of future releases. This feature includes first- and/or second-order terms of the Taylor series expansion for response perturbations related to cross-section data (i.e., density, composition, etc.). Criticality analyses can benefit from this technique in that predicted changes in the track-length tally estimator of k_eff may be obtained for multiple perturbations in a single run. A key advantage of this method is that a precise estimate of a small change in response (i.e., < 1%) is easily obtained. This technique can also offer acceptable accuracy, to within a few percent, for up to 20-30% changes in a response.
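The Taylor-series idea behind the technique can be illustrated outside of MCNP: predict the change in a response for a perturbed parameter from its first and second derivatives instead of re-running the simulation. The response function below is a hypothetical stand-in for a transport response to a density/cross-section parameter:

```python
import numpy as np

# First- and second-order Taylor predictions of a response change,
# compared against direct re-evaluation. R(rho) = exp(-2*rho) is a
# toy attenuation-like response; derivatives are analytic here, where
# MCNP estimates them with differential-operator tallies.
def R(rho):
    return np.exp(-2.0 * rho)

rho0 = 1.0
dR = -2.0 * R(rho0)           # first derivative at rho0
d2R = 4.0 * R(rho0)           # second derivative at rho0

for pct in (0.01, 0.10, 0.30):            # 1%, 10%, 30% perturbations
    d = pct * rho0
    first = dR * d                         # first-order prediction
    second = first + 0.5 * d2R * d ** 2    # second-order prediction
    exact = R(rho0 + d) - R(rho0)          # direct evaluation
    print(pct, first / exact, second / exact)
```

As in the abstract, the second-order term keeps the prediction within a few percent of the direct result even for 20-30% perturbations, while the first-order term alone degrades faster.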
Improved Tandem Measurement Techniques for Aerosol Particle Analysis
Rawat, Vivek Kumar
Non-spherical, chemically inhomogeneous (complex) nanoparticles are encountered in a number of natural and engineered environments, including combustion systems (which produces highly non-spherical aggregates), reactors used in gas-phase materials synthesis of doped or multicomponent materials, and in ambient air. These nanoparticles are often highly diverse in size, composition and shape, and hence require determination of property distribution functions for accurate characterization. This thesis focuses on development of tandem mobility-mass measurement techniques coupled with appropriate data inversion routines to facilitate measurement of two dimensional size-mass distribution functions while correcting for the non-idealities of the instruments. Chapter 1 provides the detailed background and motivation for the studies performed in this thesis. In chapter 2, the development of an inversion routine is described which is employed to determine two dimensional size-mass distribution functions from Differential Mobility Analyzer-Aerosol Particle Mass analyzer tandem measurements. Chapter 3 demonstrates the application of the two dimensional distribution function to compute cumulative mass distribution function and also evaluates the validity of this technique by comparing the calculated total mass concentrations to measured values for a variety of aerosols. In Chapter 4, this tandem measurement technique with the inversion routine is employed to analyze colloidal suspensions. Chapter 5 focuses on application of a transverse modulation ion mobility spectrometer coupled with a mass spectrometer to study the effect of vapor dopants on the mobility shifts of sub 2 nm peptide ion clusters. These mobility shifts are then compared to models based on vapor uptake theories. Finally, in Chapter 6, a conclusion of all the studies performed in this thesis is provided and future avenues of research are discussed.
Data Analysis Techniques for Physical Scientists
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
Surface analysis and techniques in biology
Smentkowski, Vincent S
2014-01-01
This book highlights state-of-the-art surface analytical instrumentation, advanced data analysis tools, and the use of complimentary surface analytical instrumentation to perform a complete analysis of biological systems.
Inverse analysis of a rectangular fin using the lattice Boltzmann method
International Nuclear Information System (INIS)
Bamdad, Keivan; Ashorynejad, Hamid Reza
2015-01-01
Highlights: • Lattice Boltzmann method is used to study a transient conductive-convective fin. • LBM and Conjugate Gradient Method (CGM) are used to solve an inverse problem in fins. • LBM–ACGM estimates the unknown boundary conditions of fins accurately. • The accuracy and CPU time of LBM–ACGM are compared to IFDM–ACGM. • LBM–ACGM could be a good alternative for the conventional inverse methods. - Abstract: Inverse methods have many applications in determining unknown variables in heat transfer problems when direct measurements are impossible. As most common inverse methods are iterative and time consuming especially for complex geometries, developing more efficient methods seems necessary. In this paper, a direct transient conduction–convection heat transfer problem (fin) under several boundary conditions was solved by using lattice Boltzmann method (LBM), and then the results were successfully validated against both the finite difference method and analytical solution. Then, in the inverse problem both unknown base temperatures and heat fluxes in the rectangular fin were estimated by combining the adjoint conjugate gradient method (ACGM) and LBM. A close agreement between the exact values and estimated results confirmed the validity and accuracy of the ACGM–LBM. To compare the calculation time of ACGM–LBM, the inverse problem was solved by implicit finite difference methods as well. This comparison proved that the ACGM–LBM was an accurate and fast method to determine unknown thermal boundary conditions in transient conduction–convection heat transfer problems. The findings can efficiently determine the unknown variables in fins when a desired temperature distribution is available
Directory of Open Access Journals (Sweden)
2005-10-01
Full Text Available With a draft genome-sequence assembly for the chimpanzee available, it is now possible to perform genome-wide analyses to identify, at a submicroscopic level, structural rearrangements that have occurred between chimpanzees and humans. The goal of this study was to investigate chromosomal regions that are inverted between the chimpanzee and human genomes. Using the net alignments for the builds of the human and chimpanzee genome assemblies, we identified a total of 1,576 putative regions of inverted orientation, covering more than 154 mega-bases of DNA. The DNA segments are distributed throughout the genome and range from 23 base pairs to 62 mega-bases in length. For the 66 inversions more than 25 kilobases (kb) in length, 75% were flanked on one or both sides by (often unrelated) segmental duplications. Using PCR and fluorescence in situ hybridization we experimentally validated 23 of 27 (85%) semi-randomly chosen regions; the largest novel inversion confirmed was 4.3 mega-bases at human Chromosome 7p14. Gorilla was used as an out-group to assign ancestral status to the variants. All experimentally validated inversion regions were then assayed against a panel of human samples and three of the 23 (13%) regions were found to be polymorphic in the human genome. These polymorphic inversions include 730 kb (at 7p22), 13 kb (at 7q11), and 1 kb (at 16q24) fragments with a 5%, 30%, and 48% minor allele frequency, respectively. Our results suggest that inversions are an important source of variation in primate genome evolution. The finding of at least three novel inversion polymorphisms in humans indicates this type of structural variation may be a more common feature of our genome than previously realized.
Generalized inverses theory and computations
Wang, Guorong; Qiao, Sanzheng
2018-01-01
This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.
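The central object of the book, the Moore-Penrose generalized inverse, can be computed from the singular value decomposition and verified against the four Penrose conditions that characterize it uniquely. A minimal sketch for a rank-deficient matrix:

```python
import numpy as np

# Moore-Penrose pseudoinverse via SVD: invert the nonzero singular
# values, zero out the rest, and check the four Penrose conditions.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])            # rank 1 (second column = 2 * first)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / np.where(s > tol, s, 1.0), 0.0)
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

# The four Penrose conditions: A X A = A, X A X = X,
# (A X) symmetric, (X A) symmetric.
ok = (np.allclose(A @ A_pinv @ A, A) and
      np.allclose(A_pinv @ A @ A_pinv, A_pinv) and
      np.allclose((A @ A_pinv).T, A @ A_pinv) and
      np.allclose((A_pinv @ A).T, A_pinv @ A))
print(ok)                              # True; matches np.linalg.pinv(A)
```

The determinant representations, reverse-order laws, and structured-matrix results treated in the book all build on this same object.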
Analysis of a spectrum of a positron annihilation half life through inverse problem studies
International Nuclear Information System (INIS)
Monteiro, Roberto Pellacani G.; Viterbo, Vanessa C.; Braga, Joao Pedro; Magalhaes, Wellington F. de; Braga, A.P.
2002-01-01
Inversion of positron annihilation lifetime spectroscopy, based on a neural-network Hopfield model and on singular value decomposition (SVD) associated with Tikhonov regularization, is presented in this work. From a previously reported density function for lysozyme in water, a simulated spectrum without spectrometer resolution effects was generated. The precision of the inverted density function was analyzed taking into account the number of neurons and the learning time for the Hopfield network, and the peak positions and areas for the SVD method, considering both noisy and noiseless data. A fair agreement was obtained when comparing the inversion results with direct exact results. (author)
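The SVD-plus-Tikhonov part of such a scheme can be sketched with toy data; the Hopfield network, the lysozyme density function, and all numbers below are stand-ins, not the authors' setup.

```python
import numpy as np

# Sketch of Tikhonov-regularized SVD inversion: solve
# min ||K f - y||^2 + lam ||f||^2 via filtered singular values.
def tikhonov_svd(K, y, lam):
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    filt = s / (s ** 2 + lam)          # Tikhonov filter factors
    return Vt.T @ (filt * (U.T @ y))

# Mildly ill-conditioned toy kernel: Gaussian smoothing of an unknown function.
n = 30
x = np.linspace(0.0, 1.0, n)
K = np.exp(-30.0 * (x[:, None] - x[None, :]) ** 2)
f_true = np.sin(np.pi * x)
y = K @ f_true + np.random.default_rng(2).normal(0, 1e-4, n)

f = tikhonov_svd(K, y, lam=1e-6)       # regularized reconstruction of f_true
```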
Methane combustion kinetic rate constants determination: an ill-posed inverse problem analysis
Directory of Open Access Journals (Sweden)
Bárbara D. L. Ferreira
2013-01-01
Methane combustion was studied by the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science, as computational effort can be notably reduced. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when inherent experimental errors in chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen due to its numerical stability and robustness. The proposed methodology was compared against Simplex and Levenberg-Marquardt, the most widely used methods for optimization problems.
Inversion factor in the comparative analysis of dynamical processes in radioecology
Energy Technology Data Exchange (ETDEWEB)
Zarubin, O.; Zarubina, N. [Institute for Nuclear Research of National Academy of Science of Ukraine (Ukraine)]
2014-07-01
We have studied levels of specific activity of radionuclides in fish and fungi of the Kiev region of Ukraine from 1986 to 2013, including the 30-km alienation zone of the Chernobyl Nuclear Power Plant (ChNPP) after the accident. Over this period, the dynamics of radionuclide specific activity were analyzed for 10 species of freshwater fish of different trophic levels and for 7 species of higher fungi. Repeated measurements of the specific activity of radionuclides in fish were carried out on the Kanevskoe reservoir and the cooling pond of ChNPP, and in fungi on 6 testing areas situated within the range of 2 to 150 km from ChNPP. The main attention was given to accumulation of {sup 137}Cs. We established that the dynamics of specific activity of {sup 137}Cs differ among fish species within the same reservoir. The dynamics of specific activity of {sup 137}Cs among fungus species of the same testing area are likewise not identical, and the dynamics for the investigated objects vary between the dry-land and water testing areas. The authors suggest an inversion factor for comparing the dynamics of specific activity of {sup 137}Cs, which in biota is a nonlinear process: K{sub inv} = A{sub 0} / A{sub t}, where A{sub 0} stands for the value of specific activity of the radionuclide at time 0 and A{sub t} for the specific activity of the radionuclide at time t. K{sub inv} thus reflects the ratio (inversion) of the specific activity of radionuclides to its starting value as a function of time, where K{sub inv} > 1 corresponds to an increase in the radionuclides' specific activity and K{sub inv} < 1 to a decrease. For example, in 1987 - 1996 the K{sub inv} of {sup 137}Cs in the fish Rutilus rutilus was 0.57 in the Kanevskoe reservoir and 13.33 in the cooling pond of ChNPP, and 0.95 and 29.61, respectively, for Blicca bjoerkna. In 1987 - 2011 K{sub inv} of {sup 137}Cs at R. rutilus in the Kanevskoe reservoir
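The inversion factor itself reduces to a one-line computation; the activities below are made up for illustration.

```python
# Sketch of the inversion factor defined in the abstract, K_inv = A_0 / A_t:
# the ratio of the radionuclide's specific activity at time zero to its
# specific activity at time t.
def inversion_factor(a0, at):
    if a0 <= 0 or at <= 0:
        raise ValueError("specific activities must be positive")
    return a0 / at

# Example with made-up specific activities (Bq/kg).
assert inversion_factor(1000.0, 500.0) == 2.0
```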
Vintila, Iuliana; Gavrus, Adinel
2017-10-01
The present research paper proposes the validation of a rigorous computation model used as a numerical tool to identify the rheological behavior of complex W/O emulsions. Considering a three-dimensional description of a general viscoplastic flow, the thermo-mechanical equations used to identify the rheological laws of fluids or soft materials from global experimental measurements are detailed. Analyses are conducted for complex W/O emulsions, which generally exhibit Bingham behavior, using the shear stress - strain rate dependency based on a power law and an improved analytical model. Experimental results are investigated for the rheological behavior of crude and refined rapeseed/soybean oils and four types of corresponding W/O emulsions with different physical-chemical compositions. The rheological behavior model was correlated with the thermo-mechanical analysis of a plane-plane rheometer, oil content, chemical composition, particle size, and emulsifier concentration. The parameters of the rheological laws describing the industrial oils and the concentrated W/O emulsions were computed from estimated shear stresses using a non-linear regression technique and from experimental torques using the inverse analysis tool designed by A. Gavrus (1992-2000).
Survey of immunoassay techniques for biological analysis
International Nuclear Information System (INIS)
Burtis, C.A.
1986-10-01
Immunoassay is a very specific, sensitive, and widely applicable analytical technique. Recent advances in genetic engineering have led to the development of monoclonal antibodies, which further improve the specificity of immunoassays. Originally, radioisotopes were used to label the antigens and antibodies used in immunoassays. In the last decade, however, numerous types of immunoassays have been developed which utilize enzymes and fluorescent dyes as labels. Given the technical, safety, health, and disposal problems associated with using radioisotopes, immunoassays that utilize enzyme and fluorescent labels are rapidly replacing those using radioisotope labels. These newer techniques are equally sensitive, are easily automated, have stable reagents, and do not have a disposal problem. 6 refs., 1 fig., 2 tabs
Hybrid chemical and nondestructive-analysis technique
International Nuclear Information System (INIS)
Hsue, S.T.; Marsh, S.F.; Marks, T.
1982-01-01
A hybrid chemical/NDA technique has been applied at the Los Alamos National Laboratory to the assay of plutonium in ion-exchange effluents. Typical effluent solutions contain low concentrations of plutonium and high concentrations of americium. A simple trioctylphosphine oxide (TOPO) separation can remove 99.9% of the americium. The organic phase that contains the separated plutonium can be accurately assayed by monitoring the uranium L x-ray intensities
Data analysis techniques for gravitational wave observations
Indian Academy of Sciences (India)
Astrophysical sources of gravitational waves fall broadly into three categories: (i) transient and bursts, (ii) periodic or continuous wave and (iii) stochastic. Each type of source requires a different type of data analysis strategy. In this talk various data analysis strategies will be reviewed. Optimal filtering is used for extracting ...
Inverse modeling of cloud-aerosol interactions -- Part 1: Detailed response surface analysis
Partridge, D.G.; Vrugt, J.A.; Tunved, P.; Ekman, A.M.L.; Gorea, D.; Sooroshian, A.
2011-01-01
New methodologies are required to probe the sensitivity of parameters describing cloud droplet activation. This paper presents an inverse modeling-based method for exploring cloud-aerosol interactions via response surfaces. The objective function, containing the difference between the measured and
Inverse Kinematic Analysis and Evaluation of a Robot for Nondestructive Testing Application
Directory of Open Access Journals (Sweden)
Zongxing Lu
2015-01-01
The robot system has been utilized in the nondestructive testing field in recent years. However, only a few studies have focused on the application of ultrasonic testing to complex workpieces with a robot system. The inverse kinematics problem of the 6-DOF robot must be resolved before the ultrasonic testing task. A new, effective solution for curved-surface scanning with a 6-DOF robot system is proposed in this study. An arm-wrist separateness method is adopted to solve the inverse problem of the robot system, yielding eight solutions for the joint angles. The shortest-distance rule is adopted to optimize among these inverse kinematics solutions and identify the best joint-angle solution. Furthermore, 3D application software is developed to simulate ultrasonic trajectory planning for complex-shaped workpieces with a 6-DOF robot. Finally, the validity of the scanning method is verified based on the C-scan results of a workpiece with a curved surface, validating the developed robotic ultrasonic testing system. The proposed method provides an effective solution to this problem and would greatly benefit the development of industrial nondestructive testing.
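The shortest-distance rule for choosing among multiple candidate joint-angle solutions can be sketched as follows; the joint values are hypothetical, not this robot's.

```python
import numpy as np

# Sketch of the "shortest distance rule": among the candidate joint-angle
# solutions returned by the inverse kinematics, pick the one closest (in
# joint space) to the robot's current configuration.
def pick_solution(current, candidates):
    current = np.asarray(current)
    dists = [np.linalg.norm(np.asarray(c) - current) for c in candidates]
    return candidates[int(np.argmin(dists))]

current = [0.0, 0.1, 0.0, 0.0, 0.2, 0.0]      # current joint angles (rad)
candidates = [
    [3.1, 0.1, 0.0, 0.0, 0.2, 0.0],
    [0.1, 0.1, 0.1, 0.0, 0.2, 0.0],            # nearest to current pose
    [0.0, 2.0, -1.0, 1.0, 0.2, 0.0],
]
best = pick_solution(current, candidates)
assert best == candidates[1]
```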
Chromosomal changes in pathology and during evolution: analysis of pericentric inversions
International Nuclear Information System (INIS)
Dutrillaux, B.; Aurias, A.; Viegas-Pequignot, E.
1980-01-01
The great similarities between pericentric inversions observed in human pathology, those that occurred during evolution, and those radio-induced in human cells indicate that they do not occur at random. About one-third to one-fourth of these chromosomal rearrangements are capable of inducing abnormal progeny after aneusomy of recombination during meiosis [fr]
An inverse dynamics model for the analysis, reconstruction and prediction of bipedal walking
Koopman, Hubertus F.J.M.; Grootenboer, H.J.; de Jongh, Henk J.; Huijing, P.A.J.B.M.; de Vries, J.
1995-01-01
Walking is a constrained movement which may best be observed during the double stance phase, when both feet contact the floor. When analyzing a measured movement with an inverse dynamics model, a violation of these constraints will always occur due to measuring errors and deviations of the segments
Green, M; Wald, E R; Dashefsky, B; Barbadora, K; Wadowsky, R M
1996-01-01
Two nosocomial cases of Legionnaires' disease occurred in children. Legionella pneumophila serogroup 1 was isolated from both patients and 30 of 39 plumbing system sites in the hospital. The patient and hospital environmental isolates yielded identical field inversion gel electrophoretic patterns which differed from patterns observed with epidemiologically unrelated strains.
Hou, Zhenlong; Huang, Danian
2017-09-01
In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix, and other methods. To address the problems brought by big data in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from Vinton Dome, we obtain improved results, and the improved inversion algorithm is shown to be effective and feasible. The performance of the parallel algorithm we designed surpasses that of the other CUDA implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.
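The multi-GPU speedup and efficiency metrics used in such a performance analysis are simple ratios; the timings below are illustrative, not measurements from the paper.

```python
# Sketch of the scalability metrics: speedup S_n = T_1 / T_n and parallel
# efficiency E_n = S_n / n for a job run on n GPUs.
def speedup(t1, tn):
    return t1 / tn

def efficiency(t1, tn, n):
    return speedup(t1, tn) / n

t_serial = 3600.0   # seconds on one GPU (made-up)
t_4gpu = 1000.0     # seconds on four GPUs (made-up)
assert speedup(t_serial, t_4gpu) == 3.6
```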
Visualization techniques for malware behavior analysis
Grégio, André R. A.; Santos, Rafael D. C.
2011-06-01
Malware spread via Internet is a great security threat, so studying their behavior is important to identify and classify them. Using SSDT hooking we can obtain malware behavior by running it in a controlled environment and capturing interactions with the target operating system regarding file, process, registry, network and mutex activities. This generates a chain of events that can be used to compare them with other known malware. In this paper we present a simple approach to convert malware behavior into activity graphs and show some visualization techniques that can be used to analyze malware behavior, individually or grouped.
Techniques for Intelligence Analysis of Networks
National Research Council Canada - National Science Library
Cares, Jeffrey R
2005-01-01
...) there are significant intelligence analysis manifestations of these properties; and (4) a more satisfying theory of Networked Competition than currently exists for NCW/NCO is emerging from this research...
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by
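A minimal sketch of the two statistics (not the USGS/PEST implementation): composite scaled sensitivities from a model Jacobian and parameter values, and parameter correlation coefficients from the approximate parameter covariance.

```python
import numpy as np

# Illustrative sketch: CSS and PCC from a Jacobian J (observations x
# parameters) and parameter values p. Scalings and weights are simplified.
def css(J, p):
    dss = J * p                      # dimensionless scaled sensitivities
    return np.sqrt((dss ** 2).mean(axis=0))

def pcc(J):
    cov = np.linalg.inv(J.T @ J)     # parameter covariance (up to a scale)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

J = np.array([[1.0, 0.9], [0.8, 0.75], [1.1, 1.0]])  # nearly collinear columns
C = pcc(J)
# Strong interdependence: |PCC| close to 1 for the off-diagonal pair.
assert abs(C[0, 1]) > 0.95
```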
The Network Protocol Analysis Technique in Snort
Wu, Qing-Xiu
Network protocol analysis is the technical means, necessary for understanding captured data, of examining the packets a network sniffer collects. A network sniffer intercepts packets and reassembles the binary format of the original message content in order to obtain the information they contain. Based on the protocol specifications of the TCP/IP stack, the packets' protocol format and content are then restored at each protocol layer: the actual data transferred as well as the application tier.
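Restoring protocol fields from raw packet bytes, the core of the step described above, can be illustrated with a minimal IPv4 header parser; this is an illustrative sketch, not Snort code.

```python
import struct

# Illustrative sketch: recovering protocol fields from the raw binary of a
# packet, here the first fields of a minimal 20-byte IPv4 header.
def parse_ipv4_header(raw):
    version_ihl, tos, total_len = struct.unpack("!BBH", raw[:4])
    ttl, proto = struct.unpack("!BB", raw[8:10])
    src = ".".join(str(b) for b in raw[12:16])
    dst = ".".join(str(b) for b in raw[16:20])
    return {"version": version_ihl >> 4, "ihl": version_ihl & 0xF,
            "total_len": total_len, "ttl": ttl, "proto": proto,
            "src": src, "dst": dst}

# A hand-built header: TCP (protocol 6) from 10.0.0.1 to 10.0.0.2.
hdr = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1, 10, 0, 0, 2])
fields = parse_ipv4_header(hdr)
assert fields["version"] == 4 and fields["proto"] == 6
```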
Proportional Derivative Control with Inverse Dead-Zone for Pendulum Systems
Directory of Open Access Journals (Sweden)
José de Jesús Rubio
2013-01-01
A proportional derivative controller with inverse dead-zone is proposed for the control of pendulum systems. The proposed method has the characteristic that the inverse dead-zone cancels the pendulum dead-zone. Asymptotic stability of the proposed technique is guaranteed by Lyapunov analysis. Simulations of two pendulum systems show the effectiveness of the proposed technique.
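A minimal sketch of such a control law, with hypothetical gains and dead-zone widths; the paper's pendulum model and Lyapunov analysis are not reproduced.

```python
# Sketch of a PD control law whose output passes through an inverse
# dead-zone, so that an actuator dead-zone of the same width is cancelled.
# Gains and dead-zone widths below are illustrative assumptions.
def inverse_dead_zone(u, d_pos, d_neg):
    # Shift the command past the dead-zone edges so the plant sees u.
    if u > 0:
        return u + d_pos
    if u < 0:
        return u - d_neg
    return 0.0

def pd_with_idz(error, d_error, kp=20.0, kd=5.0, d_pos=0.3, d_neg=0.3):
    u = kp * error + kd * d_error      # proportional-derivative term
    return inverse_dead_zone(u, d_pos, d_neg)

# Positive tracking error: command exceeds the dead-zone edge by the PD term.
assert abs(pd_with_idz(0.1, 0.0) - 2.3) < 1e-9
```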
Directory of Open Access Journals (Sweden)
Tjahyo Nugroho Adji
2013-07-01
The results show that, firstly, the aquifer within the research area can be grouped into several aquifer systems (i.e. denudational hill, colluvial plain, alluvial plain, and beach ridges) from recharge to discharge, which generally have potential groundwater resources in terms of the depth and fluctuation of the groundwater table. Secondly, flownet analysis gives three flowpaths that are plausible to model in order to describe their hydrogeochemical reactions. Thirdly, the Saturation Indices (SI) analysis shows that there is a positive correlation between the mineral occurrence and composition and the value of SI from recharge to discharge. In addition, the Mass Balance Model indicates that dissolution and precipitation of aquifer minerals dominantly change the chemical composition along a flowpath, and the rate of mass transfer between two wells shows a discrepancy that depends on the percentage and nature of the aquifer minerals. Lastly, an interesting characteristic of the mass-balance chemical reactions is that the mineral present in the smallest amount (mmol/litre) is always the first to be totally reacted.
Uncertainty analysis technique for OMEGA Dante measurements
May, M. J.; Widmann, K.; Sorce, C.; Park, H.-S.; Schneider, M.
2010-10-01
The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
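The Monte Carlo parameter-variation idea can be sketched as follows; the unfold algorithm is replaced by a trivial stand-in, and the channel voltages and error sizes are made up.

```python
import numpy as np

# Simplified sketch: perturb each channel voltage with its one-sigma
# Gaussian error, push every perturbed set through a (trivially simplified,
# assumed) unfold, and take statistics of the resulting fluxes.
rng = np.random.default_rng(0)

def unfold(voltages):
    # Placeholder for the real unfold algorithm: flux ~ channel sum here.
    return voltages.sum()

volts = np.array([1.0, 0.8, 0.5, 0.2])   # measured channel voltages (made-up)
sigma = 0.05 * volts                      # one-sigma combined errors (made-up)

# One thousand test voltage sets, as in the abstract's procedure.
fluxes = np.array([unfold(rng.normal(volts, sigma)) for _ in range(1000)])
mean_flux, err_bar = fluxes.mean(), fluxes.std()
assert abs(mean_flux - volts.sum()) < 0.05   # recovers the unperturbed flux
```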
International Nuclear Information System (INIS)
Sellitto, P.; Del Frate, F.
2014-01-01
Atmospheric temperature profiles are inferred from passive satellite instruments using thermal infrared or microwave observations. Here we investigate the feasibility of retrieving height-resolved temperature information in the ultraviolet spectral region. The temperature dependence of the absorption cross sections of ozone in the Huggins band, in particular in the interval 320–325 nm, is exploited. We carried out a sensitivity analysis and demonstrated that non-negligible information on the temperature profile can be extracted from this small band. Starting from these results, we developed a neural network inversion algorithm, trained and tested with simulated nadir EnviSat-SCIAMACHY ultraviolet observations. The algorithm is able to retrieve the temperature profile with root mean square errors and biases comparable to existing retrieval schemes that use thermal infrared or microwave observations. This demonstrates, for the first time, the feasibility of temperature-profile retrieval from space-borne instruments operating in the ultraviolet. - Highlights: • A sensitivity analysis and an inversion scheme to retrieve temperature profiles from satellite UV observations (320–325 nm). • The exploitation of the temperature dependence of the absorption cross section of ozone in the Huggins band is proposed. • First demonstration of the feasibility of temperature-profile retrieval from satellite UV observations. • RMSEs and biases comparable with more established techniques involving TIR and MW observations
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in
Bayesian inversion of refraction seismic traveltime data
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow deriving a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test
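A heavily simplified, single-parameter Metropolis sketch of the Bayesian traveltime idea; the paper's Voronoi/triangulated parameterization and eikonal solver are not reproduced, and all numbers are illustrative.

```python
import numpy as np

# Toy Metropolis sampler: infer a single layer slowness s from traveltimes
# t_i = s * x_i observed with Gaussian noise, then summarize the posterior.
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 20)              # source-receiver offsets (km)
s_true, noise = 0.25, 0.01                  # slowness (s/km), data sigma
t_obs = s_true * x + rng.normal(0, noise, x.size)

def log_like(s):
    return -0.5 * np.sum((t_obs - s * x) ** 2) / noise ** 2

samples, s = [], 0.5                        # start far from the truth
for _ in range(5000):
    s_prop = s + rng.normal(0, 0.01)        # random-walk proposal
    if np.log(rng.random()) < log_like(s_prop) - log_like(s):
        s = s_prop                          # accept; otherwise keep s
    samples.append(s)

post = np.array(samples[1000:])             # discard burn-in
# Posterior mean and standard deviation give the estimate and its uncertainty.
assert abs(post.mean() - s_true) < 0.01
```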
Reliability analysis techniques for the design engineer
International Nuclear Information System (INIS)
Corran, E.R.; Witt, H.H.
1980-01-01
A fault tree analysis package is described that eliminates most of the housekeeping tasks involved in proceeding from the initial construction of a fault tree to the final stage of presenting a reliability analysis in a safety report. It is suitable for designers with relatively little training in reliability analysis and computer operation. Users can rapidly investigate the reliability implications of various options at the design stage and evolve a system which meets specified reliability objectives. Later independent review is thus unlikely to reveal major shortcomings necessitating modification and project delays. The package operates interactively, allowing the user to concentrate on the creative task of developing the system fault tree, which may be modified and displayed graphically. For preliminary analysis, system data can be derived automatically from a generic data bank. As the analysis proceeds, improved estimates of critical failure rates and test and maintenance schedules can be inserted. The computations are standard: identification of minimal cut-sets, estimation of reliability parameters, and ranking of the effect of the individual component failure modes and system failure modes on these parameters. The user can vary the fault trees and data on-line, and print selected data for preferred systems in a form suitable for inclusion in safety reports. A case history is given: that of the HIFAR containment isolation system. (author)
Nuclear reactor seismic safety analysis techniques
International Nuclear Information System (INIS)
Cummings, G.E.; Wells, J.E.; Lewis, L.C.
1979-04-01
In order to provide insights into the seismic safety requirements for nuclear power plants, a probabilistic based systems model and computational procedure have been developed. This model and computational procedure will be used to identify where data and modeling uncertainties need to be decreased by studying the effect of these uncertainties on the probability of radioactive release and the probability of failure of various structures, systems, and components. From the estimates of failure and release probabilities and their uncertainties the most sensitive steps in the seismic methodologies can be identified. In addition, the procedure will measure the uncertainty due to random occurrences, e.g. seismic event probabilities, material property variability, etc. The paper discusses the elements of this systems model and computational procedure, the event-tree/fault-tree development, and the statistical techniques to be employed
Analysis of Jordanian Cigarettes Using XRF Techniques
International Nuclear Information System (INIS)
Kullab, M.; Ismail, A.; AL-kofahi, M.
2002-01-01
Sixteen brands of Jordanian cigarettes were analyzed using X-ray fluorescence (XRF) techniques. These cigarettes were found to contain the elements Si, S, Cl, K, Ca, P, Ti, Mn, Fe, Cu, Zn, Br, Rb and Sr. The major elements, with concentrations of more than 1% by weight, were Cl, K and Ca. The elements with minor concentrations, between 0.1 and 1% by weight, were Si, S and P. The trace elements with concentrations below 0.1% by weight were Ti, Mn, Fe, Cu, Zn, Br, Rb and Sr. The toxicity of some trace elements, like Br, Rb, and Sr, which are present in some brands of Jordanian cigarettes, is discussed. (Author's) 24 refs., 1 tab., 1 fig
International Nuclear Information System (INIS)
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.
2014-01-01
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
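The inversion idea can be sketched with a toy bimodal distribution; projected gradient descent stands in for the non-negative least squares procedure, and the MINORIM program is not reproduced here.

```python
import numpy as np

# Toy sketch: recover a non-negative distribution of dipole moments from a
# noiseless magnetization curve M(H) = sum_j w_j L(m_j H), where
# L(x) = coth(x) - 1/x is the Langevin function. Non-negativity is enforced
# by projecting onto w >= 0 after each gradient step.
def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x      # valid for x > 0

H = np.linspace(0.1, 10.0, 50)             # applied fields (arb. units)
moments = np.array([1.0, 5.0])             # candidate dipole moments
w_true = np.array([0.7, 0.3])              # bimodal "size" distribution

K = langevin(np.outer(H, moments))         # kernel matrix, 50 x 2
M = K @ w_true                             # simulated magnetization data

w = np.zeros(2)
lr = 1.0 / np.linalg.norm(K.T @ K, 2)      # step size from largest eigenvalue
for _ in range(20000):
    w = np.maximum(w - lr * (K.T @ (K @ w - M)), 0.0)

assert np.allclose(w, w_true, atol=1e-3)   # distribution recovered
```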
Decentralized control using compositional analysis techniques
Kerber, F.; van der Schaft, A. J.
2011-01-01
Decentralized control strategies aim at achieving a global control target by means of distributed local controllers acting on individual subsystems of the overall plant. In this sense, decentralized control is a dual problem to compositional analysis where a global verification task is decomposed
Techniques and Applications of Urban Data Analysis
AlHalawani, Sawsan
2016-01-01
Digitization and characterization of urban spaces are essential components as we move to an ever-growing 'always connected' world. Accurate analysis of such digital urban spaces has become more important as we continue to get spatial and social
Evaluating Dynamic Analysis Techniques for Program Comprehension
Cornelissen, S.G.M.
2009-01-01
Program comprehension is an essential part of software development and software maintenance, as software must be sufficiently understood before it can be properly modified. One of the common approaches in getting to understand a program is the study of its execution, also known as dynamic analysis.
Sensor module design and forward and inverse kinematics analysis of 6-DOF sorting transferring robot
Zhou, Huiying; Lin, Jiajian; Liu, Lei; Tao, Meng
2017-09-01
To meet the demands of high-strength express sorting, it is significant to design a robot with multiple degrees of freedom that can sort and transfer. This paper uses an infrared sensor, a color sensor and a pressure sensor to receive external information, combines a motion path planned in advance with the feedback from the sensors, and writes the relevant control program. On this basis, we design a 6-DOF robot that can realize multi-angle seizing. In order to obtain the forward and inverse kinematics, this paper describes the coordinate directions and pose estimation by the D-H parameter method and a closed-form solution. Based on the solution of the forward and inverse kinematics, the geometric parameters of the links and the link parameters are optimized in terms of the application requirements. In this way, the robot can identify its route, sort and transfer.
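Forward kinematics with D-H parameters can be sketched as below; the link parameters here describe a hypothetical two-link planar arm, not the actual geometry of the 6-DOF sorting robot:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link in the standard D-H convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows):
    """Chain the per-link transforms into the base-to-end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# Hypothetical two-link planar arm, rows are (theta, d, a, alpha)
T = forward_kinematics([(np.pi / 2, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 0.0)])
```

With the first joint at 90° and the second at 0°, the end effector of this unit-length two-link arm sits at (0, 2, 0) in the base frame; inverse kinematics reverses this mapping, typically via a closed-form solution as in the paper.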
Energy Technology Data Exchange (ETDEWEB)
Castaneda M, V. H.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Leon P, A. A.; Hernandez P, C. F.; Espinoza G, J. G.; Ortiz R, J. M.; Vega C, H. R. [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico); Mendez, R. [CIEMAT, Departamento de Metrologia de Radiaciones Ionizantes, Laboratorio de Patrones Neutronicos, Av. Complutense 22, 28040 Madrid (Spain); Gallego, E. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Sousa L, M. A. [Comision Nacional de Energia Nuclear, Centro de Investigacion de Tecnologia Nuclear, Av. Pte. Antonio Carlos 6627, Pampulha, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
2016-10-15
The Taguchi methodology has proved to be highly efficient in solving inverse problems, in which the values of some parameters of a model must be obtained from the observed data. Intrinsic mathematical characteristics make a problem known as inverse. Inverse problems appear in many branches of science, engineering and mathematics, and researchers have used different techniques to solve them. Recently, the use of techniques based on Artificial Intelligence has been explored. This paper presents the use of a software tool based on generalized regression artificial neural networks for the solution of inverse problems with application in high-energy physics, specifically the problem of neutron spectrometry. The tool was developed in the MATLAB programming environment and offers a friendly, intuitive and easy-to-use interface. It solves the inverse problem involved in the reconstruction of the neutron spectrum from measurements made with a Bonner sphere spectrometric system. Given this information, the neural network is able to reconstruct the neutron spectrum with high performance and generalization capability. The tool does not require the end user to have extensive training or technical knowledge in the development or use of software, which facilitates its use for the resolution of inverse problems arising in several areas of knowledge. Artificial Intelligence techniques are well suited to solving inverse problems, given the characteristics of artificial neural networks and their topology; the tool has proved very useful, since the results generated by the artificial neural network require little time compared with other techniques and agree with the actual experimental data. (Author)
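The core of a generalized regression neural network (GRNN) is compact enough to sketch. The following is a minimal illustration of the regression scheme itself (a Gaussian-kernel weighted average, after Specht, 1991), not the MATLAB tool described in the abstract; the training data are placeholders:

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, sigma=0.1):
    """Generalized regression neural network (Specht, 1991): the output is a
    Gaussian-kernel weighted average of the stored training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # pattern layer: squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))      # kernel activations
    return (w @ Y_train) / np.sum(w)          # summation / output layers

# Toy inverse mapping: 2 "count rates" -> 3 "spectrum bins" (placeholder data)
X_train = np.array([[1.0, 0.2], [0.4, 0.9], [0.7, 0.7]])
Y_train = np.array([[0.1, 0.5, 0.4], [0.6, 0.3, 0.1], [0.3, 0.4, 0.3]])
y_hat = grnn_predict(X_train, Y_train, X_train[1], sigma=0.05)
```

A GRNN has no iterative training phase: it stores the training pairs and interpolates among them, with the single smoothing parameter sigma controlling the generalization, which is one reason it suits turnkey tools like the one described.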
International Nuclear Information System (INIS)
Yuan, Kai; Yang, Lijun; Du, Xiaoze; Yang, Yongping
2014-01-01
Highlights: • A new optical fiber monolith reactor model for CO2 reduction was developed. • Methanol concentration versus fiber location and operation parameters was obtained. • Reaction efficiency increases by 31.1% due to the four-fiber inverse layout. • With increasing distance between the fiber and the channel center, methanol concentration increases. • Methanol concentration increases as the vapor ratio and light intensity increase. - Abstract: Photocatalytic CO2 reduction appears promising for mitigating greenhouse gas emissions and producing renewable energy. A new model of photocatalytic CO2 reduction in an optical fiber monolith reactor with multiple inverse lights was developed in this study to improve the conversion of CO2 to CH3OH. A new light distribution equation was derived, by which the light distribution was modeled and analyzed. The variations of CH3OH concentration with fiber location and operation parameters were obtained by means of numerical simulation. The results show that the outlet CH3OH concentration is 31.1% higher than in the previous model, which is attributed to the four-fiber inverse layout. With the increase of the distance between the fiber and the monolith center, the average CH3OH concentration increases. The average CH3OH concentration also rises as the light input and water vapor percentage increase, but declines with increasing inlet velocity. The maximum conversion rate and quantum efficiency in the model are 0.235 μmol g⁻¹ h⁻¹ and 0.0177% respectively, both higher than those of the previous internally illuminated monolith reactor (0.16 μmol g⁻¹ h⁻¹ and 0.012%). The optical fiber monolith reactor layout with multiple inverse lights is recommended in the design of photocatalytic reactors for CO2 reduction.
Documentation and analysis of the Schlumberger interactive 1-D inversion program slumb
Energy Technology Data Exchange (ETDEWEB)
Sandberg, S.
1979-09-01
This computer program is designed to accept field data from a Schlumberger resistivity array and invert it in terms of a one-dimensional layered geoelectrical model. Because the inverse problem is non-linear, an initial guess model is required. This input model can be obtained by traditional curve matching, or by repeated use of a forward algorithm. This program was written in FORTRAN V and developed on a UNIVAC 1108 computer.
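One standard forward algorithm for such a layered geoelectrical model is the resistivity transform built bottom-up with the Pekeris recurrence; the sketch below is a generic illustration of that recursion and is not taken from the SLUMB source:

```python
import numpy as np

def resistivity_kernel(lam, rho, h):
    """Schlumberger resistivity transform T(lambda) of a 1-D layered earth,
    built bottom-up with the Pekeris recurrence.
    rho: layer resistivities (ohm-m), top layer first (last is a half-space);
    h: thicknesses (m) of the len(rho)-1 upper layers."""
    T = rho[-1]                          # recursion starts at the half-space
    for i in range(len(rho) - 2, -1, -1):
        th = np.tanh(lam * h[i])
        T = (T + rho[i] * th) / (1.0 + T * th / rho[i])
    return T

# Large lambda senses the top layer, small lambda the basement half-space
T_shallow = resistivity_kernel(10.0, [100.0, 10.0], [10.0])
T_deep = resistivity_kernel(1e-5, [100.0, 10.0], [10.0])
```

Apparent resistivities are then obtained by a Hankel transform of T(lambda) over the electrode spacing, usually via digital filter coefficients; repeated forward runs of this kind supply the initial guess the inversion requires.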
10th Australian conference on nuclear techniques of analysis. Proceedings
International Nuclear Information System (INIS)
1998-01-01
These proceedings contain abstracts and extended abstracts of 80 lectures and posters presented at the 10th Australian conference on nuclear techniques of analysis, hosted by the Australian National University in Canberra, Australia, 24-26 November 1997. The conference was divided into sessions on the following topics: ion beam analysis and its applications; surface science; novel nuclear techniques of analysis; characterization of thin films; electronic and optoelectronic materials formed by ion implantation; nanometre science and technology; plasma science and technology. A special session was dedicated to new nuclear techniques of analysis, future trends and developments. Separate abstracts were prepared for the individual presentations included in this volume.
10th Australian conference on nuclear techniques of analysis. Proceedings
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-06-01
These proceedings contain abstracts and extended abstracts of 80 lectures and posters presented at the 10th Australian conference on nuclear techniques of analysis, hosted by the Australian National University in Canberra, Australia, 24-26 November 1997. The conference was divided into sessions on the following topics: ion beam analysis and its applications; surface science; novel nuclear techniques of analysis; characterization of thin films; electronic and optoelectronic materials formed by ion implantation; nanometre science and technology; plasma science and technology. A special session was dedicated to new nuclear techniques of analysis, future trends and developments. Separate abstracts were prepared for the individual presentations included in this volume.
A methodological comparison of customer service analysis techniques
James Absher; Alan Graefe; Robert Burns
2003-01-01
Techniques used to analyze customer service data need to be studied. Two primary analysis protocols, importance-performance analysis (IP) and gap score analysis (GA), are compared side by side using data from two major customer service research projects. A central concern is what, if any, conclusions might differ due solely to the analysis...
Nuclear techniques for analysis of environmental samples
International Nuclear Information System (INIS)
1986-12-01
The main purposes of this meeting were to establish the state-of-the-art in the field, to identify new research and development that is required to provide an adequate framework for analysis of environmental samples and to assess needs and possibilities for international cooperation in problem areas. This technical report was prepared on the subject based on the contributions made by the participants. A separate abstract was prepared for each of the 9 papers
Application of activation techniques to biological analysis
International Nuclear Information System (INIS)
Bowen, H.J.M.
1981-01-01
Applications of activation analysis in the biological sciences are reviewed for the period of 1970 to 1979. The stages and characteristics of activation analysis are described, and its advantages and disadvantages enumerated. Most applications involve activation by thermal neutrons followed by either radiochemical or instrumental determination. Relatively little use has been made of activation by fast neutrons, photons, or charged particles. In vivo analyses are included, but those based on prompt gamma or x-ray emission are not. Major applications include studies of reference materials, and the elemental analysis of plants, marine biota, animal and human tissues, diets, and excreta. Relatively little use of it has been made in biochemistry, microbiology, and entomology, but it has become important in toxicology and environmental science. The elements most often determined are Ag, As, Au, Br, Ca, Cd, Cl, Co, Cr, Cs, Cu, Fe, Hg, I, K, Mn, Mo, Na, Rb, Sb, Sc, Se, and Zn, while few or no determinations of B, Be, Bi, Ga, Gd, Ge, H, In, Ir, Li, Nd, Os, Pd, Pr, Pt, Re, Rh, Ru, Te, Tl, or Y have been made in biological materials
Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method
Djebbi, Ramzi
2017-10-24
The main goal in seismic exploration is to identify the locations of hydrocarbon reservoirs and give insights on where to drill new wells. Therefore, estimating an Earth model that represents the right physics of the Earth's subsurface is crucial in identifying these targets. Recent seismic data, with long offsets and wide-azimuth features, are more sensitive to anisotropy. Accordingly, multiple anisotropic parameters need to be extracted from the data recorded on the surface to properly describe the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic data to anisotropy parameters. Then, I use a frequency-domain scattering integral approach to invert for the optimal parameterization. The scattering integral approach is based on the explicit computation of the sensitivity kernels. I present a new method to compute the traveltime sensitivity kernels for wave equation tomography using the unwrapped phase. I show that the new kernels are a better alternative to conventional cross-correlation/Rytov kernels. I also derive and analyze the sensitivity kernels for a transversely isotropic model with a vertical axis of symmetry. The kernels' structure, for various opening/scattering angles, highlights the trade-off regions between the parameters. For surface recorded data, I show that the normal move-out velocity vn, ƞ and δ parameterization is suitable for a simultaneous inversion of diving waves and reflections. Moreover, when seismic data are inverted hierarchically, the horizontal velocity vh, ƞ and ϵ is the parameterization with the least trade-off. In the frequency domain, the hierarchical inversion approach is naturally implemented using frequency continuation, which makes the vh, ƞ and ϵ parameterization attractive. I formulate the multi-parameter inversion using the
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters along with an appendix containing background material the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
New analytical techniques for cuticle chemical analysis
International Nuclear Information System (INIS)
Schulten, H.R.
1994-01-01
1) The analytical methodology of pyrolysis-gas chromatography/mass spectrometry (Py-GC/MS) and direct pyrolysis-mass spectrometry (Py-MS) using soft ionization by high electric fields (field ionization, FI) is briefly described. Recent advances of Py-GC/MS and Py-FIMS for the analyses of complex organic matter such as plant materials, humic substances, dissolved organic matter in water (DOM) and soil organic matter (SOM) in agricultural and forest soils are given to illustrate the potential and limitations of the applied methods. 2) Novel applications of Py-GC/MS and Py-MS in combination with conventional analytical data in an integrated, chemometric approach to investigate the dynamics of plant lipids are reported. This includes multivariate statistical investigations on maturation, senescence, humus genesis, and environmental damage in spruce ecosystems. 3) The focal point is the author's integrated investigations on emission-induced changes of selected conifer plant constituents. Pattern recognition of Py-MS data of desiccated spruce needles provides a method for distinguishing needles damaged in different ways and determining the cause. Spruce needles were collected from both controls and trees treated with sulphur dioxide (acid rain), nitrogen dioxide, and ozone under controlled conditions. Py-MS and chemometric data evaluation are employed to characterize and classify leaves and their epicuticular waxes. Preliminary mass spectrometric evaluations of isolated cuticles of different plants such as spruce, ivy, holly, and philodendron, as well as ivy cuticles treated in vivo with air pollutants such as surfactants and pesticides, are given. (orig.)
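A typical first step in such chemometric pattern recognition is principal component analysis of the spectra. The sketch below uses synthetic "spectra" with a single elevated marker peak in the damaged group, not actual Py-MS data:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Scores of mean-centered rows on the leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Synthetic "mass spectra": two needle groups differing in one marker peak
rng = np.random.default_rng(0)
control = rng.normal(10.0, 0.5, (20, 50))   # 20 spectra x 50 m/z channels
damaged = rng.normal(10.0, 0.5, (20, 50))
damaged[:, 7] += 5.0                        # elevated marker ion intensity
scores = pca_scores(np.vstack([control, damaged]))
```

The two groups separate along the first principal component, which is the basis for the classification of differently damaged needles described above.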
A technique for human error analysis (ATHEANA)
Energy Technology Data Exchange (ETDEWEB)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others]
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge- base was developed which describes the links between performance shaping factors and resulting unsafe actions.
A technique for human error analysis (ATHEANA)
International Nuclear Information System (INIS)
Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.
1996-05-01
Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge- base was developed which describes the links between performance shaping factors and resulting unsafe actions
International Nuclear Information System (INIS)
Sharma, Pavan K.; Gera, B.; Ghosh, A.K.; Kushwaha, H.S.
2010-01-01
Scalar dispersion in the atmosphere is an important area in which different approaches are followed in the development of good analytical models. Analyses based on Computational Fluid Dynamics (CFD) codes offer an opportunity for model development based on first principles of physics, and hence such models have an edge over existing models. Both forward and backward calculation methods are being developed for atmospheric dispersion around NPPs at BARC. Forward modeling methods, which describe the atmospheric transport from sources to receptors, use forward-running transport and dispersion models or computational fluid dynamics models which are run many times, and the resulting dispersion field is compared to observations from multiple sensors. Backward or inverse modeling methods use only one model run in the reverse direction from the receptors to estimate the upwind sources. Inverse modeling methods include adjoint and tangent linear models, Kalman filters, variational data assimilation, and neural networks. The present paper is aimed at developing a new approach in which identified specific signatures at the receptor points form the basis for source estimation or inversion. This approach is expected to reduce the large transient data sets to smaller, meaningful data sets; in effect, it reduces the inherently transient data set to a time-independent mean data set. Forward computations were carried out with a CFD code for various cases to generate a large set of data to train the ANN. Specific signature analysis was carried out to find the parameters of interest for ANN training, such as peak concentration, time to reach peak concentration, and time to fall; the ANN was trained with these data, and the source strength and location were predicted from the ANN. The inverse problem was solved using the ANN approach for long-range atmospheric dispersion. An illustration of the application of a CFD code for atmospheric dispersion studies for a hypothetical case is also included in the paper. (author)
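The signature reduction described above (peak concentration, time to peak, time to fall) can be sketched as follows; the Gaussian puff is a synthetic stand-in for a CFD-generated concentration record at a receptor:

```python
import numpy as np

def signature_features(t, c, frac=0.5):
    """Reduce a transient concentration record to a few signature values:
    the peak concentration, the time of the peak, and the time at which the
    record first falls to frac * peak after the peak."""
    i_peak = int(np.argmax(c))
    c_peak, t_peak = c[i_peak], t[i_peak]
    after = np.where(c[i_peak:] <= frac * c_peak)[0]
    t_fall = t[i_peak + after[0]] if after.size else np.nan
    return c_peak, t_peak, t_fall

# Synthetic receptor record: Gaussian puff passing the sensor location
t = np.linspace(0.0, 100.0, 1001)
c = 3.0 * np.exp(-((t - 40.0) ** 2) / (2.0 * 5.0 ** 2))
c_peak, t_peak, t_fall = signature_features(t, c)
```

Feature vectors of this kind, computed at several receptors, would then form the time-independent training inputs for the ANN, with source strength and location as the targets.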
A case study of the sensitivity of forecast skill to data and data analysis techniques
Baker, W. E.; Atlas, R.; Halem, M.; Susskind, J.
1983-01-01
A series of experiments have been conducted to examine the sensitivity of forecast skill to various data and data analysis techniques for the 0000 GMT case of January 21, 1979. These include the individual components of the FGGE observing system, the temperatures obtained with different satellite retrieval methods, and the method of vertical interpolation between the mandatory pressure analysis levels and the model sigma levels. It is found that NESS TIROS-N infrared retrievals seriously degrade a rawinsonde-only analysis over land, resulting in a poorer forecast over North America. Less degradation in the 72-hr forecast skill at sea level and some improvement at 500 mb is noted, relative to the control with TIROS-N retrievals produced with a physical inversion method which utilizes a 6-hr forecast first guess. NESS VTPR oceanic retrievals lead to an improved forecast over North America when added to the control.
Development of chemical analysis techniques: pt. 3
International Nuclear Information System (INIS)
Kim, K.J.; Chi, K.Y.; Choi, G.C.
1981-01-01
For the purpose of determining trace rare earths, a spectrofluorimetric method has been studied. Except for Ce and Tb, the fluorescence intensities are not sufficient to allow satisfactory analysis. Complexing agents such as tungstate and hexafluoroacetylacetone should be employed to increase the fluorescence intensities. As a preliminary experiment for the separation of the individual rare earth elements and uranium, the distribution coefficients (% S) were obtained on Dowex 50W against HCl concentration by a batch method. These % S data were utilized to obtain elution curves. The % S data showed a minimum at around 4 M HCl. To understand this previously known phenomenon, the adsorption of Cl⁻ on Dowex 50W was examined as a function of HCl concentration and found to be decreasing while the % S of the rare earths was increasing. It is interpreted that Cl⁻ and rare earth ions move into the resin phase separately and that the charges and charge densities of these ions are responsible for the different % S curves. Dehydration appears to play an important role in the upturn of the % S curves at higher HCl concentrations.
Emotional Freedom Techniques for Anxiety: A Systematic Review With Meta-analysis.
Clond, Morgan
2016-05-01
Emotional Freedom Technique (EFT) combines elements of exposure and cognitive therapies with acupressure for the treatment of psychological distress. Randomized controlled trials retrieved by literature search were assessed for quality using the criteria developed by the American Psychological Association's Division 12 Task Force on Empirically Validated Treatments. As of December 2015, 14 studies (n = 658) met the inclusion criteria. Results were analyzed using an inverse variance weighted meta-analysis. The pre-post effect size for the EFT treatment group was 1.23 (95% confidence interval, 0.82-1.64; p < 0.001). EFT treatment demonstrated a significant decrease in anxiety scores, even when accounting for the effect size of the control treatment. However, too few data are available comparing EFT to standard-of-care treatments such as cognitive behavioral therapy, and further research is needed to establish the relative efficacy of EFT against established protocols.
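An inverse variance weighted (fixed-effect) meta-analysis can be sketched as below; the per-study effect sizes and variances here are illustrative placeholders, not the values from the 14 included trials:

```python
import numpy as np

def inverse_variance_pool(effects, variances):
    """Fixed-effect meta-analysis: each study is weighted by 1/variance, so
    more precise studies contribute more to the pooled estimate."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))            # standard error of the pooled effect
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-study pre-post effect sizes and their variances
pooled, lo, hi = inverse_variance_pool([1.1, 0.9, 1.6], [0.04, 0.09, 0.16])
```

The 95% confidence interval reported in the abstract is produced in exactly this way from the pooled standard error; a random-effects variant would add a between-study variance term to each weight.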
NUMERICAL ANALYSIS OF AN INVERSE PROBLEM ORIGINATED IN PHENOMENON OF POLLUTION AIR URBAN
Directory of Open Access Journals (Sweden)
Aníbal Coronel
2016-12-01
This paper presents a calibration study of a two-dimensional mathematical model for the problem of urban air pollution. It is mainly assumed that air pollution is affected by wind convection, diffusion and chemical reactions of pollutants. Consequently, a convection-diffusion-reaction equation is obtained as the direct problem. In the inverse problem, the determination of the diffusion coefficient is analyzed, assuming that an observation of the pollutants is available at a finite time. To solve it numerically, the finite volume method is used, the least squares function is considered as the cost function, and the gradient is calculated with the sensitivity method.
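A minimal one-dimensional analogue of this calibration, assuming a steady convection-diffusion problem and a single scalar diffusion coefficient (the paper's setting is two-dimensional and transient, with a sensitivity-based gradient), might look like:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def solve_steady(D, v=1.0, n=50):
    """Steady 1-D convection-diffusion v*u' - D*u'' = 0 on (0,1),
    u(0)=1, u(1)=0, central differences on n interior nodes."""
    dx = 1.0 / (n + 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        A[i, i] = 2.0 * D / dx**2
        if i > 0:
            A[i, i - 1] = -D / dx**2 - v / (2.0 * dx)
        if i < n - 1:
            A[i, i + 1] = -D / dx**2 + v / (2.0 * dx)
    b[0] = (D / dx**2 + v / (2.0 * dx)) * 1.0   # boundary value u(0)=1
    return np.linalg.solve(A, b)

# "Observed" pollutant profile generated with a known diffusivity
D_true = 0.1
u_obs = solve_steady(D_true)

# Least-squares cost over the unknown diffusion coefficient, then minimize
cost = lambda D: float(np.sum((solve_steady(D) - u_obs) ** 2))
res = minimize_scalar(cost, bounds=(0.01, 1.0), method="bounded")
```

Here a bounded scalar minimizer replaces the gradient computed by the sensitivity method, since only one parameter is fitted; the structure (forward solve inside a least-squares cost) is the same.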
International Nuclear Information System (INIS)
AL-Yahia, Omar S.; Albati, Mohammad A.; Park, Jonghark; Chae, Heetaek; Jo, Daeseong
2013-01-01
Highlights: • Transient analyses of a slow and fast LOFA were investigated. • A reactor kinetic and thermal hydraulic coupled model was developed. • Based on force balance, the flow rate during flow inversion was determined. • Flow inversion in a hot channel occurred earlier than in an average channel. • Two temperature peaks were observed during both slow and fast LOFA. - Abstract: Transient analyses of the IAEA 10 MW MTR reactor are investigated during a fast and slow Loss of Flow Accident (LOFA) with a neutron kinetic and thermal hydraulic coupling model. A spatial-dependent thermal hydraulic technique is adopted for analyzing the local thermal hydraulic parameters and hotspot location during a flow inversion. The flow rate through the channel is determined in terms of a balance between driving and preventing forces. Friction and buoyancy forces act as resistance of the flow before a flow inversion while buoyancy force becomes the driving force after a flow inversion. By taking into account the buoyancy effect to determine the flow rate, the difference in the flow inversion time between hot and average channels is investigated: a flow inversion occurs earlier in the hot channel than in an average channel. Furthermore, the movement of the hotspot location before and after a flow inversion is investigated for a slow and fast LOFA. During a flow inversion, two temperature peaks are observed: (1) the first temperature peak is at the initiation of the LOFA, and (2) the second temperature peak is when a flow inversion occurs. The maximum temperature of the cladding is found at the second temperature peak for both LOFA analyses, and is lower than the saturation temperature
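The force-balance idea behind the flow-inversion timing can be illustrated with a toy channel-flow model; the coefficients below are arbitrary placeholders, not the IAEA 10 MW MTR values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow_after_pump_trip(w0=1.0, f=0.6, b=0.15, inertia=2.0, t_end=60.0):
    """Toy force balance for the (downward) channel flow w(t) after a pump trip:
    inertia * dw/dt = -f*w*|w| - b.  Friction (f) always opposes the flow,
    while the constant buoyancy term (b) pushes upward, so the flow decays,
    inverts, and settles at natural circulation w = -sqrt(b/f)."""
    rhs = lambda t, w: [(-f * w[0] * abs(w[0]) - b) / inertia]
    return solve_ivp(rhs, (0.0, t_end), [w0], max_step=0.1)

sol = flow_after_pump_trip()
w, t = sol.y[0], sol.t
t_inversion = t[np.argmax(w <= 0.0)]     # first time the flow reverses sign
```

A hot channel has a larger buoyancy term (higher coolant temperature), so in this picture its flow crosses zero sooner, consistent with the earlier inversion in the hot channel reported above.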
International Nuclear Information System (INIS)
Kaneko Mikami, Wakako; Kazama, Toshiki; Sato, Hirotaka
2013-01-01
The purpose of this study was to compare two fat suppression methods in contrast-enhanced MR imaging of breast cancer at 3.0 T: the two-point Dixon method and the frequency selective inversion method. Forty female patients with breast cancer underwent contrast-enhanced three-dimensional T1-weighted MR imaging at 3.0 T. Both the two-point Dixon method and the frequency selective inversion method were applied. Quantitative analyses of the residual fat signal-to-noise ratio and the contrast noise ratio (CNR) of lesion-to-breast parenchyma, lesion-to-fat, and parenchyma-to-fat were performed. Qualitative analyses of the uniformity of fat suppression, image contrast, and the visibility of breast lesions and axillary metastatic adenopathy were performed. The signal-to-noise ratio was significantly lower in the two-point Dixon method (P<0.001). All CNR values were significantly higher in the two-point Dixon method (P<0.001 and P=0.001, respectively). According to qualitative analysis, both the uniformity of fat suppression and image contrast with the two-point Dixon method were significantly higher (P<0.001 and P=0.002, respectively). Visibility of breast lesions and metastatic adenopathy was significantly better in the two-point Dixon method (P<0.001 and P=0.03, respectively). The two-point Dixon method suppressed the fat signal more potently and improved contrast and visibility of the breast lesions and axillary adenopathy. (author)
Contributions to fuzzy polynomial techniques for stability analysis and control
Pitarch Pérez, José Luis
2014-01-01
The present thesis employs fuzzy-polynomial control techniques in order to improve the stability analysis and control of nonlinear systems. Initially, it reviews the more extended techniques in the field of Takagi-Sugeno fuzzy systems, such as the more relevant results about polynomial and fuzzy polynomial systems. The basic framework uses fuzzy polynomial models by Taylor series and sum-of-squares techniques (semidefinite programming) in order to obtain stability guarantees...
LiFAP-based PVdF-HFP microporous membranes by phase-inversion technique with Li/LiFePO4 cell
Energy Technology Data Exchange (ETDEWEB)
Aravindan, V.; Vickraman, P. [Gandhigram Rural University, Department of Physics, Gandhigram (India); Sivashanmugam, A.; Thirunakaran, R.; Gopukumar, S. [Central Electrochemical Research Institute, Electrochemical Energy Systems Division, Karaikudi (India)
2009-12-15
Polyvinylidene fluoride-hexafluoropropylene-based (PVdF-HFP-based) gel and composite microporous membranes (GPMs and CPMs) were prepared by the phase-inversion technique in the presence of 10 wt% AlO(OH)n nanoparticles. The prepared membranes were gelled with 0.5 M LiPF3(CF2CF3)3 (lithium fluoroalkylphosphate, LiFAP) in EC:DEC (1:1 v/v) and subjected to various characterizations. The AC impedance study shows that CPMs exhibit higher conductivity than GPMs. Mechanical stability measurements on these systems reveal that CPMs exhibit a Young's modulus higher than that of the bare membranes and GPMs, and a drastic improvement in elongation at break upon addition of the nanoparticles was also noted. The transition of the host from the α to the β phase after loading of the nanosized filler was confirmed by XRD and Raman studies. Physico-chemical properties of the membranes, such as liquid uptake, porosity, surface area, and activation energy, were calculated and the results are summarized. A Li/CPM/LiFePO4 coin cell was fabricated and its cycling performance evaluated at C/10 rate; it delivered discharge capacities of 157 and 148 mAh g⁻¹ for the first and tenth cycles, respectively. (orig.)
International Nuclear Information System (INIS)
Chang, C.J.; Anghaie, S.
1998-01-01
A numerical experimental technique is presented to find an optimum solution to an underdetermined inverse gamma-ray transport problem involving the nondestructive assay of radionuclide inventory in a nuclear waste drum. The method introduced is an optimization scheme based on performing a large number of numerical simulations that account for the counting statistics, the nonuniformity of the source distribution, and the heterogeneous density of the self-absorbing medium inside the waste drum. The simulation model uses forward projection and backward reconstruction algorithms. The forward projection algorithm uses a randomly selected source distribution and a first-flight kernel method to calculate external detector responses. The backward reconstruction algorithm uses the conjugate gradient with nonnegative constraint or the maximum likelihood expectation maximization method to reconstruct the source distribution based on calculated detector responses. Total source activity is determined by summing the reconstructed activity of each computational grid. By conducting 10,000 numerical simulations, the error bound and the associated confidence level for the prediction of total source activity are determined. The accuracy and reliability of the simulation model are verified by performing a series of experiments in a 208-l waste barrel. Density heterogeneity is simulated by using different materials distributed in 37 egg-crate-type compartments simulating a vertical segment of the barrel. Four orthogonal detector positions are used to measure the emerging radiation field from the distributed source. Results of the performed experiments are in full agreement with the estimated error and the confidence level, which are predicted by the simulation model
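The maximum likelihood expectation maximization (MLEM) update used in the backward reconstruction can be sketched as below, with a hypothetical small system matrix standing in for the first-flight kernel projector:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum likelihood expectation maximization for emission data:
    x_j <- x_j * sum_i(A_ij * y_i / (A x)_i) / sum_i(A_ij).
    The multiplicative update keeps the reconstruction nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # total sensitivity to each cell
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

# Hypothetical response matrix: 4 detector positions viewing 4 compartments
A = np.eye(4) + 0.1
x_true = np.array([5.0, 0.5, 2.0, 1.0])     # compartment activities (arb. units)
y = A @ x_true                              # noiseless detector responses
x_est = mlem(A, y)
```

Total activity is then estimated as `x_est.sum()`, mirroring the summation over computational grid cells described in the abstract; with noisy counts, the iteration is typically stopped early or regularized.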
Mishin, V. V.; Mishin, V. M.; Karavaev, Yu.; Han, J. P.; Wang, C.
2016-07-01
We report on novel features of the saturation process of the polar cap magnetic flux and Poynting flux into the magnetosphere from the solar wind during three superstorms. In addition to the well-known effect of the interplanetary electric (Esw) and southward magnetic (interplanetary magnetic field (IMF) Bz) fields, we found that the saturation depends also on the solar wind ram pressure Pd. By means of the magnetogram inversion technique and a global MHD numerical model Piecewise Parabolic Method with a Lagrangian Remap, we explore the dependence of the magnetopause standoff distance on ram pressure and the southward IMF. Unlike earlier studies, in the considered superstorms both Pd and Bz achieve extreme values. As a result, we show that the compression rate of the dayside magnetosphere decreases with increasing Pd and the southward Bz, approaching very small values for extreme Pd ≥ 15 nPa and Bz ≤ -40 nT. This dependence suggests that finite compressibility of the magnetosphere controls saturation of superstorms.
Sensitivity analysis for elastic full-waveform inversion in VTI media
Kamath, Nishant
2014-08-05
Multiparameter full-waveform inversion (FWI) is generally nonunique, and the results are strongly influenced by the geometry of the experiment and the type of recorded data. Studying the sensitivity of different subsets of data to the model parameters may help in choosing an optimal acquisition design, inversion workflow, and parameterization. Here, we derive the Fréchet kernel for FWI of multicomponent data from a 2D VTI (transversely isotropic with a vertical symmetry axis) medium. The kernel is obtained by linearizing the elastic wave equation using the Born approximation and employing the asymptotic Green's function. The amplitude of the kernel (‘radiation pattern’) yields the angle-dependent energy scattered by a perturbation in a certain model parameter. The perturbations are described in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. The background medium is assumed to be homogeneous and isotropic, which allows us to obtain simple expressions for the radiation patterns corresponding to all four velocities. These patterns help explain the FWI results for multicomponent transmission data generated for Gaussian anomalies in the Thomsen parameters inserted into a homogeneous VTI medium.
Sensitivity analysis for elastic full-waveform inversion in VTI media
Kamath, Nishant; Tsvankin, Ilya
2014-01-01
Multiparameter full-waveform inversion (FWI) is generally nonunique, and the results are strongly influenced by the geometry of the experiment and the type of recorded data. Studying the sensitivity of different subsets of data to the model parameters may help in choosing an optimal acquisition design, inversion workflow, and parameterization. Here, we derive the Fréchet kernel for FWI of multicomponent data from a 2D VTI (transversely isotropic with a vertical symmetry axis) medium. The kernel is obtained by linearizing the elastic wave equation using the Born approximation and employing the asymptotic Green's function. The amplitude of the kernel (‘radiation pattern’) yields the angle-dependent energy scattered by a perturbation in a certain model parameter. The perturbations are described in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. The background medium is assumed to be homogeneous and isotropic, which allows us to obtain simple expressions for the radiation patterns corresponding to all four velocities. These patterns help explain the FWI results for multicomponent transmission data generated for Gaussian anomalies in the Thomsen parameters inserted into a homogeneous VTI medium.
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
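Stabilized linear inversion of the kind performed by TREND/INVERT can be sketched as zeroth-order Tikhonov damping; this is a generic illustration, since the abstract does not specify which stabilization the code actually uses:

```python
import numpy as np

def tikhonov_invert(G, d, alpha=1e-2):
    """Stabilized linear inversion of d = G @ m.

    Minimizes ||G m - d||^2 + alpha^2 ||m||^2, the simplest form of
    stabilized linear inverse theory; alpha trades data fit against
    model-norm damping.
    """
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha**2 * np.eye(n), G.T @ d)
```

For consistent, well-conditioned data a small `alpha` recovers the ordinary least-squares solution; larger values suppress the oscillatory solutions that make unregularized gravity inversion unstable.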
An operator expansion technique for path integral analysis
International Nuclear Information System (INIS)
Tsvetkov, I.V.
1995-01-01
A new method of path integral analysis in the framework of a power series technique is presented. The method is based on the operator expansion of an exponential. A regular procedure to calculate the correction terms is found. (orig.)
Search for the top quark using multivariate analysis techniques
International Nuclear Information System (INIS)
Bhat, P.C.
1994-08-01
The D0 collaboration is developing top search strategies using multivariate analysis techniques. We report here on applications of the H-matrix method to the eμ channel and neural networks to the e+jets channel
Inverse problems in the Bayesian framework
International Nuclear Information System (INIS)
Calvetti, Daniela; Somersalo, Erkki; Kaipio, Jari P
2014-01-01
The history of Bayesian methods dates back to the original works of Reverend Thomas Bayes and Pierre-Simon Laplace: the former laid down some of the basic principles on inverse probability in his classic article ‘An essay towards solving a problem in the doctrine of chances’ that was read posthumously to the Royal Society in 1763. Laplace, on the other hand, in his ‘Memoirs on inverse probability’ of 1774 developed the idea of updating beliefs and wrote down the celebrated Bayes’ formula in the form we know today. Although not yet identified as a framework for investigating inverse problems, Laplace used the formalism very much in the spirit it is used today in the context of inverse problems, e.g., in his study of the distribution of comets. With the evolution of computational tools, Bayesian methods have become increasingly popular in all fields of human knowledge in which conclusions need to be drawn based on incomplete and noisy data. Needless to say, inverse problems, almost by definition, fall into this category. Systematic work for developing a Bayesian inverse problem framework can arguably be traced back to the 1980s (the original first edition being published by Elsevier in 1987), although articles on Bayesian methodology applied to inverse problems, in particular in geophysics, had appeared much earlier. Today, as testified by the articles in this special issue, the Bayesian methodology as a framework for considering inverse problems has gained a lot of popularity, and it has integrated very successfully with many traditional inverse problems ideas and techniques, providing novel ways to interpret and implement traditional procedures in numerical analysis, computational statistics, signal analysis and data assimilation. The range of applications where the Bayesian framework has been fundamental goes from geophysics, engineering and imaging to astronomy, life sciences and economics, and continues to grow. There is no question that Bayesian
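Bayes' formula, in the notation conventional for inverse problems (x the unknown, y the observed data), reads:

```latex
\pi(x \mid y) \;=\; \frac{\pi(y \mid x)\,\pi_{\mathrm{pr}}(x)}{\pi(y)}
\;\propto\; \pi(y \mid x)\,\pi_{\mathrm{pr}}(x),
```

i.e., the posterior is proportional to the likelihood times the prior, with the evidence π(y) acting only as a normalizing constant.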
Neutron activation analysis: an emerging technique for conservation/preservation
International Nuclear Information System (INIS)
Sayre, E.V.
1976-01-01
The diverse applications of neutron activation in analysis, preservation, and documentation of art works and artifacts are described with illustrations for each application. The uses of this technique to solve problems of attribution and authentication, to reveal the inner structure and composition of art objects, and, in some instances to recreate details of the objects are described. A brief discussion of the theory and techniques of neutron activation analysis is also included
Development of evaluation method for software safety analysis techniques
International Nuclear Information System (INIS)
Huang, H.; Tu, W.; Shih, C.; Chen, C.; Yang, W.; Yih, S.; Kuo, C.; Chen, M.
2006-01-01
Full text: Following the massive adoption of digital Instrumentation and Control (I and C) systems in nuclear power plants (NPPs), various Software Safety Analysis (SSA) techniques are used to evaluate NPP safety when adopting a digital I and C system, and thereby to reduce risk to an acceptable level. However, each technique has its specific advantages and disadvantages. If two or more techniques can be incorporated in a complementary fashion, the resulting SSA combination is more acceptable. Consequently, if proper evaluation criteria are available, the analyst can choose an appropriate technique combination to perform the analysis on the basis of available resources. This research evaluated the software safety analysis techniques currently applicable, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis, and then determined indexes in view of their characteristics, including dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. These indexes may help decision makers and software safety analysts choose the best SSA combination and arrange their own software safety plans. With this proposed method, analysts can evaluate various SSA combinations for a specific purpose. According to the case-study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (without transfer rate) combination is not competitive, owing to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are in completeness and complexity.
Research on digital multi-channel pulse height analysis techniques
International Nuclear Information System (INIS)
Xiao Wuyun; Wei Yixiang; Ai Xianyun; Ao Qi
2005-01-01
Multi-channel pulse height analysis techniques are developing in the direction of digitalization. Based on digital signal processing techniques, digital multi-channel analyzers are characterized by powerful pulse processing ability, high throughput, improved stability and flexibility. This paper analyzes key techniques of digital nuclear pulse processing. With MATLAB software, main algorithms are simulated, such as trapezoidal shaping, digital baseline estimation, digital pole-zero/zero-pole compensation, poles and zeros identification. The preliminary general scheme of digital MCA is discussed, as well as some other important techniques about its engineering design. All these lay the foundation of developing homemade digital nuclear spectrometers. (authors)
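Trapezoidal shaping, the first algorithm listed, can be sketched in a simplified moving-sum form. The sketch below assumes a step-like input pulse; real digital MCAs use the recursive form with pole-zero correction for exponentially decaying preamplifier pulses:

```python
import numpy as np

def trapezoidal_shaper(v, rise=4, flat=2):
    """Shape a step-like sampled pulse into a trapezoid.

    Difference of delayed copies of the input followed by an
    accumulator: a unit step becomes a trapezoid with the given
    rise length, flat-top length, and symmetric fall, whose
    flat-top height equals the step amplitude.
    """
    v = np.asarray(v, dtype=float)
    k, l = rise, rise + flat

    def delayed(m):  # v delayed by m samples, zero-padded at the start
        return np.concatenate([np.zeros(m), v[:len(v) - m]])

    d = v - delayed(k) - delayed(l) + delayed(k + l)
    return np.cumsum(d) / k   # accumulate and normalize by rise length
```

Because the flat-top height equals the pulse amplitude, the shaper output can be sampled directly into the pulse-height histogram that forms the spectrum.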
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Development of environmental sample analysis techniques for safeguards
International Nuclear Information System (INIS)
Magara, Masaaki; Hanzawa, Yukiko; Esaka, Fumitaka
1999-01-01
JAERI has been developing environmental sample analysis techniques for safeguards and preparing a clean chemistry laboratory with clean rooms. Methods to be developed are a bulk analysis and a particle analysis. In the bulk analysis, Inductively-Coupled Plasma Mass Spectrometer or Thermal Ionization Mass Spectrometer are used to measure nuclear materials after chemical treatment of sample. In the particle analysis, Electron Probe Micro Analyzer and Secondary Ion Mass Spectrometer are used for elemental analysis and isotopic analysis, respectively. The design of the clean chemistry laboratory has been carried out and construction will be completed by the end of March, 2001. (author)
Key-space analysis of double random phase encryption technique
Monaghan, David S.; Gopinathan, Unnikrishnan; Naughton, Thomas J.; Sheridan, John T.
2007-09-01
We perform a numerical analysis on the double random phase encryption/decryption technique. The key-space of an encryption technique is the set of possible keys that can be used to encode data using that technique. In the case of a strong encryption scheme, many keys must be tried in any brute-force attack on that technique. Traditionally, designers of optical image encryption systems demonstrate only how a small number of arbitrary keys cannot decrypt a chosen encrypted image in their system. However, this type of demonstration does not discuss the properties of the key-space nor refute the feasibility of an efficient brute-force attack. To clarify these issues we present a key-space analysis of the technique. For a range of problem instances we plot the distribution of decryption errors in the key-space indicating the lack of feasibility of a simple brute-force attack.
Nuclear techniques for bulk and surface analysis of materials
International Nuclear Information System (INIS)
D'Agostino, M.D.; Kamykowski, E.A.; Kuehne, F.J.; Padawer, G.M.; Schneid, E.J.; Schulte, R.L.; Stauber, M.C.; Swanson, F.R.
1978-01-01
A review is presented summarizing several nondestructive bulk and surface analysis nuclear techniques developed in the Grumman Research Laboratories. Bulk analysis techniques include 14-MeV-neutron activation analysis and accelerator-based neutron radiography. The surface analysis techniques include resonant and non-resonant nuclear microprobes for the depth profile analysis of light elements (H, He, Li, Be, C, N, O and F) in the surface of materials. Emphasis is placed on the description and discussion of the unique nuclear microprobe analytical capabilities of immediate importance to a number of current problems facing materials specialists. The resolution and contrast of neutron radiography were illustrated with an operating heat pipe system. The figure shows that the neutron radiograph has a resolution of better than 0.04 cm with sufficient contrast to indicate Freon 21 on the inner capillaries of the heat pipe and pooling of the liquid at the bottom. (T.G.)
Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha
2017-04-01
Randomized controlled trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized controlled trials may not always be generalizable. The aim was to determine whether statin use and statin type (lipophilic or hydrophilic) improve long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from first admission for HF (index admission) and followed up to time of all-cause, cardiovascular, and HF mortality or end of study. A multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of a marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin use compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
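The stabilized inverse-probability-of-treatment weights used in marginal structural models of this kind can be sketched as follows. All names here are illustrative; in the study the propensity scores would come from a model of statin use given the measured covariates:

```python
import numpy as np

def iptw_weights(treated, propensity):
    """Stabilized IPTW weights: Pr(T=t) / Pr(T=t | covariates).

    treated    : 0/1 array of treatment indicators (e.g. statin use)
    propensity : estimated Pr(T=1 | covariates) for each subject
    """
    p_treat = treated.mean()                      # marginal Pr(T=1)
    num = np.where(treated == 1, p_treat, 1 - p_treat)
    den = np.where(treated == 1, propensity, 1 - propensity)
    return num / den
```

Reweighting the cohort by these weights creates a pseudo-population in which treatment is independent of the measured confounders, so a weighted outcome model estimates the marginal treatment effect.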
Seismic Imaging and Velocity Analysis Using a Pseudo Inverse to the Extended Born Approximation
Alali, Abdullah A.
2018-01-01
the correct model. The most commonly used technique is differential semblance optimization (DSO), which depends on applying an image extension and penalizing the energy in the non-physical extension. However, studies show that the conventional DSO gradient
Error Analysis in the Joint Event Location/Seismic Calibration Inverse Problem
National Research Council Canada - National Science Library
Rodi, William L
2006-01-01
This project is developing new mathematical and computational techniques for analyzing the uncertainty in seismic event locations, as induced by observational errors and errors in travel-time models...
Meta-analysis in a nutshell: Techniques and general findings
DEFF Research Database (Denmark)
Paldam, Martin
2015-01-01
The purpose of this article is to introduce the technique and main findings of meta-analysis to the reader, who is unfamiliar with the field and has the usual objections. A meta-analysis is a quantitative survey of a literature reporting estimates of the same parameter. The funnel showing...
48 CFR 15.404-1 - Proposal analysis techniques.
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Proposal analysis techniques. 15.404-1 Section 15.404-1 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION... assistance of other experts to ensure that an appropriate analysis is performed. (6) Recommendations or...
NMR and modelling techniques in structural and conformation analysis
Energy Technology Data Exchange (ETDEWEB)
Abraham, R J [Liverpool Univ. (United Kingdom)
1994-12-31
The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co{sup III} porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.
Application of nuclear analysis techniques in ancient chinese porcelain
International Nuclear Information System (INIS)
Feng Songlin; Xu Qing; Feng Xiangqian; Lei Yong; Cheng Lin; Wang Yanqing
2005-01-01
Ancient ceramics were fired from porcelain clay and contain a wealth of provenance information and age characteristics. Analyzing ancient ceramics with modern analytical methods is the scientific foundation of the study of Chinese porcelain. According to the properties of nuclear analysis techniques, their functions and applications are discussed. (authors)
SWOT ANALYSIS-MANAGEMENT TECHNIQUES TO STREAMLINE PUBLIC BUSINESS MANAGEMENT
Rodica IVORSCHI
2012-01-01
SWOT analysis is one of the most important management techniques for understanding the strategic position of an organization. The objective of SWOT analysis is to recommend strategies that ensure the best alignment between the internal and external environment; choosing the right strategy can benefit the organization by matching its strengths to opportunities, minimizing risks, and eliminating weaknesses.
SWOT ANALYSIS-MANAGEMENT TECHNIQUES TO STREAMLINE PUBLIC BUSINESS MANAGEMENT
Directory of Open Access Journals (Sweden)
Rodica IVORSCHI
2012-06-01
Full Text Available SWOT analysis is one of the most important management techniques for understanding the strategic position of an organization. The objective of SWOT analysis is to recommend strategies that ensure the best alignment between the internal and external environment; choosing the right strategy can benefit the organization by matching its strengths to opportunities, minimizing risks, and eliminating weaknesses.
Kinematics analysis technique fouettes 720° classic ballet.
Directory of Open Access Journals (Sweden)
Li Bo
2011-07-01
Full Text Available Athletic practice has shown that the more complex the element, the more difficult the technique of the exercise. The fouetté at 720° is one of the most difficult types of fouetté: its execution depends on a high level of technique during the performer's rotation. Performing this element requires not only good physical condition but also the dancer's mastery of correct technique. On the basis of the corresponding kinematic theory, this study presents a qualitative analysis and quantitative assessment of fouettés at 720° performed by leading Chinese dancers. The analysis used stereoscopic imaging together with theoretical analysis.
Palmer, Margarita; Gomis, Damià; Del Mar Flexas, Maria; Jordà, Gabriel; Naveira-Garabato, Alberto; Jullion, Loic; Tsubouchi, Takamasa
2010-05-01
The ESASSI-08 oceanographic cruise carried out in January 2008 was the most significant milestone of the ESASSI project. ESASSI is the Spanish component of the Synoptic Antarctic Shelf-Slope Interactions (SASSI) study, one of the core projects of the International Polar Year. Hydrographical and biochemical (oxygen, CFCs, nutrients, chlorophyll content, alkalinity, pH, DOC) data were obtained along 11 sections in the South Scotia Ridge (SSR) region, between Elephant and South Orkney Islands. One of the aims of the ESASSI project is to determine the northward outflow of cold and ventilated waters from the Weddell Sea into the Scotia Sea. For that purpose, accurate estimation of the mass, heat, salt, and oxygen transports over the Ridge is required. An initial analysis of transports across the different sections was first obtained from CTD and ADCP data. The following step has been the application of an inverse method, in order to obtain a better estimation of the net flow for the different water masses present in the region. The set of property-conservation equations considered by the inverse model includes mass, heat and salinity fluxes. The "box" is delimited by the sections along the northern flank of the SSR, between Elephant Island and 50°W, the southern flank of the Ridge, between 51.5°W and 50°W, the 50°W meridian and a diagonal line between Elephant Island and 51.5°W, 61.75°S. Results show that the initial calculations of transports suffered from a significant volume imbalance, due to the inherent errors of ship-ADCP data, the complicated topography and the presence of strong tidal currents in some sections. We present the post-inversion property transports across the rim of the box (and their error bars) for the different water masses.
Luciani, S.; LeNiliot, C.
2008-11-01
Two-phase and boiling flow instabilities are complex, owing to phase change and the existence of several interfaces. To fully understand the high heat transfer potential of boiling flows in microscale geometries, it is vital to quantify these transfers. To perform this task, an experimental device has been designed to observe flow patterns. Analysis is performed using an inverse method that allows us to estimate the local heat transfers while boiling occurs inside a microchannel. In our configuration, direct measurement would impair the accuracy of the sought heat transfer coefficient, because thermocouples implanted on the minichannel surface would disturb the established flow. In this communication, we solve a 3D inverse heat conduction problem (IHCP), which consists of estimating, from experimental temperature measurements, the surface temperature and the surface heat flux in a minichannel during convective boiling under several gravity levels (g, 1g, 1.8g). The considered IHCP is formulated as a mathematical optimization problem and solved using the boundary element method (BEM).
Thorne, Lawrence R.
2011-01-01
I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...
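The null-space approach can be sketched with NumPy's SVD. The sign convention below (product species entered with negated composition columns, so that all balanced coefficients come out positive) is an illustrative assumption, as is the example reaction:

```python
import numpy as np

def balance(composition):
    """Balance a chemical equation via the matrix null space (kernel).

    composition : element-by-species matrix; each column gives the
    element counts of one species (products negated by convention).
    The balanced coefficients span the (one-dimensional) null space,
    found here from the last right singular vector of the SVD.
    """
    A = np.asarray(composition, dtype=float)
    _, _, vt = np.linalg.svd(A)            # full SVD: vt is square
    x = vt[-1]                             # null-space basis vector
    x = x / np.min(np.abs(x[np.abs(x) > 1e-10]))  # smallest |coef| -> 1
    if x[0] < 0:                           # fix the overall sign
        x = -x
    return np.round(x, 6)

# CH4 + 2 O2 -> CO2 + 2 H2O
# columns: CH4, O2, CO2, H2O; rows: C, H, O (product columns negated)
A = [[1, 0, -1,  0],
     [4, 0,  0, -2],
     [0, 2, -2, -1]]
```

Calling `balance(A)` recovers the familiar coefficients 1, 2, 1, 2; the point, as in the abstract, is that the kernel computation finds them without ever setting up "undetermined coefficient" equations by hand.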
Iino, Yoichi; Kojima, Takeji
2012-08-01
This study investigated the validity of the top-down approach of inverse dynamics analysis in fast and large rotational movements of the trunk about three orthogonal axes of the pelvis for nine male collegiate students. The maximum angles of the upper trunk relative to the pelvis were approximately 47°, 49°, 32°, and 55° for lateral bending, flexion, extension, and axial rotation, respectively, with maximum angular velocities of 209°/s, 201°/s, 145°/s, and 288°/s, respectively. The pelvic moments about the axes during the movements were determined using the top-down and bottom-up approaches of inverse dynamics and compared between the two approaches. Three body segment inertial parameter sets were estimated using anthropometric data sets (Ae et al., Biomechanism 11, 1992; De Leva, J Biomech, 1996; Dumas et al., J Biomech, 2007). The root-mean-square errors of the moments and the absolute errors of the peaks of the moments were generally smaller than 10 N·m. The results suggest that the pelvic moment in motions involving fast and large trunk movements can be determined with a certain level of validity using the top-down approach in which the trunk is modeled as two or three rigid-link segments.
Nuclear analysis techniques as a component of thermoluminescence dating
Energy Technology Data Exchange (ETDEWEB)
Prescott, J.R.; Hutton, J.T.; Habermehl, M.A. [Adelaide Univ., SA (Australia); Van Moort, J. [Tasmania Univ., Sandy Bay, TAS (Australia)
1996-12-31
In luminescence dating, an age is found by first measuring dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example of a site where radioactive disequilibrium is significant and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.
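The age determination described above is usually written as:

```latex
\mathrm{Age} = \frac{D_e}{\dot{D}}
```

where D{sub e} is the accumulated (equivalent) dose since the event being dated and Ḋ is the annual dose rate, the quantity to which the nuclear analyses of minor and trace elements contribute.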
Nuclear analysis techniques as a component of thermoluminescence dating
Energy Technology Data Exchange (ETDEWEB)
Prescott, J R; Hutton, J T; Habermehl, M A [Adelaide Univ., SA (Australia); Van Moort, J [Tasmania Univ., Sandy Bay, TAS (Australia)
1997-12-31
In luminescence dating, an age is found by first measuring dose accumulated since the event being dated, then dividing by the annual dose rate. Analyses of minor and trace elements performed by nuclear techniques have long formed an essential component of dating. Results from some Australian sites are reported to illustrate the application of nuclear techniques of analysis in this context. In particular, a variety of methods for finding dose rates are compared, an example of a site where radioactive disequilibrium is significant and a brief summary is given of a problem which was not resolved by nuclear techniques. 5 refs., 2 tabs.
Application of pattern recognition techniques to crime analysis
Energy Technology Data Exchange (ETDEWEB)
Bender, C.F.; Cox, L.A. Jr.; Chappell, G.A.
1976-08-15
The initial goal was to evaluate the capabilities of current pattern recognition techniques when applied to existing computerized crime data. Performance was to be evaluated both in terms of the system's capability to predict crimes and to optimize police manpower allocation. A relation was sought to predict the crime's susceptibility to solution, based on knowledge of the crime type, location, time, etc. The preliminary results of this work are discussed. They indicate that automatic crime analysis involving pattern recognition techniques is feasible, and that efforts to determine optimum variables and techniques are warranted. 47 figures (RWR)
Wieczorek, Piotr; Ligor, Magdalena; Buszewski, Bogusław
Electromigration techniques, including capillary electrophoresis (CE), are widely used for the separation and identification of compounds present in food products. These techniques may also be considered alternative and complementary to commonly used analytical techniques, such as high-performance liquid chromatography (HPLC) or gas chromatography (GC). Applications of CE to the determination of high-molecular-weight compounds, such as polyphenols (including flavonoids), pigments, vitamins, and food additives (preservatives, antioxidants, sweeteners, artificial pigments), are presented. Also presented are methods developed for the determination of proteins and peptides composed of amino acids, which are basic components of food products. Other substances such as carbohydrates, nucleic acids, biogenic amines, natural toxins, and contaminants including pesticides and antibiotics are discussed. The possibility of applying CE in food control laboratories, where analyses of the composition of food and food products are conducted, is of great importance. The CE technique may be used for the control of technological processes in the food industry and for the identification of numerous compounds present in food. Owing to its numerous advantages, the CE technique is successfully used in routine food analysis.
Review and classification of variability analysis techniques with clinical applications.
Bravi, Andrea; Longtin, André; Seely, Andrew J E
2011-10-10
Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, promote a shared vocabulary that would improve the exchange of ideas, and the analyses of the results between different studies. We conclude with challenges for the evolving science of variability analysis.
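Two of the simplest measures in the statistical domain of this classification can be sketched for a heart-rate (RR-interval) series; the series below is illustrative:

```python
import numpy as np

def variability_measures(rr):
    """Basic time-domain variability measures for an RR-interval
    series (ms): SDNN (overall variability, the sample standard
    deviation) and RMSSD (beat-to-beat variability, the root mean
    square of successive differences).
    """
    rr = np.asarray(rr, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr)**2))
    return sdnn, rmssd
```

Note how RMSSD applies a mathematical transform of the time-series (first differencing) before the statistic is taken, the calculation step the review emphasizes.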
Review and classification of variability analysis techniques with clinical applications
2011-01-01
Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, promote a shared vocabulary that would improve the exchange of ideas, and the analyses of the results between different studies. We conclude with challenges for the evolving science of variability analysis. PMID:21985357
Inverse radiative transfer problems in two-dimensional heterogeneous media
International Nuclear Information System (INIS)
Tito, Mariella Janette Berrocal
2001-01-01
The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two-dimensional Cartesian geometry. The Levenberg-Marquardt method has been used for the solution of the inverse problem of estimating the internal source and the absorption and scattering coefficients. (author)
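The second step described above, least-squares estimation of transport coefficients from detector data, can be sketched with SciPy's Levenberg-Marquardt driver. The forward model below is a toy attenuation expression standing in for a discrete-ordinates solve, and all parameter values are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical forward model: detector signal as a function of an
# absorption coefficient mu_a and a scattering coefficient mu_s
# (a toy stand-in for a full discrete-ordinates transport solve).
def forward(params, x):
    mu_a, mu_s = params
    return np.exp(-(mu_a + mu_s) * x) + 0.5 * mu_s * x * np.exp(-mu_a * x)

x = np.linspace(0.1, 2.0, 20)
true_params = np.array([0.8, 0.3])
measured = forward(true_params, x)   # noise-free synthetic "measurements"

# Residuals between model prediction and measurements
def residuals(params):
    return forward(params, x) - measured

# Levenberg-Marquardt minimisation starting from a rough guess
fit = least_squares(residuals, x0=[0.5, 0.5], method='lm')
print(fit.x)  # recovered [mu_a, mu_s]
```

With noise-free data the fit recovers the generating coefficients; in practice regularization or repeated runs from different starting points would be needed for noisy measurements.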
Automated thermal mapping techniques using chromatic image analysis
Buck, Gregory M.
1989-01-01
Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
Using Machine Learning Techniques in the Analysis of Oceanographic Data
Falcinelli, K. E.; Abuomar, S.
2017-12-01
Acoustic Doppler Current Profilers (ADCPs) are oceanographic tools capable of collecting large amounts of current profile data. Using unsupervised machine learning techniques such as principal component analysis, fuzzy c-means clustering, and self-organizing maps, patterns and trends in an ADCP dataset are found. Cluster validity algorithms such as visual assessment of cluster tendency and clustering index are used to determine the optimal number of clusters in the ADCP dataset. These techniques prove to be useful in analysis of ADCP data and demonstrate potential for future use in other oceanographic applications.
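The first of the techniques named above, principal component analysis, can be sketched in a few lines of NumPy; the profile matrix here is a synthetic stand-in for ADCP data, with invented dimensions and mode structure:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for an ADCP current-profile matrix:
# 200 profiles x 10 depth bins, dominated by two depth modes.
depth_modes = rng.normal(size=(2, 10))
scores = rng.normal(size=(200, 2)) * np.array([5.0, 2.0])
profiles = scores @ depth_modes + 0.1 * rng.normal(size=(200, 10))

# Principal component analysis via SVD of the mean-centred data
centred = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance fraction per component
pc_scores = centred @ Vt[:2].T           # project onto first two PCs

print(explained[:2].sum())  # first two PCs capture nearly all variance
```

The `pc_scores` matrix is what a clustering step (e.g. fuzzy c-means or a self-organizing map) would then operate on.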
Energy Technology Data Exchange (ETDEWEB)
Fraguela Collar, Andres; Oliveros Oliveros, Jose J.; Ivanovich Grebennikov, Alexandre [Benemerita Universidad Autonoma de Puebla, Puebla (Mexico)
2001-04-01
Techniques of the potential theory have been used to analyze the properties of the operator where the sources of current produced by electric activity in the cerebral cortex is associated with the measurements on the scalp of the electric potential generated by these sources. A medium conductor model which take in account the circle convolution of the brain has been used to prove the uniqueness of solution of the inverse problem of recuperation of activity cortical sources from electroencephalographic measurement on the scalp. This result of uniqueness is very important because we can to use algorithms of regularization. An other state the problem is presented to elaborate numerical solutions of the inverse problem using a set of discrete measurement. The stability of algorithms is showed in some examples. [Spanish] Tecnicas de la teoria de potencial han sido utilizadas para analizar las propiedades del operador que a las fuentes de corriente asociadas a la actividad electrica de las neuronas en la corteza cerebral, le hace corresponder la medicion de potencial electrico generado por dichas fuentes sobre el cuero cabelludo. Un modelo de medio conductor que toma en consideracion las circonvoluciones del cerebro ha sido utilizado para probar la unidad de solucion del problema inverso de recuperacion de las fuentes de actividad cortical a partir de las mediciones electroencefalograficas. Este resultado de unidad es fundamental ya que nos permite aplicar los metodos de regularizacion. Es presentado otro planteamiento del problema que nos permite construir soluciones numericas del problema inverso usando los datos de entrada discretos. La estabilidad de los algoritmos es ilustrada en algunos ejemplos numericos.
Wang, Tiejun; Franz, Trenton E.; Yue, Weifeng; Szilagyi, Jozsef; Zlotnik, Vitaly A.; You, Jinsheng; Chen, Xunhong; Shulski, Martha D.; Young, Aaron
2016-02-01
Despite the importance of groundwater recharge (GR), its accurate estimation still remains one of the most challenging tasks in the field of hydrology. In this study, with the help of inverse modeling, long-term (6 years) soil moisture data at 34 sites from the Automated Weather Data Network (AWDN) were used to estimate the spatial distribution of GR across Nebraska, USA, where significant spatial variability exists in soil properties and precipitation (P). To ensure the generality of this study and its potential broad applications, data from public domains and literature were used to parameterize the standard Hydrus-1D model. Although observed soil moisture differed significantly across the AWDN sites mainly due to the variations in P and soil properties, the simulations were able to capture the dynamics of observed soil moisture under different climatic and soil conditions. The inferred mean annual GR from the calibrated models varied over three orders of magnitude across the study area. To assess the uncertainties of the approach, estimates of GR and actual evapotranspiration (ETa) from the calibrated models were compared to the GR and ETa obtained from other techniques in the study area (e.g., remote sensing, tracers, and regional water balance). Comparison clearly demonstrated the feasibility of inverse modeling and large-scale (>104 km2) soil moisture monitoring networks for estimating GR. In addition, the model results were used to further examine the impacts of climate and soil on GR. The data showed that both P and soil properties had significant impacts on GR in the study area with coarser soils generating higher GR; however, different relationships between GR and P emerged at the AWDN sites, defined by local climatic and soil conditions. In general, positive correlations existed between annual GR and P for the sites with coarser-textured soils or under wetter climatic conditions. With the rapidly expanding soil moisture monitoring networks around the
Directory of Open Access Journals (Sweden)
Joel Sereno
2010-01-01
Full Text Available Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles so as to move the end effector of a robot to a desired orientation more efficiently. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable angle joint were considered. The results confirmed that having more movable parts, such as prismatic joints and changing angles, increases the effective reach of a robotic hand.
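For the planar two-link case, the Cartesian-to-joint-angle conversion described above has a closed-form solution. A sketch (link lengths and target point are arbitrary choices, and only the elbow-down branch is returned):

```python
import math

# Analytic inverse kinematics for a planar two-link arm (link lengths l1, l2):
# given an end-effector target (x, y), recover joint angles (theta1, theta2).
def two_link_ik(x, y, l1=1.0, l2=1.0):
    d2 = x * x + y * y
    # Law of cosines for the elbow angle
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target outside reachable workspace")
    theta2 = math.acos(c2)                     # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Forward kinematics, used here to verify the round trip
def forward(theta1, theta2, l1=1.0, l2=1.0):
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

t1, t2 = two_link_ik(1.2, 0.8)
print(forward(t1, t2))  # round-trips to (1.2, 0.8)
```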
Lambrakos, S. G.
2018-04-01
Inverse thermal analysis of Ti-6Al-4V friction stir welds is presented that demonstrates the application of a methodology using numerical-analytical basis functions and temperature-field constraint conditions. This analysis provides a parametric representation of friction-stir-weld temperature histories that can be adopted as input data to computational procedures for prediction of solid-state phase transformations and mechanical response. These parameterized temperature histories can be used for inverse thermal analysis of friction stir welds having process conditions similar to those considered here. Case studies are presented for inverse thermal analysis of friction stir welds that use three-dimensional constraint conditions on calculated temperature fields, which are associated with experimentally measured transformation boundaries and weld-stir-zone cross sections.
National Research Council Canada - National Science Library
Neander, David
2002-01-01
.... Subsequent analysis included forward acoustic modeling to calculate predicted raypaths. Observed arrivals were then associated with modeled raypaths, extracting observed travel times over the 17 months time series...
Revealing stacking sequences in inverse opals by microradian X-ray diffraction
Sinitskii, A.; Abramova, V.; Grigorieva, N.; Grigoriev, S.; Snigirev, A.; Byelov, D.; Petukhov, A.V.
2010-01-01
We present the results of the structural analysis of inverse opal photonic crystals by microradian X-ray diffraction. Inverse opals based on different oxide materials (TiO2, SiO2 and Fe2O3) were fabricated by templating polystyrene colloidal crystal films grown by the vertical deposition technique.
Windows forensic analysis toolkit advanced analysis techniques for Windows 7
Carvey, Harlan
2012-01-01
Now in its third edition, Harlan Carvey has updated "Windows Forensic Analysis Toolkit" to cover Windows 7 systems. The primary focus of this edition is on analyzing Windows 7 systems and on processes using free and open-source tools. The book covers live response, file analysis, malware detection, timeline, and much more. The author presents real-life experiences from the trenches, making the material realistic and showing the why behind the how. New to this edition, the companion and toolkit materials are now hosted online. This material consists of electronic printable checklists, cheat sheets, free custom tools, and walk-through demos. This edition complements "Windows Forensic Analysis Toolkit, 2nd Edition", (ISBN: 9781597494229), which focuses primarily on XP. It includes complete coverage and examples on Windows 7 systems. It contains Lessons from the Field, Case Studies, and War Stories. It features companion online material, including electronic printable checklists, cheat sheets, free custom tools, ...
Energy Technology Data Exchange (ETDEWEB)
Shashi, V.; Allinson, P.S.; Golden, W.L.; Kelly, T.E. [Univ. of Virginia, Charlottesville, VA (United States)
1994-09-01
Recent studies in yeast have shown that telomeres rather than centromeres lead in chromosome movement just prior to meiosis and may have a role in recombination. Cytological studies of meiosis in Drosophila and mice have shown that in pericentric inversion heterozygotes there is lack of loop formation, with recombination seen only outside the inversion. In a family with Duchenne muscular dystrophy (DMD) we recognized that only affected males and carrier females had a pericentric X chromosome inversion (inv X(p11.4;q26)). Since the short arm inversion breakpoint was proximal to the DMD locus, it could not be implicated in the mutational event causing DMD. There was no history of infertility, recurrent miscarriages or liveborn unbalanced females to suggest there was recombination within the inversion. We studied 22 members over three generations to understand the pattern of meiotic recombination between the normal and the inverted X chromosome. In total, 17 meioses involving the inverted X chromosome in females were studied by cytogenetic analysis and 16 CA repeat polymorphisms along the length of the X chromosome. Results: (a) There was complete concordance between the segregation of the DMD mutation and the inverted X chromosome. (b) On DNA analysis, there was complete absence of recombination within the inverted segment. We also found no recombination at the DMD locus. Recombination was seen only at Xp22 and Xq27-28. (c) Recombination was seen in the same individual at both Xp22 and Xq27-28 without recombination otherwise. Conclusions: (1) Pericentric X inversions reduce the genetic map length of the chromosome, with the physical map length being normal. (2) Meiotic X chromosome pairing in this family is initiated at the telomeres. (3) Following telomeric pairing in pericentric X chromosome inversions, there is inhibition of recombination within the inversion and adjacent regions.
Conference on Techniques of Nuclear and Conventional Analysis and Applications
International Nuclear Information System (INIS)
2012-01-01
Full text: With their wide scope, particularly in the areas of environment, geology, mining, industry and the life sciences, analytical techniques are of great importance in both fundamental and applied research. The Conference on Techniques for Nuclear and Conventional Analysis and Applications (TANCA) is registered in the national strategy of opening the universities and national research centers to their local, national and international environments. This conference aims to: promote nuclear and conventional analytical techniques; contribute to the creation of synergy between the different players involved in these techniques, including universities, research organizations, regulatory authorities, economic operators, NGOs and others; inform and educate potential users about the performance of these techniques; strengthen exchanges and links between researchers, industry and policy makers; implement a program of inter-laboratory comparison between Moroccan laboratories on the one hand, and their foreign counterparts on the other; and contribute to the research training of doctoral students and postdoctoral scholars. Given the relevance and importance of issues related to the environment and its impact on cultural heritage, this fourth edition of TANCA is devoted to the application of conventional and nuclear analytical techniques to questions tied to the environment and its impact on cultural heritage.
Kim, Seong-Eun; Roberts, John A; Eisenmenger, Laura B; Aldred, Booth W; Jamil, Osama; Bolster, Bradley D; Bi, Xiaoming; Parker, Dennis L; Treiman, Gerald S; McNally, J Scott
2017-02-01
Carotid artery imaging is important in the clinical management of patients at risk for stroke. Carotid intraplaque hemorrhage (IPH) presents an important diagnostic challenge. 3D magnetization prepared rapid acquisition gradient echo (MPRAGE) has been shown to accurately image carotid IPH; however, this sequence can be limited due to motion- and flow-related artifact. The purpose of this work was to develop and evaluate an improved 3D carotid MPRAGE sequence for IPH detection. We hypothesized that a radial-based k-space trajectory sequence such as "Stack of Stars" (SOS) incorporated with inversion recovery preparation would offer reduced motion sensitivity and more robust flow suppression by oversampling of central k-space. A total of 31 patients with carotid disease (62 carotid arteries) were imaged at 3T magnetic resonance imaging (MRI) with 3D IR-prep Cartesian and SOS sequences. Image quality was determined between SOS and Cartesian MPRAGE in 62 carotid arteries using t-tests and multivariable linear regression. Kappa analysis was used to determine interrater reliability. In all, 25 among 62 carotid plaques had carotid IPH by consensus from the reviewers on SOS compared to 24 on Cartesian sequence. Image quality was significantly higher with SOS compared to Cartesian (mean 3.74 vs. 3.11, P SOS acquisition yielded sharper image features with less motion (19.4% vs. 45.2%, P SOS (kappa = 0.89), higher than that of Cartesian (kappa = 0.84). By minimizing flow and motion artifacts and retaining high interrater reliability, the SOS MPRAGE has important advantages over Cartesian MPRAGE in carotid IPH detection. 1 J. Magn. Reson. Imaging 2017;45:410-417. © 2016 International Society for Magnetic Resonance in Medicine.
DEFF Research Database (Denmark)
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben
2017-01-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov Chain Monte Carlo......-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed and by decreasing the acquisitions ranges, the correlations...
Yang, Hongxin; Su, Fulin
2018-01-01
We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moment in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences by SURF. Different from traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. Then, we employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points to link the image sequences but also reduce the error matching pairs. After that, the target centroid is detected by regular moment. Consequently, a cost function based on correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
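The regular-moment centroid step described above is straightforward to sketch; the `frame` below is a toy stand-in for an ISAR image:

```python
import numpy as np

# Target centroid from regular (raw) image moments:
# m_pq = sum over pixels of x^p * y^q * I(x, y); centroid = (m10/m00, m01/m00).
def centroid(image):
    ys, xs = np.indices(image.shape)
    m00 = image.sum()
    m10 = (xs * image).sum()
    m01 = (ys * image).sum()
    return m10 / m00, m01 / m00

# Toy "ISAR frame": a bright 3x3 patch centred at column 12, row 7
frame = np.zeros((16, 24))
frame[6:9, 11:14] = 1.0
cx, cy = centroid(frame)
print(cx, cy)  # → 12.0 7.0
```

Tracking the centroid across an image sequence gives the motion information that the correlation-based cost function then analyzes.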
The application of value analysis techniques for complex problems
International Nuclear Information System (INIS)
Chiquelin, W.R.; Cossel, S.C.; De Jong, V.J.; Halverson, T.W.
1986-01-01
This paper discusses the application of the Value Analysis technique to the transuranic package transporter (TRUPACT). A team representing five different companies or organizations with diverse technical backgrounds was formed to analyze and recommend improvements. The results were a 38% system-wide savings, if incorporated, and a shipping container which is volumetrically and payload efficient as well as user friendly. The Value Analysis technique is a proven tool widely used in many diverse areas in both the government and the private sector. Value Analysis uses functional diagramming of a piece of equipment or process to discretely identify every facet of the item being analyzed. A standard set of questions is then asked: What is it? What does it do? What does it cost? What else will do the task? And what would that cost? Using logic and a disciplined approach, Value Analysis arrives at a design that performs the necessary functions at high quality and the lowest overall cost
A comparative analysis of soft computing techniques for gene prediction.
Goel, Neelam; Singh, Shailendra; Aseri, Trilok Chand
2013-07-01
The rapid growth of genomic sequence data for both human and nonhuman species has made analyzing these sequences, especially predicting genes in them, very important and is currently the focus of many research efforts. Beside its scientific interest in the molecular biology and genomics community, gene prediction is of considerable importance in human health and medicine. A variety of gene prediction techniques have been developed for eukaryotes over the past few years. This article reviews and analyzes the application of certain soft computing techniques in gene prediction. First, the problem of gene prediction and its challenges are described. These are followed by different soft computing techniques along with their application to gene prediction. In addition, a comparative analysis of different soft computing techniques for gene prediction is given. Finally some limitations of the current research activities and future research directions are provided. Copyright © 2013 Elsevier Inc. All rights reserved.
Comparing dynamical systems concepts and techniques for biomechanical analysis
van Emmerik, Richard E.A.; Ducharme, Scott W.; Amado, Avelino C.; Hamill, Joseph
2016-01-01
Traditional biomechanical analyses of human movement are generally derived from linear mathematics. While these methods can be useful in many situations, they do not describe behaviors in human systems that are predominately nonlinear. For this reason, nonlinear analysis methods based on a dynamical systems approach have become more prevalent in recent literature. These analysis techniques have provided new insights into how systems (1) maintain pattern stability, (2) transition into new stat...
Reliability Analysis Techniques for Communication Networks in Nuclear Power Plant
International Nuclear Information System (INIS)
Lim, T. J.; Jang, S. C.; Kang, H. G.; Kim, M. C.; Eom, H. S.; Lee, H. J.
2006-09-01
The objective of this project is to investigate and study existing reliability analysis techniques for communication networks in order to develop reliability analysis models for a nuclear power plant's safety-critical networks. It is necessary to make a comprehensive survey of current methodologies for communication network reliability. Major outputs of this study are design characteristics of safety-critical communication networks, efficient algorithms for quantifying the reliability of communication networks, and preliminary models for assessing the reliability of safety-critical communication networks
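One of the simplest quantification algorithms such a survey would cover is exact two-terminal reliability by state enumeration, sketched here for a toy network (feasible only for a handful of links; practical networks need cut-set or factoring methods):

```python
from itertools import product

# Exact two-terminal reliability: probability that source s and terminal t
# remain connected, given independent up-probabilities per link.
def reliability(edges, p, s, t):
    """edges: list of (u, v) pairs; p[i]: up-probability of edge i."""
    total = 0.0
    for state in product([0, 1], repeat=len(edges)):
        prob = 1.0
        for up, pi in zip(state, p):
            prob *= pi if up else (1.0 - pi)
        # Union-find connectivity over the links that are up in this state
        parent = {}
        def find(a):
            parent.setdefault(a, a)
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        for up, (u, v) in zip(state, edges):
            if up:
                parent[find(u)] = find(v)
        if find(s) == find(t):
            total += prob
    return total

# Two parallel links between s and t, each up with probability 0.9:
# expected reliability 1 - 0.1^2 = 0.99
r = reliability([("s", "t"), ("s", "t")], [0.9, 0.9], "s", "t")
print(r)
```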
Analytical techniques for wine analysis: An African perspective; a review
International Nuclear Information System (INIS)
Villiers, André de; Alberts, Phillipus; Tredoux, Andreas G.J.; Nieuwoudt, Hélène H.
2012-01-01
Highlights: ► Analytical techniques developed for grape and wine analysis in Africa are reviewed. ► The utility of infrared spectroscopic methods is demonstrated. ► An overview of separation of wine constituents by GC, HPLC, CE is presented. ► Novel LC and GC sample preparation methods for LC and GC are presented. ► Emerging methods for grape and wine analysis in Africa are discussed. - Abstract: Analytical chemistry is playing an ever-increasingly important role in the global wine industry. Chemical analysis of wine is essential in ensuring product safety and conformity to regulatory laws governing the international market, as well as understanding the fundamental aspects of grape and wine production to improve manufacturing processes. Within this field, advanced instrumental analysis methods have been exploited more extensively in recent years. Important advances in instrumental analytical techniques have also found application in the wine industry. This review aims to highlight the most important developments in the field of instrumental wine and grape analysis in the African context. The focus of this overview is specifically on the application of advanced instrumental techniques, including spectroscopic and chromatographic methods. Recent developments in wine and grape analysis and their application in the African context are highlighted, and future trends are discussed in terms of their potential contribution to the industry.
Analytical techniques for wine analysis: An African perspective; a review
Energy Technology Data Exchange (ETDEWEB)
Villiers, Andre de, E-mail: ajdevill@sun.ac.za [Department of Chemistry and Polymer Science, Stellenbosch University, Private Bag X1, Matieland 7602, Stellenbosch (South Africa); Alberts, Phillipus [Department of Chemistry and Polymer Science, Stellenbosch University, Private Bag X1, Matieland 7602, Stellenbosch (South Africa); Tredoux, Andreas G.J.; Nieuwoudt, Helene H. [Institute for Wine Biotechnology, Department of Viticulture and Oenology, Stellenbosch University, Private Bag X1, Matieland 7602, Stellenbosch (South Africa)
2012-06-12
Highlights: ► Analytical techniques developed for grape and wine analysis in Africa are reviewed. ► The utility of infrared spectroscopic methods is demonstrated. ► An overview of separation of wine constituents by GC, HPLC, CE is presented. ► Novel LC and GC sample preparation methods for LC and GC are presented. ► Emerging methods for grape and wine analysis in Africa are discussed. - Abstract: Analytical chemistry is playing an ever-increasingly important role in the global wine industry. Chemical analysis of wine is essential in ensuring product safety and conformity to regulatory laws governing the international market, as well as understanding the fundamental aspects of grape and wine production to improve manufacturing processes. Within this field, advanced instrumental analysis methods have been exploited more extensively in recent years. Important advances in instrumental analytical techniques have also found application in the wine industry. This review aims to highlight the most important developments in the field of instrumental wine and grape analysis in the African context. The focus of this overview is specifically on the application of advanced instrumental techniques, including spectroscopic and chromatographic methods. Recent developments in wine and grape analysis and their application in the African context are highlighted, and future trends are discussed in terms of their potential contribution to the industry.
Evolution of the sedimentation technique for particle size distribution analysis
International Nuclear Information System (INIS)
Maley, R.
1998-01-01
After an introduction on the significance of particle size measurements, sedimentation methods are described, with emphasis on the evolution of the gravitational approach. The gravitational technique based on mass determination by X-ray absorption allows fast analysis by automation and easy data handling, in addition to providing the accuracy required by quality control and research applications. [it]
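The physics behind gravitational sedimentation sizing is Stokes' law; a sketch of the velocity/diameter conversion (the material constants below are generic values for quartz settling in water, not from the article):

```python
# Stokes' law: a small sphere of diameter d settles at terminal velocity
# v = (rho_p - rho_f) * g * d^2 / (18 * mu).
def stokes_velocity(d, rho_p=2650.0, rho_f=1000.0, g=9.81, mu=1.0e-3):
    """Terminal settling velocity (m/s) for particle diameter d (m)."""
    return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

def stokes_diameter(v, rho_p=2650.0, rho_f=1000.0, g=9.81, mu=1.0e-3):
    """Equivalent Stokes diameter (m) from measured settling velocity v (m/s)."""
    return (18.0 * mu * v / ((rho_p - rho_f) * g)) ** 0.5

v = stokes_velocity(10e-6)    # 10 micron quartz particle in water
print(v, stokes_diameter(v))  # diameter round-trips to 10e-6 m
```

A sedimentation instrument effectively inverts this relation: measuring how concentration (via X-ray absorption) decays with time at a known depth yields the distribution of Stokes diameters.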
Comparative Analysis of Some Techniques in the Biological ...
African Journals Online (AJOL)
The experiments involved the simulation of conditions of a major spill by pouring crude oil on the cells from perforated cans and the in-situ bioremediation of the polluted soils using the techniques that consisted in the manipulation of different variables within the soil environment. The analysis of soil characteristics after a ...
Tailored Cloze: Improved with Classical Item Analysis Techniques.
Brown, James Dean
1988-01-01
The reliability and validity of a cloze procedure used as an English-as-a-second-language (ESL) test in China were improved by applying traditional item analysis and selection techniques. The 'best' test items were chosen on the basis of item facility and discrimination indices, and were administered as a 'tailored cloze.' 29 references listed.…
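The two indices named above can be computed directly from a scored response matrix; a sketch on a toy data set (the discrimination index here is the point-biserial correlation against the corrected total score, one common choice among several):

```python
import numpy as np

# Classical item analysis on a 0/1 response matrix (examinees x items):
# item facility = proportion answering correctly; discrimination =
# point-biserial correlation of the item with the total-minus-item score.
def item_stats(responses):
    responses = np.asarray(responses, dtype=float)
    facility = responses.mean(axis=0)
    total = responses.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])
    return facility, discrimination

# Toy 0/1 responses: six examinees on three items
resp = np.array([[1, 1, 1],
                 [1, 1, 0],
                 [1, 0, 1],
                 [1, 0, 0],
                 [0, 1, 0],
                 [0, 0, 0]])
fac, disc = item_stats(resp)
print(fac, disc)
```

Items with facility near 0 or 1, or with low discrimination, are the ones a "tailored cloze" procedure would drop.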
The Recoverability of P-Technique Factor Analysis
Molenaar, Peter C. M.; Nesselroade, John R.
2009-01-01
It seems that just when we are about to lay P-technique factor analysis finally to rest as obsolete because of newer, more sophisticated multivariate time-series models using latent variables--dynamic factor models--it rears its head to inform us that an obituary may be premature. We present the results of some simulations demonstrating that even…
Zakaria, M. A.; Majeed, A. P. P. A.; Taha, Z.; Alim, M. M.; Baarath, K.
2018-03-01
The movement of a lower limb exoskeleton requires a reasonably accurate control method to allow for an effective gait therapy session to transpire. Trajectory tracking is a nontrivial passive rehabilitation technique for correcting the motion of the patients' impaired limb. This paper proposes an inverse predictive model that is coupled with the forward kinematics of the exoskeleton to estimate the behaviour of the system. A conventional PID control system is used to converge to the required joint angles based on the desired input from the inverse predictive model. It was demonstrated through the present study that the inverse predictive model is capable of meeting the trajectory demand with acceptable error tolerance. The findings further suggest the ability of the predictive model of the exoskeleton to predict a correct joint angle command to the system.
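The PID step can be sketched with a toy first-order joint model; the gains, plant dynamics, and target angle below are illustrative assumptions, not values from the paper:

```python
# Minimal discrete PID loop driving a single joint (modelled as a damped
# first-order plant) toward a desired angle, as would be commanded by an
# inverse predictive model upstream.
def pid_track(target, kp=2.0, ki=0.5, kd=0.1, dt=0.01, steps=2000):
    angle, vel = 0.0, 0.0
    integral, prev_err = 0.0, target
    for _ in range(steps):
        err = target - angle
        integral += err * dt
        derivative = (err - prev_err) / dt
        torque = kp * err + ki * integral + kd * derivative
        prev_err = err
        # Toy joint dynamics: damped response to applied torque
        vel += (torque - 1.0 * vel) * dt
        angle += vel * dt
    return angle

print(pid_track(0.8))  # converges near the 0.8 rad target
```

In the gait-therapy setting the target angle would be updated each time step from the desired trajectory rather than held constant.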
Spectroscopic analysis technique for arc-welding process control
Mirapeix, Jesús; Cobo, Adolfo; Conde, Olga; Quintela, María Ángeles; López-Higuera, José-Miguel
2005-09-01
The spectroscopic analysis of the light emitted by thermal plasmas has found many applications, from chemical analysis to monitoring and control of industrial processes. Particularly, it has been demonstrated that the analysis of the thermal plasma generated during arc or laser welding can supply information about the process and, thus, about the quality of the weld. In some critical applications (e.g. the aerospace sector), an early, real-time detection of defects in the weld seam (oxidation, porosity, lack of penetration, ...) is highly desirable as it can reduce expensive non-destructive testing (NDT). Among other techniques, full spectroscopic analysis of the plasma emission is known to offer rich information about the process itself, but it is also very demanding in terms of real-time implementations. In this paper, we propose a technique for the analysis of the plasma emission spectrum that is able to detect, in real-time, changes in the process parameters that could lead to the formation of defects in the weld seam. It is based on the estimation of the electronic temperature of the plasma through the analysis of the emission peaks from multiple atomic species. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, we employ the LPO (Linear Phase Operator) sub-pixel algorithm to accurately estimate the central wavelength of the peaks (allowing an automatic identification of each atomic species) and cubic-spline interpolation of the noisy data to obtain the intensity and width of the peaks. Experimental tests on TIG-welding using fiber-optic capture of light and a low-cost CCD-based spectrometer, show that some typical defects can be easily detected and identified with this technique, whose typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
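The interpolation idea can be sketched as follows; this uses SciPy's `CubicSpline` on a synthetic, noise-free emission line as a stand-in for the LPO estimator and real spectrometer data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sub-pixel estimation of an emission-peak centre by cubic-spline
# interpolation of coarsely sampled spectrometer data.
wavelength = np.linspace(690.0, 700.0, 21)          # 0.5 nm pixel pitch
true_centre, width = 694.7, 0.8                     # invented line parameters
counts = np.exp(-((wavelength - true_centre) / width) ** 2)

spline = CubicSpline(wavelength, counts)
fine = np.linspace(wavelength[0], wavelength[-1], 20001)
centre = fine[np.argmax(spline(fine))]
print(centre)  # ≈ 694.7 nm, despite the 0.5 nm sampling
```

Locating centres to sub-pixel accuracy is what allows each peak to be matched to an atomic species before the temperature estimate is formed.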
Analysis of compressive fracture in rock using statistical techniques
Energy Technology Data Exchange (ETDEWEB)
Blair, S.C.
1994-12-01
Fracture of rock in compression is analyzed using a field-theory model, and the processes of crack coalescence and fracture formation and the effect of grain-scale heterogeneities on macroscopic behavior of rock are studied. The model is based on observations of fracture in laboratory compression tests, and incorporates assumptions developed using fracture mechanics analysis of rock fracture. The model represents grains as discrete sites, and uses superposition of continuum and crack-interaction stresses to create cracks at these sites. The sites are also used to introduce local heterogeneity. Clusters of cracked sites can be analyzed using percolation theory. Stress-strain curves for simulated uniaxial tests were analyzed by studying the location of cracked sites, and partitioning of strain energy for selected intervals. Results show that the model implicitly predicts both development of shear-type fracture surfaces and a strength-vs-size relation that are similar to those observed for real rocks. Results of a parameter-sensitivity analysis indicate that heterogeneity in the local stresses, attributed to the shape and loading of individual grains, has a first-order effect on strength, and that increasing local stress heterogeneity lowers compressive strength following an inverse power law. Peak strength decreased with increasing lattice size and decreasing mean site strength, and was independent of site-strength distribution. A model for rock fracture based on a nearest-neighbor algorithm for stress redistribution is also presented and used to simulate laboratory compression tests, with promising results.
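An inverse-power-law relation like the strength-heterogeneity dependence reported above is conventionally fitted by linear regression in log-log space; a sketch on invented noise-free data:

```python
import numpy as np

# Fit sigma = a * h^(-b), i.e. log(sigma) = log(a) - b * log(h),
# by least squares in log-log coordinates. Data are illustrative only.
h = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # local stress heterogeneity measure
a_true, b_true = 120.0, 0.6
sigma = a_true * h ** (-b_true)           # compressive strength values

slope, intercept = np.polyfit(np.log(h), np.log(sigma), 1)
b_est, a_est = -slope, np.exp(intercept)
print(a_est, b_est)  # recovers a ≈ 120, b ≈ 0.6
```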
International Nuclear Information System (INIS)
Frick, Klaus; Marnitz, Philipp; Munk, Axel
2012-01-01
This paper is concerned with a novel regularization technique for solving linear ill-posed operator equations in Hilbert spaces from data that are corrupted by white noise. We combine convex penalty functionals with extreme-value statistics of projections of the residuals on a given set of sub-spaces in the image space of the operator. We prove general consistency and convergence rate results in the framework of Bregman divergences which allows for a vast range of penalty functionals. Various examples that indicate the applicability of our approach will be discussed. We will illustrate in the context of signal and image processing that the presented method constitutes a locally adaptive reconstruction method. (paper)
Study of analysis techniques of thermoluminescent dosimeters response
International Nuclear Information System (INIS)
Castro, Walber Amorim
2002-01-01
The Personal Monitoring Service of the Centro Regional de Ciencias Nucleares uses TLD 700 material in its dosemeters. The TLDs are read out with a Harshaw-Bicron model 6600 automatic reading system, which uses dry air instead of the traditional gaseous nitrogen. This innovation brought advantages to the service but introduced uncertainties in the detector readings; one of these was observed for doses below 0.5 mSv. In this work, different techniques for analysing the TLD response at dose values in this interval were investigated and compared. These techniques included thermal pre-treatment, and different glow-curve analysis methods were examined. The results showed the need to develop specific software that performs automatic background subtraction of the glow curve for each dosemeter. This software was developed and tested, and preliminary results showed that it improves the response reproducibility. (author)
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, constructed using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not use an RSM but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by applying the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
Multivariate Analysis Techniques for Optimal Vision System Design
DEFF Research Database (Denmark)
Sharifzadeh, Sara
The present thesis considers optimization of the spectral vision systems used for quality inspection of food items. The relationship between food quality, vision based techniques and spectral signature are described. The vision instruments for food analysis as well as datasets of the food items...... used in this thesis are described. The methodological strategies are outlined including sparse regression and pre-processing based on feature selection and extraction methods, supervised versus unsupervised analysis and linear versus non-linear approaches. One supervised feature selection algorithm...... (SSPCA) and DCT based characterization of the spectral diffused reflectance images for wavelength selection and discrimination. These methods together with some other state-of-the-art statistical and mathematical analysis techniques are applied on datasets of different food items; meat, dairy, fruits...
Multiple predictor smoothing methods for sensitivity analysis: Description of techniques
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
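Of the four nonparametric techniques listed, LOESS is the simplest to sketch. Below is a minimal, assumption-laden single-point implementation (tricube weights over the nearest `span` fraction of the data, local linear fit); it is illustrative only and not the procedure of Storlie and Helton:

```python
import numpy as np

def loess(x, y, x0, span=0.5):
    """Locally weighted linear regression (LOESS) evaluated at x0:
    fit a weighted straight line to the nearest span*n points."""
    n = len(x)
    k = max(2, int(np.ceil(span * n)))
    d = np.abs(x - x0)
    idx = np.argsort(d)[:k]          # the k nearest neighbours of x0
    dmax = d[idx].max()
    w = (1 - (d[idx] / dmax) ** 3) ** 3   # tricube weights
    X = np.column_stack([np.ones(k), x[idx]])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
    return beta[0] + beta[1] * x0
```

In a sampling-based sensitivity analysis, the smoothed curve of a model output against one input (with the others varying) reveals nonlinear input-output relationships that linear or rank regression would miss.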
International Nuclear Information System (INIS)
Bashkatov, A N; Genina, Elina A; Kochubei, V I; Tuchin, Valerii V
2006-01-01
Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to the three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
Use of the Comprehensive Inversion method for Swarm satellite data analysis
DEFF Research Database (Denmark)
Sabaka, T. J.; Tøffner-Clausen, Lars; Olsen, Nils
2013-01-01
An advanced algorithm, known as the “Comprehensive Inversion” (CI), is presented for the analysis of Swarm measurements to generate a consistent set of Level-2 data products to be delivered by the Swarm “Satellite Constellation Application and Research Facility” (SCARF) to the European Space Agency...
DATA ANALYSIS TECHNIQUES IN SERVICE QUALITY LITERATURE: ESSENTIALS AND ADVANCES
Directory of Open Access Journals (Sweden)
Mohammed naved Khan
2013-05-01
Academic and business researchers have long debated the most appropriate data analysis techniques for empirical research in the domain of services marketing. On the basis of an exhaustive review of literature, the present paper attempts to provide a concise and schematic portrayal of the data analysis techniques generally followed in the services quality literature. Collectively, the extant literature suggests a growing trend among researchers to rely on higher-order multivariate techniques, viz. confirmatory factor analysis, structural equation modeling etc., to generate and analyze complex models, while at times ignoring very basic and yet powerful procedures such as means, t-tests, ANOVA and correlation. The marked shift in the orientation of researchers towards sophisticated analytical techniques can largely be attributed to competition within the community of researchers in the social sciences in general, and among those working in the area of service quality in particular, as well as to the growing demands of journal reviewers. From a pragmatic viewpoint, it is expected that the paper will serve as a useful source of information and provide deeper insights to academic researchers, consultants, and practitioners interested in modelling patterns of service quality and arriving at optimal solutions to increasingly complex management problems.
Silva, M E T; Brandão, S; Parente, M P L; Mascarenhas, T; Natal Jorge, R M
2017-06-01
Pelvic disorders can be associated with changes in the biomechanical properties of the muscle, ligaments and/or connective tissue that form fascia and ligaments. In this sense, the study of their mechanical behavior is important to understand the structure and function of these biological soft tissues. The aim of this study was to establish the biomechanical properties of the pelvic floor muscles of continent and incontinent women, using an inverse finite element analysis (FEA). The numerical models, including the pubovisceral muscle and pelvic bones, were built from magnetic resonance (MR) images acquired at rest. The numerical simulation of the Valsalva maneuver was based on the finite element method, and the material constants were determined for different constitutive models (Neo-Hookean, Mooney-Rivlin and Yeoh) using an iterative process. The material constants (MPa) for Neo-Hookean (c1) were 0.039 ± 0.022 vs. 0.024 ± 0.004 for continent vs. incontinent women. For Mooney-Rivlin (c1) the values obtained were 0.026 ± 0.010 vs. 0.016 ± 0.003, and for Yeoh (c1) the values obtained were 0.031 ± 0.023 vs. 0.016 ± 0.002 for continent vs. incontinent women. The results were also similar between MRI and numerical simulations (40.27% vs. 42.17% for Neo-Hookean, 39.87% for Mooney-Rivlin and 41.61% for Yeoh). Using an inverse FEA coupled with MR images made it possible to obtain the in vivo biomechanical properties of the pelvic floor muscles, relating them for continent and incontinent women in a non-invasive manner.
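The iterative material-constant identification described above can be caricatured in one dimension. Assuming incompressible uniaxial extension, the Neo-Hookean Cauchy stress is σ = 2·c1·(λ² − 1/λ); the sketch below fits c1 by brute-force grid search over synthetic stress-stretch data, a crude stand-in for the FE-based updating in the paper (all function names are illustrative):

```python
import numpy as np

def neo_hookean_stress(c1, lam):
    """Cauchy stress (MPa) for incompressible Neo-Hookean uniaxial
    extension at stretch lam: sigma = 2*c1*(lam^2 - 1/lam)."""
    return 2.0 * c1 * (lam ** 2 - 1.0 / lam)

def fit_c1(lams, stresses, grid=np.linspace(0.001, 0.1, 1000)):
    """Pick the c1 minimising the squared stress misfit over a grid --
    a toy inverse analysis, not the paper's FE iteration."""
    errs = [np.sum((neo_hookean_stress(c, lams) - stresses) ** 2)
            for c in grid]
    return grid[int(np.argmin(errs))]
```

In the actual inverse FEA, the "data" are MR-measured displacements during the Valsalva maneuver rather than stresses, and each candidate c1 requires a full finite element solve.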
Practical applications of activation analysis and other nuclear techniques
International Nuclear Information System (INIS)
Lyon, W.S.
1982-01-01
Neutron activation analysis (NAA) is a versatile, sensitive, multielement, usually nondestructive analytical technique used to determine elemental concentrations in a variety of materials. Samples are irradiated with neutrons in a nuclear reactor and removed, and, for the nondestructive technique, the induced radioactivity is measured. This measurement of γ rays emitted from specific radionuclides makes possible the quantitative determination of the elements present. The method is described, advantages and disadvantages are listed, and a number of examples of its use are given. Two other nuclear methods, particle induced x-ray emission and synchrotron produced x-ray fluorescence, are also briefly discussed.
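The quantitative step of NAA follows from the standard activation equation A = N·σ·φ·(1 − e^(−λt)), which can be inverted for the number of parent atoms and hence the elemental mass. The sketch below assumes a single isotope, a thin sample (no flux depression), and end-of-irradiation activity; all parameter names are illustrative:

```python
import math

def naa_mass(activity, sigma_b, flux, t_irr, half_life, molar_mass,
             abundance=1.0):
    """Infer the mass (g) of an element from its induced activity (Bq)
    at end of irradiation, using A = N * sigma * phi * (1 - exp(-lambda*t)).
    sigma_b in barns, flux in n/cm^2/s, times in seconds."""
    N_A = 6.02214076e23
    lam = math.log(2) / half_life
    sigma = sigma_b * 1e-24                    # barn -> cm^2
    saturation = 1.0 - math.exp(-lam * t_irr)  # build-up factor
    n_atoms = activity / (sigma * flux * saturation)
    return n_atoms * molar_mass / (N_A * abundance)
```

In practice the absolute method above is often replaced by comparison against a co-irradiated standard, which cancels the flux and cross-section uncertainties.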
LIU Junyan; SONG Xianghua; LIU Yan
2017-11-01
The article uses Fast Lagrangian Analysis of Continua in 3 Dimensions (FLAC3D) to analyse the deformation characteristics of the structural plane, based on a real rock foundation pit in Jinan city. It performs an inverse analysis of the strength and occurrence parameters of the structural plane using the Mohr-Coulomb strength criterion by means of numerical simulation, exploring the change of the stress field on the x-z oblique section of the pit wall, the relation between the exposed height of the structural plane and the critical cohesion, and the relation between the exposed height and the critical inclination angle of the structural plane. We find that when the foundation pit is in the critical stable state and the inclination angle of the structural plane is constant, the critical cohesive force of the structural plane increases with the exposed height. When the foundation pit is in the critical stable state and the cohesive force of the structural plane is constant, the critical inclination angle declines as the exposed height increases. These conclusions can provide a theoretical basis for the design and construction of rock foundation pits with structural planes.
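The reported trend of critical cohesion with exposed height follows directly from Mohr-Coulomb limit equilibrium on a sliding plane: at a factor of safety of one, c·A + W·cosβ·tanφ = W·sinβ. The sketch below rearranges this for the critical cohesion of a simple block, not the FLAC3D model itself (geometry and units are illustrative):

```python
import math

def critical_cohesion(weight, area, dip_deg, friction_deg):
    """Cohesion needed for limit equilibrium (FoS = 1) of a block on a
    plane: c = (W*sin(beta) - W*cos(beta)*tan(phi)) / A.
    weight in kN, area in m^2 -> c in kPa."""
    b = math.radians(dip_deg)
    p = math.radians(friction_deg)
    return (weight * math.sin(b) - weight * math.cos(b) * math.tan(p)) / area
```

Because the block weight grows with the exposed height of the structural plane, the critical cohesion grows with it too, consistent with the trend the paper reports.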
Inverted fractal analysis of TiO{sub x} thin layers grown by inverse pulsed laser deposition
Energy Technology Data Exchange (ETDEWEB)
Égerházi, L., E-mail: egerhazi.laszlo@gmail.com [University of Szeged, Faculty of Medicine, Department of Medical Physics and Informatics, Korányi fasor 9., H-6720 Szeged (Hungary); Smausz, T. [University of Szeged, Faculty of Science, Department of Optics and Quantum Electronics, Dóm tér 9., H-6720 Szeged (Hungary); Bari, F. [University of Szeged, Faculty of Medicine, Department of Medical Physics and Informatics, Korányi fasor 9., H-6720 Szeged (Hungary)
2013-08-01
Inverted fractal analysis (IFA), a method developed for fractal analysis of scanning electron microscopy images of cauliflower-like thin films is presented through the example of layers grown by inverse pulsed laser deposition (IPLD). IFA uses the integrated fractal analysis module (FracLac) of the image processing software ImageJ, and an objective thresholding routine that preserves the characteristic features of the images, independently of their brightness and contrast. IFA revealed f{sub D} = 1.83 ± 0.01 for TiO{sub x} layers grown at 5–50 Pa background pressures. For a series of images, this result was verified by evaluating the scaling of the number of still resolved features on the film, counted manually. The value of f{sub D} not only confirms the fractal structure of TiO{sub x} IPLD thin films, but also suggests that the aggregation of plasma species in the gas atmosphere may have only limited contribution to the deposition.
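The fractal dimension f_D of such SEM images is conventionally estimated by box counting after thresholding; IFA delegates this to ImageJ's FracLac module. A minimal NumPy stand-in (threshold and grid choices are illustrative, not the IFA routine) is:

```python
import numpy as np

def box_count_dimension(img, threshold=None):
    """Estimate the fractal dimension of an image by box counting:
    threshold to binary, count occupied s x s boxes, and fit the slope
    of log N(s) against log(1/s)."""
    if threshold is None:
        threshold = img.mean()       # crude stand-in for IFA's thresholding
    binary = img > threshold
    n = min(binary.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)[0]
    return slope
```

A solid region returns a dimension near 2, while a fractal boundary structure such as a cauliflower-like film yields a non-integer value like the 1.83 reported above.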
Rosid, M. S.; Augusta, F. F.; Haidar, M. W.
2018-05-01
In general, carbonate secondary pore structure is very complex due to significant diagenesis. The determination of carbonate secondary pore types is therefore an important factor in production studies. This paper aims not only to identify the secondary pore types but also to predict their distribution in a carbonate reservoir. We apply the Differential Effective Medium (DEM) model for analysing the pore types of carbonate rocks. The input parameter of the DEM inclusion model is the porosity fraction, and the outputs are the bulk and shear moduli as functions of porosity, which are used as inputs for Vp and Vs modelling. We also apply a seismic post-stack inversion technique to map the pore-type distribution from 3D seismic data. Afterwards, we create a porosity cube using a geostatistical method, which is better suited to the complexity of the carbonate reservoir. The results show the secondary porosity distribution of the carbonate reservoir in the “FR” field: the north-northwest of the study area is dominated by interparticle pores and crack pores. Hence, that area has the highest permeability, so more hydrocarbon can accumulate there.
Nuclear techniques of analysis in diamond synthesis and annealing
Energy Technology Data Exchange (ETDEWEB)
Jamieson, D. N.; Prawer, S.; Gonon, P.; Walker, R.; Dooley, S.; Bettiol, A.; Pearce, J. [Melbourne Univ., Parkville, VIC (Australia). School of Physics
1996-12-31
Nuclear techniques of analysis have played an important role in the study of synthetic and laser annealed diamond. These measurements have mainly used ion beam analysis with a focused MeV ion beam in a nuclear microprobe system. A variety of techniques have been employed. One of the most important is nuclear elastic scattering, sometimes called non-Rutherford scattering, which has been used to accurately characterise diamond films for thickness and composition. This is possible by the use of a database of measured scattering cross sections. Recently, this work has been extended and nuclear elastic scattering cross sections for both natural boron isotopes have been measured. For radiation damaged diamond, a focused laser annealing scheme has been developed which produces near complete regrowth of MeV phosphorus implanted diamonds. In the laser annealed regions, proton induced x-ray emission has been used to show that 50 % of the P atoms occupy lattice sites. This opens the way to produce n-type diamond for microelectronic device applications. All these analytical applications utilize a focused MeV microbeam which is ideally suited for diamond analysis. This presentation reviews these applications, as well as the technology of nuclear techniques of analysis for diamond with a focused beam. 9 refs., 6 figs.
Reliability analysis of large scaled structures by optimization technique
International Nuclear Information System (INIS)
Ishikawa, N.; Mihara, T.; Iizuka, M.
1987-01-01
This paper presents a reliability analysis based on the optimization technique using PNET (Probabilistic Network Evaluation Technique) method for the highly redundant structures having a large number of collapse modes. This approach makes the best use of the merit of the optimization technique in which the idea of PNET method is used. The analytical process involves the minimization of safety index of the representative mode, subjected to satisfaction of the mechanism condition and of the positive external work. The procedure entails the sequential performance of a series of the NLP (Nonlinear Programming) problems, where the correlation condition as the idea of PNET method pertaining to the representative mode is taken as an additional constraint to the next analysis. Upon succeeding iterations, the final analysis is achieved when a collapse probability at the subsequent mode is extremely less than the value at the 1st mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, the conventional Monte Carlo simulation is also revised by using the collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)
Development of fault diagnostic technique using reactor noise analysis
International Nuclear Information System (INIS)
Park, Jin Ho; Kim, J. S.; Oh, I. S.; Ryu, J. S.; Joo, Y. S.; Choi, S.; Yoon, D. B.
1999-04-01
The ultimate goal of this project is to establish the analysis technique to diagnose the integrity of reactor internals using reactor noise. The reactor noise analyses techniques for the PWR and CANDU NPP(Nuclear Power Plants) were established by which the dynamic characteristics of reactor internals and SPND instrumentations could be identified, and the noise database corresponding to each plant(both Korean and foreign one) was constructed and compared. Also the change of dynamic characteristics of the Ulchin 1 and 2 reactor internals were simulated under presumed fault conditions. Additionally portable reactor noise analysis system was developed so that real time noise analysis could directly be able to be performed at plant site. The reactor noise analyses techniques developed and the database obtained from the fault simulation, can be used to establish a knowledge based expert system to diagnose the NPP's abnormal conditions. And the portable reactor noise analysis system may be utilized as a substitute for plant IVMS(Internal Vibration Monitoring System). (author)
Noble Gas Measurement and Analysis Technique for Monitoring Reprocessing Facilities
International Nuclear Information System (INIS)
William S. Charlton
1999-01-01
An environmental monitoring technique using analysis of stable noble gas isotopic ratios on-stack at a reprocessing facility was developed. This technique integrates existing technologies to strengthen safeguards at reprocessing facilities. The isotopic ratios are measured using a mass spectrometry system and are compared to a database of calculated isotopic ratios using a Bayesian data analysis method to determine specific fuel parameters (e.g., burnup, fuel type, fuel age, etc.). These inferred parameters can be used by investigators to verify operator declarations. A user-friendly software application (named NOVA) was developed for the application of this technique. NOVA included a Visual Basic user interface coupling a Bayesian data analysis procedure to a reactor physics database (calculated using the Monteburns 3.01 code system). The integrated system (mass spectrometry, reactor modeling, and data analysis) was validated using on-stack measurements during the reprocessing of target fuel from a U.S. production reactor and gas samples from the processing of EBR-II fast breeder reactor driver fuel. These measurements led to an inferred burnup that matched the declared burnup with sufficient accuracy and consistency for most safeguards applications. The NOVA code was also tested using numerous light water reactor measurements from the literature. NOVA was capable of accurately determining spent fuel type, burnup, and fuel age for these experimental results. Work should continue to demonstrate the robustness of this system for production, power, and research reactor fuels.
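The Bayesian comparison of a measured isotopic ratio against a database of calculated ratios can be sketched as a discrete posterior update. This is a toy illustration, not NOVA's procedure: it assumes Gaussian measurement noise, a flat prior over candidate burnups, and a single ratio (NOVA combines several):

```python
import numpy as np

def burnup_posterior(measured_ratio, burnups, predicted_ratios, sigma):
    """Posterior over candidate burnups given one measured isotopic
    ratio, database predictions, a flat prior, and Gaussian noise."""
    lik = np.exp(-0.5 * ((measured_ratio - predicted_ratios) / sigma) ** 2)
    post = lik / lik.sum()                 # normalise over the database grid
    best = burnups[int(np.argmax(post))]   # maximum a posteriori burnup
    return best, post
```

The full posterior, not just its maximum, is what lets an inspector quantify how consistent a measurement is with an operator's declared burnup.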
New trends in sample preparation techniques for environmental analysis.
Ribeiro, Cláudia; Ribeiro, Ana Rita; Maia, Alexandra S; Gonçalves, Virgínia M F; Tiritan, Maria Elizabeth
2014-01-01
Environmental samples include a wide variety of complex matrices, with low concentrations of analytes and presence of several interferences. Sample preparation is a critical step and the main source of uncertainties in the analysis of environmental samples, and it is usually laborious, high cost, time consuming, and polluting. In this context, there is increasing interest in developing faster, cost-effective, and environmentally friendly sample preparation techniques. Recently, new methods have been developed and optimized in order to miniaturize extraction steps, to reduce solvent consumption or become solventless, and to automate systems. This review attempts to present an overview of the fundamentals, procedure, and application of the most recently developed sample preparation techniques for the extraction, cleanup, and concentration of organic pollutants from environmental samples. These techniques include: solid phase microextraction, on-line solid phase extraction, microextraction by packed sorbent, dispersive liquid-liquid microextraction, and QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe).
Model order reduction techniques with applications in finite element analysis
Qu, Zu-Qing
2004-01-01
Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
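Static (Guyan) condensation, the starting point for the dynamic condensation methods compared in the book, can be sketched in a few lines. The sketch below is a generic textbook implementation (names and the master/slave partitioning are illustrative):

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Static (Guyan) condensation: retain the `master` DOFs and let the
    remaining (slave) DOFs follow statically via x_s = -Kss^-1 Ksm x_m."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    order = list(master) + slave
    Kr = K[np.ix_(order, order)]          # reorder into [master, slave] blocks
    Mr = M[np.ix_(order, order)]
    m = len(master)
    Kss = Kr[m:, m:]
    Ksm = Kr[m:, :m]
    T = np.vstack([np.eye(m), -np.linalg.solve(Kss, Ksm)])
    return T.T @ Kr @ T, T.T @ Mr @ T     # reduced stiffness and mass
```

The reduced stiffness is exact for static loads on the master DOFs; the reduced mass is only approximate, which is precisely the error the iterative and dynamic condensation schemes in the book aim to correct.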
Novel technique for coal pyrolysis and hydrogenation production analysis
Energy Technology Data Exchange (ETDEWEB)
Pfefferle, L.D.
1990-01-01
The overall objective of this study is to establish vacuum ultraviolet photoionization-MS and VUV pulsed EI-MS as useful tools for a simpler and more accurate direct mass spectrometric measurement of a broad range of hydrocarbon compounds in complex mixtures for ultimate application to the study of the kinetics of coal hydrogenation and pyrolysis processes. The VUV-MS technique allows ionization of a broad range of species with minimal fragmentation. Many compounds of interest can be detected with the 118 nm wavelength, but additional compound selectivity is achievable by tuning the wavelength of the photo-ionization source in the VUV. Resonant four wave mixing techniques in Hg vapor will allow near continuous tuning from about 126 to 106 nm. This technique would facilitate the scientific investigation of coal upgrading processes such as pyrolysis and hydrogenation by allowing accurate direct analysis of both stable and intermediate reaction products.
Small area analysis using micro-diffraction techniques
International Nuclear Information System (INIS)
Goehner, Raymond P.; Tissot, Ralph G. Jr.; Michael, Joseph R.
2000-01-01
An overall trend toward smaller electronic packages and devices makes it increasingly important and difficult to obtain meaningful diffraction information from small areas. X-ray micro-diffraction, electron back-scattered diffraction (EBSD) and Kossel are micro-diffraction techniques used for crystallographic analysis including texture, phase identification and strain measurements. X-ray micro-diffraction is used primarily for phase analysis and residual strain measurements of areas between 10 microm and 100 microm. For areas this small, glass capillary optics are used to produce a usable collimated x-ray beam. These optics are designed to reflect x-rays below the critical angle, thereby allowing a larger solid acceptance angle at the x-ray source and resulting in brighter, smaller x-ray beams. The determination of residual strain using micro-diffraction techniques is very important to the semiconductor industry. Residual stresses have caused voiding of the interconnect metal, which destroys electrical continuity. Being able to determine the residual stress helps industry predict failures from the aging effects of interconnects due to this stress voiding. Such stress measurements would be impossible using a conventional x-ray diffractometer; however, with a 30 microm glass capillary these small areas are readily accessible for analysis. Kossel produces a wide-angle diffraction pattern from fluorescent x-rays generated in the sample by an e-beam in a SEM. This technique can yield very precise lattice parameters for determining strain. Fig. 2 shows a Kossel pattern from a Ni specimen. Phase analysis on small areas is also possible using electron back-scattered diffraction (EBSD) and x-ray micro-diffraction techniques. EBSD has the advantage of allowing the user to observe the area of interest using the excellent imaging capabilities of the SEM. An EDS detector has been
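The strain determination underlying all three micro-diffraction techniques reduces to Bragg's law: a shift of a diffraction peak changes the inferred lattice spacing d = λ/(2·sinθ), and the elastic strain is (d − d0)/d0. A minimal sketch (angles in 2θ degrees, reference angle assumed known from an unstrained standard):

```python
import math

def lattice_strain(two_theta_deg, two_theta0_deg, wavelength):
    """Elastic lattice strain from a diffraction peak shift:
    d = lambda / (2 sin(theta)); strain = (d - d0) / d0."""
    d = wavelength / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))
    d0 = wavelength / (2.0 * math.sin(math.radians(two_theta0_deg / 2.0)))
    return (d - d0) / d0
```

A peak shifting to lower 2θ means a larger d-spacing and hence tensile strain along the diffraction vector, which is the sign convention used in residual stress mapping of interconnects.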
Modular techniques for dynamic fault-tree analysis
Patterson-Hine, F. A.; Dugan, Joanne B.
1992-01-01
It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
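The combinatorial side of the hybrid approach, evaluating static AND/OR gates bottom-up under an independence assumption, can be sketched briefly; the Markov side (for sequence-dependent failures) is what the modularization adds and is not shown. Tree encoding and names here are illustrative:

```python
def gate_prob(gate, child_probs):
    """Failure probability of a static fault-tree gate, assuming
    independent children."""
    if gate == "AND":
        p = 1.0
        for q in child_probs:
            p *= q
        return p
    if gate == "OR":
        p = 1.0
        for q in child_probs:
            p *= (1.0 - q)          # probability that no child fails
        return 1.0 - p
    raise ValueError(gate)

def top_event(tree, basic):
    """Evaluate a nested (gate, children) fault tree bottom-up.
    Leaves are basic-event names with probabilities in `basic`."""
    if isinstance(tree, str):
        return basic[tree]
    gate, children = tree
    return gate_prob(gate, [top_event(c, basic) for c in children])
```

The modularization in the paper identifies independent subtrees like these so that only the genuinely sequence-dependent subtrees need a (much larger) Markov chain solution.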
A review of residual stress analysis using thermoelastic techniques
Energy Technology Data Exchange (ETDEWEB)
Robinson, A F; Dulieu-Barton, J M; Quinn, S [University of Southampton, School of Engineering Sciences, Highfield, Southampton, SO17 1BJ (United Kingdom); Burguete, R L [Airbus UK Ltd., New Filton House, Filton, Bristol, BS99 7AR (United Kingdom)
2009-08-01
Thermoelastic Stress Analysis (TSA) is a full-field technique for experimental stress analysis based on infra-red thermography. The technique has proved extremely effective for studying elastic stress fields and is now well established. It is based on the measurement of the temperature change that occurs as a result of a stress change. As residual stress is essentially a mean stress, it is accepted that the linear form of the TSA relationship cannot be used to evaluate residual stresses. However, there are situations where this linear relationship does not hold, or where changes in material properties caused by manufacturing procedures have enabled evaluations of residual stresses. The purpose of this paper is to review the current status of TSA-based approaches for the evaluation of residual stress and to provide some examples where promising results have been obtained.
A review of residual stress analysis using thermoelastic techniques
International Nuclear Information System (INIS)
Robinson, A F; Dulieu-Barton, J M; Quinn, S; Burguete, R L
2009-01-01
Thermoelastic Stress Analysis (TSA) is a full-field technique for experimental stress analysis based on infra-red thermography. The technique has proved extremely effective for studying elastic stress fields and is now well established. It is based on the measurement of the temperature change that occurs as a result of a stress change. As residual stress is essentially a mean stress, it is accepted that the linear form of the TSA relationship cannot be used to evaluate residual stresses. However, there are situations where this linear relationship does not hold, or where changes in material properties caused by manufacturing procedures have enabled evaluations of residual stresses. The purpose of this paper is to review the current status of TSA-based approaches for the evaluation of residual stress and to provide some examples where promising results have been obtained.
Technique Triangulation for Validation in Directed Content Analysis
Directory of Open Access Journals (Sweden)
Áine M. Humble PhD
2009-09-01
Division of labor in wedding planning varies for first-time marriages, with three types of couples (traditional, transitional, and egalitarian) identified, but nothing is known about wedding planning for remarrying individuals. Using semistructured interviews, the author interviewed 14 couples in which at least one person had remarried, and used directed content analysis to investigate the extent to which the aforementioned typology could be transferred to this different context. In this paper she describes how a triangulation of analytic techniques provided validation for couple classifications and also helped with moving beyond "blind spots" in data analysis. The analytic approaches were the constant comparative technique, rank order comparison, and visual representation of coding using MAXQDA 2007's TextPortraits tool.
A BWR 24-month cycle analysis using multicycle techniques
International Nuclear Information System (INIS)
Hartley, K.D.
1993-01-01
Boiling water reactor (BWR) fuel cycle design analyses have become increasingly challenging in the past several years. As utilities continue to seek improved capacity factors, reduced power generation costs, and reduced outage costs, longer cycle lengths and fuel design optimization become important considerations. Accurate multicycle analysis techniques are necessary to determine the viability of fuel designs and cycle operating strategies to meet reactor operating requirements, e.g., meet thermal and reactivity margin constraints, while minimizing overall fuel cycle costs. Siemens Power Corporation (SPC), Nuclear Division, has successfully employed multicycle analysis techniques with realistic rodded cycle depletions to demonstrate equilibrium fuel cycle performance in 24-month cycles. Analyses have been performed for a BWR/5 reactor at both rated and uprated power conditions.
Ion beam analysis techniques applied to large scale pollution studies
Energy Technology Data Exchange (ETDEWEB)
Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)
1994-12-31
Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 {mu}m particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.
Ion beam analysis techniques applied to large scale pollution studies
Energy Technology Data Exchange (ETDEWEB)
Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)
1993-12-31
Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 {mu}m particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.
Analysis of Cell Phone Usage Using Correlation Techniques
T S R MURTHY; D. SIVA RAMA KRISHNA
2011-01-01
The present paper is a sample survey analysis examined using correlation techniques. The usage of mobile phones is almost unavoidable these days, and as such the authors have made a systematic survey, through a well-prepared questionnaire, on the use of mobile phones to the maximum extent. The samples span various economic groups across a population of over one lakh people. The results are scientifically categorized and interpreted to match the ground reality.
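The core statistic behind such a correlation-based survey analysis is Pearson's r. The sketch below uses hypothetical coded survey variables (income group vs. daily call minutes), not the paper's actual data.

```python
# Pearson correlation coefficient between two survey variables (toy data).
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation: covariance normalized by the two spreads."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical respondents: income group (coded 1-5) vs. daily call minutes
income = [1, 2, 2, 3, 4, 5]
minutes = [12, 25, 20, 35, 50, 60]
r = pearson_r(income, minutes)   # close to +1: usage rises with income group
```

A value of r near +1 or -1 indicates a strong linear association between the two questionnaire items; values near 0 indicate none.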
Analysis of diagnostic calorimeter data by the transfer function technique
Energy Technology Data Exchange (ETDEWEB)
Delogu, R. S., E-mail: rita.delogu@igi.cnr.it; Pimazzoni, A.; Serianni, G. [Consorzio RFX, Corso Stati Uniti, 35127 Padova (Italy); Poggi, C.; Rossi, G. [Università degli Studi di Padova, Via 8 Febbraio 1848, 35122 Padova (Italy)
2016-02-15
This paper describes the analysis procedure applied to the thermal measurements on the rear side of a carbon fibre composite calorimeter with the purpose of reconstructing the energy flux due to an ion beam colliding on the front side. The method is based on the transfer function technique and allows a fast analysis by means of the fast Fourier transform algorithm. Its efficacy has been tested both on simulated and measured temperature profiles: in all cases, the energy flux features are well reproduced and beamlets are well resolved. Limits and restrictions of the method are also discussed, providing strategies to handle issues related to signal noise and digital processing.
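The transfer-function idea can be illustrated with a toy deconvolution: if the rear-side temperature signal is the convolution of the incident flux with a known impulse response, the flux is recovered by division in the frequency domain. This is a hedged sketch under that assumption, with a naive DFT standing in for the FFT and a small regularization term as a stand-in for the noise handling the paper discusses; it is not the Consorzio RFX code.

```python
# Recover an input flux from a measured signal via frequency-domain division.
import cmath

def dft(x, sign=-1):
    """Naive discrete Fourier transform; sign=+1 gives the (unscaled) inverse."""
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def deconvolve(measured, impulse_response, eps=1e-12):
    """Divide spectra; eps regularizes near-zero bins against noise blow-up."""
    M, H = dft(measured), dft(impulse_response)
    Q = [m * h.conjugate() / (abs(h) ** 2 + eps) for m, h in zip(M, H)]
    n = len(Q)
    return [(v / n).real for v in dft(Q, sign=+1)]

# Build a measurement by circular convolution of a known flux q with h,
# then recover q from it.
h = [0.5, 0.3, 0.2, 0.0]                 # assumed impulse response
q = [1.0, 0.0, 2.0, 0.0]                 # "true" energy flux
measured = [sum(q[(j - k) % 4] * h[k] for k in range(4)) for j in range(4)]
recovered = deconvolve(measured, h)      # approximately equal to q
```

In practice the FFT replaces the O(n^2) DFT above, which is what makes the analysis fast for long temperature records.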
FDTD technique based crosstalk analysis of bundled SWCNT interconnects
International Nuclear Information System (INIS)
Duksh, Yograj Singh; Kaushik, Brajesh Kumar; Agarwal, Rajendra P.
2015-01-01
The equivalent electrical circuit model of bundled single-walled carbon nanotube based distributed RLC interconnects is employed for the crosstalk analysis. Accurate time-domain analysis of crosstalk effects in VLSI interconnects has emerged as an essential design criterion. This paper presents a brief description of the numerical finite difference time domain (FDTD) technique, which is intended for estimation of voltages and currents on coupled transmission lines. For the FDTD implementation, the stability of the proposed model is strictly restricted by the Courant condition. This method is used for the estimation of crosstalk-induced propagation delay and peak voltage in lossy RLC interconnects. Both functional and dynamic crosstalk effects are analyzed in the coupled transmission line. The effect of line resistance on crosstalk-induced delay and peak voltage under dynamic and functional crosstalk is also evaluated. The FDTD analysis and the SPICE simulations are carried out at the 32 nm technology node for global interconnects. It is observed that the analytical results obtained using the FDTD technique are in good agreement with the SPICE simulation results. The crosstalk-induced delay, propagation delay, and peak voltage obtained using the FDTD technique show average errors of 4.9%, 3.4% and 0.46%, respectively, in comparison to SPICE. (paper)
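The Courant condition mentioned above sets the maximum stable time step of a 1-D transmission-line FDTD update: dt must not exceed the cell transit time dx/v, where v = 1/sqrt(LC) is the wave speed on the line. The per-unit-length values below are assumptions for illustration, not the paper's 32 nm interconnect parameters.

```python
# Courant-limited FDTD time step for a 1-D lossless transmission line.
from math import sqrt

def courant_dt(dx, l_per_m, c_per_m):
    """Largest stable time step: dt <= dx / v with v = 1/sqrt(L*C)."""
    v = 1.0 / sqrt(l_per_m * c_per_m)   # propagation speed on the line
    return dx / v

dx = 1e-6                       # 1 um spatial cell (assumed)
L = 1.6e-6                      # inductance per metre, H/m (assumed)
C = 1.0e-10                     # capacitance per metre, F/m (assumed)
dt_max = courant_dt(dx, L, C)   # any dt above this makes the update unstable
```

Choosing dt just below this limit keeps the leapfrog voltage/current update stable while minimizing the number of time steps needed.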
Characterization of decommissioned reactor internals: Monte Carlo analysis technique
International Nuclear Information System (INIS)
Reid, B.D.; Love, E.F.; Luksic, A.T.
1993-03-01
This study discusses computer analysis techniques for determining activation levels of irradiated reactor component hardware to yield data for the Department of Energy's Greater-Than-Class C Low-Level Radioactive Waste Program. The study recommends the Monte Carlo Neutron/Photon (MCNP) computer code as the best analysis tool for this application and compares the technique to direct sampling methodology. To implement the MCNP analysis, a computer model would be developed to reflect the geometry, material composition, and power history of an existing shutdown reactor. MCNP analysis would then be performed using the computer model, and the results would be validated by comparison to laboratory analysis results from samples taken from the shutdown reactor. The report estimates uncertainties for each step of the computational and laboratory analyses; the overall uncertainty of the MCNP results is projected to be ±35%. The primary source of uncertainty is identified as the material composition of the components, and research is suggested to address that uncertainty.
Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen
2016-01-01
Evidence on the association between dietary component, dietary pattern and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify dietary pattern associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case-control study, data from a total of 319 incident cases and 319 matched controls were analyzed. Dietary pattern was derived employing partial least squares discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score of the PLS-DA derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs ordered by loading values. We observed that one unit increase in the scores was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60-0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC protective diet is indicated with more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimen for NPC prevention.
Analysis of regularized inversion of data corrupted by white Gaussian noise
International Nuclear Information System (INIS)
Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli
2014-01-01
Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed. (paper)
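In finite dimensions the Tikhonov estimate for the standard case r = 0 reduces to solving the normal equations (AᵀA + αI)u = Aᵀm. The toy 2x2 system below is purely illustrative of that formula; it is not the paper's pseudodifferential setting.

```python
# Tikhonov-regularized least squares for a 2x2 system, solved by hand.

def tikhonov_2x2(A, m, alpha):
    """Return u minimizing ||A u - m||^2 + alpha ||u||^2 (r = 0 case)."""
    # Normal-equations matrix N = A^T A + alpha I
    n00 = A[0][0] ** 2 + A[1][0] ** 2 + alpha
    n01 = A[0][0] * A[0][1] + A[1][0] * A[1][1]
    n11 = A[0][1] ** 2 + A[1][1] ** 2 + alpha
    # Right-hand side b = A^T m
    b0 = A[0][0] * m[0] + A[1][0] * m[1]
    b1 = A[0][1] * m[0] + A[1][1] * m[1]
    det = n00 * n11 - n01 * n01
    return [(n11 * b0 - n01 * b1) / det, (n00 * b1 - n01 * b0) / det]

A = [[1.0, 0.0], [0.0, 0.01]]       # ill-conditioned forward map (assumed)
m = [1.0, 0.01]                      # noiseless data for u_true = [1, 1]
u = tikhonov_2x2(A, m, alpha=1e-6)   # close to u_true for small alpha
```

Increasing α shrinks the poorly determined component (the one tied to the small singular value 0.01) toward zero, which is exactly the stabilizing trade-off α = α(δ) controls in the abstract.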
International Nuclear Information System (INIS)
Choi, Byeong Kyoo; Lee, In Sun; Seo, Joon Beom; Lee, Jin Seong; Song, Koun Sik; Lim, Tae Hwan
2002-01-01
To study the impact of inversion of soft-copy chest radiographs on the detection of small solitary pulmonary nodules using a high-resolution monitor. The study group consisted of 80 patients who had undergone posteroanterior chest radiography; 40 had a solitary noncalcified pulmonary nodule approximately 1 cm in diameter, and 40 were control subjects. Standard and inverse digital images, using the inversion tool on a PACS system, were displayed on high-resolution monitors (2048x2560x8 bit). Ten radiologists were requested to rank each image using a five-point scale (1 = definitely negative, 3 = equivocal or indeterminate, 5 = definite nodule), and the data were interpreted using receiver operating characteristic (ROC) analysis. The area under the ROC curve for pooled data of standard image sets was significantly larger than that of inverse image sets (0.8893 and 0.8095, respectively; p < 0.05). For detecting small solitary pulmonary nodules, inverse digital images were significantly inferior to standard digital images.
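The area under the ROC curve used in such observer studies equals the probability that a randomly chosen positive case receives a higher confidence rating than a randomly chosen negative case (the Mann-Whitney statistic, with ties counted as half). The five-point ratings below are hypothetical, not the study's data.

```python
# Empirical AUC from rating data via the Mann-Whitney pairwise comparison.

def auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs the positive outscores; ties = 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

nodule_ratings  = [5, 4, 4, 3, 5]   # hypothetical reader scores, nodule cases
control_ratings = [1, 2, 3, 2, 1]   # hypothetical scores, control cases
area = auc(nodule_ratings, control_ratings)
```

An AUC of 1.0 means perfect separation of nodule cases from controls; 0.5 means the ratings carry no diagnostic information.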
Cosburn, K.; Roy, M.; Rowe, C. A.; Guardincerri, E.
2017-12-01
Obtaining accurate static and time-dependent shallow subsurface density structure beneath volcanic, hydrogeologic, and tectonic targets can help illuminate active processes of fluid flow and magma transport. A limitation of using surface gravity measurements for such imaging is that these observations are vastly underdetermined and non-unique. In order to home in on a more accurate solution, other data sets are needed to provide constraints, typically seismic or borehole observations. The spatial resolution of these techniques, however, is relatively poor, and a novel solution to this problem in recent years has been to use attenuation of the cosmic ray muon flux, which provides an independent constraint on density. In this study we present a joint inversion of gravity and cosmic ray muon flux observations to infer the density structure of a target rock volume at a well-characterized site near Los Alamos, New Mexico, USA. We investigate the shallow structure of a mesa formed by the Quaternary ash-flow tuffs on the Pajarito Plateau, flanking the Jemez volcano in New Mexico. Gravity measurements were made using a LaCoste and Romberg D meter on the surface of the mesa and inside a tunnel beneath the mesa. Muon flux measurements were also made at the mesa surface and at various points within the same tunnel using a muon detector having an acceptance region of 45 degrees from the vertical and a track resolution of several milliradians. We expect the combination of muon and gravity data to provide us with enhanced resolution as well as the ability to sense deeper structures in our region of interest. We use Bayesian joint inversion techniques on the gravity-muon dataset to test these ideas, building upon previous work using gravity inversion alone to resolve density structure in our study area. Both the regional geology and geometry of our study area is well-known and we assess the inferred density structure from our gravity-muon joint inversion within this known
Different techniques of multispectral data analysis for vegetation fraction retrieval
Kancheva, Rumiana; Georgiev, Georgi
2012-07-01
Vegetation monitoring is one of the most important applications of remote sensing technologies. For farmlands, the assessment of crop condition constitutes the basis of monitoring growth, development, and yield processes. Plant condition is defined by a set of biometric variables, such as density, height, biomass amount, leaf area index, etc. The canopy cover fraction is closely related to these variables and is indicative of the state of the growth process. At the same time it is a defining factor of the soil-vegetation system spectral signatures. That is why spectral mixture decomposition is a primary objective in remotely sensed data processing and interpretation, specifically in agricultural applications. The actual usefulness of the applied methods depends on their prediction reliability. The goal of this paper is to present and compare different techniques for quantitative endmember extraction from soil-crop pattern reflectance. These techniques include: linear spectral unmixing, two-dimensional spectra analysis, spectral ratio analysis (vegetation indices), spectral derivative analysis (red edge position), and colorimetric analysis (tristimulus values sum, chromaticity coordinates and dominant wavelength). The objective is to reveal their potential, accuracy and robustness for plant fraction estimation from multispectral data. Regression relationships have been established between crop canopy cover and various spectral estimators.
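The first technique listed, linear spectral unmixing, models a mixed pixel as f·veg + (1−f)·soil and solves for the vegetation fraction f by least squares across the bands. The endmember reflectance values below are illustrative assumptions, not the paper's measurements.

```python
# Two-endmember linear spectral unmixing: closed-form least-squares fraction.

def unmix_fraction(mixed, veg, soil):
    """Least-squares f in mixed = f*veg + (1-f)*soil over all spectral bands."""
    num = sum((m - s) * (v - s) for m, v, s in zip(mixed, veg, soil))
    den = sum((v - s) ** 2 for v, s in zip(veg, soil))
    return num / den

# Hypothetical 4-band reflectance spectra (e.g. blue, green, red, NIR)
veg  = [0.05, 0.08, 0.04, 0.45]    # vegetation endmember (assumed)
soil = [0.15, 0.20, 0.25, 0.30]    # bare-soil endmember (assumed)
mixed = [0.5 * v + 0.5 * s for v, s in zip(veg, soil)]   # 50/50 pixel
f = unmix_fraction(mixed, veg, soil)   # estimated canopy cover fraction
```

With more than two endmembers the same idea becomes a constrained least-squares problem (fractions non-negative, summing to one), typically solved per pixel.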
Gas chromatographic isolation technique for compound-specific radiocarbon analysis
International Nuclear Information System (INIS)
Uchida, M.; Kumamoto, Y.; Shibata, Y.; Yoneda, M.; Morita, M.; Kawamura, K.
2002-01-01
We present here a gas chromatographic isolation technique for the compound-specific radiocarbon analysis of biomarkers from marine sediments. Biomarkers of fatty acids, hydrocarbons and sterols were isolated in amounts sufficient for radiocarbon analysis using a preparative capillary gas chromatograph (PCGC) system. The PCGC system used here is composed of an HP 6890 GC with FID, a cooled injection system (CIS, Gerstel, Germany), a zero-dead-volume effluent splitter, and a cryogenic preparative collection device (PFC, Gerstel). For AMS analysis, we need to separate and recover a sufficient quantity of the target individual compounds (>50 μgC). Yields of target compounds were obtained for n-alkanes from C14 to C40, with approximately 80% recovery for higher-molecular-weight compounds above C30. Compound-specific radiocarbon analysis of organic compounds, like compound-specific stable isotope analysis, provides valuable information on origins and carbon cycling in the marine system. Under the above PCGC conditions, we applied compound-specific radiocarbon analysis to marine sediments from the western North Pacific, which showed its promise as a chronology tool for estimating sediment age from organic matter in paleoceanographic studies, in areas where sufficient amounts of planktonic foraminifera for radiocarbon analysis by accelerator mass spectrometry (AMS) are difficult to obtain due to dissolution of calcium carbonate. (author)
Image Analysis Technique for Material Behavior Evaluation in Civil Structures
Moretti, Michele; Rossi, Gianluca
2017-01-01
The article presents a hybrid monitoring technique for the measurement of the deformation field. The goal is to obtain information about crack propagation in existing structures, for the purpose of monitoring their state of health. The measurement technique is based on the capture and analysis of a digital image set. Special markers were used on the surface of the structures; these can be removed without damaging existing structures such as historical masonry. The digital image analysis was done using software specifically designed in Matlab to track the markers and determine the evolution of the deformation state. The method can be used on any type of structure but is particularly suitable when the surface of the structure must not be damaged. A series of experiments carried out on masonry walls of the Oliverian Museum (Pesaro, Italy) and Palazzo Silvi (Perugia, Italy) allowed validation of the procedure by comparing the results with those derived from traditional measuring techniques. PMID:28773129
Liang, Wei; Murakawa, Hidekazu
2014-01-01
Welding-induced deformation not only negatively affects dimensional accuracy but also degrades the performance of the product. If welding deformation can be accurately predicted beforehand, the predictions will be helpful for finding effective methods to improve manufacturing accuracy. To date, there are two kinds of finite element method (FEM) that can be used to simulate welding deformation. One is the thermal elastic plastic FEM and the other is the elastic FEM based on inherent strain theory. The former can only be used to calculate welding deformation for small or medium scale welded structures due to the limitation of computing speed. On the other hand, the latter is an effective method to estimate the total welding distortion for large and complex welded structures, even though it neglects the detailed welding process. When the elastic FEM is used to calculate the welding-induced deformation of a large structure, the inherent deformations in each typical joint should be obtained beforehand. In this paper, a new method based on inverse analysis is proposed to obtain the inherent deformations of weld joints. By introducing the inherent deformations obtained by the proposed method into the elastic FEM based on inherent strain theory, we predicted the welding deformation of a panel structure with two longitudinal stiffeners. In addition, experiments were carried out to verify the simulation results.
Intersections, ideals, and inversion
International Nuclear Information System (INIS)
Vasco, D.W.
1998-01-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations, and of partial differential equations with undetermined coefficients, leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above at 50. The best fitting structure is dominantly one-dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.
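One classical way to bound the number of isolated solutions of a polynomial system, in the spirit of the abstract's solution counting, is Bezout's theorem: the count is at most the product of the total degrees of the equations. The degrees below are hypothetical, not those of the magnetotelluric system.

```python
# Bezout upper bound on isolated solutions of a square polynomial system.

def bezout_bound(degrees):
    """Product of the total degrees of the equations (Bezout's theorem)."""
    prod = 1
    for d in degrees:
        prod *= d
    return prod

# e.g. a discretized inverse problem yielding 3 quadratic and 2 cubic equations
bound = bezout_bound([2, 2, 2, 3, 3])   # at most this many isolated solutions
```

Sharper bounds (and actual counts, as in the Antarctica example) require computing with the ideal generated by the equations, e.g. via Groebner bases; Bezout only gives the coarse combinatorial ceiling.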
Intersections, ideals, and inversion
Energy Technology Data Exchange (ETDEWEB)
Vasco, D.W.
1998-10-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations, and of partial differential equations with undetermined coefficients, leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above at 50. The best fitting structure is dominantly one-dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.
International Nuclear Information System (INIS)
Poon, Ian M; Xia Ping; Weinberg, Vivien; Sultanem, Khalil; Akazawa, Clayton C.; Akazawa, Pamela C.; Verhey, Lynn; Quivey, Jeanne Marie; Lee, Nancy
2007-01-01
Purpose: To compare dose-volume histograms of target volumes and organs at risk in 57 patients with nasopharyngeal carcinoma (NPC) with inverse-planned (IP) or forward-planned (FP) intensity-modulated radiation treatment (IMRT). Methods and Materials: The DVHs of 57 patients with NPC treated with IMRT with or without chemotherapy were reviewed. Thirty-one patients underwent IP IMRT, and 26 patients underwent FP IMRT. Treatment goals were to prescribe a minimum dose of 66-70 Gy for gross tumor volume and 59.4 Gy for planning target volume to greater than 95% of the volume. Multiple selected end points were used to compare dose-volume histograms of the targets, including minimum, mean, and maximum doses, and percentage of target volume receiving less than 90% (1-V90%), less than 95% (1-V95%), and greater than 105% (1-V105%). Dose-volume histograms of organs at risk were evaluated with characteristic end points. Results: Both planning methods provided excellent target coverage with no statistically significant differences found, although a trend was suggested in favor of improved target coverage with IP IMRT in patients with T3/T4 NPC (p = 0.10). IP IMRT statistically decreased the dose to the parotid gland, temporomandibular joint, brain stem, and spinal cord overall, whereas IP led to a dose decrease to the middle/inner ear only in the T1/T2 subgroup. Conclusions: Use of IP or FP IMRT can lead to good target coverage while maintaining critical structures within tolerance. IP IMRT selectively spared these critical organs to a greater degree and should be considered the standard of treatment in patients with NPC, particularly those with T3/T4. FP IMRT is an effective second option in centers with limited IP IMRT capacity. As a modification of conformal techniques, the human/departmental resources needed to incorporate FP IMRT should be nominal.
Lanci, Luca; Kissel, Catherine; Leonhardt, Roman; Laj, Carlo
2008-08-01
Based on five published high-resolution marine sedimentary records of the Iceland Basin Excursion [IBE; Channell, J.E.T., Hodell, D.A., Lehman, B., 1997. Relative geomagnetic paleointensity and δ18O at ODP Site 983/Gardar Drift, North Atlantic since 350 ka. Earth Planet. Sci. Lett. 153, 103-118; Laj, C., Kissel, C., Roberts, A., 2006. Geomagnetic field behavior during the Iceland Basin and Laschamp geomagnetic excursions: a simple transitional field geometry? Geochem. Geophys. Geosystems. 7, Q03004, doi:10.1029/2005GC001122] dated around 186-190 kyr, we present models of the excursional geomagnetic field at the Earth's surface using two different approaches. First, a spherical harmonic analysis is performed after synchronization of the records using their paleointensity profiles. Second, we have used an iterative Bayesian inversion procedure, calibrated using the single volcanic dataset available so far. Both modeling approaches suffer from imperfections of the paleomagnetic signals and, above all, from the still poor geographical distribution of detailed records, presently available only from the North Atlantic and the West Pacific. For these reasons, our modeling results should only be regarded as preliminary models of the geomagnetic field during the IBE, susceptible to improvement when results from future paleomagnetic studies are included. Nevertheless, both approaches show distinct similarities and are stable against moderate variations of modeling parameters. The general picture is that of a dipole field undergoing a strong reduction, but remaining higher than the non-dipole field throughout the excursional process, except for a very short interval of time corresponding to the dipole minimum at the center of the excursion. On the other hand, some differences exist between the results of the two models with each other and with the real data when the virtual geomagnetic pole (VGP) paths are considered. The non-dipole field does not appear to undergo very significant
Fault tree technique: advances in probabilistic and logical analysis
International Nuclear Information System (INIS)
Clarotti, C.A.; Amendola, A.; Contini, S.; Squellati, G.
1982-01-01
Fault tree reliability analysis is used for assessing the risk associated with systems of increasing complexity (phased mission systems, systems with multistate components, systems with non-monotonic structure functions). Much care must be taken to ensure that the fault tree technique is not used beyond its valid range. To this end, a critical review of the mathematical foundations of reliability fault tree analysis is carried out. Limitations are highlighted and potential solutions to open problems are suggested. Moreover, an overview is given of the most recent developments in the implementation of an integrated software suite (SALP-MP, SALP-NOT, SALP-CAFT codes) for the analysis of a wide class of systems.
Temperature analysis of laser ignited metalized material using spectroscopic technique
Bassi, Ishaan; Sharma, Pallavi; Daipuriya, Ritu; Singh, Manpreet
2018-05-01
Temperature measurement of a laser-ignited aluminized nanoenergetic mixture using spectroscopy has great scope in analysing material characteristics and combustion behaviour. Spectroscopic analysis permits an in-depth study of the combustion of materials that is difficult to achieve with standard pyrometric methods. Laser ignition was used because it consumes less energy than electric ignition, while the ignited material dissipates the same energy, with the same impact, as under electric ignition. The research presented here is primarily focused on the temperature analysis of an energetic material comprising explosive material mixed with nano-material and ignited by laser. A spectroscopic technique is used to estimate the temperature during the ignition process. The nanoenergetic mixture used in this research does not contain any material that is sensitive to high impact.
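As an illustration of the kind of spectroscopic temperature estimate described, here is a generic two-colour pyrometry sketch under the Wien approximation (the wavelengths, temperature and the whole round-trip are invented for illustration; this is not the authors' actual procedure):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, temp):
    # Wien approximation to Planck's spectral radiance (up to constants),
    # valid when lam * temp << C2
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def two_color_temperature(lam1, lam2, i1, i2):
    # invert the Wien intensity ratio at two wavelengths for temperature
    denom = math.log(i1 / i2) - 5 * math.log(lam2 / lam1)
    return C2 * (1 / lam2 - 1 / lam1) / denom

# hypothetical measurement: blackbody at 3000 K sampled at 500 nm and 600 nm
lam1, lam2 = 500e-9, 600e-9
i1, i2 = wien_intensity(lam1, 3000.0), wien_intensity(lam2, 3000.0)
T_est = two_color_temperature(lam1, lam2, i1, i2)
```

Real flame spectra require fitting many wavelengths and correcting for emissivity, but the ratio inversion above is the core idea of extracting temperature from spectral intensities.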
Improvement and verification of fast reactor safety analysis techniques
International Nuclear Information System (INIS)
Jackson, J.F.
1975-01-01
An initial analysis of the KIWI-TNT experiment using the VENUS-II disassembly code has been completed. The calculated fission energy release agreed with the experimental value to within about 3 percent. An initial model for analyzing the SNAPTRAN-2 core disassembly experiment was also developed along with an appropriate equation-of-state. The first phase of the VENUS-II/PAD comparison study was completed through the issuing of a preliminary report describing the results. A new technique to calculate a P-V-work curve as a function of the degree of core expansion following a disassembly excursion has been developed. The technique provides results that are consistent with the ANL oxide-fuel equation-of-state in VENUS-II. Evaluation and check-out of this new model are currently in progress
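The notion of a P-V work curve as a function of core expansion can be sketched generically (a toy adiabatic expansion integrated with the trapezoidal rule; the gamma value and pressure law are invented for illustration and are not the VENUS-II model or the ANL oxide-fuel equation-of-state):

```python
def pv_work(volumes, pressures):
    # trapezoidal approximation of W = integral of p dV along the expansion
    w = 0.0
    for i in range(1, len(volumes)):
        w += 0.5 * (pressures[i] + pressures[i - 1]) * (volumes[i] - volumes[i - 1])
    return w

# toy adiabat p * V**gamma = const, expanding from V = 1 to V = 2
gamma = 5.0 / 3.0
V = [1.0 + 0.01 * i for i in range(101)]
p = [(1.0 / v) ** gamma for v in V]
W = pv_work(V, p)
```

Evaluating the cumulative integral at intermediate volumes gives the work available at each degree of expansion, which is the quantity such a curve reports.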
On discriminant analysis techniques and correlation structures in high dimensions
DEFF Research Database (Denmark)
Clemmensen, Line Katrine Harder
This paper compares several recently proposed techniques for performing discriminant analysis in high dimensions, and illustrates that the various sparse methods differ in prediction ability depending on their underlying assumptions about the correlation structures in the data. The paper divides the methods into two groups: those that assume independence between the variables, and thus use a diagonal estimate of the within-class covariance matrix, and those that assume dependence between the variables, and thus use an estimate of the within-class covariance matrix which also estimates the correlations between variables. The two groups of methods are compared and the pros and cons are exemplified using different cases of simulated data. The results illustrate that the estimate of the covariance matrix is an important factor with respect to choice of method, and the choice of method should thus be driven by the nature of the data.
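The distinction between the two groups of methods can be sketched with a minimal pure-Python example (this is a generic illustration, not any of the sparse methods compared in the paper): when the variables are strongly correlated and the class separation is not aligned with the correlation structure, a classifier using the full pooled within-class covariance separates the classes better than one using only its diagonal.

```python
import random

random.seed(0)

def sample(n, mean, rho):
    # n points from a 2-D Gaussian with unit variances and correlation rho
    pts = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        pts.append((mean[0] + z1,
                    mean[1] + rho * z1 + (1 - rho ** 2) ** 0.5 * z2))
    return pts

def mean_cov(pts):
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in pts) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in pts) / (n - 1)
    return (mx, my), (sxx, sxy, syy)

def score(p, mu, cov, diagonal):
    # negative squared Mahalanobis distance to the class mean
    sxx, sxy, syy = cov
    if diagonal:
        sxy = 0.0  # independence assumption drops the correlation term
    det = sxx * syy - sxy * sxy
    ixx, ixy, iyy = syy / det, -sxy / det, sxx / det
    dx, dy = p[0] - mu[0], p[1] - mu[1]
    return -(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)

def accuracy(train_a, train_b, test, labels, diagonal):
    mu_a, cov_a = mean_cov(train_a)
    mu_b, cov_b = mean_cov(train_b)
    pooled = tuple((ca + cb) / 2 for ca, cb in zip(cov_a, cov_b))
    hits = 0
    for p, lab in zip(test, labels):
        pred = 'a' if score(p, mu_a, pooled, diagonal) >= score(p, mu_b, pooled, diagonal) else 'b'
        hits += (pred == lab)
    return hits / len(test)

# classes separated along x only, variables strongly correlated (rho = 0.9)
train_a, train_b = sample(200, (0.0, 0.0), 0.9), sample(200, (1.0, 0.0), 0.9)
test = sample(200, (0.0, 0.0), 0.9) + sample(200, (1.0, 0.0), 0.9)
labels = ['a'] * 200 + ['b'] * 200
acc_diag = accuracy(train_a, train_b, test, labels, diagonal=True)
acc_full = accuracy(train_a, train_b, test, labels, diagonal=False)
```

Here `acc_full` exceeds `acc_diag` by a wide margin, which is exactly the sense in which the covariance estimate drives the choice of method; in the opposite case of truly independent variables, the diagonal estimate is less noisy and tends to win in high dimensions.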
Some problems of calibration technique in charged particle activation analysis
International Nuclear Information System (INIS)
Krasnov, N.N.; Zatolokin, B.V.; Konstantinov, I.O.
1977-01-01
It is shown that three different approaches to the calibration technique, based on the use of the average cross-section, the equivalent target thickness and the thick-target yield, are adequate. Using the concept of thick-target yield, a convenient charged particle activation equation is obtained. The possibility of simultaneous determination of two impurities from which the same isotope is formed is pointed out. The concept of thick-target yield also leads to a simple formula for both the absolute and the comparative methods of analysis. The methodical error does not exceed 10%. Calibration and determination of the expected sensitivity based on the thick-target-yield concept are also very convenient, because experimental determination of thick-target yield values is a much simpler procedure than measuring an activation curve or excitation function. (T.G.)
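A commonly used form of the thick-target-yield activation equation reads as follows (the notation here is assumed for illustration and is not necessarily the authors'):

```latex
% Y(E_0): thick-target yield, product atoms per incident particle of energy E_0
% activity at end of irradiation from an impurity at mass fraction w,
% with beam intensity n (particles per second) and decay constant \lambda:
A \;=\; w\, n\, Y(E_0)\,\bigl(1 - e^{-\lambda t_{\mathrm{irr}}}\bigr)
% comparative method: irradiating sample (x) and standard (s) identically
% makes n, Y(E_0) and the saturation factor cancel:
w_x \;=\; w_s \, \frac{A_x}{A_s}
```

The cancellation in the comparative form is what makes the thick-target-yield formulation convenient: only the two activities need to be measured.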
Energy Technology Data Exchange (ETDEWEB)
Freour, S. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France)]. E-mail: freour@crttsn.univ-nantes.fr; Gloaguen, D. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France); Francois, M. [Laboratoire des Systemes Mecaniques et d' Ingenierie Simultanee (LASMIS FRE CNRS 2719), Universite de Technologie de Troyes, 12 Rue Marie Curie, BP 2060, 10010 Troyes (France); Guillen, R. [GeM, Institut de Recherche en Genie Civil et Mecanique (UMR CNRS 6183), Universite de Nantes, Ecole Centrale de Nantes, 37 Boulevard de l' Universite, BP 406, 44 602 Saint-Nazaire cedex (France)
2006-04-15
The scope of this work is the determination of the coefficients of thermal expansion of the Ti-17 β-phase. A rigorous inverse thermo-elastic self-consistent scale-transition micro-mechanical model extended to multi-phase materials was used. The experimental data required for the application of the inverse method were obtained both from the available literature and from especially dedicated X-ray diffraction lattice strain measurements performed on the studied (α + β) two-phase titanium alloy.
Ion beam analysis and spectrometry techniques for Cultural Heritage studies
International Nuclear Information System (INIS)
Beck, L.
2013-01-01
The implementation of experimental techniques for the characterisation of Cultural Heritage materials has to take several requirements into account. The complexity of these ancient materials requires the development of new techniques of examination and analysis, or the transfer of technologies developed for the study of advanced materials. In addition, owing to the precious nature of artworks, it is necessary to use non-destructive methods that respect the integrity of the objects. It is for this reason that methods using radiation and/or particles have played an important role in the scientific study of art history and archaeology since their discovery. X-ray and γ-ray spectrometry as well as ion beam analysis (IBA) are analytical tools at the service of Cultural Heritage. This report mainly presents experimental developments for IBA: PIXE, RBS/EBS and NRA. These developments were applied to the study of archaeological composite materials: layered materials or mixtures composed of organic and inorganic phases. Three examples are given: the evolution of silvering techniques for the production of counterfeit coinage during the Roman Empire and in the 16th century, and the characterization of composite or mixed mineral/organic compounds such as bone and paint. In these last two cases, the combination of techniques gave original results on the proportion of the two phases: apatite/collagen in bone, pigment/binder in paintings. Another part of this report is dedicated to the non-invasive/non-destructive characterization of prehistoric pigments, in situ, for rock art studies in caves and in the laboratory. Finally, the perspectives of this work are presented. (author)
Development of flow injection analysis technique for uranium estimation
International Nuclear Information System (INIS)
Paranjape, A.H.; Pandit, S.S.; Shinde, S.S.; Ramanujam, A.; Dhumwad, R.K.
1991-01-01
Flow injection analysis is increasingly used as a process-control analytical technique in many industries. It involves injection of the sample at a constant rate into a steadily flowing stream of reagent and passing this mixture through a suitable detector. This paper describes the development of such a system for the analysis of uranium (VI) and (IV) and of gross gamma activity. It is amenable to on-line or automated off-line monitoring of uranium and its activity in process streams. The sample injection port is suitable for automated injection of radioactive samples. The performance of the system has been tested via the colorimetric response of U(VI) samples at 410 nm in the range of 35 to 360 mg/ml in nitric acid medium, using a Metrohm 662 photometer and a recorder as the detector assembly. The precision of the method is found to be better than +/- 0.5%. With certain modifications, the technique is used for the analysis of U(VI) in the range 0.1-3 mg/aliquot by the alcoholic thiocyanate procedure within +/- 1.5% precision. Similarly, the precision for the determination of U(IV) in the range 15-120 mg at 650 nm is found to be better than 5%. With a NaI well-type detector in the flow line, gross gamma counting of the solution under flow is found to be within a precision of +/- 5%. (author). 4 refs., 2 figs., 1 tab
Burnout prediction using advance image analysis coal characterization techniques
Energy Technology Data Exchange (ETDEWEB)
Edward Lester; Dave Watts; Michael Cloke [University of Nottingham, Nottingham (United Kingdom). School of Chemical Environmental and Mining Engineering
2003-07-01
The link between petrographic composition and burnout has been investigated previously by the authors. However, those predictions were based on 'bulk' properties of the coal, such as the proportion of each maceral or the reflectance of the macerals in the whole sample. Combustion studies relating burnout to microlithotype analysis, or similar, remain less common, partly because the technique is more complex than maceral analysis. Despite this, it is likely that any burnout prediction based on petrographic characteristics will become more accurate if it includes information about the maceral associations and the size of each particle. Chars from 13 coals, in 106-125 micron size fractions, were prepared using a drop tube furnace (DTF) at 1300°C, 200 milliseconds residence time and 1% oxygen. These chars were then refired in the DTF at 1300°C, 5% oxygen and residence times of 200, 400 and 600 milliseconds. The progressive burnout of each char was compared with the characteristics of the initial coals. This paper presents an extension of the previous studies in that it relates combustion behaviour to coals that have been characterized on a particle-by-particle basis using advanced image analysis techniques. 13 refs., 7 figs.
Directory of Open Access Journals (Sweden)
Florian Schumacher
2016-01-01
Due to increasing computational resources, the development of new, numerically demanding methods and software for imaging the Earth's interior remains of high interest in the Earth sciences. Here we give a description, from a user's and a programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave-propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/re-weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix, and allows customized workflows to be composed in a consistent computational environment. ASKI is written in modern Fortran and Python; it is well documented and freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
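The file-mediated decoupling of the three steps can be caricatured with a toy linear inverse problem (everything here — the file names, the operator, the steepest-descent update — is invented for illustration and has nothing to do with ASKI's actual formats or solvers):

```python
import json
import os
import tempfile

workdir = tempfile.mkdtemp()

def write(name, obj):
    with open(os.path.join(workdir, name), "w") as f:
        json.dump(obj, f)

def read(name):
    with open(os.path.join(workdir, name)) as f:
        return json.load(f)

G = [[1.0, 2.0], [3.0, 1.0], [1.0, -1.0]]  # toy linear forward operator

def solve_forward(m):
    # step 1: forward problem; results leave the "program" via a file only
    d = [sum(g * mi for g, mi in zip(row, m)) for row in G]
    write("forward.json", {"synthetics": d})

def compute_kernels():
    # step 2: sensitivity kernels (for a linear problem, just the rows of G)
    write("kernels.json", {"kernels": G})

def update_model(m, data, step=0.05):
    # step 3: model update reads files only, so it can be repeated with a
    # different regularization or data weighting without redoing steps 1-2
    syn = read("forward.json")["synthetics"]
    ker = read("kernels.json")["kernels"]
    res = [s - d for s, d in zip(syn, data)]
    grad = [sum(r * row[i] for r, row in zip(res, ker)) for i in range(len(m))]
    return [mi - step * gi for mi, gi in zip(m, grad)], sum(r * r for r in res)

data = [5.0, 5.0, -1.0]   # "observations" produced by the model (1, 2)
m, misfit = [0.0, 0.0], float("inf")
for _ in range(200):
    solve_forward(m)
    compute_kernels()
    m, misfit = update_model(m, data)
```

The loop recovers the true model; the point of the sketch is the architecture: three functions that share no in-memory state and communicate only through files, mirroring ASKI's program decomposition.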
Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging
Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke
2011-12-01
In this paper we present the results of experimental investigations using two very important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also introduce a complementary approach to the investigation of artworks which is noninvasive and nondestructive and can be applied in situ. Four major projects are discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textiles (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creation of a database of the elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging could be used to complement the results of high-energy-based techniques.
Symbolic manipulation techniques for vibration analysis of laminated elliptic plates
Andersen, C. M.; Noor, A. K.
1977-01-01
A computational scheme is presented for the free vibration analysis of laminated composite elliptic plates. The scheme is based on Hamilton's principle, the Rayleigh-Ritz technique and symmetry considerations, and is implemented with the aid of the MACSYMA symbolic manipulation system. The MACSYMA system, through differentiation, integration, and simplification of analytic expressions, produces highly efficient FORTRAN code for the evaluation of the stiffness and mass coefficients. Multiple use is made of this code to obtain not only the frequencies and mode shapes of the plate, but also the derivatives of the frequencies with respect to various material and geometric parameters.
Data Analysis Techniques for a Lunar Surface Navigation System Testbed
Chelmins, David; Sands, O. Scott; Swank, Aaron
2011-01-01
NASA is interested in finding new methods of surface navigation to allow astronauts to navigate on the lunar surface. In support of the Vision for Space Exploration, the NASA Glenn Research Center developed the Lunar Extra-Vehicular Activity Crewmember Location Determination System and performed testing at the Desert Research and Technology Studies event in 2009. A significant amount of sensor data was recorded during nine tests performed with six test subjects. This paper provides the procedure, formulas, and techniques for data analysis, as well as commentary on applications.
The application of radiotracer technique for preconcentration neutron activation analysis
International Nuclear Information System (INIS)
Wang Xiaolin; Chen Yinliang; Sun Ying; Fu Yibei
1995-01-01
The application of the radiotracer technique to preconcentration neutron activation analysis (Pre-NAA) is studied, and a method for determining the chemical yield of Pre-NAA is developed. This method has been applied to the determination of gold, iridium and rhenium in steel and rock samples, with noble metal contents in the range of 1-20 ng·g⁻¹ (sample). In addition, the difference in accuracy between RNAA and Pre-NAA caused by the determination of the chemical yield is also discussed.
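The chemical-yield correction at the heart of such a method can be sketched as follows (a schematic Python illustration with invented numbers, assuming the comparative-standard form of NAA; it is not the authors' exact procedure):

```python
def yield_corrected_mass(counts_sample, counts_standard, standard_mass,
                         tracer_added, tracer_recovered):
    # chemical yield of the pre-concentration step from the tracer balance:
    # the fraction of the added radiotracer activity that survives separation
    chem_yield = tracer_recovered / tracer_added
    # comparative NAA: analyte mass relative to a co-irradiated standard,
    # corrected upward for losses during the radiochemical separation
    return standard_mass * (counts_sample / counts_standard) / chem_yield

# invented numbers: 85% of the tracer activity survives the separation,
# standard contains 20 ng of the analyte
mass = yield_corrected_mass(counts_sample=1700.0, counts_standard=10000.0,
                            standard_mass=20.0,
                            tracer_added=5000.0, tracer_recovered=4250.0)
```

Dividing by the tracer-derived yield is what lets the pre-concentration chemistry be lossy without biasing the final result, provided the tracer and analyte behave identically through the separation.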