Wiener filter applied to a neutrongraphic system
Crispim, V.R.; Lopes, R.T.; Borges, J.C.
1986-01-01
The random characteristics of the image formation process influence the spatial image obtained in a neutrongraphy. Several methods can be used to optimize this image, through estimation of the noise added to the original signal. This work deals with the optimal filtering technique, using Wiener's filter. A simulation is made, where the signal (spatial resolution function) has a Lorentzian form, and ten kinds of random noise with increasing R.M.S. are generated and individually added to the original signal. Wiener's filter is applied to the different noise amplitudes and the behaviour of the spatial resolution function of our system is also analysed. (Author)
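The simulation described can be sketched with a frequency-domain Wiener filter, estimating the clean-signal power spectrum from the noisy periodogram by spectral subtraction (a common simplification; the paper's exact setup is not public, and the Lorentzian parameters below are illustrative):

```python
import cmath
import math
import random

def wiener_denoise(noisy, noise_var):
    """Frequency-domain Wiener filter. The clean-signal power spectrum is
    estimated by subtracting the known noise variance from the noisy
    periodogram (an assumption of this sketch)."""
    n = len(noisy)
    # naive DFT -- fine for small n; an FFT would be used in practice
    F = [sum(noisy[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    for k in range(n):
        p = abs(F[k]) ** 2 / n           # periodogram of the noisy data
        s = max(p - noise_var, 0.0)      # estimated clean-signal power
        F[k] *= s / (s + noise_var)      # Wiener gain |S|^2 / (|S|^2 + N)
    # inverse DFT, keeping the real part
    return [sum(F[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

Applied to a Lorentzian resolution function plus additive Gaussian noise, the filter attenuates noise-dominated frequencies while passing the signal-dominated low frequencies nearly unchanged.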
Izumidani, Masakiyo; Tanno, Kazuo.
1978-01-01
Purpose: To enable automatic filter operation and facilitate back-washing operation by back-washing filters used in a BWR nuclear power plant utilizing exhaust gas from a ventilator or air conditioner. Method: Exhaust gas from the exhaust pipe of a ventilator or air conditioner is pressurized in a compressor and then introduced into a back-washing gas tank. Then, the exhaust gas, pressurized to a predetermined pressure, is blown from the inside to the outside of a filter to separate impurities collected on the filter elements and carry them to a waste tank. (Furukawa, Y.)
Beltrán, Blanca; Avivar, Jessica; Mola, Montserrat; Ferrer, Laura; Cerdà, Víctor; Leal, Luz O
2013-09-03
A new automated, sensitive, and fast system for the simultaneous online isolation and preconcentration of lead and strontium by sorption on a microcolumn packed with Sr-resin, using an inductively coupled plasma mass spectrometry (ICP-MS) detector, was developed by hyphenating lab-on-valve (LOV) and multisyringe flow injection analysis (MSFIA). Pb and Sr are directly retained on the sorbent column and eluted with a solution of 0.05 mol L(-1) ammonium oxalate. The detection limits achieved were 0.04 ng for lead and 0.03 ng for strontium. Mass calibration curves were used, since the proposed system allows the use of different sample volumes for preconcentration. The linear working ranges were 0.13-50 ng for lead and 0.1-50 ng for strontium. The repeatability of the method, expressed as RSD, was 2.1% and 2.7% for Pb and Sr, respectively. Environmental samples such as rainwater and airborne particulate (PM10) filters, as well as the certified reference material SLRS-4 (river water), were satisfactorily analyzed, obtaining recoveries between 90 and 110% for both elements. The main features of the proposed LOV-MSFIA-ICP-MS system are the capability to renew the solid-phase extraction column at will in a fully automated way, the remarkable stability of the column, which can be reused up to 160 times, and the potential to perform isotopic analysis.
Spectral analysis and filter theory in applied geophysics
Buttkus, Burkhard
2000-01-01
This book is intended to be an introduction to the fundamentals and methods of spectral analysis and filter theory and their applications in geophysics. The principles and theoretical basis of the various methods are described, their efficiency and effectiveness evaluated, and instructions provided for their practical application. Besides the conventional methods, newer methods are discussed, such as the spectral analysis of random processes by fitting models to the observed data, maximum-entropy spectral analysis and maximum-likelihood spectral analysis, the Wiener and Kalman filtering methods, homomorphic deconvolution, and adaptive methods for nonstationary processes. Multidimensional spectral analysis and filtering, as well as multichannel filters, are given extensive treatment. The book provides a survey of the state-of-the-art of spectral analysis and filter theory. The importance and possibilities of spectral analysis and filter theory in geophysics for data acquisition, processing an...
Condition Monitoring of a Process Filter Applying Wireless Vibration Analysis
Pekka KOSKELA
2011-05-01
This paper presents a novel wireless vibration-based method for monitoring the degree of feed filter clogging. In process industry, these filters are applied to prevent impurities entering the process. During operation, the filters gradually become clogged, decreasing the feed flow and, in the worst case, preventing it. The cleaning of the filter should therefore be carried out predictively in order to avoid equipment damage and unnecessary process downtime. The degree of clogging is estimated by first calculating the time domain indices from low frequency accelerometer samples and then taking the median of the processed values. Nine different statistical quantities are compared based on the estimation accuracy and criteria for operating in resource-constrained environments with particular focus on energy efficiency. The initial results show that the method is able to detect the degree of clogging, and the approach may be applicable to filter clogging monitoring.
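The estimation scheme described, computing a time-domain index per block of accelerometer samples and taking the median of the processed values, can be sketched as follows (RMS stands in for the nine statistical quantities compared in the paper; the window length is illustrative):

```python
import math
import statistics

def rms(window):
    """Root-mean-square value of one window of accelerometer samples."""
    return math.sqrt(sum(v * v for v in window) / len(window))

def clogging_indicator(samples, window=64):
    """Median of per-window RMS values: a robust index that rises as
    increasing clogging changes the vibration level."""
    blocks = [samples[i:i + window]
              for i in range(0, len(samples) - window + 1, window)]
    return statistics.median(rms(b) for b in blocks)
```

Taking the median rather than the mean keeps the index robust to occasional corrupted windows, which matters in a resource-constrained wireless setting.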
Samuel Boudet
2014-01-01
Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes to use the method dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components which will be considered as cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults, subject to pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filters. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings.
DEMONSTRATION BULLETIN: COLLOID POLISHING FILTER METHOD - FILTER FLOW TECHNOLOGY, INC.
The Filter Flow Technology, Inc. (FFT) Colloid Polishing Filter Method (CPFM) was tested as a transportable, trailer mounted, system that uses sorption and chemical complexing phenomena to remove heavy metals and nontritium radionuclides from water. Contaminated waters can be pro...
Method and apparatus for a self-cleaning filter
Diebold, James P.; Lilley, Arthur; Browne, III, Kingsbury; Walt, Robb Ray; Duncan, Dustin; Walker, Michael; Steele, John; Fields, Michael
2010-11-16
A method and apparatus for removing fine particulate matter from a fluid stream without interrupting the overall process or flow. The flowing fluid inflates and expands the flexible filter, and particulate is deposited on the filter media while clean fluid is permitted to pass through the filter. This filter is cleaned when the fluid flow is stopped, the filter collapses, and a force is applied to distort the flexible filter media to dislodge the built-up filter cake. The dislodged filter cake falls to a location that allows undisrupted flow of the fluid after flow is restored. The shed particulate is removed to a bin for periodic collection. A plurality of filter cells can operate independently or in concert, in parallel, or in series to permit cleaning the filters without shutting off the overall fluid flow. The self-cleaning filter is low cost, has low power consumption, and exhibits low differential pressures.
Method and apparatus for a self-cleaning filter
Diebold, James P.; Lilley, Arthur; Browne, III, Kingsbury; Walt, Robb Ray; Duncan, Dustin; Walker, Michael; Steele, John; Fields, Michael
2013-09-10
A method and apparatus for removing fine particulate matter from a fluid stream without interrupting the overall process or flow. The flowing fluid inflates and expands the flexible filter, and particulate is deposited on the filter media while clean fluid is permitted to pass through the filter. This filter is cleaned when the fluid flow is stopped, the filter collapses, and a force is applied to distort the flexible filter media to dislodge the built-up filter cake. The dislodged filter cake falls to a location that allows undisrupted flow of the fluid after flow is restored. The shed particulate is removed to a bin for periodic collection. A plurality of filter cells can operate independently or in concert, in parallel, or in series to permit cleaning the filters without shutting off the overall fluid flow. The self-cleaning filter is low cost, has low power consumption, and exhibits low differential pressures.
Su Xiaoxing; Zhang Chuanzeng; Ma Tianxue; Wang Yuesheng
2012-01-01
When three-dimensional (3D) phononic band structures are calculated by using the finite difference time domain (FDTD) method with a relatively small number of iterations, the results can be effectively improved by post-processing the FDTD time series (FDTD-TS) based on the filter diagonalization method (FDM), instead of the classical fast Fourier transform. In this paper, we propose a way to further improve the performance of the FDM-based post-processing method by introducing a relatively large number of observing points to record the FDTD-TS. To this end, the existing scheme of FDTD-TS preprocessing is modified. With the new preprocessing scheme, the processing efficiency of a single FDTD-TS can be improved significantly, and thus the entire post-processing method can have sufficiently high efficiency even when a relatively large number of observing points are used. The feasibility of the proposed method for improvement is verified by the numerical results.
Particle Filtering Applied to Musical Tempo Tracking
Macleod Malcolm D
2004-01-01
This paper explores the use of particle filters for beat tracking in musical audio examples. The aim is to estimate the time-varying tempo process and to find the time locations of beats, as defined by human perception. Two alternative algorithms are presented, one which performs Rao-Blackwellisation to produce an almost deterministic formulation while the second is a formulation which models tempo as a Brownian motion process. The algorithms have been tested on a large and varied database of examples and results are comparable with the current state of the art. The deterministic algorithm gives the better performance of the two algorithms.
Specific filters applied in nuclear medicine services
Ramos, Vitor S.; Crispim, Verginia R., E-mail: verginia@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Brandao, Luis E.B. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ) Rio de Janeiro, RJ (Brazil)
2011-07-01
In Nuclear Medicine, radioiodine, in various chemical forms, is a key tracer used in diagnostic practices and/or therapy. Due to its high volatility, medical professionals may incorporate radioactive iodine during the preparation of the dose to be administered to the patient. In radioactive iodine therapy, doses ranging from 3.7 to 7.4 GBq per patient are employed. Thus, aiming at reducing the risk of occupational contamination, we developed a low-cost filter, using domestic technology, to be installed at the exit of the exhaust system where doses of radioactive iodine are fractionated. The effectiveness of radioactive iodine retention by silver-impregnated silica [10%] crystals and by natural activated carbon was verified using radiotracer techniques. The results showed that natural activated carbon is effective for I{sub 2} capture with either a large or a small amount of substrate, but its use is restricted due to its low flash point (150 deg C). Moreover, when poisoned by organic solvents this flash point may become lower still, causing explosions if large amounts of nitrates are absorbed. To hold the CH{sub 3}I gas it was necessary to increase the volume of natural activated carbon, since it was not absorbed by the SiO{sub 2} + Ag crystals. We concluded that, for an exhaust flow range of (306 {+-} 4) m{sup 3}/h, a double-stage filter using SiO{sub 2} + Ag in the first stage and natural activated carbon in the second is sufficient to meet radiological safety requirements. (author)
Linear filtering applied to Monte Carlo criticality calculations
Morrison, G.W.; Pike, D.H.; Petrie, L.M.
1975-01-01
A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential of reducing the calculational effort of multiplying systems. Other examples and results are discussed
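The recursive update behind such an acceleration can be sketched as a scalar Kalman filter fusing successive noisy k-eff realizations into a converging estimate (a minimal illustration, not the KENO implementation; the prior and measurement variances are assumptions):

```python
def kalman_constant(measurements, meas_var, prior_mean=1.0, prior_var=1.0):
    """Scalar Kalman filter for a constant state (zero process noise):
    each noisy realization z tightens the estimate x and its variance P."""
    x, P = prior_mean, prior_var
    history = []
    for z in measurements:
        K = P / (P + meas_var)   # Kalman gain
        x = x + K * (z - x)      # measurement update
        P = (1.0 - K) * P        # variance shrinks every iteration
        history.append(x)
    return x, P, history
```

Because the state is modeled as constant, the filter effectively performs a recursive, variance-weighted average, which is why it can settle near the converged eigenvalue in a few iterations.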
Applying Kalman filtering to investigate tropospheric effects in VLBI
Soja, Benedikt; Nilsson, Tobias; Karbon, Maria; Heinkelmann, Robert; Liu, Li; Lu, Cuixian; Andres Mora-Diaz, Julian; Raposo-Pulido, Virginia; Xu, Minghui; Schuh, Harald
2014-05-01
account the errors due to the atomic clocks at the stations, the troposphere, and white noise processes. The simulated data as well as actual observational data from the two-week CONT11 campaign are analyzed using the Kalman filter, focusing on the tropospheric effects. The results of the different strategies are compared with solutions applying the classical least-squares method. An advantage of the Kalman filter is the possibility of easily integrating additional external information. It is expected that by including tropospheric delays from GNSS, water vapor radiometers, or ray-traced delays from numerical weather prediction models, the accuracy of the VLBI solution could be improved.
A rigid porous filter and filtration method
Chiang, Ta-Kuan; Straub, Douglas L.; Dennis, Richard A.
1998-12-01
The present invention involves a porous rigid filter comprising a plurality of concentric filtration elements having internal flow passages and forming external flow passages there between. The present invention also involves a pressure vessel containing the filter for the removal of particulate from high pressure particulate containing gases, and further involves a method for using the filter to remove such particulate. The present filter has the advantage of requiring fewer filter elements due to the high surface area-to-volume ratio provided by the filter, requires a reduced pressure vessel size, and exhibits enhanced mechanical design properties, improved cleaning properties, configuration options, modularity and ease of fabrication.
Supplementary test method for carbon filters
Normann, B.; Pettersson, S.-O.
1980-11-01
A test method for carbon filters using freon to detect leakage is described. The filters are used in nuclear power plants and in air-raid shelters to separate radioactive iodine. Sampling and detection limits are described and a proposal for a complete set of equipment is made. (G.B.)
APPLYING OF COLLABORATIVE FILTERING ALGORITHM FOR PROCESSING OF MEDICAL DATA
Карина Владимировна МЕЛЬНИК
2015-05-01
The problem of improving the effectiveness of a medical facility for the implementation of a social project is considered. There are different approaches to solving this problem, some of which require additional funding, which is usually absent. Therefore, it was proposed to use an approach based on processing and applying patients' data from medical records. The selection of a representative sample of patients was carried out using the technique of collaborative filtering. A review of collaborative filtering methods is performed, showing that there are three main groups of methods. The first group calculates various measures of similarity between objects. The second group comprises data mining techniques. The third group of methods is a hybrid approach. The Gower coefficient for calculating a similarity measure over patients' medical records is considered in the article. A model of risk assessment of diseases based on collaborative filtering techniques is developed.
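Gower's coefficient mentioned above averages per-attribute similarity scores across mixed numeric and categorical fields; a minimal sketch (the field typing and the precomputed numeric ranges are assumptions of this illustration, and all field names are hypothetical):

```python
def gower_similarity(a, b, ranges):
    """Gower's general similarity coefficient for two mixed-type records.
    Numeric fields (those listed in `ranges`) score 1 - |a-b| / range;
    categorical fields score 1 on an exact match, else 0. The result is
    the mean score over all shared attributes, in [0, 1]."""
    total = 0.0
    for key in a:
        if key in ranges:  # numeric attribute
            total += 1.0 - abs(a[key] - b[key]) / ranges[key]
        else:              # categorical attribute
            total += 1.0 if a[key] == b[key] else 0.0
    return total / len(a)
```

This is what makes the coefficient suitable for medical records, which typically mix ages and lab values with categorical codes.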
Linear filtering applied to safeguards of nuclear material
Pike, D.H.; Morrison, G.W.; Holland, C.W.
1975-01-01
In regard to the problem of nuclear materials theft or diversion in the fuel cycle, a method is needed to detect continual thefts of relatively small amounts of material. It is suggested that Kalman filtering techniques be used. A hypothetical material flow situation is used to illustrate the technique; losses could be detected in as few as 5 months. (DLC)
Comparison of testing methods for particulate filters
Ullmann, W.; Przyborowski, S.
1983-01-01
Four testing methods for particulate filters were compared using the test rigs of the National Board of Nuclear Safety and Radiation Protection: 1) measurement of filter penetration P as a function of particle size d, using a polydisperse NaCl test aerosol and a scintillation particle counter; 2) a modified sodium-flame test measuring total filter penetration P for various polydisperse NaCl test aerosols; 3) measurement of total filter penetration P for a polydisperse NaCl test aerosol labelled with short-lived radon daughter products; 4) measurement of total filter penetration P for a special paraffin-oil test aerosol (oil-fog test used in the FRG according to DIN 24 184, test aerosol A). The investigations were carried out on sheets of glass-fibre paper (five grades of paper). Detailed information about the four testing methods and the particle size distributions used is given. The differing results of the various methods form the basis for a discussion of the most important parameters influencing the filter penetration P. The course of the function P = f(d) shows the great influence of particle size. As expected, a strong dependence was also found both on the test aerosol and on the principle and measuring range of the aerosol-measuring device. The differences between the results of the various test methods are greater the lower the penetration. Using NaCl test aerosols with various particle size distributions gives large differences in the respective penetration values. On the basis of these results and the values given by Dorman, conclusions are drawn about the investigation of particulate filters, both for the determination of filter penetration P and for the leak testing of installed filters.
Applied Bayesian hierarchical methods
Congdon, P
2010-01-01
Contents (partial): 1.2 Posterior Inference from Bayes Formula; 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...
Methods of applied mathematics
Hildebrand, Francis B
1992-01-01
This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.
Studies on HEPA filter test methods
Lee, S.H.; Jon, K.S.; Park, W.J.; Ryoo, R.
1981-01-01
The purpose of this study is to compare testing methods of the HEPA filter adopted in other countries with each other, and to design and construct a test duct system to establish testing methods. The American D.O.P. test method, the British NaCl test method and several other independently developed methods are compared. It is considered that the D.O.P. method is most suitable for in-plant and leak tests
Regularization by fractional filter methods and data smoothing
Klann, E; Ramlau, R
2008-01-01
This paper is concerned with the regularization of linear ill-posed problems by a combination of data smoothing and fractional filter methods. For the data smoothing, a wavelet shrinkage denoising is applied to the noisy data with known error level δ. For the reconstruction, an approximation to the solution of the operator equation is computed from the data estimate by fractional filter methods. These fractional methods are based on the classical Tikhonov and Landweber method, but avoid, at least partially, the well-known drawback of oversmoothing. Convergence rates as well as numerical examples are presented
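The filter-factor view behind such methods can be illustrated on the SVD of a linear problem. In the sketch below, gamma = 1 gives the classical Tikhonov factor, and 0 < gamma < 1 is one common fractional variant that damps less aggressively; this is a simplified stand-in and not necessarily the paper's exact fractional definition:

```python
def filtered_svd_solve(singular_vals, coeffs, alpha, gamma=1.0):
    """Spectral filtering of the SVD solution of A x = y.
    coeffs are the data coefficients u_i^T y; each component of the naive
    solution c/s is multiplied by the filter factor
    f = (s^2 / (s^2 + alpha)) ** gamma, which suppresses noise-amplifying
    small singular values while (for gamma < 1) oversmoothing less."""
    return [((s * s / (s * s + alpha)) ** gamma) * (c / s)
            for s, c in zip(singular_vals, coeffs)]
```

For alpha = 0 every filter factor is 1 and the naive (unregularized) solution is recovered; for fixed alpha, lowering gamma raises the factor toward 1, trading noise suppression for less smoothing.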
Q-Method Extended Kalman Filter
Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.
2012-01-01
A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
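Davenport's q-method itself reduces to finding the dominant eigenvector of a 4x4 symmetric matrix K built from weighted vector observations. A minimal sketch, using a shifted power iteration in place of a library eigensolver (the shift and iteration count are assumptions of this illustration):

```python
import math

def davenport_q(body_vecs, ref_vecs, weights):
    """Davenport's q-method: return the quaternion q = (x, y, z, w) whose
    attitude matrix best maps reference vectors onto body vectors in
    Wahba's weighted least-squares sense (dominant eigenvector of K)."""
    # attitude profile matrix B = sum_i w_i * b_i * r_i^T
    B = [[sum(w * b[i] * r[j]
              for b, r, w in zip(body_vecs, ref_vecs, weights))
          for j in range(3)] for i in range(3)]
    sigma = B[0][0] + B[1][1] + B[2][2]
    z = [B[1][2] - B[2][1], B[2][0] - B[0][2], B[0][1] - B[1][0]]
    S = [[B[i][j] + B[j][i] for j in range(3)] for i in range(3)]
    # K = [[S - sigma*I, z], [z^T, sigma]]  (4x4 symmetric)
    K = [[S[i][j] - (sigma if i == j else 0.0) for j in range(3)] + [z[i]]
         for i in range(3)]
    K.append(z + [sigma])
    # power iteration on K + 2I: the shift keeps eigenvalue order while
    # making all eigenvalues positive (weights are assumed to sum to 1)
    q = [0.12, -0.34, 0.56, 0.78]
    for _ in range(200):
        q = [sum(K[i][j] * q[j] for j in range(4)) + 2.0 * q[i]
             for i in range(4)]
        n = math.sqrt(sum(c * c for c in q))
        q = [c / n for c in q]
    return q
```

The sign of q is arbitrary (q and -q encode the same attitude), so results are best checked through the attitude matrix rather than the quaternion components.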
Advanced Filtering Techniques Applied to Spaceflight, Phase II
National Aeronautics and Space Administration — IST-Rolla developed two nonlinear filters for spacecraft orbit determination during the Phase I contract. The theta-D filter and the cost-based filter, CBF, were...
Restricted Kalman Filtering: Theory, Methods, and Application
Pizzinga, Adrian
2012-01-01
In statistics, the Kalman filter is a mathematical method whose purpose is to use a series of measurements observed over time, containing random variations and other inaccuracies, and produce estimates that tend to be closer to the true unknown values than those that would be based on a single measurement alone. This Brief offers developments on Kalman filtering subject to general linear constraints. There are essentially three types of contributions: new proofs for results already established; new results within the subject; and applications in investment analysis and macroeconomics, where th
Adaptive Control Using Residual Mode Filters Applied to Wind Turbines
Frost, Susan A.; Balas, Mark J.
2011-01-01
Many dynamic systems containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications that have unknown parameters and poorly known operating conditions. In this paper, we focus on a model reference direct adaptive control approach that has been extended to handle adaptive rejection of persistent disturbances. We extend this adaptive control theory to accommodate problematic modal subsystems of a plant that inhibit the adaptive controller by causing the open-loop plant to be non-minimum phase. We will augment the adaptive controller using a Residual Mode Filter (RMF) to compensate for problematic modal subsystems, thereby allowing the system to satisfy the requirements for the adaptive controller to have guaranteed convergence and bounded gains. We apply these theoretical results to design an adaptive collective pitch controller for a high-fidelity simulation of a utility-scale, variable-speed wind turbine that has minimum phase zeros.
Applying Maxi-adjustment to Adaptive Information Filtering Agents
Lau, Raymond; ter Hofstede, Arthur H. M.; Bruza, Peter D.
2000-01-01
Learning and adaptation is a fundamental property of intelligent agents. In the context of adaptive information filtering, a filtering agent's beliefs about a user's information needs have to be revised regularly with reference to the user's most current information preferences. This learning and adaptation process is essential for maintaining the agent's filtering performance. The AGM belief revision paradigm provides a rigorous foundation for modelling rational and minimal changes to an age...
Lacouture Parodi, Yesenia; Rubak, Per
2011-01-01
for crosstalk cancellation filters applied to different loudspeaker configurations has not yet been addressed systematically. A study of three different inversion techniques applied to several loudspeaker arrangements is documented. Least-squares approximations in the frequency and time domains are evaluated along with a crosstalk canceler based on minimum-phase approximation with a frequency-independent delay. The three methods were applied to loudspeaker configurations with two channels and the least-squares approaches to configurations with four channels. Several different span angles and elevations were...
Dynamic data filtering system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-04-29
A computer-implemented dynamic data filtering system and method for selectively choosing operating data of a monitored asset that modifies or expands a learned scope of an empirical model of normal operation of the monitored asset while simultaneously rejecting operating data of the monitored asset that is indicative of excessive degradation or impending failure of the monitored asset, and utilizing the selectively chosen data for adaptively recalibrating the empirical model to more accurately monitor asset aging changes or operating condition changes of the monitored asset.
Statistical analysis and Kalman filtering applied to nuclear materials accountancy
Annibal, P.S.
1990-08-01
Much theoretical research has been carried out on the development of statistical methods for nuclear material accountancy. In practice, physical, financial and time constraints mean that the techniques must be adapted to give an optimal performance in plant conditions. This thesis aims to bridge the gap between theory and practice, to show the benefits to be gained from a knowledge of the facility operation. Four different aspects are considered; firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an 'accountancy tank' is investigated. Secondly, an analysis of the calibration data for the same tank is presented, establishing bounds for the error and suggesting a means of reducing them. Thirdly, a plant-specific method of producing an optimal statistic from the input, output and inventory data, to help decide between 'material loss' and 'no loss' hypotheses, is developed and compared with existing general techniques. Finally, an application of the Kalman Filter to materials accountancy is developed, to demonstrate the advantages of state-estimation techniques. The results of the analyses and comparisons illustrate the importance of taking into account a complete and accurate knowledge of the plant operation, measurement system, and calibration methods, to derive meaningful results from statistical tests on materials accountancy data, and to give a better understanding of critical random and systematic error sources. The analyses were carried out on the head-end of the Fast Reactor Reprocessing Plant, where fuel from the prototype fast reactor is cut up and dissolved. However, the techniques described are general in their application. (author)
Kalman filtering applied to a reagent feed system
Griffin, C.D.; Croson, D.V.; Feeley, J.J.
1988-01-01
Using a Kalman filter solves a troublesome measurement noise problem and, at the same time, improves nuclear safety by detecting leaks into the process feed tanks. To demonstrate how this technology of optimal estimation can be exploited, this article presents a systematic plan and example of how a Kalman filter was proven in industrial use on a reagent analyzer. A process to recycle uranium from spent fuel elements uses a reagent stream containing boron to dissolve the fuel. The boron is the neutron poison that prevents a nuclear chain reaction during the uranium dissolution. The purpose of the Kalman filter for this system is to reduce the uncertainty in the boron concentration measurement. The filter also provides incipient fault detection by estimating the unmeasured inflow of any unpoisoned solution entering the feed vessel, which would dilute the boron solution.
The dry filter method for passive filtered venting of the containment
Freis, Daniel; Tietsch, Wolfgang; Obenland, Ralf; Kroes, Bert; Martinsteg, Hans
2013-01-01
Filtered Venting is a mitigative emergency measure to protect the containment from pressure failure in case of a severe accident. Filtered vent systems which are based on the Dry Filter Method (DFM) are proven technology, work completely passive, meet all functional requirements and show excellent performance with respect to filter efficiency. With such a system the release of radioactive fission products to the environment can be effectively minimized. Short and long term land contaminations can be avoided. (orig.)
Method for cleaning the filter pockets of dust gas filter systems
Margraf, A
1975-05-07
The invention deals with a method of cleaning the filter pockets of dust-gas filter systems. A periodic to-and-fro air jet from a scavenging blower produces a pulsed fluttering movement of the filter surface which releases the outer layers of dust. With suitable time control, the charging of the filter pockets with scavenging air to clean the filter material can be carried out immediately upon the pulsed admission.
New prediction methods for collaborative filtering
Hasan BULUT
2016-05-01
Companies, in particular e-commerce companies, aim to increase customer satisfaction, and hence their profits, using recommender systems. Recommender systems are widely used nowadays and provide strategic advantages to the companies that use them. These systems consist of different stages. In the first stage, the similarities between the active user and other users are computed using the user-product ratings matrix. Then, the neighbors of the active user are found from these similarities. In the prediction calculation stage, the similarities computed at the first stage are used to generate the weight vector of the closest neighbors. Each neighbor affects the prediction value by the corresponding value of the weight vector. In this study, we developed two new methods for the prediction calculation stage, which is the last stage of collaborative filtering. The performance of these methods is measured with evaluation metrics used in the literature and compared with other studies in this field.
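A common baseline for the prediction-calculation stage described above, which new prediction methods typically modify, is the weighted-deviation formula; a minimal sketch (the tuple layout of the neighbor data is an assumption of this illustration):

```python
def predict_rating(active_avg, neighbors):
    """Weighted-deviation collaborative-filtering prediction:
        p = r_a_avg + sum_u w_u * (r_u - r_u_avg) / sum_u |w_u|
    where `neighbors` is a list of (similarity, neighbor_rating,
    neighbor_average) tuples for one target item."""
    num = sum(w * (r - avg) for w, r, avg in neighbors)
    den = sum(abs(w) for w, _, _ in neighbors)
    return active_avg if den == 0 else active_avg + num / den
```

Centering each neighbor's rating on that neighbor's own average compensates for users who rate systematically high or low before their deviations are combined.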
A new greedy search method for the design of digital IIR filter
Ranjit Kaur
2015-07-01
A new greedy search method is applied in this paper to design the optimal digital infinite impulse response (IIR) filter. The greedy search method is based on binary successive approximation (BSA) and evolutionary search (ES). The suggested greedy search method optimizes the magnitude response and the phase response simultaneously and also finds the lowest order of the filter. The order of the filter is controlled by a control gene whose value is optimized along with the filter coefficients to obtain the optimum order of the designed IIR filter. The stability constraints of the IIR filter are taken care of during the design procedure. To determine the trade-off relationship between conflicting objectives in the non-inferior domain, the weighting method is exploited. The proposed approach is effectively applied to solve the multiobjective optimization problems of designing digital low-pass (LP), high-pass (HP), bandpass (BP), and bandstop (BS) filters. It has been demonstrated that this technique not only fulfills all types of filter performance requirements, but also finds the lowest order of the filter. The computational experiments show that the proposed approach gives better digital IIR filters than existing evolutionary algorithm (EA) based methods.
Investigation on filter method for smoothing spiral phase plate
Zhang, Yuanhang; Wen, Shenglin; Luo, Zijian; Tang, Caixue; Yan, Hao; Yang, Chunlin; Liu, Mincai; Zhang, Qinghua; Wang, Jian
2018-03-01
Spiral phase plates (SPPs) for generating vortex hollow beams are highly efficient in various applications. However, it is difficult to obtain an ideal spiral phase plate because of its continuously varying helical phase and discontinuous phase step. This paper describes the demonstration of a continuous spiral phase plate using filter methods. The numerical simulations indicate that each filter method, including the spatial-domain filter and the frequency-domain filter, has a unique impact on the surface topography of the SPP and on the optical vortex characteristics. The experimental results reveal that the spatial Gaussian filter method for smoothing the SPP is suitable for the Computer Controlled Optical Surfacing (CCOS) technique and obtains good optical properties.
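In one dimension, the spatial Gaussian filtering referred to above amounts to convolving the phase profile with a normalized Gaussian kernel. A sketch, assuming a 1-D step profile as a stand-in for the SPP's discontinuous phase step (the real plate is of course a 2-D surface):

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete Gaussian weights, normalized to sum to 1."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma=2.0, radius=6):
    """1-D Gaussian convolution with edge clamping."""
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = sum(k[j + radius] * signal[min(max(i + j, 0), n - 1)]
                  for j in range(-radius, radius + 1))
        out.append(acc)
    return out

# An l = 1 SPP has a 0 -> 2*pi phase jump; smoothing rounds off the step.
phase = [0.0] * 20 + [2 * math.pi] * 20
sm = smooth(phase)
```

Far from the step the profile is untouched; near the step the filter produces the continuous transition that the fabrication process can actually follow.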
Particle Filtering Methods for Incorporating Intelligence Updates
2017-03-01
past time steps. 3.2.1 Particle Filtering through Bayesian Bootstrap Sampling. Although SIS helps resolve the computational and complexity issues... variables. This insight was called the Bayesian bootstrap filter, more commonly known as the particle filter. Multiple particles are sampled from an...
Wan Yang
2014-04-01
Full Text Available A variety of filtering methods enable the recursive estimation of system state variables and inference of model parameters. These methods have found application in a range of disciplines and settings, including engineering design and forecasting, and, over the last two decades, have been applied to infectious disease epidemiology. For any system of interest, the ideal filter depends on the nonlinearity and complexity of the model to which it is applied, the quality and abundance of observations being entrained, and the ultimate application (e.g. forecast, parameter estimation, etc.). Here, we compare the performance of six state-of-the-art filter methods when used to model and forecast influenza activity. Three particle filters (a basic particle filter (PF) with resampling and regularization, maximum likelihood estimation via iterated filtering (MIF), and particle Markov chain Monte Carlo (pMCMC)) and three ensemble filters (the ensemble Kalman filter (EnKF), the ensemble adjustment Kalman filter (EAKF), and the rank histogram filter (RHF)) were used in conjunction with a humidity-forced susceptible-infectious-recovered-susceptible (SIRS) model and weekly estimates of influenza incidence. The modeling frameworks, first validated with synthetic influenza epidemic data, were then applied to fit and retrospectively forecast the historical incidence time series of seven influenza epidemics during 2003-2012, for 115 cities in the United States. Results suggest that when using the SIRS model the ensemble filters and the basic PF are more capable of faithfully recreating historical influenza incidence time series, while the MIF and pMCMC do not perform as well for multimodal outbreaks. For forecast of the week with the highest influenza activity, the accuracies of the six model-filter frameworks are comparable; the three particle filters perform slightly better predicting peaks 1-5 weeks in the future; the ensemble filters are more accurate predicting peaks in
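The basic particle filter (PF with resampling) from the comparison can be sketched for a linear-Gaussian toy model; the SIRS dynamics, regularization step, and tuning used in the study are not reproduced, and all parameter values below are illustrative:

```python
import math
import random

def bootstrap_pf(obs, n=500, q=0.5, r=1.0, seed=1):
    """Minimal bootstrap PF for x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]   # prior particles
    means = []
    for y in obs:
        # propagate each particle through the state model
        parts = [x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the Gaussian likelihood of the observation
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # multinomial resampling to fight weight degeneracy
        parts = rng.choices(parts, weights=w, k=n)
    return means

obs = [1.0, 1.2, 0.9, 1.1]     # hypothetical observations
est = bootstrap_pf(obs)        # posterior-mean estimates per time step
```

With observations clustered around 1 and a prior centered at 0, the filtered mean is pulled toward the observed level within a few steps.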
A Kalman filter technique applied for medical image reconstruction
Goliaei, S.; Ghorshi, S.; Manzuri, M. T.; Mortazavi, M.
2011-01-01
Medical images contain information about vital organic tissues inside the human body and are widely used for diagnosis of disease or for surgical purposes. Image reconstruction is essential for medical images in some applications, such as suppression of noise or de-blurring, in order to provide images with better quality and contrast. Due to the vital role of image reconstruction in the medical sciences, reconstruction algorithms with better efficiency and higher speed are desirable. Most algorithms in image reconstruction operate in the frequency domain, the most popular one being filtered back projection. In this paper we introduce a Kalman filter technique which operates in the time domain for medical image reconstruction. Results indicated that as the number of projections increases, for both the normal collected ray sums and the collected ray sums corrupted by noise, the quality of the reconstructed image becomes better in terms of contrast and transparency. It is also seen that as the number of projections increases, the error index decreases.
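The time-domain recursion at the heart of a Kalman filter is compact; a scalar sketch (a random-walk state observed directly, not the projection geometry used in the paper, and with illustrative noise parameters) shows the predict/update cycle and the shrinking error covariance:

```python
def kalman_1d(zs, x0=0.0, p0=1.0, q=0.01, r=0.5):
    """Scalar Kalman filter: random-walk state, direct noisy measurements.

    q is the process-noise variance, r the measurement-noise variance.
    Returns the (estimate, error variance) pair after each measurement.
    """
    x, p = x0, p0
    out = []
    for z in zs:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: blend prediction and measurement
        p = (1 - k) * p           # posterior variance shrinks
        out.append((x, p))
    return out

traj = kalman_1d([2.1, 1.9, 2.0, 2.2])   # measurements around a true level of 2
```

After a few measurements the estimate settles near 2 while the error variance decreases monotonically toward its steady state.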
Comparison of high efficiency particulate filter testing methods
1985-01-01
High Efficiency Particulate Air (HEPA) filters are used for the removal of submicron-size particulates from air streams. In the nuclear industry they are used as an important engineering safeguard to prevent the release of airborne radioactive particulates to the environment. HEPA filters used in the nuclear industry should therefore be manufactured and operated under strict quality control. There are three levels of testing HEPA filters: i) testing of the filter media; ii) testing of the assembled filter, including filter media and filter housing; and iii) on-site testing of the complete filter installation before putting it into operation and later for the purpose of periodic control. A co-ordinated research programme on particulate filter testing methods was taken up by the Agency and contracts were awarded to the Member Countries Belgium, German Democratic Republic, India and Hungary. The investigations carried out by the participants of the present co-ordinated research programme include the results of the most frequently used HEPA filter testing methods, for filter medium tests, rig tests and in-situ tests. Most of the experiments were carried out at ambient temperature and humidity, but indications were given to extend the investigations to elevated temperature and humidity in the future for the purpose of testing the performance of HEPA filters under severe conditions. A major conclusion of the co-ordinated research programme was that it was not possible to recommend one method as a reference method for in-situ testing of high efficiency particulate air filters. Most of the present conventional methods are adequate for current requirements. The reasons why no method could be recommended were multiple, ranging from economic aspects, through incompatibility of materials, to national regulations.
Stock price estimation using ensemble Kalman Filter square root method
Karya, D. F.; Katias, P.; Herlambang, T.
2018-04-01
Shares are securities evidencing the ownership or equity of an individual or corporation in an enterprise, especially in public companies whose activity is stock trading. Investment in stock trading is likely to be the option of investors, as it offers attractive profits. In determining a choice of safe investment in stocks, investors require a way of assessing the prices of the stocks to buy so as to help optimize their profits. An effective method of analysis which reduces the risk the investors may bear is predicting or estimating the stock price. Estimation is carried out because a problem can sometimes be solved by using previous information or data related or relevant to the problem. The contribution of this paper is that the estimates of stock prices in the high, low, and close categories can be utilized in investors' consideration for decision making in investment. In this paper, stock price estimation was made by using the Ensemble Kalman Filter Square Root method (EnKF-SR) and the Ensemble Kalman Filter method (EnKF). The simulation results showed that the estimation obtained by applying the EnKF method was more accurate than that by the EnKF-SR, with an estimation error of about 0.2% by EnKF and an estimation error of 2.6% by EnKF-SR.
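A single stochastic EnKF analysis step can be sketched for a scalar state; the ensemble values, observation, and perturbed-observation variant below are illustrative assumptions, not the stock-price model or the square-root variant evaluated in the paper:

```python
import random

def enkf_analysis(ensemble, y, r=1.0, seed=7):
    """One stochastic EnKF analysis step (scalar state, perturbed observations)."""
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)   # ensemble variance
    k = var / (var + r)                                      # Kalman gain
    # each member assimilates its own perturbed copy of the observation
    return [x + k * (y + rng.gauss(0.0, r ** 0.5) - x) for x in ensemble]

prior = [9.0, 10.0, 11.0, 10.5, 9.5]     # ensemble spread around 10
post = enkf_analysis(prior, y=12.0)      # observation above the prior mean
post_mean = sum(post) / len(post)
```

The analysis pulls the ensemble mean toward the observation by an amount set by the ratio of ensemble variance to observation-error variance; a square-root filter replaces the perturbed observations with a deterministic ensemble transform.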
Methods for Signal Filtering in NMR Tomography
Gescheidtová, E.; Kubásek, R.; Bartušek, Karel
2006-01-01
Vol. 4, No. 1 (2006), 3404:1-10 ISSN 1738-9682 Institutional research plan: CEZ:AV0Z20650511 Keywords: FID signal * pre-emphasis * gradient pulse * bank of digital filters * threshold Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering
A New Method for the Deposition of Metallic Silver on Porous Ceramic Water Filters
Kathryn N. Jackson
2018-01-01
Full Text Available A new method of silver application to a porous ceramic water filter used for point-of-use water treatment is developed. We evaluated filter performance for filters manufactured by the conventional method of painting an aqueous suspension of silver nanoparticles onto the filter and filters manufactured with a new method that applies silver nitrate to the clay-water-sawdust mixture prior to pressing and firing the filter. Filters were evaluated using miscible displacement flow-through experiments with pulse and continuous-feed injections of E. coli. Flow characteristics were quantified by tracer experiments using [3H]H2O. Experiments using pulse injections of E. coli showed similar performance in breakthrough curves between the two application methods. Long-term challenge tests performed with a continuous feed of E. coli and growth medium resulted in similar log removal rates, but the removal rate by nanosilver filters decreased over time. Silver nitrate filters provided consistent removal with lower silver levels in the effluent and effective bacterial disinfection. Results from continued use with synthetic groundwater over 4 weeks, with a pulse injection of E. coli at 2 and 4 weeks, support similar conclusions—nanosilver filters perform better initially, but after 4 weeks of use, nanosilver filters suffer larger decreases in performance. Results show that including silver nitrate in the mixing step may effectively reduce costs, improve silver retention in the filter, increase effective lifespan, and maintain effective pathogen removal while also eliminating the risk of exposure to inhalation of silver nanoparticles by workers in developing-world filter production facilities.
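The log removal rates reported in such challenge tests are simple to compute from influent and effluent bacterial counts; a minimal sketch (the counts below are illustrative, not data from the study):

```python
import math

def log_removal(influent_cfu, effluent_cfu):
    """Log10 removal value (LRV), the standard score for filter disinfection.

    LRV 1 = 90% removal, LRV 2 = 99%, LRV 3 = 99.9%, and so on.
    """
    return math.log10(influent_cfu / effluent_cfu)

# Hypothetical E. coli challenge: 10^6 CFU in, 10^4 CFU out -> 99% removal
lrv = log_removal(1e6, 1e4)
```

Tracking LRV over weeks of use is how the decline of the nanosilver filters relative to the silver nitrate filters would be quantified.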
Median Filtering Methods for Non-volcanic Tremor Detection
Damiao, L. G.; Nadeau, R. M.; Dreger, D. S.; Luna, B.; Zhang, H.
2016-12-01
Various properties of median filtering over time and space are used to address challenges posed by the Non-volcanic tremor detection problem. As part of a "Big-Data" effort to characterize the spatial and temporal distribution of ambient tremor throughout the Northern San Andreas Fault system, continuous seismic data from multiple seismic networks with contrasting operational characteristics and distributed over a variety of regions are being used. Automated median filtering methods that are flexible enough to work consistently with these data are required. Tremor is characterized by a low-amplitude, long-duration signal-train whose shape is coherent at multiple stations distributed over a large area. There are no consistent phase arrivals or mechanisms in a given tremor's signal and even the durations and shapes among different tremors vary considerably. A myriad of masquerading noise, anthropogenic and natural-event signals must also be discriminated in order to obtain accurate tremor detections. We present here results of the median methods applied to data from four regions of the San Andreas Fault system in northern California (Geysers Geothermal Field, Napa, Bitterwater and Parkfield) to illustrate the ability of the methods to detect tremor under diverse conditions.
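The core operation, a sliding-window median that suppresses short impulsive arrivals while following a longer-duration envelope, can be sketched in one dimension; the window width and trace values are illustrative, and the paper's multi-station, spatio-temporal median schemes are not reproduced:

```python
from statistics import median

def median_filter(x, width=5):
    """Sliding-window median; robust to short spikes (e.g. local earthquakes)
    that would dominate a mean-based smoother."""
    h = width // 2
    return [median(x[max(i - h, 0): i + h + 1]) for i in range(len(x))]

# Hypothetical amplitude trace with one impulsive spike at index 3
trace = [0.0, 0.1, 0.0, 9.0, 0.1, 0.0, 0.1]
clean = median_filter(trace)
```

The spike occupies fewer than half the window, so the median rejects it entirely, whereas a moving average would smear it across neighboring samples.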
Research based on matlab method of digital trapezoidal shaping filter
Zhou Qinghua; Zhang Ruanyu; Li Taihua
2008-01-01
In order to develop digital shaping systems quickly and conveniently, the paper presents a method of optimizing the trapezoidal shaping filter's parameters by using MATLAB, and discusses the effects of the parameters on the shaping result. (authors)
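Although the paper works in MATLAB, the trapezoidal shaper itself is a short recursion; below is a sketch in Python of the moving-average form applied to a step input. The pole-zero/deconvolution stage needed for real exponential detector pulses is omitted, and the rise/flat lengths are illustrative:

```python
def trapezoid_shape(x, rise=3, flat=2):
    """Moving-average trapezoidal shaper: a step input yields a trapezoid
    with the given rise length and flat-top length (in samples)."""
    L, G = rise, flat

    def xi(i):
        return x[i] if 0 <= i < len(x) else 0.0

    out, acc = [], 0.0
    for n in range(len(x)):
        # four-term difference, accumulated into a running sum
        acc += xi(n) - xi(n - L) - xi(n - L - G) + xi(n - 2 * L - G)
        out.append(acc)
    return out

step = [1.0] * 16
y = trapezoid_shape(step)   # rises over 3 samples, flat top, falls back to 0
```

Varying `rise` and `flat` is exactly the kind of parameter sweep the paper automates: longer rise improves noise averaging, while the flat top must exceed the detector's charge-collection time.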
An improved filtered spherical harmonic method for transport calculations
Ahrens, C.; Merton, S.
2013-01-01
Motivated by the work of R. G. McClarren, C. D. Hauck, and R. B. Lowrie on a filtered spherical harmonic method, we present a new filter for such numerical approximations to the multi-dimensional transport equation. In several test problems, we demonstrate that the new filter produces results with significantly less Gibbs phenomena than the filter used by McClarren, Hauck and Lowrie. This reduction in Gibbs phenomena translates into propagation speeds that more closely match the correct propagation speed and solutions that have fewer regions where the scalar flux is negative. (authors)
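The Gibbs-reduction idea can be illustrated on a Fourier series rather than spherical harmonics: damping the expansion coefficients with a smooth spectral filter (here a Lanczos sigma factor, an illustrative stand-in, not the filter proposed in the paper) reduces the overshoot at a discontinuity:

```python
import math

def square_partial(x, N, filtered=False):
    """Fourier partial sum of a unit square wave, optionally sigma-filtered."""
    total = 0.0
    for k in range(1, N + 1, 2):          # the square wave has odd harmonics only
        sigma = 1.0
        if filtered:
            t = k * math.pi / (N + 1)
            sigma = math.sin(t) / t        # Lanczos sigma factor
        total += sigma * 4 / math.pi * math.sin(k * x) / k
    return total

N = 29
xs = [i * 0.001 for i in range(1, 500)]   # sample just after the jump at x = 0
over_raw = max(square_partial(x, N) for x in xs)
over_fil = max(square_partial(x, N, filtered=True) for x in xs)
```

The unfiltered sum overshoots the true value of 1 by the familiar ~18% Gibbs ringing; the filtered sum stays close to 1, mirroring how filtering a P_N expansion suppresses oscillations in the scalar flux.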
Evaluation of harmonic detection methods for active power filter applications
Asiminoaei, Lucian; Blaabjerg, Frede; Hansen, Steffan
2005-01-01
In the attempt to minimize the harmonic disturbances created by non-linear loads, active power filters are chosen to improve the filtering efficiency and to solve many issues existing with classical passive filters. One of the key points for a proper implementation of an active...... theories. Then, the work here proposes a simulation setup that decouples the harmonic reference generator from the active filter model and its controller. In this way the selected methods can be equally analyzed and compared with respect to their performance, which helps anticipate possible
Method for filtering radon from a gas system
Sowinski, R.F.
1992-01-01
This patent describes a method of filtering, adjacent to an end user-customer's residence, or business in which at least a single gas appliance is located, a natural gas stream in which benz-a-anthracene has been concentrated at sufficient levels to be a health threat in a natural gas gathering and distributing network. It comprises introducing the natural gas stream to a filter selected from a group that includes impingement, passing the filtered natural gas stream to the customer's gas appliance wherein safe use of the energy associated with the stream occurs, periodically and safely removing the filter for disposing of captured benz-a-anthracene, inserting a new filter in place of the removed filter of step
Manufacturing method of the bronze metallic filters
Krivij, N.; Suwardjo, W.; Garcia, L.; Cores, A.; Formoso, A.
1997-01-01
Granulated (spherical) powders of bronze have been produced by spraying molten metal with gas at high pressure in the experimental industrial installation belonging to the Metallurgical Research Centre (CIME) in Havana City. A physical-chemical and technological characterisation of the spherical bronze powder has been carried out and the optimum parameters have been determined from these powders. The mechanical properties of these filters allow them to compete satisfactorily in applications such as motor transport, industry and agriculture. (Author)
Multivariate localization methods for ensemble Kalman filtering
S. Roh; M. Jun; I. Szunyogh; M. G. Genton
2015-01-01
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of ...
Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.
2016-06-01
This research paper presents the design of a shunt Passive Power Filter (PPF) in a Hybrid Series Active Power Filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. This novel approach consists of the estimation, detection and classification of the signals. The proposed method is applied to estimate, detect and classify power quality (PQ) disturbances such as harmonics. This work deals with three methods: harmonic detection through the wavelet transform method, harmonic estimation by the Kalman filter algorithm, and harmonic classification by the decision tree method. Among the different types of mother wavelets in the wavelet transform method, db8 is selected as the suitable mother wavelet because of its strength on transient response and its damped oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the Hybrid Series Active Power Filter (HSAPF) based on Instantaneous Reactive Power Theory (IRPT). The efficacy of the proposed method is verified in the MATLAB/SIMULINK domain as well as with an experimental setup. The obtained results confirm the superiority of the proposed methodology over FFT analysis. This newly proposed PPF is used to make the conventional HSAPF more robust and stable.
Comparative study of in-situ filter test methods
Marshall, M.; Stevens, D.C.
1981-01-01
Available methods of testing high efficiency particulate aerosol (HEPA) filters in-situ have been reviewed. In order to understand the relationship between the results produced by different methods, a selection has been compared. Various pieces of equipment for generating and detecting aerosols have been tested and their suitability assessed. Condensation-nuclei, DOP (di-octyl phthalate) and sodium-flame in-situ filter test methods have been studied, using the 500 cfm (9000 m3/h) filter test rig at Harwell and in the field. Both the sodium-flame and DOP methods measure the penetration through leaks and filter material. However, the measured penetration through filtered leaks depends on the aerosol size distribution and the detection method. Condensation-nuclei test methods can only be used to measure unfiltered leaks, since condensation nuclei have a very low penetration through filtered leaks. A combination of methods would enable filtered and unfiltered leaks to be measured. A condensation-nucleus counter using n-butyl alcohol as the working fluid has the advantage of being able to detect any particle up to 1 μm in diameter, including DOP, and so could be used for this purpose. A single-particle counter has not been satisfactory because of interference from particles leaking into systems under extract, particularly downstream of filters, and because the concentration of the input aerosol has to be severely limited. The sodium-flame method requires a skilled operator and may cause safety and corrosion problems. The DOP method using a total light-scattering detector has so far been the most satisfactory. It is fairly easy to use, measures reasonably low values of penetration and gives rapid results. DOP has had no adverse effect on HEPA filters over a long series of tests.
Study of different filtering techniques applied to spectra from airborne gamma spectrometry
Wilhelm, Emilien; Gutierrez, Sebastien; Reboli, Anne; Menard, Stephanie; Nourreddine, Abdel-Mjid [Commissariat a l' Energie Atomique et aux energies alternatives - CEA, DAM, DIF F-91297 Arpajon (France); Arbor, Nicolas [Institut Pluridisciplinaire Hubert Curien, UMR 7178 Universite de Strasbourg-CNRS, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)
2015-07-01
One of the features of spectra obtained by airborne gamma spectrometry is low counting statistics, due to the short acquisition time (1 s) and the large source-detector distance (40 m). This leads to considerable uncertainty in radionuclide identification and in the determination of their respective activities from the windows method recommended by the IAEA, especially for low-level radioactivity. The present work compares the results obtained with different filters, in terms of the errors of the filtered spectra, with the window method and over the whole gamma energy range. The results are used to determine which filtering technique is the most suitable in combination with a method for total stripping of the spectrum. (authors)
Applied Formal Methods for Elections
Wang, Jian
development time, or second dynamically, i.e. monitoring while an implementation is used during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique...... process. The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of the voter behavior in polling stations....
Robotic fish tracking method based on suboptimal interval Kalman filter
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock, and other fields. The robotic fish, a type of AUV, has become a popular application in intelligent education, civil and military fields, etc. In the nonlinear tracking analysis of robotic fish, it was found that the interval Kalman filter algorithm contains all possible filter results, but the range is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization algorithm: the suboptimal interval Kalman filter. The suboptimal interval Kalman filter scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state equation and measurement equation more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte-Carlo simulation results show that the trajectory estimated by the suboptimal interval Kalman filter algorithm is better than that of the interval Kalman filter method and the standard filter method.
Removal method of radium in mine water by filter sand
Taki, Tomihiro; Naganuma, Masaki
2003-01-01
Trace radium is contained in the mine water from the old mine road in the Ningyo-Toge Environmental Engineering Center, JNC. We observed that filter sand clad with hydrated manganese oxide adsorbed radium from the mine water safely for a long time. The removal of radium by filter sand clad with hydrated manganese oxide was studied. The results showed that radium was removed from the mine water continuously and over a long period by adding sodium hypochlorite solution and passing the water through the filter sand clad with hydrated manganese oxide. Only sodium hypochlorite solution was used; when an excess of it was added, the residual chlorine served for chlorine disinfection. Commercially available filter sand clad with hydrated manganese oxide can remove radium from the mine water. The removal efficiency of radium is the same as that of the radium coprecipitation method with added barium chloride. The cost is much lower than that of the ordinary methods. The amount of waste decreased to about 1/20 of that of the coprecipitation method. (S.Y.)
Applied Formal Methods for Elections
Wang, Jian
Information technology is changing the way elections are organized. Technology renders the electoral process more efficient, but things could also go wrong: Voting software is complex, it consists of over thousands of lines of code, which makes it error-prone. Technical problems may cause delays...... bounded model-checking and satisfiability modulo theories (SMT) solvers can be used to check these criteria. Voter Experience: Technology profoundly affects the voter experience. These effects need to be measured and the data should be used to make decisions regarding the implementation of the electoral...... at polling stations, or even delay the announcement of the final result. This thesis describes a set of methods to be used, for example, by system developers, administrators, or decision makers to examine election technologies, social choice algorithms and voter experience. Technology: Verifiability refers...
Motion estimation using point cluster method and Kalman filter.
Senesh, M; Wolf, A
2009-05-01
The most frequently used method in three-dimensional human gait analysis involves placing markers on the skin of the analyzed segment. This introduces a significant artifact, which strongly influences the bone position and orientation and the joint kinematic estimates. In this study, we tested and evaluated the effect of adding a Kalman filter procedure to the previously reported point cluster technique (PCT) in the estimation of rigid body motion. We demonstrated the procedures by motion analysis of a compound planar pendulum from indirect opto-electronic measurements of markers attached to an elastic appendage that is restrained to slide along the rigid body's long axis. The elastic frequency is close to the pendulum frequency, as in the biomechanical problem, where the soft tissue frequency content is similar to the actual movement of the bones. Comparison of the real pendulum angle to that obtained by several estimation procedures (PCT, Kalman filter followed by PCT, and low-pass filter followed by PCT) enables evaluation of the accuracy of the procedures. When comparing the maximal amplitude, no effect was noted from adding the Kalman filter; however, a closer look at the signal revealed that the estimated angle based only on the PCT method was very noisy with fluctuations, while the estimated angle based on the Kalman filter followed by the PCT was a smooth signal. It was also noted that the instantaneous frequencies obtained from the estimated angle based on the PCT method are more dispersed than those obtained from the estimated angle based on the Kalman filter followed by the PCT method. Addition of a Kalman filter to the PCT method in the estimation procedure of rigid body motion results in a smoother signal that better represents the real motion, with less signal distortion than when using a digital low-pass filter. Furthermore, it can be concluded that adding a Kalman filter to the PCT procedure substantially reduces the dispersion of the maximal and minimal
Filter assessment applied to analytical reconstruction for industrial third-generation tomography
Velo, Alexandre F.; Martins, Joao F.T.; Oliveira, Adriano S.; Carvalho, Diego V.S.; Faria, Fernando S.; Hamada, Margarida M.; Mesquita, Carlos H., E-mail: afvelo@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
Multiphase systems are structures that contain a mixture of solids, liquids and gases inside a chemical reactor or pipes in a dynamic process. These systems are found in the chemical, food, pharmaceutical and petrochemical industries. The gamma-ray computed tomography (CT) system has been applied to visualize the distribution of multiphase systems without interrupting production. CT systems have been used to improve design, operation and troubleshooting of industrial processes. Computed tomography for multiphase processes is being developed at several laboratories. It is well known that scanning systems demand high processing time and a limited set of data projections and views to obtain an image. Because of this, the image quality is dependent on the number of projections, number of detectors, acquisition time and reconstruction time. A phantom containing air, iron and aluminum was used on the third-generation industrial tomograph with a 662 keV (137Cs) radioactive source. The Filtered Back Projection algorithm was applied to reconstruct the images. An efficient tomograph depends on the image quality; thus the objective of this research was to apply different types of filters in the analytical algorithm and compare them using the figure of merit denominated root mean squared error (RMSE); the filter that presents the lowest RMSE value has the best quality. In this research, five types of filters were used: the Ram-Lak, Shepp-Logan, Cosine, Hamming and Hann filters. As a result, all filters presented low RMSE values, which means the filters used have low standard deviation compared to the mass absorption coefficient; however, the Hann filter presented better RMSE and CNR than the others. (author)
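The windowed ramp filters compared in the paper are easy to tabulate in the frequency domain, along with the RMSE figure of merit; the bin count below is illustrative, and the Shepp-Logan and Cosine windows are omitted for brevity:

```python
import math

def ramp_with_window(n, window="ram-lak"):
    """Frequency response f * W(f) of common FBP filters, f normalized to [0, 1]."""
    out = []
    for i in range(n):
        f = i / (n - 1)                          # normalized frequency
        if window == "ram-lak":
            w = 1.0                              # pure ramp: sharpest, noisiest
        elif window == "hann":
            w = 0.5 * (1 + math.cos(math.pi * f))   # rolls off to 0 at Nyquist
        elif window == "hamming":
            w = 0.54 + 0.46 * math.cos(math.pi * f)
        else:
            raise ValueError(window)
        out.append(f * w)
    return out

def rmse(a, b):
    """Root mean squared error, the figure of merit used in the paper."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

ram = ramp_with_window(64)
hann = ramp_with_window(64, "hann")
```

The Hann window suppresses the high frequencies that amplify noise in the back projection, which is consistent with it scoring the best RMSE/CNR in the study.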
Filter-based reconstruction methods for tomography
Pelt, D.M.
2016-01-01
In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-12-03
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
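The Schur-product localization described above can be sketched with the widely used Gaspari-Cohn fifth-order taper; the covariance values, distances, and half-width below are illustrative, and the paper's multivariate constructions are not reproduced:

```python
def gaspari_cohn(z):
    """Gaspari-Cohn 5th-order compactly supported correlation function.

    z = distance / half-width; the function is 1 at z = 0 and 0 for z >= 2.
    """
    z = abs(z)
    if z >= 2.0:
        return 0.0
    if z <= 1.0:
        return -z**5 / 4 + z**4 / 2 + 5 * z**3 / 8 - 5 * z**2 / 3 + 1
    return (z**5 / 12 - z**4 / 2 + 5 * z**3 / 8 + 5 * z**2 / 3
            - 5 * z + 4 - 2 / (3 * z))

def localize(cov, dist, half_width):
    """Schur (element-wise) product of a sample covariance with the taper."""
    n = len(cov)
    return [[cov[i][j] * gaspari_cohn(dist[i][j] / half_width) for j in range(n)]
            for i in range(n)]

# Hypothetical 3-point sample covariance and inter-point distances
cov = [[1.0, 0.6, 0.3], [0.6, 1.0, 0.6], [0.3, 0.6, 1.0]]
dist = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
loc = localize(cov, dist, half_width=0.5)   # taper support ends at distance 1.0
```

With this tight half-width every off-diagonal entry lies beyond the taper's support and is zeroed, removing the spurious long-range correlations that small ensembles produce, while the diagonal variances are untouched.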
Multivariate localization methods for ensemble Kalman filtering
Roh, S.
2015-05-08
In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
(no author listed)
2007-01-01
Representing earthquake ground motion as a time-varying ARMA model, the instantaneous spectrum is determined entirely by the time-varying coefficients of the corresponding ARMA model. In this paper, an unscented Kalman filter is applied to estimate the time-varying coefficients. The comparison between the estimation results of the unscented Kalman filter and Kalman filter methods shows that the unscented Kalman filter can represent the distribution of the spectral peaks in the time-frequency plane more precisely than the Kalman filter, and its time and frequency resolution is finer, which ensures a better ability to track the local properties of earthquake ground motions and to identify systems with nonlinearity or abruptness. Moreover, the estimation results of ARMA models with different orders indicate that the theoretical frequency resolving power of ARMA models, which was usually ignored in former studies, has a great effect on the estimation precision of the instantaneous spectrum, and it should be taken as one of the key factors in order selection of ARMA models.
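As a rough illustration of the idea of tracking time-varying model coefficients with Kalman filtering, the following sketch estimates a slowly drifting AR(1) coefficient with a plain linear Kalman filter (not the unscented variant used in the paper); the function name and default parameters are invented for the example.

```python
import numpy as np

def track_ar1_coefficient(y, q=1e-4, r=1.0):
    """Scalar Kalman filter estimating a slowly drifting AR(1) coefficient
    a_t in y_t = a_t * y_{t-1} + e_t.
    q: random-walk variance assumed for a_t; r: observation noise variance."""
    a_hat, p = 0.0, 1.0
    estimates = []
    for t in range(1, len(y)):
        p += q                        # predict: a_t follows a random walk
        h = y[t - 1]                  # measurement matrix is the previous sample
        k = p * h / (h * h * p + r)   # Kalman gain
        a_hat += k * (y[t] - h * a_hat)
        p *= (1.0 - k * h)
        estimates.append(a_hat)
    return np.array(estimates)
```

The same recursion generalizes to a full time-varying ARMA(p, q) model by stacking the coefficients into a state vector and the lagged samples into the measurement row.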
Monte Carlo method applied to medical physics
Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.
2000-01-01
The main application of the Monte Carlo method in medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other applications: optimisation of a neutron field for Boron Neutron Capture Therapy and optimisation of a filter for a beam tube serving several purposes. The long computation time of Monte Carlo calculations - the main barrier to their intensive use - is being overcome by faster and cheaper computers. (author)
Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua
2017-05-01
With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influence weights of the threat indicators, and determines information system risk levels by combining them with state estimation theory. To improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced into it: by clustering all particles and operating on the cluster centroids as representatives, the amount of calculation is reduced. Empirical results indicate that the method can reasonably capture the mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
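The k-means particle-reduction step can be sketched as follows. This is a hypothetical 1-D illustration, not the authors' implementation; `cluster_particles` and its arguments are invented for the example. Each centroid inherits the summed weight of its member particles, so the compressed set still represents the same distribution approximately.

```python
import numpy as np

def cluster_particles(particles, weights, k, iters=20):
    """Compress a weighted 1-D particle set to k centroids with weighted
    k-means; each centroid keeps the total weight of its members."""
    # Deterministic spread initialization over the particle range.
    centroids = np.quantile(particles, np.linspace(0.0, 1.0, k))
    labels = np.zeros(len(particles), dtype=int)
    for _ in range(iters):
        # Assign each particle to its nearest centroid.
        labels = np.argmin(np.abs(particles[:, None] - centroids[None, :]), axis=1)
        # Move each centroid to the weighted mean of its members.
        for j in range(k):
            m = labels == j
            if m.any():
                centroids[j] = np.average(particles[m], weights=weights[m])
    w_out = np.array([weights[labels == j].sum() for j in range(k)])
    return centroids, w_out
```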
Variance-to-mean method generalized by linear difference filter technique
Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji
1998-01-01
The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To extend the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that the higher-order filter becomes necessary with increasing rate of power variation.
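The Feynman-α statistic and the effect of a difference filter can be illustrated with a short sketch. These are not the authors' formulae: the C(2k, k) rescaling below assumes uncorrelated gate counts and is only a first-order illustration of why differencing suppresses a slow power drift.

```python
import numpy as np
from math import comb

def feynman_y(counts):
    """Variance-to-mean ratio minus one (Feynman-Y) of gated neutron counts."""
    return np.var(counts, ddof=1) / np.mean(counts) - 1.0

def feynman_y_filtered(counts, order=1):
    """Feynman-Y computed on the k-th order differences of the count sequence,
    which suppresses a slow power drift. For i.i.d. data the variance of the
    k-th difference is C(2k, k) times the raw variance, hence the rescaling."""
    d = np.diff(counts, n=order)
    scale = comb(2 * order, order)
    return np.var(d, ddof=1) / (scale * np.mean(counts)) - 1.0
```

On stationary Poisson counts both estimators are near zero; adding a linear drift inflates the raw Y strongly while the differenced version stays close to its drift-free value.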
In-plane Material Filters for the Discrete Material Optimization Method
Sørensen, Rene; Lund, Erik
2015-01-01
This paper presents in-plane material filters for the Discrete Material Optimization method used for optimizing laminated composite structures. The filters make it possible for engineers to specify a minimum length scale which governs the minimum size of areas with constant material continuity. Consequently, engineers can target the available production methods, and thereby increase manufacturability, while the optimizer is free to determine which material to apply together with an optimum location, shape, and size of these areas with constant material continuity. By doing so, engineers no longer […] Because the projection filter is a non-linear function of the design variables, the projected variables have to be re-scaled in a final so-called normalization filter. This is done to prevent the optimizer from creating superior, but non-physical, pseudo-materials. The method is demonstrated on a series […]
Adaptive Kalman Filter Applied to Vision Based Head Gesture Tracking for Playing Video Games
Mohammadreza Asghari Oskoei
2017-11-01
This paper proposes an adaptive Kalman filter (AKF) to improve the performance of a vision-based human-machine interface (HMI) applied to a video game. The HMI identifies head gestures and decodes them into corresponding commands. Face detection and feature tracking algorithms are used to detect the optical flow produced by head gestures. Such approaches often fail due to changes in head posture, occlusion and varying illumination. The adaptive Kalman filter is applied to estimate motion information and reduce the effect of missing frames in a real-time application. Failure in head gesture tracking eventually leads to malfunctioning game control, reducing the scores achieved, so the performance of the proposed vision-based HMI is examined using a game scoring mechanism. The experimental results show that the proposed interface has a good response time, and the adaptive Kalman filter improves the game scores by ten percent.
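A minimal innovation-based adaptive Kalman filter in the spirit of the one described can be sketched as follows. This is a generic scalar random-walk model whose measurement-noise variance R is re-estimated from recent innovations; all names, defaults, and the window heuristic are illustrative, not the paper's filter.

```python
import numpy as np

def adaptive_kf(measurements, q=0.01, r0=1.0, window=10):
    """1-D random-walk Kalman filter that re-estimates the measurement-noise
    variance R from a sliding window of innovations (innovation-based AKF)."""
    x, p, r = measurements[0], 1.0, r0
    innovations, estimates = [], []
    for z in measurements[1:]:
        p += q                          # predict step (random-walk state)
        nu = z - x                      # innovation
        innovations.append(nu)
        if len(innovations) >= window:  # adapt R using E[nu^2] = P + R
            c = np.mean(np.square(innovations[-window:]))
            r = max(c - p, 1e-6)
        k = p / (p + r)                 # Kalman gain
        x += k * nu
        p *= (1.0 - k)
        return_estimate = x
        estimates.append(return_estimate)
    return np.array(estimates), r
```

When measurement quality degrades (e.g. a missed detection producing an outlier), the windowed innovation variance grows, R inflates, and the filter automatically leans more on its prediction.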
Parodi, Yesenia Lacouture
2008-01-01
Several approaches to render binaural signals through loudspeakers have been proposed previously. Some studies have focused on the optimum loudspeaker arrangement while others have proposed efficient filters. However, to our knowledge, the identification of optimal parameters for inverse methods ap[…] loudspeaker arrangements. Least-squares approximations in the frequency and time domains are evaluated along with a crosstalk canceler based on a minimum-phase approximation. Filter parameters, such as length and regularization, are varied and simulated for different span and elevation angles.
Adaptive Subband Filtering Method for MEMS Accelerometer Noise Reduction
Piotr PIETRZAK
2008-12-01
Silicon microaccelerometers can be considered as an alternative to high-priced piezoelectric sensors. Unfortunately, the relatively high noise floor of commercially available MEMS (Micro-Electro-Mechanical Systems) sensors limits the possibility of their usage in condition monitoring systems for rotating machines. The solution to this problem is the method of signal filtering described in the paper. It is based on adaptive subband filtering employing an Adaptive Line Enhancer. For filter weight adaptation, two novel algorithms have been developed. They are based on the NLMS algorithm. Both of them significantly simplify its software and hardware implementation and accelerate the adaptation process. The paper also presents the software (Matlab) and hardware (FPGA) implementations of the proposed noise filter. In addition, the results of the performed tests are reported. They confirm the high efficiency of the solution.
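An Adaptive Line Enhancer built on NLMS can be sketched as follows. This is the generic textbook form, not the paper's two novel adaptation algorithms or its FPGA implementation; `n_taps`, `delay`, and `mu` are assumed parameters.

```python
import numpy as np

def ale_nlms(x, n_taps=32, delay=1, mu=0.5, eps=1e-8):
    """Adaptive Line Enhancer: an NLMS filter predicts the current sample
    from a delayed window of the input; the prediction retains the
    correlated (periodic) component while the residual is mostly broadband
    noise, so the filter output is the noise-reduced signal."""
    w = np.zeros(n_taps)
    enhanced = np.zeros_like(x)
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]  # delayed regressor vector
        y = w @ u                                  # predicted (enhanced) sample
        e = x[n] - y                               # prediction error
        w += mu * e * u / (u @ u + eps)            # NLMS weight update
        enhanced[n] = y
    return enhanced
```

The delay decorrelates the broadband noise between the regressor and the target, which is what lets the adaptive predictor lock onto the narrowband machine-vibration components.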
Filtering of SPECT reconstructions made using Bellini's attenuation correction method
Glick, S.J.; Penney, B.C.; King, M.A.
1991-01-01
This paper evaluates a three-dimensional (3D) Wiener filter which is used to restore SPECT reconstructions made using Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filters: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filter. A simulation study is used to compare the four filtering methods. An approximation to a clinical liver-spleen study was used as the source distribution, and an algorithm which accounts for the depth- and distance-dependent blurring in SPECT was used to compute noise-free projections. To study the effect of filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known, but random, location within the liver. Projection sets for ten tumor locations were computed and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first and then filtered with the 3D Wiener. The criteria used for comparison were normalized mean square error (NMSE), cold spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared to the other methods of processing. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, which were noticeably higher than that obtained with 1D Butterworth smoothing.
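A frequency-domain Wiener deconvolution of the kind evaluated here can be sketched in one dimension; the 2-D and 3-D filters in the study follow the same formula applied per frequency bin. The `noise_power`/`signal_power` arguments stand in for the spectral estimates a real restoration would use, and all names are illustrative.

```python
import numpy as np

def wiener_filter_1d(blurred, psf, noise_power, signal_power):
    """Frequency-domain Wiener deconvolution: G = H* / (|H|^2 + N/S),
    where H is the PSF transfer function and N/S the noise-to-signal
    power ratio (here taken as a flat constant for simplicity)."""
    H = np.fft.fft(psf, n=len(blurred))
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power / signal_power)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * G))
```

The N/S term regularizes the inversion: where the PSF response is strong the filter approaches the exact inverse 1/H, and where it is weak the gain rolls off instead of amplifying noise.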
B. Kuldeep
2015-06-01
Fractional calculus has recently been identified as a very important mathematical tool in the field of signal processing. Digital filters designed by fractional derivatives give a more accurate frequency response in the prescribed frequency region. Digital filters are a most important part of multi-rate filter bank systems. In this paper, an improved method based on fractional derivative constraints is presented for the design of a two-channel quadrature mirror filter (QMF) bank. The design problem is formulated as minimization of the L2 error of the filter bank transfer function in the passband, stopband interval and at the quadrature frequency, and then the Lagrange multiplier method with fractional derivative constraints is applied to solve it. The proposed method is then successfully applied to the design of a two-channel QMF bank with higher-order filter taps. The performance of the QMF bank design is then examined through the study of various parameters such as passband error, stopband error, transition band error, peak reconstruction error (PRE), and stopband attenuation (As). It is found that a good design can be obtained by changing the number and values of the fractional derivative constraint coefficients.
Wu Jingqin.
1989-01-01
Yang Chizhong filtering and inferential measurement is a new method for variable statistics of ore deposits. In order to apply this theory to estimate uranium ore reserves under circumstances of regular or irregular prospecting grids, small ore bodies, few sampling points, and complex occurrence, the author has used this method to estimate the ore reserves in five ore bodies of two deposits and achieved satisfactory results. It is demonstrated that, compared with the traditional block measurement method, this method is simple and clear in formulation, convenient in application, rapid in calculation, accurate in results, less expensive, and offers high economic benefits. The procedure and experience in the application of this method and a preliminary evaluation of its results are described
Angland, P.; Haberberger, D.; Ivancic, S. T.; Froula, D. H.
2017-01-01
Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on a minimization of the χ2 test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
C. Guo
2017-07-01
Determining the attitude of a satellite at the time of imaging, and then establishing the mathematical relationship between image points and ground points, is essential in high-resolution remote sensing image mapping. A star tracker is insensitive to high-frequency attitude variation due to measurement noise and satellite jitter, but the low-frequency attitude motion can be determined with high accuracy. A gyro, as a short-term reference for the satellite's attitude, is sensitive to high-frequency attitude changes, but due to gyro drift and integration error, its attitude determination error increases with time. Based on the opposite noise-frequency characteristics of the two kinds of attitude sensors, this paper proposes an on-orbit attitude estimation method for star trackers and gyros based on a Complementary Filter (CF) and an Unscented Kalman Filter (UKF). In this study, the principle and implementation of the proposed method are described. First, gyro attitude quaternions are acquired based on the attitude kinematics equation. An attitude information fusion method is then introduced, which applies high-pass filtering and low-pass filtering to the gyro and star tracker, respectively. Second, the attitude fusion data based on the CF are introduced as the observed values of the UKF system in the measurement-update process. The accuracy and effectiveness of the method are validated using simulated sensor attitude data. The obtained results indicate that the proposed method can suppress the gyro drift and the measurement noise of the attitude sensors, improving the accuracy of the attitude determination significantly compared with the simulated on-orbit attitude and with the attitude estimation results of a UKF defined by the same simulation parameters.
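A first-order complementary filter of the kind described, low-passing the absolute (star-tracker-like) angle and high-passing the integrated (gyro-like) rate, can be sketched in the scalar case. This is a generic illustration, not the paper's quaternion formulation; the crossover time constant `tau` is an assumed parameter.

```python
import numpy as np

def complementary_fuse(star_angle, gyro_rate, dt, tau=1.0):
    """First-order complementary filter: integrate the gyro rate for the
    high-frequency path and blend in the star-tracker angle through an
    implicit low-pass whose crossover is set by tau."""
    alpha = tau / (tau + dt)  # blending factor from the time constant
    est = star_angle[0]
    fused = [est]
    for i in range(1, len(star_angle)):
        gyro_pred = est + gyro_rate[i] * dt                    # gyro path
        est = alpha * gyro_pred + (1 - alpha) * star_angle[i]  # correction
        fused.append(est)
    return np.array(fused)
```

Below the crossover frequency the star-tracker angle dominates (bounding gyro drift), while above it the integrated gyro rate dominates (rejecting the star tracker's jitter-like noise).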
Andreh, Angga Muhamad; Subiyanto, Sunardiyo, Said
2017-01-01
With the growth of non-linear loads in industrial applications and distribution systems, harmonic compensation becomes important. Harmonic pollution is an urgent problem in improving power quality. The main contribution of the study is the modeling approach used to design a shunt active filter and the application of the cascaded multilevel inverter topology to improve the power quality of electrical energy. In this study, the shunt active filter is aimed at eliminating the dominant harmonic components by injecting currents opposite to the system's harmonic components. The active filter was designed in a shunt configuration with the cascaded multilevel inverter method, controlled by a PID controller and SPWM. With this shunt active filter, the harmonic current can be reduced so that the current waveform of the source is approximately sinusoidal. Design and simulation were conducted using Power Simulator (PSIM) software. The shunt active filter's performance was tested on the IEEE four-bus test system. Installing the shunt active filter on the system (IEEE four-bus) reduced the current THD from 28.68% to 3.09%. With this result, the active filter can be applied as an effective method to reduce harmonics.
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
Stabilizing the thermal lattice Boltzmann method by spatial filtering.
Gillissen, J J J
2016-10-01
We propose to stabilize the thermal lattice Boltzmann method by filtering the second- and third-order moments of the collision operator. By means of the Chapman-Enskog expansion, we show that the additional numerical diffusivity diminishes in the low-wavenumber limit. To demonstrate the enhanced stability, we consider a three-dimensional thermal lattice Boltzmann system involving 33 discrete velocities. Filtering extends the linear stability of this thermal lattice Boltzmann method to 10-fold smaller transport coefficients. We further demonstrate that the filtering does not compromise the accuracy of the hydrodynamics by comparing simulation results to reference solutions for a number of standardized test cases, including natural convection in two dimensions.
Design and analysis of planar printed microwave and PBG filters using an FDTD method
Tong, M.S.; Lu, Y.L.; Chen, Y.C.
2004-01-01
In this paper, various planar printed microwave and photonic band-gap (PBG) filters have been designed and analyzed by applying the finite difference time domain method, together with an unsplit-anisotropic perfectly matched layer technique as treatments of boundary conditions. The implemented so...
Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher
2017-09-01
Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, first, by using a brain activity localization method (standardized low-resolution brain electromagnetic tomography) applied to the EEG signal, active regions are extracted, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active brain sources to evaluate the activity and time dependence between sources. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is the estimation of the activity of different brain parts simultaneously with the calculation of effective connectivity between active regions. By combining the dual Kalman filter with brain source localization methods, in addition to the connectivity estimation between parts, the source activity is updated over time. The performance of the proposed method was evaluated first by applying it to simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. Then the method was applied to real signals, and the estimation error over a sweeping window was calculated. Across both the simulated and real settings, the proposed method gives acceptable results with the least mean square error under noisy or real conditions.
Simplified Method for Groundwater Treatment Using Dilution and Ceramic Filter
Musa, S.; Ariff, N. A.; Kadir, M. N. Abdul; Denan, F.
2016-07-01
Groundwater is a natural resource that is not immune to pollutants. Increasing municipal, industrial, agricultural or extreme land use activities have resulted in groundwater contamination, as has occurred at the Research Centre for Soft Soil Malaysia (RECESS), Universiti Tun Hussein Onn Malaysia (UTHM). Thus, the aim of this study is to treat groundwater by using rainwater and a simple ceramic filter as treatment agents. The treatment uses rainwater dilution, ceramic filters, and a combined method of dilution and filtering as alternative treatments which are simpler and more practical than modern or chemical methods. The water that went through the dilution treatment process achieved a 57% reduction compared to its initial condition. Meanwhile, the water that passed through the filtering process met the standard for 86% of the groundwater parameters, with only chloride failing. Favorable results were obtained for the combination of dilution and filtration, which brought 100% of the parameters that previously failed the standards of the Ministry of Health and the Interim National Drinking Water Quality Standard, notably the sulfate and chloride found in the groundwater at RECESS, UTHM, within limits. As a result, it yields raw water that can be used as clean and safe drinking water. It also proves that the method used in this study is very effective in improving the quality of groundwater.
Ranking filter methods for concentrating pathogens in lake water
Accurately comparing filtration methods for concentrating waterborne pathogens is difficult because of two important water matrix effects on recovery measurements, the effect on PCR quantification and the effect on filter performance. Regarding the first effect, we show how to create a control water...
Fuzzy adaptive Kalman filter for indoor mobile target positioning with INS/WSN integrated method
杨海; 李威; 罗成名
2015-01-01
A pure inertial navigation system (INS) has divergent localization errors after a long time. To compensate for this disadvantage, a wireless sensor network (WSN) associated with the INS was applied to estimate the mobile target's position. Taking the traditional Kalman filter (KF) as the framework, the system equation of the KF was established from the INS and the observation equation of position errors was built from the WSN. Meanwhile, the observation equation of velocity errors was established from the velocity difference between the INS and WSN; the covariance matrix of the Kalman filter measurement noise was then adjusted with a fuzzy inference system (FIS), and a fuzzy adaptive Kalman filter (FAKF) based on the INS/WSN was proposed. The simulation results show that the FAKF method has better accuracy and robustness than the KF and EKF methods and shows good adaptive capacity with time-varying system noise. Finally, experimental results further prove that the FAKF converges faster than the KF and EKF methods.
Kobayashi, Shinji; Sakasai, Akira; Koide, Yoshihiko; Sakamoto, Yoshiteru; Kamada, Yutaka; Hatae, Takaki; Oyama, Naoyuki; Miura, Yukitoshi
2003-01-01
Recent developments and results of fast charge exchange recombination spectroscopy (CXRS) using the interference filter method are reported. In order to measure rapid changes of the ion temperature and rotation velocity during collapse or transition phenomena with high time resolution, two types of interference filter systems were applied to the CXRS diagnostics on the JT-60U tokamak. One can determine the Doppler broadening and Doppler shift of the CXR emission using three interference filters having slightly different center wavelengths. A rapid estimation method for the temperature and rotation velocity without non-linear least-squares fitting is presented. The modification of the three-filter system enables us to improve the minimum time resolution to 0.8 ms, which is better than the 16.7 ms of the conventional CXRS system using the CCD detector in JT-60U. The other system, having seven wavelength channels, was newly fabricated to cross-check the results obtained by the three-filter assembly, that is, to verify that the CXR emission forms a Gaussian profile under collapse phenomena. In an H-mode discharge having giant edge-localized modes, the results obtained by the two systems are compared. The applicability of the three-filter system to the measurement of rapid changes in temperature and rotation velocity is demonstrated. (author)
Erlangga, Mokhammad Puput [Geophysical Engineering, Institut Teknologi Bandung, Ganesha Street no.10, Basic Science B Building fl.2-3, Bandung, 40132, West Java, Indonesia, puput.erlangga@gmail.com (Indonesia)]
2015-04-16
Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise remains mixed with the primary signal. Multiple reflections are a kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the move-out difference between them. However, where the move-out difference is too small, the Radon filter method is not enough to attenuate the multiple reflections. The Radon filter also produces artifacts in the gather data. Besides the Radon filter method, we also used the Wave Equation Multiple Elimination (WEMR) method to attenuate long-period multiple reflections. The WEMR method can attenuate long-period multiple reflections based on wave equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitudes observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the move-out difference to attenuate long-period multiple reflections; therefore, it can be applied to seismic data with a small move-out difference, such as the Mentawai data. The small move-out difference in the Mentawai seismic data is caused by the limited far offset, which is only 705 meters. We compared the real multiple-free stacked data after processing with the Radon filter and with the WEMR process. We conclude that the WEMR method attenuates long-period multiple reflections better than the Radon filter method on the real (Mentawai) seismic data.
[Montessori method applied to dementia - literature review].
Brandão, Daniela Filipa Soares; Martín, José Ignacio
2012-06-01
The Montessori method was initially applied to children, but it has now also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using the Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention in dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained because the Montessori method may, in fact, have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design.
A method for reducing energy dependence of thermoluminescence dosimeter response by means of filters
Bapat, V.N.
1980-01-01
This work describes the application of the method of partial surface shielding for reducing the energy dependence of the X-ray and γ-ray response of a dosimeter containing a CaSO4:Dy thermoluminescent phosphor mixed with KCl, in pellet form. Results are given of approximate computations of filter combinations that accomplish this aim, and of experimental verifications. Incorporation of the described filter combination makes it possible to use this relatively sensitive dosimeter for environmental radiation monitoring. A similar approach could be applied to any type of dosimeter in the form of a thin pellet or wafer. (author)
Active filtering applied to radiographic images unfolded by the Richardson-Lucy algorithm
Almeida, Gevaldo L. de; Silvani, Maria Ines; Lopes, Ricardo T.
2011-01-01
Degradation of images caused by systematic uncertainties can be reduced when the features of the spoiling agent are known. Typical uncertainties of this kind arise in radiographic images from the non-zero resolution of the detector used to acquire them, from the non-punctual character of the source employed in the acquisition, or from beam divergence when extended sources are used. Both features blur the image, which, instead of a single point, exhibits a spot with a vanishing edge, thus reproducing the point spread function (PSF) of the system. Once this spoiling function is known, an inverse-problem approach involving matrix inversion can be used to retrieve the original image. As these matrices are generally ill-conditioned, due to statistical fluctuations and truncation errors, iterative procedures such as the Richardson-Lucy algorithm should be applied. This algorithm has been applied in this work to unfold radiographic images acquired by transmission of thermal neutrons and gamma rays. After this procedure, the resulting images undergo an active filtering which fairly improves their final quality at a negligible cost in processing time. The filter ruling the process is based on the matrix of correction factors from the last iteration of the deconvolution procedure. Synthetic images degraded with a known PSF and subjected to the same treatment have been used as a benchmark to evaluate the soundness of the developed active filtering procedure. The deconvolution and filtering algorithms have been incorporated into a Fortran program written to process real images, generate synthetic ones, and display both. (author)
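The Richardson-Lucy iteration described above can be sketched in one dimension as follows (an illustrative numpy version, not the authors' Fortran program):

```python
import numpy as np

# Minimal 1-D Richardson-Lucy deconvolution: the observed signal is the true
# signal blurred by a known PSF, and each iteration multiplies the estimate
# by a correction factor built from the ratio observed / (estimate * PSF).

def richardson_lucy(observed, psf, iterations=50):
    estimate = np.full_like(observed, observed.mean())  # flat positive start
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a two-spike signal with a narrow Gaussian PSF, then restore it.
x = np.arange(-5, 6)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.5
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
# The restored signal re-concentrates energy near the original spike positions.
```

The matrix of correction factors mentioned in the abstract corresponds to the `ratio` term of the last iteration.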
Full, William E.; Eppler, Duane T.
1993-01-01
The effectiveness of multichannel Wiener filters in improving images obtained with passive microwave systems was investigated by applying them to passive microwave images of first-year sea ice. Four major parameters defining the filter were varied: the lag or pixel offset between the original and desired scenes, the filter length, the number of lines in the filter, and the weight applied to the empirical correlation functions. The effect of each variable on image quality was assessed by visually comparing the results. It was found that applying multichannel Wiener theory to passive microwave images of first-year sea ice resulted in visually sharper images with enhanced textural features and less high-frequency noise. However, Wiener filters induced a slight blocky grain in the image and could produce a type of ringing along scan lines traversing sharp intensity contrasts.
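For intuition, a single-channel frequency-domain Wiener filter can be sketched as below; this is a simplified scalar version, not the multichannel lag/length formulation studied in the paper, and the signal and noise spectra are assumed known:

```python
import numpy as np

# Frequency-domain Wiener gain H = S / (S + N), where S and N are the signal
# and noise power spectra. Frequencies where noise dominates are attenuated.

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
noise = 0.8 * rng.standard_normal(n)
observed = signal + noise

S = np.abs(np.fft.fft(signal)) ** 2                       # signal power (assumed known)
N = np.full(n, np.mean(np.abs(np.fft.fft(noise)) ** 2))   # flat noise power estimate
H = S / (S + N)                                           # Wiener gain, 0..1 per bin
filtered = np.real(np.fft.ifft(H * np.fft.fft(observed)))

err_before = np.mean((observed - signal) ** 2)
err_after = np.mean((filtered - signal) ** 2)
# err_after is substantially smaller than err_before
```

The blocky grain and ringing reported in the paper are typical artifacts of such sharp per-frequency gains applied along scan lines.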
TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL
N. Zhu
2016-06-01
The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud is first projected onto a horizontal plane, and a search algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented into regions and then iteratively fitted as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Experiments on two groups showed consistent results: the elliptic cylindrical model-based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel-section deformation in routine subway operation and maintenance.
Geostatistical methods applied to field model residuals
Maule, Fox; Mosegaard, K.; Olsen, Nils
consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
Regularization of DT-MR images using a successive Fermat median filtering method.
Kwon, Kiwoon; Kim, Dongyoun; Kim, Sunghee; Park, Insung; Jeong, Jaewon; Kim, Taehwan; Hong, Cheolpyo; Han, Bongsoo
2008-05-21
Tractography using diffusion tensor magnetic resonance imaging (DT-MRI) is a method to determine the architecture of axonal fibers in the central nervous system by computing the direction of greatest diffusion in the white matter of the brain. To reduce the noise in DT-MRI measurements, a tensor-valued median filter, which is reported to be denoising and structure preserving in tractography, is applied. In this paper, we propose the successive Fermat (SF) method, which successively applies Fermat point theory for a triangle in the two-dimensional plane, as a median filtering method. We discuss the error analysis and a numerical study of the SF method for phantom and experimental data. Considering both computing time and image quality in the numerical study, we show that the SF method is much more efficient than the simple median (SM) and gradient descent (GD) methods.
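The Fermat point at the heart of the SF method is commonly computed by Weiszfeld's iteration; the toy sketch below handles a single triangle of 2-D points and does not attempt the tensor-valued filtering of the paper:

```python
import numpy as np

# The Fermat (geometric median) point of a triangle minimizes the sum of
# distances to its vertices. Weiszfeld's iteration re-weights the vertices
# by inverse distance to the current estimate.

def fermat_point(points, iterations=200, eps=1e-12):
    y = points.mean(axis=0)                  # start at the centroid
    for _ in range(iterations):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)               # avoid division by zero
        w = 1.0 / d
        y = (points * w[:, None]).sum(axis=0) / w.sum()
    return y

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = fermat_point(tri)
# For an equilateral triangle the Fermat point coincides with the centroid.
```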
BDS/GPS Integrated Positioning Method Research Based on Nonlinear Kalman Filtering
Ma, Y.; Yuan, W.; Sun, H.
2017-09-01
In order to realize fast and accurate BDS/GPS integrated positioning, it is necessary to overcome the adverse effects of signal attenuation, multipath effects and echo interference to ensure continuous and accurate navigation and positioning. In this paper, pseudo-range positioning is used as the mathematical model. In the data pre-processing stage, precise and smooth carrier-phase measurements are used to refine the coarse pseudo-range measurements without ambiguity. Finally, the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF) and the Particle Filter (PF) are applied in the integrated positioning method for higher positioning accuracy. The experimental results show that the positioning accuracy of the PF is the highest, and the UKF outperforms the EKF.
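As a minimal stand-in for the filtering stage, the sketch below runs a plain linear Kalman filter on a constant-velocity state with noisy position measurements; the paper's EKF/UKF/PF operate on a pseudo-range model, which this toy example does not implement:

```python
import numpy as np

# Predict/update cycle of a linear Kalman filter for a [position, velocity]
# state; only the position is observed, with measurement noise variance r.

def kalman_track(measurements, dt=1.0, q=1e-3, r=4.0):
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2) * 10.0
    out = []
    for meas in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = meas - H @ x                     # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(1)
true_pos = 0.5 * np.arange(100)              # target moving at 0.5 per step
z = true_pos + rng.standard_normal(100) * 2.0
est = kalman_track(z)
# The filtered track is much closer to true_pos than the raw measurements.
```

The EKF and UKF replace the linear F and H here with linearized or sigma-point versions of the nonlinear pseudo-range model.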
Evaluation of the filtered leapfrog-trapezoidal time integration method
Roache, P.J.; Dietrich, D.E.
1988-01-01
An analysis and evaluation are presented for a new method of time integration for fluid dynamics proposed by Dietrich. The method, called the filtered leapfrog-trapezoidal (FLT) scheme, is analyzed for the one-dimensional constant-coefficient advection equation and is shown to have some advantages for quasi-steady flows. A modification (FLTW) using a weighted combination of FLT and leapfrog is developed which retains the advantages for steady flows, increases accuracy for time-dependent flows, and involves little coding effort. Merits and applicability are discussed.
FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.
Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan
2017-07-01
In ultrasound image analysis, speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" remains a challenge for speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values when the tissue deformation is large. The major drawback of these methods is their high computational complexity; even a graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs to handle the different image processing components of these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on the FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on an NVIDIA GeForce GTX 580 graphics card.
An algebraic method for constructing stable and consistent autoregressive filters
Harlim, John; Hong, Hoon; Robbins, Jacob L.
2015-01-01
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams-Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces improved short-time predictions relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden-Julian Oscillation, a dominant tropical atmospheric wave pattern.
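The classical AR stability condition mentioned above can be checked directly from the characteristic polynomial; the coefficients below are arbitrary illustrations, not models fitted by the paper's method:

```python
import numpy as np

# Stability in the classical AR sense: all roots of the characteristic
# polynomial z^p - a1*z^(p-1) - ... - ap must lie inside the unit circle.

def ar_is_stable(coeffs):
    """coeffs = [a1, ..., ap] in x_t = a1*x_{t-1} + ... + ap*x_{t-p} + noise."""
    poly = np.concatenate(([1.0], -np.asarray(coeffs)))
    roots = np.roots(poly)
    return bool(np.all(np.abs(roots) < 1.0))

print(ar_is_stable([0.5, 0.3]))   # stable: both roots inside the unit circle
print(ar_is_stable([1.2, 0.3]))   # unstable: one root escapes the unit circle
```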
Applied mathematical methods in nuclear thermal hydraulics
Ransom, V.H.; Trapp, J.A.
1983-01-01
Applied mathematical methods are used extensively in modeling nuclear reactor thermal-hydraulic behavior. This application has required significant extension of the state-of-the-art. The problems encountered in modeling two-phase fluid transients and the development of associated numerical solution methods are reviewed and quantified using results from a numerical study of an analogous linear system of differential equations. In particular, some possible approaches for formulating a well-posed numerical problem for an ill-posed differential model are investigated and discussed. The need for closer attention to numerical fidelity is indicated.
Entropy viscosity method applied to Euler equations
Delchini, M. O.; Ragusa, J. C.; Berry, R. A.
2013-01-01
The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers equation and the Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it can efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with the time-implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
Analytical methods applied to water pollution
Baudin, G.
1977-01-01
A comparison of different methods applied to water analysis is given. The discussion is limited to the problems presented by inorganic elements accessible to nuclear activation analysis methods. The following methods were compared: activation analysis with gamma-ray spectrometry, atomic absorption spectrometry, fluorimetry, emission spectrometry, colorimetry or spectrophotometry, X-ray fluorescence, mass spectrometry, voltammetry, polarography or other electrochemical methods, and activation analysis with beta measurements. Drinking water, irrigation waters, sea waters, industrial wastes and very pure waters are the subjects of the investigations. The comparative evaluation is made on the basis of sample storage, in situ analysis, treatment and concentration, specificity and interference, monoelement or multielement analysis, analysis time, and accuracy. The significance of the neutron activation analysis is shown. (T.G.)
The harmonics detection method based on neural network applied ...
Several different methods have been used to sense load currents and extract its ... in order to produce a reference current in shunt active power filters (SAPF), and ... technique compared to other similar methods are found quite satisfactory by ...
Dual linear structured support vector machine tracking method via scale correlation filter
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods do not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark of 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Huang, Lei
2015-01-01
To solve the problem in which conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state variables. Unknown time-varying estimators of the observation noise are used to obtain its estimated mean and variance. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied in modeling applications for gyro random noise where a fast and accurate ARMA modeling method is required. PMID:26437409
Methods of filtering the graph images of the functions
Олександр Григорович Бурса
2017-06-01
The theoretical aspects of cleaning raster images of scanned function graphs from digital, chromatic and luminance distortions using computer graphics techniques are considered. The basic types of distortion characteristic of graph images of functions are stated. To suppress the distortions, several methods are suggested that provide high quality of the resulting images while preserving their topological features. The paper describes the techniques developed and improved by the authors: a method of cleaning the image of distortions by iterative contrasting, based on a step-by-step increase of the image contrast in the graph by 1%; a method of restoring distorted small entities, based on thinning the known contrast-increase filter matrix (the allowable kernel dilution radius of the convolution matrix that preserves the graph lines has been established); and a technique integrating the contrast-based noise-reduction method and the small-entity restoration method with the known σ-filter. Each method in the complex is theoretically substantiated. The developed methods treat graph images both as a whole (global processing) and as fragments (local processing). Metrics assessing the quality of the resulting image under global and local processing have been chosen, with justification of the choice and the corresponding formulas. The proposed complex of methods for cleaning grayscale graph images of functions is adaptive to the form of the image carrier, the distortion level in the image, and its distribution. The presented results of testing the developed complex of methods on a representative sample of images confirm its effectiveness.
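A possible reading of the iterative-contrasting step is sketched below (our illustration, not the authors' implementation): the contrast about the image mean is raised by 1% per iteration, driving the dark graph line toward black while bright pixels saturate toward white:

```python
import numpy as np

# Repeatedly stretch pixel values about the current mean by 1%, clipping to
# [0, 1]. Pixels far below the mean (the graph line) saturate to black;
# pixels far above it saturate to white, widening the line/background gap.

def iterative_contrast(img, steps=100, gain=1.01):
    out = img.astype(float).copy()
    for _ in range(steps):
        m = out.mean()
        out = np.clip((out - m) * gain + m, 0.0, 1.0)
    return out

rng = np.random.default_rng(2)
img = np.clip(0.9 + 0.05 * rng.standard_normal((64, 64)), 0, 1)  # light background
img[32, :] = 0.1                                                 # dark graph line
enhanced = iterative_contrast(img)
# The graph line is driven fully to black; bright background pixels saturate.
```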
Well known and new neutron filters and employment of them for fundamental and applied investigations
Gritzay, O.; Koloty, V.; Libman, V.; Kalchenko, O.; Klimova, N.
2006-01-01
This is a short review of the neutron filtered-beam technique developed to date in the Neutron Physics Department (NPD) at the Kyiv Research Reactor (KRR). Approaches for finding new neutron filters and improving known ones are described, and brief information on the characteristics of the existing filters is presented. This review may be useful for users who wish to use the existing filtered beams at KRR or to develop the neutron filtered-beam technique at their own installations.
Chen, Lin; Fan, Xiangtao; Du, Xiaoping
2014-01-01
Point cloud filtering is a basic and key step in LiDAR data processing. The Adaptive Triangulated Irregular Network Modelling (ATINM) algorithm and the Threshold Segmentation on Elevation Statistics (TSES) algorithm are among the mature algorithms. However, few studies concentrate on the parameter selection of ATINM and the iteration condition of TSES, which can greatly affect the filtering results. The paper first examines these two key problems in two different terrain environments. For a flat area, small height and angle parameters perform well, while for areas with complex feature changes, large height and angle parameters perform well. One segmentation pass is enough for flat areas, whereas repeated segmentations are essential for complex areas. The paper then compares and analyses the results of the two methods. ATINM has a larger type I error on both data sets, as it sometimes removes excessive points; TSES has a larger type II error on both data sets, as it ignores topological relations between points. ATINM performs well even in a large region with dramatic topology, while TSES is more suitable for a small region with flat topology. Different parameters and iterations can cause relatively large filtering differences.
Applying Enhancement Filters in the Pre-processing of Images of Lymphoma
Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos
2015-01-01
Lymphoma is a type of cancer that affects the immune system and is classified as Hodgkin or non-Hodgkin. It is one of the ten most common types of cancer worldwide; among all malignant neoplasms diagnosed, lymphoma accounts for three to four percent. Our work presents a study of some filters devoted to enhancing images of lymphoma at the pre-processing step, where enhancement is useful for removing noise from the digital images. We analysed noise caused by different sources, such as room vibration, scraps and defocusing, in the following classes of lymphoma: follicular, mantle cell, and B-cell chronic lymphocytic leukemia. The Gaussian, Median and Mean-Shift filters were applied in different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index, in order to evaluate the similarity between the images. In all cases we obtained a similarity of at least 75%, which rises to 99% if one considers only HSV. We conclude that HSV is an important choice of colour model for pre-processing histological images of lymphoma, because the resulting image then receives the best enhancement.
Kim, Cheolsun; Lee, Woong-Bi; Ju, Gun Wu; Cho, Jeonghoon; Kim, Seongmin; Oh, Jinkyung; Lim, Dongsung; Lee, Yong Tak; Lee, Heung-No
2017-02-01
In recent years, there has been increasing interest in miniature spectrometers for research and development. Filter-array-based spectrometers in particular have the advantages of low cost and portability, and can be applied in various fields such as biology, chemistry and the food industry. Miniaturization of the optical filters degrades spectral resolution owing to limitations on the spectral responses and the number of filters. Many studies have reported that filter-array-based spectrometers can achieve resolution improvements by using digital signal processing (DSP) techniques. The performance of DSP-based spectral recovery depends highly on prior knowledge of the transmission functions (TFs) of the filters. The TFs vary with the incident angle of light onto the filter array. Conventionally, it is assumed that the incident angle is fixed and the TFs are known to the DSP. However, the incident angle varies with the environment and application, and thus the TFs also vary, which degrades the spectral recovery. In this paper, we propose a method of incident angle estimation (IAE) for high-resolution spectral recovery in filter-array-based spectrometers. By exploiting sparse signal reconstruction with L1-norm minimization, IAE selects, among all possible incident angles, the one which minimizes the error of the reconstructed signal. Based on IAE, DSP effectively provides high-resolution spectral recovery in filter-array-based spectrometers.
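Sparse reconstruction by L1-norm minimization of the kind used for spectral recovery can be sketched with ISTA; the random sensing matrix below stands in for the filter transmission functions, which we do not have:

```python
import numpy as np

# Toy sparse-recovery step: solve min ||y - A x||^2 + lam*||x||_1 by ISTA,
# where A stacks the filter transmission functions (random stand-ins here)
# and x is the sparse spectrum to recover.

def ista(A, y, lam=0.05, iterations=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        g = A.T @ (A @ x - y)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # 40 filters, 100 spectral bins
x_true = np.zeros(100)
x_true[[10, 55, 80]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = ista(A, y)
# The three active bins are recovered, with a small bias from the L1 penalty.
```

The IAE step of the paper would repeat such a reconstruction for each candidate incident angle (each candidate set of TFs) and keep the angle with the smallest residual.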
Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S
2017-02-01
Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.
GPS surveying method applied to terminal area navigation flight experiments
Murata, M; Shingu, H; Satsushima, K; Tsuji, T; Ishikawa, K; Miyazawa, Y; Uchida, T [National Aerospace Laboratory, Tokyo (Japan)
1993-03-01
With the objective of evaluating the accuracy of new landing and navigation systems such as the microwave landing guidance system and the global positioning satellite (GPS) system, flight experiments are being carried out using an experimental aircraft. The aircraft carries a GPS receiver, whose accuracy is evaluated by comparing the navigation results with reference trajectories estimated by a Kalman filter from laser tracking data on the aircraft. The GPS outputs position and speed information in an earth-centered, earth-fixed system called the World Geodetic System 1984 (WGS84). However, in order to compare the navigation results with output from a reference trajectory sensor or another navigation sensor, it is necessary to construct a high-precision reference coordinate system based on WGS84. A method that applies GPS phase-interference measurement to this problem was proposed, and actually used in analyzing flight experiment data. In a case where the method was applied to evaluating stand-alone navigation accuracy, it was verified to be sufficiently effective and reliable, not only for navigation analysis but also for navigational operations. 12 refs., 10 figs., 5 tabs.
A novel method for EMG decomposition based on matched filters
Ailton Luiz Dias Siqueira Júnior
Introduction: Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes of the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods: Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system's performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results: The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and an SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms on a standard desktop computer. Conclusion: This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high success rates by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prosthesis control and biofeedback systems.
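A single matched filter from such a bank can be sketched as template correlation followed by peak picking; the template shape, threshold and synthetic signal below are illustrative only, not the paper's MUAP models:

```python
import numpy as np

# Correlate the signal with a known waveform template and flag local maxima
# of the normalized score above a threshold: a minimal matched-filter detector.

def matched_filter_detect(signal, template, threshold):
    # Correlation score normalized by template energy: ~1 at a perfect match
    score = np.correlate(signal, template, mode="same") / np.dot(template, template)
    peaks = [i for i in range(1, len(score) - 1)
             if score[i] > threshold
             and score[i] >= score[i - 1] and score[i] >= score[i + 1]]
    return np.array(peaks), score

# Synthetic "MUAP": a windowed biphasic waveform placed at two known instants
t = np.linspace(0, 1, 21)
template = np.sin(2 * np.pi * t) * np.hanning(21)
rng = np.random.default_rng(3)
signal = 0.05 * rng.standard_normal(1000)
for pos in (200, 600):
    signal[pos:pos + 21] += template

peaks, _ = matched_filter_detect(signal, template, threshold=0.6)
# Detected peaks fall at the template insertion points (near samples 210 and 610).
```

The bank in the paper runs one such filter per motor unit template, with the classifier and overlap-resolution module arbitrating between competing detections.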
Instability of the filtering method for Vlasov's equation
Figua, H.; Bouchut, F.; Fijalkow, E.
1999-01-01
Klimas has introduced a smoothed Fourier-Fourier method, which consists in convolving the original distribution function with a Gaussian and then solving the new system with a transformed splitting algorithm. Unfortunately, a second-order term appears in the new equation. In this work, it is studied how this term affects the numerical solution. In particular, it is proven that instability occurs in the linear version of the Vlasov equation obtained by considering only free, non-interacting particles. It is also shown that the use of the Fourier-Fourier transform is a fundamental requirement for solving this new equation. An important property of the filtered distribution function in the transformed space is pointed out. (K.A.)
Lu, Xiaonan; Sun, Kai; Huang, Lipei
2014-01-01
around the switching frequency and its multiples. Although the LCL-filters have several advantages compared to single inductance filter, its resonance problem should be noticed. Conventionally, the resonance analysis is mainly focused on the single inverter system, whereas in a renewable energy system...... to the conventional active damping approaches, the biquad filter based active damping method does not require additional sensors and control loops. Meanwhile, the multiple instable closed-loop poles of the parallel inverter system can be moved to the stable region simultaneously. Real-time simulations based on d...
Method of mounting filter elements and mounting therefor
Karelin, J.; Neumann, G.M.
1981-01-01
A process for the insertion and exchange of filter elements for suspended matter is performed from the clean-air side. During insertion of a filter element, a plastic tube (which encircles the circumference of the filter element and whose length exceeds the layer thickness of the filter element several times) is tightly connected at its middle section with the side walls that form a border around the filter element. The open end of the plastic tube facing the frame is then fitted tightly onto a ring, of a known type, which surrounds the orifice of the frame into which the filter element is inserted. The filter element is connected to the frame by means of tightening devices, and the outer free end of the tube is turned inside out and around the filter element to allow unhindered air passage through the filter layer. During exchange of the contaminated filter element, the outer open end of the tube is heat-sealed, the filter element is disconnected and removed from the frame by flipping down the tightening devices, and the tube is heat-sealed in the section between the filter element and the frame. During insertion of a new filter element, a new tube, tightly connected at its middle section with the filter element, is fitted onto the ring of the frame in the known manner, overlapping the heat-sealed remnant of the old tube. The tube remnant is pulled onto the new tube and off the ring, and the filter element is tightly connected to the frame by means of the tightening devices.
A method of alpha-radiating nuclide activity measuring in aerosol filters
Ignatov, V.P.; Galkina, V.N.
1992-01-01
A scintillation method for determining the activity of alpha-emitting nuclides in aerosol filters is suggested. The method involves dissolving the filter in an organic solvent, introducing a luminophore into the prepared solution, drying the preparation, and measuring the radionuclide activity. The dependence of the alpha-radiation detection efficiency on the content of luminophore, filter material, and colourless and coloured substances in the analyzed preparations is considered.
Method for Improving Indoor Positioning Accuracy Using Extended Kalman Filter
Seoung-Hyeon Lee
2016-01-01
Beacons using Bluetooth low-energy (BLE) technology have emerged as a new paradigm of indoor positioning service (IPS) because of their advantages such as low power consumption, miniaturization, wide signal range, and low cost. However, beacon performance is poor in terms of indoor positioning accuracy because of noise, motion, and fading, all of which are characteristic of a Bluetooth signal and depend on the installation location. Therefore, it is necessary to improve the accuracy of beacon-based indoor positioning technology by fusing it with existing indoor positioning technology that uses Wi-Fi, ZigBee, and so forth. This study proposes a beacon-based indoor positioning method using an extended Kalman filter that recursively processes input data including noise. After defining the movement of a smartphone on a flat two-dimensional surface, it was assumed that the beacon signal is nonlinear. Then, the standard deviation and properties of the beacon signal were analyzed. According to the analysis results, an extended Kalman filter was designed and the accuracy of the smartphone’s indoor position was analyzed through simulations and tests. The proposed technique achieved good indoor positioning accuracy, with errors of 0.26 m and 0.28 m in the average x- and y-coordinates, respectively, based solely on the beacon signal.
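An extended Kalman filter measurement update of the kind such a system builds on can be sketched for a 2D position state observed through range-like beacon measurements. This is a minimal illustration under assumed values; the beacon layout, noise figures, and the omitted motion model are not from the paper:

```python
import numpy as np

def ekf_update(x, P, z, beacon, R):
    """One EKF measurement update of a 2D position state x = [px, py]
    from a range measurement z to a beacon at a known location."""
    dx, dy = x[0] - beacon[0], x[1] - beacon[1]
    d = np.hypot(dx, dy)                   # predicted range (assumed nonzero)
    H = np.array([[dx / d, dy / d]])       # Jacobian of the range model
    y = z - d                              # innovation
    S = H @ P @ H.T + R                    # innovation covariance (1x1)
    K = P @ H.T / S                        # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Iterating this update over several beacons pulls an initial guess toward the true position; a full filter would interleave it with a prediction step for the smartphone's motion.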
Particle Filtering Equalization Method for a Satellite Communication Channel
Amblard Pierre-Olivier
2004-01-01
We propose the use of particle filtering techniques and Monte Carlo methods to tackle the in-line and blind equalization of a satellite communication channel. The main difficulties encountered are the nonlinear distortions caused by the amplifier stage in the satellite. Several processing methods manage to take these nonlinearities into account, but they require knowledge of a training input sequence for updating the equalizer parameters. Blind equalization methods also exist, but they require a Volterra model of the system, which is not suited to equalization for the present model. The aim of the method proposed in this paper is to blindly restore the emitted message. To reach this goal, a Bayesian point of view is adopted. Prior knowledge of the emitted symbols and of the nonlinear amplification model, as well as the information available from the received signal, is jointly used by considering the posterior distribution of the input sequence. Such a probability distribution is very difficult to study and thus motivates the implementation of Monte Carlo simulation methods. The presentation of the equalization method is split into two parts. The first part solves the problem for a simplified model, focusing on the nonlinearities of the model. The second part deals with the complete model, using the sampling approaches previously developed. The algorithms are illustrated and their performance is evaluated using bit error rate versus signal-to-noise ratio curves.
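The Monte Carlo machinery underlying such equalizers can be illustrated with a minimal bootstrap (SIR) particle filter. The scalar random-walk state and cubic measurement model below are stand-ins chosen for illustration, not the paper's satellite channel:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, q=0.1, r=0.5):
    """Bootstrap particle filter: propagate particles through a random
    walk, weight them by the likelihood of y = x**3 + noise, resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        w = np.exp(-0.5 * ((y - particles**3) / r) ** 2)         # likelihood weights
        w = w / w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        estimates.append(particles.mean())
    return np.array(estimates)
```

The posterior mean of the resampled cloud serves as the state estimate; in the blind equalization setting, the state would instead encode the unknown symbol sequence.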
Genetic Algorithm Applied to the Eigenvalue Equalization Filtered-x LMS Algorithm (EE-FXLMS)
Stephan P. Lovstedt
2008-01-01
The FXLMS algorithm, used extensively in active noise control (ANC), exhibits frequency-dependent convergence behavior. This leads to degraded performance for time-varying tonal noise and noise with multiple stationary tones. Previous work by the authors proposed the eigenvalue equalization filtered-x least mean squares (EE-FXLMS) algorithm. For that algorithm, the magnitude coefficients of the secondary path transfer function are modified to decrease variation in the eigenvalues of the filtered-x autocorrelation matrix while preserving the phase, giving faster convergence and increasing overall attenuation. This paper revisits the EE-FXLMS algorithm, using a genetic algorithm to find the magnitude coefficients that give the least variation in eigenvalues. This method overcomes some of the problems in implementing the EE-FXLMS algorithm that arise from the finite resolution of sampled systems. Experimental control results using the original secondary path model and a modified secondary path model, for both the previous implementation of EE-FXLMS and the genetic algorithm implementation, are compared.
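At its core, FXLMS is the LMS update applied to a reference signal filtered through the secondary-path model. A plain LMS sketch (secondary path omitted for brevity, all names illustrative) shows the weight update whose convergence the eigenvalue equalization accelerates:

```python
import numpy as np

def lms_identify(x, d, n_taps=4, mu=0.05):
    """Plain LMS adaptive filter identifying an unknown FIR system from
    input x and desired output d. FXLMS uses the same update with the
    reference first filtered through a secondary-path estimate."""
    w = np.zeros(n_taps)
    for n in range(n_taps, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ xn                     # a-priori error
        w += mu * e * xn                      # LMS weight update
    return w
```

The convergence rate of this update is governed by the eigenvalue spread of the input autocorrelation matrix, which is exactly the quantity the EE-FXLMS modification targets.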
Multi-decadal analysis of root-zone soil moisture applying the exponential filter across CONUS
K. J. Tobin
2017-09-01
This study applied the exponential filter to produce an estimate of root-zone soil moisture (RZSM). Four types of microwave-based, surface satellite soil moisture were used. The core remotely sensed data for this study came from NASA's long-lasting AMSR-E mission. Additionally, three other products were obtained from the European Space Agency Climate Change Initiative (CCI). These datasets were blended based on all available satellite observations (CCI-active, CCI-passive, and CCI-combined). All of these products were 0.25° and taken daily. We applied the filter to produce a soil moisture index (SWI) that others have successfully used to estimate RZSM. The only unknown in this approach was the characteristic time of soil moisture variation (T). We examined five different eras (1997–2002; 2002–2005; 2005–2008; 2008–2011; 2011–2014) that represented periods with different satellite data sensors. SWI values were compared with in situ soil moisture data from the International Soil Moisture Network at a depth ranging from 20 to 25 cm. Selected networks included the US Department of Energy Atmospheric Radiation Measurement (ARM) program (25 cm), Soil Climate Analysis Network (SCAN; 20.32 cm), SNOwpack TELemetry (SNOTEL; 20.32 cm), and the US Climate Reference Network (USCRN; 20 cm). We selected in situ stations that had reasonable completeness. These datasets were used to filter out periods with freezing temperatures and rainfall using data from the Parameter elevation Regression on Independent Slopes Model (PRISM). Additionally, we only examined sites where surface and root-zone soil moisture had a reasonably high lagged r value (r > 0.5). The unknown T value was constrained based on two approaches: optimization of root mean square error (RMSE) and calculation based on the normalized difference vegetation index (NDVI) value. Both approaches yielded comparable results; although, as to be expected, the optimization approach generally
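The exponential filter recursion applied in such studies is standard and can be sketched directly; the gain form and initialization below follow the usual recursive formulation, with illustrative parameter values:

```python
import numpy as np

def exponential_filter(ssm, t, T=20.0):
    """Recursive exponential filter converting a surface soil moisture
    series ssm observed at times t into a Soil Water Index (SWI).
    T is the characteristic time, in the same units as t."""
    swi = np.empty_like(ssm, dtype=float)
    swi[0], K = ssm[0], 1.0
    for n in range(1, len(ssm)):
        # Gain shrinks with short observation gaps, grows after long ones
        K = K / (K + np.exp(-(t[n] - t[n - 1]) / T))
        swi[n] = swi[n - 1] + K * (ssm[n] - swi[n - 1])
    return swi
```

In the study's setting, T is the single unknown, tuned either by minimizing RMSE against in situ root-zone data or derived from NDVI.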
Robust and Adaptive Block Tracking Method Based on Particle Filter
Bin Sun
2015-10-01
In the field of video analysis and processing, object tracking is attracting more and more attention, especially in traffic management, digital surveillance and so on. However, problems such as objects’ abrupt motion, occlusion and complex target structures bring difficulties to academic study and engineering application. In this paper, a fragments-based tracking method using the block relationship coefficient is proposed. The method uses a particle filter algorithm, and the object region is initially divided into blocks. The contribution of this method is that object features are not extracted from a single block alone; the relationships between the current block and its neighbor blocks are also extracted to describe the variation of the block. Each block is weighted according to the block relationship coefficient when the block votes on the best-matched region in the next frame. This method makes full use of the relationships between blocks. The experimental results demonstrate that our method provides good performance under occlusion and abrupt posture variation.
Multi-objective Design Method for Hybrid Active Power Filter
Yu, Jingrong; Deng, Limin; Liu, Maoyun; Qiu, Zhifeng
2017-10-01
In this paper, a multi-objective optimal design for a transformerless hybrid active power filter (HAPF) is proposed. The interactions between the active and passive circuits are analyzed, and by taking these interactions into consideration, a three-dimensional objective problem comprising the performance, efficiency and cost of the HAPF system is formulated. To deal with the multiple constraints and the strong coupling characteristics of the optimization model, a novel constraint processing mechanism based on distance measurement and an adaptive penalty function is presented. In order to improve the diversity of the optimal solutions and the local searching ability of the particle swarm optimization (PSO) algorithm, a chaotic mutation operator based on a multistage neighborhood is proposed. The simulation results show that the optimums near the origin of the three-dimensional space make a better tradeoff among the performance, efficiency and cost of the HAPF, and the experimental results of a transformerless HAPF verify the effectiveness of the method for multi-objective optimization and design.
Comparison of various filtering methods for digital X-ray image processing
Pfluger, T.; Reinfelder, H.E.; Dorschky, K.; Oppelt, A.; Siemens A.G., Erlangen
1987-01-01
Three filtering methods used for edge enhancement of digitally processed X-ray images are explained and compared. The filters are compared on two examples, a radiograph of the chest and one of the knee joint. The unsharp mask is found to yield the best compromise between edge enhancement and image-noise amplification, whereas the results obtained by the high-pass filter or the Wallis filter are less suitable for diagnostic evaluation. The filtered images display narrow lines, structural borders and edges, and finely spotted areas better than the original radiograph, so that diagnostic evaluation is easier after image filtering. (orig.) [de
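Unsharp masking, the method the comparison favors, amounts to adding back a scaled difference between the image and a blurred copy of itself. A minimal sketch with an assumed box-blur kernel (the paper does not specify its exact kernel):

```python
import numpy as np

def unsharp_mask(img, radius=1, amount=1.0):
    """Unsharp masking: subtract a blurred copy to isolate fine detail,
    then add a scaled version of that detail back onto the image."""
    k = 2 * radius + 1
    kernel = np.ones((k, k)) / k**2            # simple box blur
    pad = np.pad(img.astype(float), radius, mode="edge")
    blurred = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            blurred[i, j] = (pad[i:i + k, j:j + k] * kernel).sum()
    return img + amount * (img - blurred)
```

Flat regions pass through unchanged, while intensity steps gain an overshoot on either side, which is exactly the edge-enhancement versus noise-amplification tradeoff the comparison discusses.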
Reservoir History Matching Using Ensemble Kalman Filters with Anamorphosis Transforms
Aman, Beshir M.
2012-01-01
Several history matching methods, such as the Kalman filter, the particle filter and the ensemble Kalman filter, are reviewed and applied to a test case in a reservoir application. The key idea is to apply the anamorphosis transformation before the update step.
Method of producing monolithic ceramic cross-flow filter
Larsen, David A.; Bacchi, David P.; Connors, Timothy F.; Collins, III, Edwin L.
1998-01-01
Ceramic filters of various configurations have been used to filter particulates from hot gases exhausted from coal-fired systems. Prior ceramic cross-flow filters have been favored over other types, but those previously known have been assemblies of parts fastened together and consequently often subject to distortion or delamination on exposure to hot gas in normal use. The present new monolithic, seamless, cross-flow ceramic filters, being of one-piece construction, are not prone to such failure. Further, these new products are made by a novel casting process which involves the key step of demolding the ceramic filter green body so that none of the fragile inner walls of the filter is cracked or broken.
Input Forces Estimation for Nonlinear Systems by Applying a Square-Root Cubature Kalman Filter.
Song, Xuegang; Zhang, Yuexin; Liang, Dakai
2017-10-10
This work presents a novel inverse algorithm to estimate time-varying input forces in nonlinear beam systems. With the system parameters determined, the input forces can be estimated in real-time from dynamic responses, which can be used for structural health monitoring. In the process of input force estimation, the Runge-Kutta fourth-order algorithm was employed to discretize the state equations; a square-root cubature Kalman filter (SRCKF) was employed to suppress white noise; the residual innovation sequences, a priori state estimate, gain matrix, and innovation covariance generated by the SRCKF were employed to estimate the magnitude and location of input forces by using a nonlinear estimator. The nonlinear estimator was based on the least squares method. Numerical simulations of a large-deflection beam and an experiment on a linear beam constrained by a nonlinear spring were carried out. The results demonstrated the accuracy of the nonlinear algorithm.
An Improved Filtering Method for Quantum Color Image in Frequency Domain
Li, Panchi; Xiao, Hong
2018-01-01
In this paper we investigate the use of the quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency domain filters. The underlying principle used for constructing the proposed quantum filters is to use the quantum Oracle to implement the filter function. Compared with the existing methods, our method is not only suitable for color images but also allows flexible design of notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantages of quantum frequency filtering lie in the efficient implementation of the quantum Fourier transform.
Filtering apparatus and method for mixing, extraction and/or separation
2013-01-01
The present invention relates to a filtering apparatus and method for mixing a compound of solid and fluid phases, separating the phases and/or extracting fluid from the compound. One embodiment of the invention discloses a filtering apparatus comprising a first filter section accommodating a fir...... in a beer brewing procedure....
A new method for E-government procurement using collaborative filtering and Bayesian approach.
Zhang, Shuai; Xi, Chengyu; Wang, Yan; Zhang, Wenyu; Chen, Yanhong
2013-01-01
Nowadays, as Internet services increase faster than ever before, government systems are being reinvented as E-government services. Therefore, government procurement sectors have to face challenges brought by the explosion of service information. This paper presents a novel method for E-government procurement (eGP) to search for the optimal procurement scheme (OPS). Item-based collaborative filtering and a Bayesian approach are used to evaluate and select the candidate services to get the top-M recommendations, such that the involved computation load can be alleviated. A trapezoidal fuzzy number similarity algorithm is applied to support the item-based collaborative filtering and Bayesian approach, since some of the services' attributes can hardly be expressed as certain and static values but are easily represented as fuzzy values. A prototype system is built and validated with an illustrative example from eGP to confirm the feasibility of our approach.
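The item-based collaborative filtering step can be sketched with a conventional adjusted-cosine similarity; note that this standard measure stands in for the paper's trapezoidal fuzzy-number similarity, which handles fuzzy-valued attributes:

```python
import numpy as np

def item_similarity(ratings):
    """Adjusted-cosine item-item similarity from a user x item rating
    matrix (0 = unrated): centre each user's ratings on their own mean,
    then take cosine similarity between item columns."""
    R = ratings.astype(float)
    mask = R > 0
    user_mean = np.where(mask.any(1), R.sum(1) / np.maximum(mask.sum(1), 1), 0)
    C = np.where(mask, R - user_mean[:, None], 0)    # centre per user
    norms = np.maximum(np.linalg.norm(C, axis=0), 1e-12)
    return C.T @ C / np.outer(norms, norms)
```

Top-M recommendation then amounts to scoring candidate services by similarity-weighted ratings and keeping the M highest.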
Deviation-based spam-filtering method via stochastic approach
Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun
2018-03-01
In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play a very important role in a buyer's final purchase decision. Perfectly objective rating is impossible to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with a simple average rating is that it can easily be polluted by careless users whose evaluations cannot be trusted, and by malicious spammers who try to bias the rating on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern for all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
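The deviation-based idea can be illustrated in one pass: down-weight users whose ratings deviate strongly from the item averages. The simple inverse weighting below is an assumption for illustration; the paper instead uses the statistical significance of each deviation and would iterate:

```python
import numpy as np

def deviation_weighted_ratings(R):
    """One-pass sketch: compute each user's mean absolute deviation from
    the plain item averages, convert it to a reliability weight, and
    return reliability-weighted item averages."""
    item_mean = R.mean(axis=0)
    dev = np.abs(R - item_mean).mean(axis=1)   # per-user mean deviation
    reliability = 1.0 / (1.0 + dev)            # larger deviation, lower weight
    w = reliability / reliability.sum()
    return w @ R                               # weighted item means
```

A user whose ratings sit far from consensus (a careless rater or a spammer) contributes less, pulling each item's score back toward the trustworthy majority.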
Emotion Recognition of Speech Signals Based on Filter Methods
Narjes Yazdanian
2016-10-01
Speech is the basic means of communication among human beings. With the increase of interaction between humans and machines, the need for automatic dialogue without a human factor has been considered. The aim of this study was to determine a set of affective features of the speech signal based on emotions. A system was designed that includes three main sections: feature extraction, feature selection and classification. After extraction of useful features such as mel frequency cepstral coefficients (MFCC), linear prediction cepstral coefficients (LPC), perceptual linear prediction coefficients (PLP), formant frequency, zero crossing rate, cepstral coefficients, pitch frequency, mean, jitter, shimmer, energy, minimum, maximum, amplitude and standard deviation, filter methods such as the Pearson correlation coefficient, t-test, Relief and information gain were used to rank and select effective features for emotion recognition. The results are then given to the classification system as a subset of the input. In the classification stage, a multi-class support vector machine is used to classify seven types of emotion. According to the results, the Relief method, together with the multi-class support vector machine, has the highest classification accuracy, with an emotion recognition rate of 93.94%.
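A filter-method ranking step like the Pearson-correlation one can be sketched as follows; this is a generic implementation, not the study's exact preprocessing:

```python
import numpy as np

def pearson_rank(X, y):
    """Rank features (columns of X) by absolute Pearson correlation with
    the labels y; returns feature indices, most relevant first."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    r = Xc.T @ yc / denom                     # per-feature correlation
    return np.argsort(-np.abs(r))
```

The top-ranked subset would then be fed to the classifier, exactly the role the filter stage plays ahead of the multi-class SVM.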
Diaz, Orlando X.
2010-01-01
Approved for public release; distribution is unlimited. Two methods of estimating the attitude of a spacecraft are examined in this thesis: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). In particular, the UnScented QUaternion Estimator (USQUE) derived from [4] is implemented in a spacecraft model. For generalizations about each of the filters, a simple problem is initially solved. These solutions display typical characteristics of each filter type. T...
Xu, Chuanlong; Tang, Guanghua; Zhou, Bin; Wang, Shimin
2009-01-01
The spatial filtering method for particle velocity measurement has the advantages of a simple measurement system and convenient data processing. In this paper, the relationship between the mean velocity of solid particles in a pneumatic pipeline and the power spectrum of the output signal of an electrostatic sensor was mathematically modeled. The effects of the length of the sensor, the thickness of the dielectric pipe and its length on the spatial filtering characteristics of the sensor were also investigated using the finite element method. Because the power spectrum of the sensor's output signal is rough, making its peak frequency f_max difficult to determine, a wavelet-analysis-based filtering method was applied to smooth the curve, which allows the peak frequency f_max to be determined accurately. Finally, experiments were performed on a pilot dense-phase pneumatic conveying rig at high pressure to test the performance of the velocity measurement system. The experimental results show that the system repeatability is within ±4% over a gas superficial velocity range of 8.63–18.62 m s⁻¹ for a particle concentration range of 0.067–0.130 m³ m⁻³.
3D Wavelet-Based Filter and Method
Moss, William C.; Haase, Sebastian; Sedat, John W.
2008-08-12
A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
Leak test method and test device for iodine filter
Fukasawa, Tetsuo; Funabashi, Kiyomi; Miura, Noboru; Miura, Eiichi.
1995-01-01
An air introduction device with adjustable humidity is disposed upstream of the iodine filter to be tested, and a humidity measuring device is disposed downstream of the filter. At first, dried air of reduced humidity is passed from the air introduction device through the iodine filter to remove moisture from the iodine adsorber in the filter. Next, air of increased humidity is supplied to the filter. The difference between the time at which the supply of humid air starts and the time at which high humidity is detected at the measuring device is measured. If this time difference is smaller than the difference measured previously on a normal iodine filter, it indicates the presence of a leak in the tested filter. With this procedure, leakage in an iodine filter which removes radioactive iodine from off-gases discharged from radioactive material handling facilities can be detected easily by using water (steam), a naturally present material. (I.N.)
Multi-stage type replacing method of iodine filter
Kitamura, Masao; Kamiya, Kunio.
1976-01-01
Object: To effectively replace filters in a device for removing radioactive impurities used in ventilation and air conditioning systems or the like in a nuclear power plant. Structure: A plurality of filter elements are arranged in series with respect to the fluid. In the first replacement, the front element on the fluid inlet side is removed and the rear element is repositioned in its place; a fresh element is then mounted in the position of the rear element. Subsequent replacements are made by repeating this operation. With this arrangement, the minimum collection efficiency at filter replacement is increased. (Ikeda, J.)
Applied Mathematical Methods in Theoretical Physics
Masujima, Michio
2005-04-01
All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type and integral equations of the Fredholm type, with special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.
Improvement of chirped pulse contrast using electro-optic birefringence scanning filter method
Zeng Shuguang; Wang Xianglin; Wang Qishan; Zhang Bin; Sun Nianchun; Wang Fei
2013-01-01
A method using a scanning filter to improve the contrast of chirped pulses is proposed, and the principle of this method is analyzed. The scanning filter is compared with the existing pulse-picking and nonlinear filtering techniques. The scanning filter is a temporal gate that does not depend on the intensity of the pulses but on the instantaneous wavelength of the light. Taking the electro-optic birefringence scanning filter as an example, the application of scanning filter methods is illustrated. Based on numerical simulation and experimental research, it is found that the electro-optic birefringence scanning filter can eliminate a prepulse several hundred picoseconds before the main pulse, while the main pulse maintains a high transmissivity. (authors)
Shimazu, Y.; Rooijen, W.F.G. van
2014-01-01
Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, the selection of which is described in this paper. • The EKF algorithm is preferred if the signal-to-noise ratio is low (low-flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality with good noise filtering and accuracy. The Inverse Point Kinetic (IPK) method has also been widely used for reactivity estimation. The important parameters for the EKF estimation are the process noise covariance and the measurement noise covariance, whose optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, so its selection is quite easy. Thus, some guidance is needed on which method to select and how to choose the required parameters. From this point of view, a qualitative performance comparison is carried out.
Bayesian signal processing classical, modern, and particle filtering methods
Candy, James V
2016-01-01
This book aims to give readers a unified Bayesian treatment starting from the basics (Bayes' rule) to the more advanced (Monte Carlo sampling), evolving to the next-generation model-based techniques (sequential Monte Carlo sampling). This next edition incorporates a new chapter on "Sequential Bayesian Detection," a new section on "Ensemble Kalman Filters" as well as an expansion of the Case Studies that detail Bayesian solutions for a variety of applications. These studies illustrate Bayesian approaches to real-world problems incorporating detailed particle filter designs, adaptive particle filters and sequential Bayesian detectors. In addition to these major developments, a variety of sections are expanded to "fill in the gaps" of the first edition. Here metrics for particle filter (PF) designs with emphasis on classical "sanity testing" lead to ensemble techniques as a basic requirement for performance analysis. The expansion of information theory metrics and their application to PF designs is fully developed an...
Standardized methods for in-place filter testing
Dykes, M.; Fretthold, J.K.; Slawski, J.
1997-08-01
Minutes of a US DOE conference on in-place filter testing are presented. The purpose of the conference was to transfer technical in-place testing knowledge throughout the DOE complex. Major items discussed included purchase requisitions, in-place testing, instrumentation, and the qualifications and training of in-place test personnel. Future actions identified by conference attendees centered on establishing complex-wide DOE policies on training, inspection and testing, and filter specifications.
Havranek, E; Bumbalova, A
1978-02-15
In the described method of filter manufacturing, an element in powder form (e.g., cadmium or tin) or an oxide (e.g., cadmium oxide or tin oxide) is compacted at a pressure of 500 to 2000 kg/cm² with powder fillers such as lactose, glucose, calcium phosphates, cellulose or starch. The filter surface is finished with fixation agents, e.g., polystyrene-chloroform solutions. Thus, the need for filter balancing is eliminated. Accurate proportioning of the filtering element in the compacted mixture and accurate balancing are achieved by reducing the filtering-element content.
Method and apparatus for selective filtering of ions
Page, Jason S [Kennewick, WA; Tang, Keqi [Richland, WA; Smith, Richard D [Richland, WA
2009-04-07
An adjustable, low mass-to-charge (m/z) filter is disclosed, employing electrospray ionization to block ions associated with unwanted low-m/z species from entering the mass spectrometer and contributing their space charge to downstream ion accumulation steps. The low-mass filter is implemented with an adjustable potential energy barrier at the conductance-limiting terminal electrode of an electrodynamic ion funnel, which prohibits species with higher ion mobilities from being transmitted. The filter provides a linear voltage adjustment of low-mass filtering for m/z values from about 50 to about 500. Mass filtering above m/z 500 can also be performed; however, higher-m/z species are attenuated. The mass filter was evaluated with a liquid chromatography-mass spectrometry analysis of an albumin tryptic digest and resulted in the ability to block low-mass "background" ions, which account for 40-70% of the total ion current from the ESI source during peak elution.
GPR image analysis to locate water leaks from buried pipes by applying variance filters
Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín
2018-05-01
Nowadays, there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using GPR (Ground Penetrating Radar) as a non-destructive method. The main objective is to identify and extract features from GPR images, such as leaks and components, under controlled laboratory conditions by a methodology based on second-order statistical parameters and, using the obtained features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted, and compared with the results obtained by using a well-established multi-agent-based methodology. These results show that the variance filter is capable of highlighting the characteristics of components and anomalies in an intuitive manner, so that they can be identified by non-specialist personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.
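The second-order statistical parameter at the heart of the methodology, the variance filter, amounts to computing the local variance of the GPR image in a sliding window. A minimal sketch (the window size and the integral-image implementation are choices of this example, not taken from the paper):

```python
import numpy as np

def variance_filter(image, k=3):
    """Local variance in a k x k sliding window, computed as
    E[x^2] - E[x]^2 using integral-image (cumulative sum) box filtering."""
    pad = k // 2
    img = np.pad(image.astype(float), pad, mode="edge")

    def box_mean(a):
        # mean over every k x k window via a 2-D integral image
        s = np.cumsum(np.cumsum(a, axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))
        return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

    m = box_mean(img)
    m2 = box_mean(img ** 2)
    return np.maximum(m2 - m ** 2, 0.0)  # clip tiny negative rounding errors
```

Homogeneous regions of a radargram map to near-zero variance, while hyperbolic reflections from pipes or leak plumes produce high local variance, which is what makes the filtered image easier to read for non-specialists.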
Extended Kalman filtering applied to a two-axis robotic arm with flexible links
Lertpiriyasuwat, V.; Berg, M.C.; Buffinton, K.W.
2000-03-01
An industrial robot today uses measurements of its joint positions and models of its kinematics and dynamics to estimate and control its end-effector position. Substantially better end-effector position estimation and control performance would be obtainable if direct measurements of its end-effector position were also used. The subject of this paper is extended Kalman filtering for precise estimation of the position of the end-effector of a robot using, in addition to the usual measurements of the joint positions, direct measurements of the end-effector position. The estimation performances of extended Kalman filters are compared in applications to a planar two-axis robotic arm with very flexible links. The comparisons shed new light on the dependence of extended Kalman filter estimation performance on the quality of the model of the arm dynamics that the extended Kalman filter operates with.
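A generic extended Kalman filter predict/update cycle of the kind compared in the paper can be sketched as follows (the interface and the toy linear test case are assumptions of this example, not the paper's flexible-arm model):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    f, h are the nonlinear transition/measurement functions;
    F, H return their Jacobians evaluated at the current estimate."""
    # predict: propagate the estimate and linearized covariance
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # update: correct with the measurement via the Kalman gain
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```

In the end-effector estimation setting, h would stack the joint-position measurements with the direct end-effector position measurement, so the gain K fuses both sensor sets in one update.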
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters
Calfee, M. Worth; Rose, Laura J.; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2013-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37 mm cassette filter) with regards to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50...
Kalman and particle filtering methods for full vehicle and tyre identification
Bogdanski, Karol; Best, Matthew C.
2018-05-01
This paper considers identification of all significant vehicle handling dynamics of a test vehicle, including identification of a combined-slip tyre model, using only those sensors currently available on most vehicle controller area network buses. Using an appropriately simple but efficient model structure, all of the independent parameters are found from test vehicle data, with the resulting model accuracy demonstrated on independent validation data. The paper extends previous work on augmented Kalman Filter state estimators to concentrate wholly on parameter identification. It also serves as a review of three alternative filtering methods; identifying forms of the unscented Kalman filter, extended Kalman filter and particle filter are proposed and compared for effectiveness, complexity and computational efficiency. All three filters are suited to applications of system identification and the Kalman Filters can also operate in real-time in on-line model predictive controllers or estimators.
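The augmented-Kalman-filter idea the paper extends, appending unknown parameters to the state vector and estimating state and parameters jointly, can be sketched on a deliberately simple scalar system (the model, noise covariances, and initial guesses below are illustrative assumptions, not the vehicle/tyre model):

```python
import numpy as np

def augmented_ekf_identify(ys, a0=0.5, x0=None):
    """Joint state/parameter estimation: the unknown gain a of
    x_k = a * x_{k-1} is appended to the state and estimated by an EKF
    from direct measurements y_k = x_k + w_k."""
    s = np.array([ys[0] if x0 is None else x0, a0])  # augmented state [x, a]
    P = np.diag([1.0, 1.0])
    Q = np.diag([1e-4, 1e-6])   # the parameter is modelled as (almost) constant
    R = np.array([[0.04]])
    H = np.array([[1.0, 0.0]])  # only x is measured
    for y in ys[1:]:
        x, a = s
        s = np.array([a * x, a])                 # nonlinear prediction
        F = np.array([[a, x], [0.0, 1.0]])       # Jacobian of the transition
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        s = s + K @ (np.array([y]) - H @ s)
        P = (np.eye(2) - K @ H) @ P
    return s  # [final state estimate, identified parameter]
```

The same augmentation pattern scales to many parameters, which is why the paper can recover all independent model parameters from standard CAN-bus signals.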
Applying scrum methods to ITS projects.
2017-08-01
The introduction of new technology generally brings new challenges and new methods to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...
Applying Fuzzy Possibilistic Methods on Critical Objects
Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz
2016-01-01
Providing a flexible environment to process data objects is a desirable goal of machine learning algorithms. In fuzzy and possibilistic methods, the relevance of data objects is evaluated and a membership degree is assigned. However, some critical objects have the potential ability to affect the performance of the clustering algorithms if they remain in a specific cluster or they are moved into another. In this paper we analyze and compare how critical objects affect the behaviour of fuzzy possibilistic methods in several data sets. The comparison is based on the accuracy and ability of learning methods to provide a proper searching space for data objects. The membership functions used by each method when dealing with critical objects are also evaluated. Our results show that relaxing the conditions of participation for data objects in as many partitions as they can is beneficial.
A novel optimized LCL-filter designing method for grid connected converter
Guohong, Zeng; Rasmussen, Tonny Wederberg; Teodorescu, Remus
2010-01-01
This paper presents a new optimized LCL-filter design method for grid-connected voltage source converters. The method is based on an analysis of the converter output voltage components and the inherent relations among LCL-filter parameters. By introducing an optimizing index of equivalent total capa...
The applicability of micro-filters produced by nuclear methods in the food industry
Szabo, S.A.; Ember, G.
1982-01-01
The applicability in the food industry of micro-filters produced by nuclear methods is discussed. Production methods for polymeric micro-filters, their main characteristics, and their most important fields of application (breweries, dairies, alcoholic- and soft-drink plants, the wine industry) are briefly reviewed. (author)
Lessons learned in preparing method 29 filters for compliance testing audits.
Martz, R F; McCartney, J E; Bursey, J T; Riley, C E
2000-01-01
Companies conducting compliance testing are required to analyze audit samples at the time they collect and analyze the stack samples, if audit samples are available. Eastern Research Group (ERG) provides technical support to the EPA Emission Measurements Center's Stationary Source Audit Program (SSAP) for developing, preparing, and distributing performance evaluation samples and audit materials. These audit samples are requested via the regulatory agency and include spiked audit materials for EPA Method 29 (Metals Emissions from Stationary Sources), as well as other methods. To provide appropriate audit materials to federal, state, tribal, and local governments, as well as to agencies performing environmental activities and conducting emission compliance tests, ERG has recently performed testing of blank filter materials and preparation of spiked filters for EPA Method 29. For sampling stationary sources using an EPA Method 29 sampling train, the use of filters without organic binders, containing less than 1.3 µg/in.² of each of the metals to be measured, is required. Risk assessment testing imposes even stricter requirements on clean-filter background levels. Three vendor sources of quartz fiber filters were evaluated for background contamination to ensure that audit samples would be prepared using filters with the lowest metal background levels. A procedure was developed to test new filters, and a cleaning procedure was evaluated to see whether a greater level of cleanliness could be achieved using an acid rinse with new filters. Background levels for filters supplied by different vendors, and within lots of filters from the same vendor, showed wide variation, confirmed through contact with several analytical laboratories that frequently perform EPA Method 29 analyses. It has been necessary to repeat more than one compliance test because of suspect metals background contamination levels. An acid cleaning step produced improvement in contamination level, but the
Output regularization of SVM seizure predictors: Kalman Filter versus the "Firing Power" method.
Teixeira, Cesar; Direito, Bruno; Bandarabadi, Mojtaba; Dourado, António
2012-01-01
Two methods for output regularization of support vector machine (SVM) classifiers were applied to seizure prediction in 10 patients with long-term annotated data. The outputs of the classifiers were regularized by two methods: one based on the Kalman filter (KF) and the other on a measure called the "Firing Power" (FP). The FP quantifies the rate of preictal classifications within a past time window. To enable application of the KF, the classification problem was subdivided into two two-class problems, and the real-valued output of the SVMs was considered. The results indicate that the FP method raises fewer false alarms than the KF approach. The KF approach presents a higher sensitivity, but its high number of false alarms renders it impractical in some situations.
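The Firing Power measure is simply the fraction of preictal classifications within a sliding window over past classifier outputs, with an alarm raised on an upward threshold crossing. A minimal sketch (the window length and the 0.5 alarm threshold are illustrative assumptions, not values from the paper):

```python
from collections import deque

def firing_power(labels, window):
    """'Firing Power' regularisation: fraction of samples classified as
    preictal (label 1) within a sliding window of the last `window` outputs,
    normalised by the full window size."""
    buf = deque(maxlen=window)
    out = []
    for lab in labels:
        buf.append(1 if lab == 1 else 0)
        out.append(sum(buf) / window)
    return out

def alarms(fp_values, threshold=0.5):
    # raise an alarm whenever the firing power crosses the threshold upwards
    return [i for i, (prev, cur) in enumerate(zip([0.0] + fp_values, fp_values))
            if prev < threshold <= cur]
```

Because isolated preictal classifications barely move the windowed fraction, this regularisation suppresses the spurious single-sample alarms a raw classifier would produce.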
Quality assurance and applied statistics. Method 3
1992-01-01
This German Industry Standards paperback contains the International Standards of the ISO 9000 series (or, as the case may be, the European Standards of the EN 29000 series) concerning quality assurance, including the already completed supplementary guidelines with ISO 9000 and ISO 9004 section numbers, which have been adopted as German Industry Standards and are observed and applied worldwide to a great extent. It also includes German Industry Standards ISO 10011 parts 1, 2, and 3, concerning the auditing of quality assurance systems, and German Industry Standard ISO 10012 part 1, concerning quality assurance requirements (confirmation system) for measuring devices. The standards are also included in English and French versions. They are applicable independently of the user's line of industry and thus constitute basic standards. (orig.) [de
Lavine method applied to three body problems
Mourre, Eric.
1975-09-01
The methods presently proposed for the three-body problem in quantum mechanics, using the Faddeev approach to prove asymptotic completeness, come up against new singularities when the two-particle interaction potentials v_α(x_α) decay less rapidly than |x_α|⁻², and also when one attempts to solve the problem in a representation space whose dimension per particle is lower than three. A method is given that allows the mathematical approach to be extended to the three-body problem in spite of these singularities. Applications are given [fr
Applying Human Computation Methods to Information Science
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
Applying Mixed Methods Techniques in Strategic Planning
Voorhees, Richard A.
2008-01-01
In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…
Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong
2004-09-01
With the flood of pornographic information on the Internet, keeping people away from such offensive content has become one of the most important research areas in network information security. Applications that block or filter such information are in use; their approaches can be roughly classified into two kinds: metadata-based and content-based. With the development of distributed technologies, content-based filtering will play an increasingly important role in filtering systems. Keyword matching is a content-based method widely used in harmful-text filtering. Experiments to evaluate the recall and precision of the method showed that its precision is not satisfactory, though its recall is rather high. Based on these results, a new pornographic-text filtering model based on reconfirming is put forward. Experiments showed that the model is practical, loses less recall than single keyword matching, and achieves higher precision.
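Keyword matching of the kind evaluated here can be sketched as a scored filter with a second-stage "reconfirm" band for borderline texts (the scoring rule, thresholds, and function names are illustrative assumptions of this sketch, not the paper's model):

```python
def keyword_score(text, keywords):
    """Naive keyword matching: fraction of the blocked-keyword list
    that appears as a substring of the (lowercased) text."""
    t = text.lower()
    hits = [k for k in keywords if k in t]
    return len(hits) / len(keywords), hits

def filter_text(text, keywords, block_at=0.4, reconfirm_at=0.2):
    """Two-stage decision in the spirit of a reconfirming model:
    high scores are blocked outright, borderline scores are flagged
    for a second-stage check instead of being blocked immediately."""
    score, _hits = keyword_score(text, keywords)
    if score >= block_at:
        return "block"
    if score >= reconfirm_at:
        return "reconfirm"
    return "pass"
```

Routing only the borderline band to a second, more expensive check is what lets such a model recover precision without sacrificing much of keyword matching's high recall.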
Spatio-temporal point process filtering methods with an application
Frcalová, B.; Beneš, V.; Klement, Daniel
2010-01-01
Roč. 21, 3-4 (2010), s. 240-252 ISSN 1180-4009 R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords : cox point process * filtering * spatio-temporal modelling * spike Subject RIV: BA - General Mathematics Impact factor: 0.750, year: 2010
Multiple HEPA filter test methods, January--December 1976
Schuster, B.; Kyle, T.; Osetek, D.
1977-06-01
The testing of tandem high-efficiency particulate air (HEPA) filter systems is of prime importance for the measurement of accurate overall system protection factors. A procedure, based on the use of an intra-cavity laser particle spectrometer, has been developed for measuring protection factors in the 10⁸ range. A laboratory-scale model of a filter system was constructed and initially tested to determine individual HEPA filter characteristics with regard to the size and state (liquid or solid) of several test aerosols. Based on these laboratory measurements, in-situ testing has been successfully conducted on a number of single and tandem filter installations within the Los Alamos Scientific Laboratory, as well as on extraordinarily large single systems at Rocky Flats. For recovery, simplified solid-waste disposal, or prefiltering purposes, two versions of an inhomogeneous-electric-field air cleaner have been devised and are undergoing testing. Initial experience with one of the systems, which relies on an electrostatic spraying phenomenon, indicates performance efficiency greater than 99.9% for flow velocities commonly used in air cleaning systems. Among the effluents associated with nuclear fuel reprocessing is ¹²⁹I. An intra-cavity laser detection system is under development which shows promise of being able to detect mixing ratios of one part in 10⁷ of I₂ in air.
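The protection factor being measured is the ratio of upstream to downstream aerosol concentration, and for independent filter stages in series the individual factors multiply, which is how a tandem HEPA system reaches the 10⁸ range. A sketch of the arithmetic (function names are this example's, not from the report):

```python
def protection_factor(upstream, downstream):
    """Protection factor PF = upstream / downstream concentration.
    Penetration = 1/PF; efficiency = 1 - penetration."""
    if downstream <= 0:
        raise ValueError("no downstream counts: PF exceeds the measurable range")
    return upstream / downstream

def tandem_pf(*stage_pfs):
    """For independent filter stages in series, protection factors multiply."""
    pf = 1.0
    for p in stage_pfs:
        pf *= p
    return pf
```

Two stages at PF 10⁴ each thus give a nominal system PF of 10⁸, which is why directly verifying tandem systems demands the very wide dynamic range of the laser particle spectrometer described above.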
75 FR 80117 - Methods for Measurement of Filterable PM10
2010-12-21
... the post-test leak check has been conducted, any water collected in the dry impingers is purged with... That Significantly Affect Energy Supply, Distribution, or Use I. National Technology Transfer and... mass collected on the filter is determined gravimetrically after removal of uncombined water. On...
[The diagnostic methods applied in mycology].
Kurnatowska, Alicja; Kurnatowski, Piotr
2008-01-01
Systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis but remains a problem: on the one hand there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses, and on the other the patients present only nonspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. Successful laboratory diagnosis of fungal infection depends in major part on the collection of appropriate clinical specimens and on the selection of appropriate microbiological test procedures. These problems (collection of specimens, direct techniques, staining methods, cultures on different media, and non-culture-based methods) are presented in the article.
Proteomics methods applied to malaria: Plasmodium falciparum
Cuesta Astroz, Yesid; Segura Latorre, Cesar
2012-01-01
Malaria is a parasitic disease with a high impact on public health in developing countries. The sequencing of the Plasmodium falciparum genome and the development of proteomics have enabled a breakthrough in understanding the biology of the parasite. Proteomics has allowed qualitative and quantitative characterization of the parasite's protein expression and has provided information on protein expression under conditions of stress induced by antimalarials. Given the complexity of its life cycle, which takes place in the vertebrate host and the mosquito vector, it has proven difficult to characterize protein expression during each stage of the infection process in order to determine the proteome that mediates several metabolic, physiological, and energetic processes. Two-dimensional electrophoresis, liquid chromatography, and mass spectrometry have been useful to assess the effects of antimalarials on parasite protein expression and to characterize the proteomic profiles of different P. falciparum stages and organelles. The purpose of this review is to present state-of-the-art tools and advances in proteomics applied to the study of malaria, and to present the different experimental strategies used to study the parasite's proteome, showing the advantages and disadvantages of each one.
METHOD OF APPLYING NICKEL COATINGS ON URANIUM
Gray, A.G.
1959-07-14
A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.
Test of methods for retrospective activity size distribution determination from filter samples
Meisenberg, Oliver; Tschiersch, Jochen
2015-01-01
Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore, three methods for the retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields on the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for ¹³⁷Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter
Versatile Formal Methods Applied to Quantum Information.
Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-11-01
Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach that allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.
Optimization methods applied to hybrid vehicle design
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Applying the Socratic Method to Physics Education
Corcoran, Ed
2005-04-01
We have restructured University Physics I and II in accordance with methods that PER has shown to be effective, including a more interactive discussion- and activity-based curriculum based on the premise that developing understanding requires an interactive process in which students have the opportunity to talk through and think through ideas with both other students and the teacher. Studies have shown that in classes implementing this approach to teaching, as compared to classes using a traditional approach, students have significantly higher gains on the Force Concept Inventory (FCI). This has been true in UP I. However, UP I FCI results seem to suggest that there is a significant conceptual hole in students' understanding of Newton's Second Law. Two labs in UP I which teach Newton's Second Law will be redesigned, replacing some of the existing activity with students as a group talking through, thinking through, and answering conceptual questions asked by the TA. The results will be measured by comparing FCI results to those from previous semesters, coupled with interviews. The results will be analyzed, and we will attempt to understand why gains were or were not made.
Scanning probe methods applied to molecular electronics
Pavlicek, Niko
2013-08-01
Scanning probe methods on insulating films offer a rich toolbox to study electronic, structural and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. An STM head to be operated in high magnetic fields has been designed and built up. The STM head is very compact and rigid relying on a robust coarse approach mechanism. This will facilitate investigations of the spin properties of individual molecules in the future. Combined STM/AFM studies revealed a reversible molecular switch based on two stable configurations of DBTH molecules on ultrathin NaCl films. AFM experiments visualize the molecular structure in both states. Our experiments allowed to unambiguously determine the pathway of the switch. Finally, tunneling into and out of the frontier molecular orbitals of pentacene molecules has been investigated on different insulating films. These experiments show that the local symmetry of initial and final electron wave function are decisive for the ratio between elastic and vibration-assisted tunneling. The results can be generalized to electron transport in organic materials.
Comparison contemporary methods of regeneration sodium-cationic filters
Burakov, I. A.; Burakov, A. Y.; Nikitina, I. S.; Verkhovsky, A. E.; Ilyushin, A. S.; Aladushkin, S. V.
2017-11-01
Regeneration plays a crucial role in the efficient application of sodium-cation-exchange filters for water softening. Traditionally, an NaCl brine is used as the regenerant. However, the modern development of the energy industry and its close relationship with other industrial and academic sectors open the opportunity to use other solutions for regeneration. The report gives estimated data on, and application possibilities of, alternative regenerant solutions for sodium-cation filters: high-mineral-content well brines, both in primary application and after balneotherapeutic use, and reverse-osmosis concentrates, especially recycled regenerant water used repeatedly. The effectiveness of these solutions is compared with the traditional use of NaCl. A system for the processing of highly mineralized well brines after balneological use was developed and tested. Recommendations are given for the use of the considered solutions as regenerants for the sodium-cation unit, and norms for brine consumption during regeneration are defined.
A New Filter Design Method for Disturbed Multilayer Hopfield Neural Networks
AHN, C. K.
2011-05-01
This paper investigates the passivity-based filtering problem for multilayer Hopfield neural networks with external disturbance. A new passivity-based filter design method for multilayer Hopfield neural networks is developed to ensure that the filtering error system is exponentially stable and passive from the external disturbance vector to the output error vector. The unknown gain matrix is obtained by solving a linear matrix inequality (LMI), which can easily be done using standard numerical packages. An illustrative example is given to demonstrate the effectiveness of the proposed filter.
Method of processing cellulose filter sludge containing radioactive waste
Shibata, Setsuo; Shibuya, Hidetoshi; Kusakabe, Takao; Kawakami, Hiroshi.
1991-01-01
Cellulase, in an amount of 1 to 15% based on the solid content of the filter sludges, is caused to act on cellulose filter sludges loaded with radioactive wastes in an aqueous medium at pH 4 to 8 and 10 to 50°C. If the pH exceeds 8, the hydrolyzing effect of the cellulase is decreased, whereas the tank corrodes if the pH is 4 or lower. If the temperature is below 10°C, the rate of the hydrolysis reaction is too low to be practical; a temperature of about 40°C is appropriate, and above 50°C the cellulase itself becomes unstable. An amount of cellulase of about 8% is most effective, and additions of more than 15% bring no further benefit. In this way, liquids in which most of the filter sludges are hydrolyzed are processed as low-level radioactive wastes. (T.M.)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain from a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 in the process of obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm for sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind. The results show that the selection of the parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal selection of the parameter, it is found that an optimal sampling interval s also exists that achieves the best precision. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm of sine and cosine transforms and promote its application.
Methods for in-place testing of HEPA and iodine filters used in nuclear power plants
Holmberg, R.; Laine, J.
1978-04-01
The purpose of this work was a general investigation of existing in-place test methods and the construction of equipment for in-place testing of HEPA and iodine sorption filters. The discussion is limited to methods used for in-place testing of HEPA and iodine sorption filters in light-water-cooled reactor plants. Delay systems built for the separation of noble gases, and their testing, are not discussed in this work. Contaminants present in the air of a reactor containment can roughly be divided into three groups: aerosols, reactive gases, and noble gases. The aerosols are filtered with HEPA (High Efficiency Particulate Air) filters. The most important reactive gases are molecular iodine and two of its compounds: hydrogen iodide and methyl iodide. Of the gases to be removed by the filters, methyl iodide is the most difficult to remove, especially at high relative humidities. Impregnated activated charcoal is generally used as the sorption material in the iodine filters. Experience gained from the operation of nuclear power plants proves that the function of high-efficiency air filter systems cannot be considered safe until this is proven by in-place tests. The in-place tests in use are basically similar: a known test agent is injected upstream of the filter to be tested, and the efficiency is calculated from air samples taken from both sides of the filter. (author)
A novel hypothesis splitting method implementation for multi-hypothesis filters
Bayramoglu, Enis; Ravn, Ole; Andersen, Nils Axel
2013-01-01
The paper presents a multi-hypothesis filter library featuring a novel method for splitting Gaussians into ones with smaller variances. The library is written in C++ for high performance, and the source code is open and free. The multi-hypothesis filters commonly approximate the distribution tran...
Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems
Min Chul Kim
2011-10-01
Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from various sensors. In the multi-sensing environment, noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. Performance of the Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q variables is one of the most important design factors for better performance of the Kalman filter. For this reason, we propose an improved Kalman filter to improve the noise-reduction ability of the Kalman filter. Only the measurement noise covariance is considered, because the system architecture is simple, and it is adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments.
Improved Kalman filter method for measurement noise reduction in multi sensor RFID systems.
Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon
2011-01-01
Recently, the range of available radio frequency identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from various sensors. In the multi-sensing environment, noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. Performance of the Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q variables is one of the most important design factors for better performance of the Kalman filter. For this reason, we proposed an improved Kalman filter to improve the noise-reduction ability of the Kalman filter. Only the measurement noise covariance was considered, because the system architecture is simple, and it is adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less mean squared error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments.
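The scalar form of the Kalman recursion underlying both records above can be sketched as follows; the constant-temperature scenario, the function name, and the covariance values are illustrative assumptions, and the neural-network adjustment of R described in the abstracts is not reproduced here.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=1.0):
    """Scalar Kalman filter: q is the process (system) noise covariance Q,
    r the measurement noise covariance R."""
    x, p = measurements[0], 1.0          # initial state estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                        # predict: covariance grows by process noise
        k = p / (p + r)                  # Kalman gain weighs prediction vs measurement
        x = x + k * (z - x)              # update state with the innovation
        p = (1.0 - k) * p                # update covariance
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
truth = 25.0 * np.ones(200)              # constant temperature, e.g. a smart-tag sensor
noisy = truth + rng.normal(0.0, 1.0, 200)
filtered = kalman_1d(noisy, r=1.0)

mse_raw = np.mean((noisy - truth) ** 2)
mse_filt = np.mean((filtered[50:] - truth[50:]) ** 2)  # skip the initial transient
```

A smaller R makes the filter trust measurements more (faster but noisier); a larger R smooths harder, which is exactly the trade-off the adaptive tuning targets.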
Filter-design perspective applied to dynamical decoupling of a multi-qubit system
Su Zhikun; Jiang Shaoji
2012-01-01
We employ the filter-design perspective and derive the filter functions according to nested Uhrig dynamical decoupling (NUDD) and symmetric dynamical decoupling (SDD) in the pure-dephasing spin-boson model with N qubits. The performances of NUDD and SDD are discussed in detail for a two-qubit system. The analysis shows that (i) SDD outperforms NUDD for the bath with a soft cutoff while NUDD approaches SDD as the cutoff becomes harder; (ii) if the qubits are coupled to a common reservoir, SDD helps to protect the decoherence-free subspace while NUDD destroys it; (iii) when the imperfect control pulses with finite width are considered, NUDD is affected in both the high-fidelity regime and coherence time regime while SDD is affected in the coherence time regime only. (paper)
A dynamic particle filter-support vector regression method for reliability prediction
Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico
2013-01-01
Support vector regression (SVR) has been applied to time series prediction, and some works have demonstrated the feasibility of its use to forecast system reliability. For accurate reliability forecasting, the selection of SVR's parameters is important. Existing research on SVR parameter selection divides the example dataset into training and test subsets and tunes the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of training. In contrast, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instance. By treating the SVR training model as the observation equation of a particle filter, our method allows updating the SVR model parameters dynamically when a new observation comes. Because of the adaptability of the parameters to the dynamic data pattern, the new PF–SVR method has superior prediction performance over that of standard SVR. Four application results show that PF–SVR is more robust than SVR to a decrease in the number of training data and to changes in initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict system reliability. •The method can adjust the SVR parameters according to changes in the data. •The method is robust to the size of the training data and to initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR.
Applying a particle filtering technique for canola crop growth stage estimation in Canada
Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi
2017-10-01
Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.
Calculation methods of reactivity using derivatives of nuclear power and FIR filter
Diaz, Daniel Suescun
2007-01-01
This work presents two new methods for the solution of the inverse point kinetics equation. The first method is based on integration by parts of the integral in the inverse point kinetics equation, which results in a power series in terms of the time-dependent nuclear power. Applying some conditions to the nuclear power, the reactivity is represented through the first and second derivatives of the nuclear power. This new calculation method for reactivity has special characteristics, among them the possibility of using different sampling periods and the possibility of restarting the calculation after an interruption associated with a possible equipment malfunction, allowing the calculation of reactivity in a non-continuous way. Apart from this, reactivity can be obtained with or without dependence on the nuclear power memory. The second method is based on the Laplace transform of the point kinetics equations, resulting in an expression equivalent to the inverse kinetics equation as a function of the power history. The reactivity can be written in terms of a summation of convolutions with impulse responses, characteristic of a linear system. For its digital form the Z-transform, the discrete version of the Laplace transform, is used. In this method the linear part is equivalent to a Finite Impulse Response (FIR) filter. The FIR filter is always stable and time-invariant and, moreover, can be implemented in a non-recursive way. This type of implementation does not require feedback, allowing the calculation of reactivity in a continuous way. The proposed methods were validated using signals with random noise, showing the relationship between the reactivity difference and the degree of the random noise. (author)
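The non-recursive (FIR) implementation described above can be sketched generically; the coefficients and the step input below are invented for illustration and are not the actual kinetics impulse-response coefficients.

```python
import numpy as np

def fir_filter(x, h):
    """Non-recursive (FIR) filter: each output sample is a finite weighted sum
    of past inputs only, so the filter is always stable and needs no feedback."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(h)):
            if n - k >= 0:
                y[n] += h[k] * x[n - k]   # convolution of the input history with h
    return y

h = np.array([0.5, 0.3, 0.2])   # illustrative impulse response (sums to 1)
x = np.ones(10)                  # e.g. sampled nuclear power held constant
y = fir_filter(x, h)
```

Because the sum is finite and feedback-free, the output can be updated continuously as each new power sample arrives, which is the property the abstract highlights.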
The filter of choice: filtration method preference among injecting drug users
Keijzer Lenneke
2011-08-01
Abstract. Background: Injection drug use syringe filters (IDUSF) are designed to prevent several complications related to the injection of drugs. Due to their small pore size, their use can reduce the solution's insoluble particle content and thus diminish the prevalence of phlebitis, talcosis.... Their low drug retention discourages filter reuse and sharing and can thus prevent viral and microbial infections. In France, drug users have had access to sterile cotton filters for 15 years and to an IDUSF (the Sterifilt®) for 5 years. This study was set up to explore the factors influencing filter preference amongst injecting drug users. Methods: Quantitative and qualitative data were gathered through 241 questionnaires and the participation of 23 people in focus groups. Results: Factors found to significantly influence filter preference were duration and frequency of injecting drug use, the type of drugs injected and subculture. Furthermore, IDUs' rationale for the preference of one type of filter over others was explored. It was found that filter preference depends on perceived health benefits (reduced harms, prevention of vein damage, protection of injection sites), drug retention (low retention: better high, protective mechanism against the reuse of filters; high retention: filter reuse as a protective mechanism against withdrawal), technical and practical issues (filter clogging, ease of use, time needed to prepare an injection) and beliefs (the conviction that a clear solution contains less active compound). Conclusion: It was concluded that the factors influencing filter preference are in favour of change; a shift towards the use of more efficient filters can be made through increased availability, information and demonstrations.
Zhou, Qiuling; Tang, Chen; Li, Biyuan; Wang, Linlin; Lei, Zhenkun; Tang, Shuwei
2018-01-01
The filtering of discontinuous optical fringe patterns is a challenging problem faced in this area. This paper is concerned with oriented partial differential equations (OPDEs)-based image filtering methods for discontinuous optical fringe patterns. We redefine a new controlling speed function to depend on the orientation coherence. The orientation coherence can be used to distinguish the continuous regions and the discontinuous regions, and can be calculated by utilizing fringe orientation. We introduce the new controlling speed function to the previous OPDEs and propose adaptive OPDEs filtering models. According to our proposed adaptive OPDEs filtering models, the filtering in the continuous and discontinuous regions can be selectively carried out. We demonstrate the performance of the proposed adaptive OPDEs via application to the simulated and experimental fringe patterns, and compare our methods with the previous OPDEs.
A hybrid filtering method based on a novel empirical mode decomposition for friction signals
Li, Chengwei; Zhan, Liwei
2015-01-01
During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
Neutron Filter Technique and its Use for Fundamental and Applied Investigations
Gritzay, V.; Kolotyi, V.
2008-01-01
At the Kyiv Research Reactor (KRR) the filtered neutron beam technique has been used for more than 30 years, and its development continues; the new and updated facilities for neutron cross-section measurements yield neutron cross sections with rather high accuracy: total neutron cross sections with an accuracy of 1% or better, and neutron scattering cross sections with 3-6% accuracy. The main purpose of this paper is to present the neutron measurement techniques developed at KRR and to demonstrate some experimental results obtained using these techniques.
Reflections on Mixing Methods in Applied Linguistics Research
Hashemi, Mohammad R.
2012-01-01
This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…
Qian, S.; Dunham, M.E.
1996-11-12
A system and method are disclosed for constructing a bank of filters which detect the presence of signals whose frequency content varies with time. The present invention includes a novel system and method for developing one or more time templates designed to match the received signals of interest, and the bank of matched filters uses the one or more time templates to detect the received signals. Each matched filter compares the received signal x(t) with a respective, unique time template that has been designed to approximate a form of the signals of interest. The robust time domain template is assumed to be of the form w(t) = A(t)cos(2πφ(t)), and the present invention uses the trajectory of a joint time-frequency representation of x(t) as an approximation of the instantaneous frequency function φ′(t). First, numerous data samples of the received signal x(t) are collected. A joint time-frequency representation is then applied to represent the signal, preferably using the time-frequency distribution series. The joint time-frequency transformation represents the analyzed signal energy at time t and frequency f, P(t,f), which is a three-dimensional plot of time vs. frequency vs. signal energy. Then P(t,f) is reduced to a multivalued function f(t), a two-dimensional plot of time vs. frequency, using a thresholding process. Curve fitting steps are then performed on the time/frequency plot, preferably using Levenberg-Marquardt curve fitting techniques, to derive a general instantaneous frequency function φ′(t) which best fits the multivalued function f(t). Integrating φ′(t) along t yields φ(t), which is then inserted into the form of the time template equation. A suitable amplitude A(t) is also preferably determined. Once the time template has been determined, one or more filters are developed which each use a version or form of the time template. 7 figs.
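The matched-filter comparison the patent describes can be sketched with a linear-chirp template of the assumed form w(t) = A(t)cos(2πφ(t)); the sampling rate, chirp parameters, and noise level below are illustrative assumptions, not values from the patent.

```python
import numpy as np

fs = 1000                                     # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
# Template w(t) = A(t)*cos(2*pi*phi(t)) with A(t) = 1 and a linear instantaneous
# frequency phi'(t) = f0 + k*t, so phi(t) = f0*t + 0.5*k*t**2.
f0, k = 50.0, 100.0
template = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Received signal: the template buried in noise at an unknown offset.
rng = np.random.default_rng(1)
received = np.concatenate([np.zeros(500), template, np.zeros(500)])
received += rng.normal(0.0, 0.5, received.size)

# Matched filtering: correlate the received signal against the time template;
# the correlation peak locates the signal of interest.
corr = np.correlate(received, template, mode="valid")
detected_offset = int(np.argmax(np.abs(corr)))
```

In the invention the template's φ(t) is fitted from the time-frequency trajectory of real data rather than assumed, but the detection step is this same correlation.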
Applying homotopy analysis method for solving differential-difference equation
Wang Zhen; Zou Li; Zhang Hongqing
2007-01-01
In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example is used to illustrate the validity and the great potential of the generalized homotopy analysis method in solving differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations.
Borodachev, S. M.
2016-06-01
A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
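A minimal sketch of the RLS recursion viewed as a Kalman filter for a constant state; the regressor data, forgetting factor, and initialization below are illustrative assumptions.

```python
import numpy as np

def rls(X, y, lam=1.0, delta=1000.0):
    """Recursive least squares: refine the estimate with each new observation.
    lam is the forgetting factor (lam = 1 is the constant-state Kalman case);
    P plays the role of the state covariance in the Kalman filter."""
    n = X.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                   # large initial uncertainty
    for x, yi in zip(X, y):
        k = P @ x / (lam + x @ P @ x)       # gain vector
        theta = theta + k * (yi - x @ theta)  # innovation update
        P = (P - np.outer(k, x @ P)) / lam    # covariance update
    return theta

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))               # changing observation conditions
true_theta = np.array([1.0, -2.0, 0.5])     # constant system state
y = X @ true_theta + rng.normal(0.0, 0.01, 500)
theta = rls(X, y)
```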
Software filtering method to suppress spike pulse interference in multi-channel scaler
Huang Shun; Zhao Xiuliang; Li Zhiqiang; Zhao Yanhui
2008-01-01
In a test of the anti-jamming capability of a multi-channel scaler, we found that spike pulse interference on the second-level counter, caused by motor start-stop operations, introduces a major counting error. The effective signal and the spike pulse interference have distinguishable characteristics, but a multi-channel hardware filtering circuit would be too bulky and could not filter thoroughly, so we designed a software filtering method. In this method, based on a C8051F020 MCU, we dynamically store the sampling values of one channel in a single one-byte variable and distinguish the rising and trailing edges of a signal from spike pulse interference by the changes in the value of this variable. Tests showed that the software filtering method solves the counting-error problem of the multi-channel scaler caused by motor start-stop operations. The flow chart and source code of the method are detailed in this paper. (authors)
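The one-byte filtering idea can be sketched as follows; the abstract does not give the exact edge-detection rule used on the C8051F020, so the all-ones/all-zeros acceptance rule below is an assumed reconstruction of the technique.

```python
# One-byte software filter: each new sample shifts a 0/1 bit into a single
# history byte; a level change is accepted only when all eight stored samples
# agree, so an isolated spike can never flip the output.
def make_debouncer():
    state = {"history": 0x00, "level": 0}
    def step(sample_bit):
        state["history"] = ((state["history"] << 1) | (sample_bit & 1)) & 0xFF
        if state["history"] == 0xFF:
            state["level"] = 1   # eight consecutive highs: genuine rising edge
        elif state["history"] == 0x00:
            state["level"] = 0   # eight consecutive lows: genuine trailing edge
        return state["level"]
    return step

step = make_debouncer()
spiky = [0] * 10 + [1] + [0] * 10 + [1] * 12   # a lone spike, then a real pulse
out = [step(s) for s in spiky]
```

The single-byte state makes the scheme cheap enough to run per channel on a small MCU, which matches the motivation in the abstract.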
Comparative study on γ-ray spectrum by several filtering method
Yuan Xinyu; Liu Liangjun; Zhou Jianliang
2011-01-01
A comparative study was conducted on γ-ray spectra processed with a number of widely used smoothing methods in order to show their filtering effects. The results showed that peaks were widened and overlapping peaks increased with energy-domain filtering. The filter and its parameters should be chosen carefully in the frequency domain. Wavelet transformation preserves the high-frequency components of the signal well. An improved threshold method combines the advantages of the hard and soft threshold methods and is suitable for weak peak detection. A new filter based on the gravity-model approach was put forward, with its denoising level assessed by the standard deviation. This method not only preserves the signal and the net peak area well, but also attains better results with a simple computer program. (authors)
Szilagyi, Gyoergy
1989-01-01
A full-flow condensation water purification system is applied in the secondary cooling circuit of the Paks NPP. The electromagnetic filter of the filtering system eliminates ferromagnetic impurities. The filter consists of a high current coil and an automatic control unit. During the improvement of this unit, a FESTO FPC-404 type controller based on an extended capability PLC was installed. (R.P.) 5 figs
Alzahrani, Hani Ataiq
2014-09-01
ABSTRACT: Testing the Feasibility of Using PERM to Apply Scattering-Angle Filtering in the Image-Domain for FWI Applications. Full Waveform Inversion (FWI) is a non-linear optimization problem aimed at estimating subsurface parameters by minimizing the misfit between modeled and recorded seismic data using gradient descent methods, which are the only practical choice because of the size of the problem. Due to the high non-linearity of the problem, gradient methods will converge to a local minimum if the starting model is not close to the true one. The accuracy of the long-wavelength components of the initial model controls the level of non-linearity of the inversion. In order for FWI to converge to the global minimum, we have to obtain the long-wavelength components of the model before inverting for the short wavelengths. Ultra-low temporal frequencies are sensitive to the smooth (long wavelength) part of the model, and can be utilized by waveform inversion to resolve that part. Unfortunately, frequencies in this range are normally missing in field data due to data-acquisition limitations. The lack of low frequencies can be compensated for by utilizing wide-aperture data, as they include arrivals that are especially sensitive to the long-wavelength components of the model. The higher the scattering angle of a recorded event, the higher the model wavelength it can resolve. Based on this property, a scattering-angle filtering algorithm is proposed to start the inversion process with events corresponding to the highest scattering angle available in the data, and then include lower scattering angles progressively. The large scattering angles will resolve the smooth part of the model and reduce the non-linearity of the problem, then the lower ones will enhance the resolution of the model. Recorded data is first migrated using Pre-stack Exploding Reflector Migration (PERM), then the resulting pre-stack image is transformed into angle gathers to which
Method for HEPA filter leak scanning with differentiating aerosol detector
Kovach, B.J.; Banks, E.M.; Wikoff, W.O. [NUCON International, Inc., Columbus, OH (United States)
1997-08-01
While scanning HEPA filters for leaks with "off the shelf" aerosol detection equipment, the operator's scanning speed is limited by the time constant and threshold sensitivity of the detector. This is based on detection of the aerosol density, where the maximum signal is achieved when the scanning probe resides over the pinhole longer than several detector time constants. Since the differential value of the changing signal can be determined by observing only the first small fraction of the rising signal, using a differentiating amplifier will speed up the locating process. The other advantage of differentiation is that slow signal drift or zero offset will not interfere with the process of locating the leak, since they are not detected. A scanning hand-probe attachable to any NUCON® Aerosol Detector displaying the combination of both aerosol density and differentiated signal was designed. 3 refs., 1 fig.
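The advantage of differentiation over density detection can be sketched numerically; the drift rate, leak profile, smoothing window, and alarm threshold below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(2000)
drift = 0.001 * t                                   # slow zero drift of the detector
leak = 1.0 / (1.0 + np.exp(-(t - 1200) / 10.0))     # density rise as the probe nears a pinhole
signal = drift + leak + rng.normal(0.0, 0.01, t.size)

# Density-based detection must wait for the level to pass a threshold, and drift
# shifts that level; a differentiating stage reacts to the slope instead, which
# the slow drift barely affects.
window = 50
slope = np.convolve(np.diff(signal), np.ones(window) / window, mode="same")
alarm_index = int(np.argmax(slope > 0.012))         # first sample where the slope alarm fires
```

The slope alarm fires on the rising edge of the leak signal, well before the raw level stands clear of the accumulated drift.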
Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)
2016-10-15
Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error are used, such as Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), and new methods for human reliability analysis (HRA) are under development at this time. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for human error estimation, and it can be applied to any kind of operator action, including the severe accident management strategy.
Regularization of DT-MRI Using 3D Median Filtering Methods
Soondong Kwon
2014-01-01
DT-MRI (diffusion tensor magnetic resonance imaging) tractography is a method to determine the architecture of axonal fibers in the central nervous system by computing the direction of the principal eigenvectors obtained from the tensor matrix, which differs from conventional isotropic MRI. Tractography based on DT-MRI is known to require many computations and is highly sensitive to noise. Hence, adequate regularization methods, such as image processing techniques, are in demand. Among the many regularization methods, we are interested in the median filtering method. In this paper, we extended two-dimensional median filters already developed to three-dimensional median filters. We compared four median filtering methods: the two-dimensional simple median method (SM2D), the two-dimensional successive Fermat method (SF2D), the three-dimensional simple median method (SM3D), and the three-dimensional successive Fermat method (SF3D). Three kinds of synthetic data with different altitude angles from axial slices and one kind of human data from an MR scanner are considered for numerical implementation by the four filtering methods.
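A three-dimensional simple median filter of the SM3D kind can be sketched as follows; the window size and edge handling are our assumptions, and the successive Fermat variants are not reproduced here.

```python
import numpy as np

def median_filter_3d(volume, size=3):
    """Simple 3D median filter: replace each voxel by the median of its
    size**3 neighborhood (windows are clipped at the volume edges)."""
    r = size // 2
    out = np.empty_like(volume)
    nx, ny, nz = volume.shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                block = volume[max(i - r, 0):i + r + 1,
                               max(j - r, 0):j + r + 1,
                               max(k - r, 0):k + r + 1]
                out[i, j, k] = np.median(block)
    return out

vol = np.zeros((8, 8, 8))
vol[4, 4, 4] = 100.0                 # an isolated noise spike in the volume
smooth = median_filter_3d(vol)       # the spike is removed, edges preserved
```

The appeal of the median over a linear smoother is visible here: a single outlier voxel is eliminated entirely instead of being smeared into its neighbors.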
Printing method and printer used for applying this method
2006-01-01
The invention pertains to a method for transferring ink to a receiving material using an inkjet printer having an ink chamber (10) with a nozzle (8) and an electromechanical transducer (16) in cooperative connection with the ink chamber, comprising actuating the transducer to generate a pressure
NONLINEAR FILTER METHOD OF GPS DYNAMIC POSITIONING BASED ON BANCROFT ALGORITHM
ZHANG Qin; TAO Ben-zao; ZHAO Chao-ying; WANG Li
2005-01-01
Because of the terms neglected after linearization, the extended Kalman filter (EKF) becomes a form of suboptimal gradient descent algorithm. A divergent tendency exists in the GPS solution when the filter equations are ill-posed, and the deviation in the estimation cannot be avoided. Furthermore, the true solution may be lost in pseudorange positioning because the linearized pseudorange equations yield only partial solutions. To solve these problems in GPS dynamic positioning with the EKF, a closed-form Kalman filter method called the two-stage algorithm is presented for the nonlinear algebraic solution of GPS dynamic positioning, based on Bancroft's closed-form global nonlinear least-squares algorithm. The method separates the spatial parts from the temporal parts when processing the GPS filter problems and solves the nonlinear GPS dynamic positioning, thus yielding stable and reliable dynamic positioning solutions.
Talpalariu, C. M.; Talpalariu, J.; Popescu, O.; Mocanasu, M.; Lita, I.; Visan, D. A.
2016-01-01
In this work we studied a software filtering method implemented in a pulse-counting computerized measuring channel using a PIN diode radiation detector. In this case our interest was focused on improving measurement accuracy for low-rate decay radiation and on optimizing response time. During the development of the digital mathematical algorithm, we used a hardware radiation measurement channel configuration based on a PIN diode BPW34 detector, a preamplifier, a filter, and a computer-connected programmable counter. We report measurement results using two digital recursive methods under static and dynamic field evolution. Software for real-time graphical input/output diagram representation was designed and implemented, facilitating performance evaluation between the response of a fixed-configuration software recursive filter and a dynamically adaptive recursive filter. (authors)
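The contrast between a fixed-configuration and a dynamically adaptive recursive filter can be sketched with a first-order exponential filter; the gain values and the switching rule below are illustrative assumptions, not the authors' algorithm.

```python
# First-order recursive filter: y[n] = y[n-1] + a*(x[n] - y[n-1]).
# A small fixed gain `a` smooths a low count rate well but responds slowly to a
# step; the adaptive variant (our illustration) enlarges the gain whenever the
# innovation |x - y| is large, trading smoothing for response time.
def recursive_filter(samples, a=0.1, adaptive=False):
    y = samples[0]
    out = []
    for x in samples:
        gain = a
        if adaptive and abs(x - y) > 10:   # big jump: trust the new sample more
            gain = 0.5
        y = y + gain * (x - y)
        out.append(y)
    return out

counts = [5] * 50 + [100] * 50             # step change in count rate
fixed = recursive_filter(counts)
adapt = recursive_filter(counts, adaptive=True)
```

After the step, the adaptive filter tracks the new rate within a few samples while the fixed filter is still climbing, which is the response-time trade-off the record describes.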
The overview of damping methods for three-phase grid-tied inverter with LLCL-filter
Huang, Min; Blaabjerg, Frede; Loh, Poh Chiang
2014-01-01
Compared with the LCL filter, an LLCL-filter is characterized by smaller size and lower cost for grid-connected inverters. But this high-order filter may also have a resonance problem which will affect the system stability. Many methods can be used to alleviate the resonance problem, including active da...... and shows the advantages as well as disadvantages of these methods....
M. Akhoondzadeh
2011-04-01
Thermal anomaly is known as a significant precursor of strong earthquakes; therefore Land Surface Temperature (LST) time series have been analyzed in this study to locate relevant anomalous variations prior to the Bam (26 December 2003), Zarand (22 February 2005) and Borujerd (31 March 2006) earthquakes. The durations of the three datasets, which are comprised of MODIS LST images, are 44, 28 and 46 days for the Bam, Zarand and Borujerd earthquakes, respectively. In order to exclude seasonal temperature effects from the LST variations, Air Temperature (AT) data derived from the meteorological stations close to the earthquake epicenters have been taken into account. The detection of thermal anomalies has been assessed using interquartile, wavelet transform and Kalman filter methods, each presenting its own independent property in anomaly detection. The interquartile method has been used to construct the upper and lower bounds of the LST data in order to detect disturbed states outside the bounds which might be associated with impending earthquakes. The wavelet transform method has been used to locate local maxima within each time series of LST data, identifying earthquake anomalies by a predefined threshold. Also, the prediction property of the Kalman filter has been used in the detection process of prominent LST anomalies. The results indicate that the interquartile method is capable of detecting the highest-intensity anomaly values, the wavelet transform is sensitive to sudden changes, and the Kalman filter method significantly detects the highest unpredictable variations of LST. The three methods detected anomalous occurrences 1 to 20 days prior to the earthquakes, showing close agreement between the different methods applied to the LST data in the detection of pre-seismic anomalies. The proposed method for anomaly detection was also applied to regions irrelevant to earthquakes, for which no anomaly was detected.
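The interquartile method described above can be sketched as follows; the series length matches the Bam dataset (44 days), but the synthetic data, the absence of detrending, and the k = 1.5 factor are illustrative assumptions.

```python
import numpy as np

def interquartile_bounds(series, k=1.5):
    """Upper and lower bounds from the interquartile range; values outside
    the bounds are flagged as anomalous (disturbed states)."""
    q1, q3 = np.percentile(series, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

rng = np.random.default_rng(4)
lst = rng.normal(20.0, 1.0, 44)      # 44 days of LST residuals (deg C, synthetic)
lst[30] += 8.0                       # injected anomaly on day 30
lo, hi = interquartile_bounds(lst)
anomalies = np.where((lst < lo) | (lst > hi))[0]
```

Because the quartiles are robust to a few extreme values, the injected spike barely moves the bounds and is flagged cleanly.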
Method and means for filtering polychlorinated biphenyls from a gas stream
Sowinski, R.F.
1992-01-01
This patent describes a method of filtering, adjacent to an end user-customer's residence or business in which at least a single gas appliance is located, a natural gas stream in which polychlorinated biphenyls (PCBs) and degraded PCB products have been concentrated at levels sufficient to be a health threat in a natural gas gathering and distributing network. It comprises: introducing the natural gas stream to a filter selected from a group that includes impingement, absorbing and adsorbing media, whereby PCBs and degraded PCB products, concentrated in the gas stream at levels sufficient to be a health threat by periodic loading of the natural gas within the gathering and distributing network, are filtered from the gas stream and captured irrespective of mode of transport; passing the filtered natural gas stream to the customer's gas appliance, wherein safe use of the energy associated with the stream occurs; and periodically and safely removing the filter and inserting a new filter in its place.
Gillet, M.
1986-07-01
This thesis presents a study of the surveillance of the primary-circuit water inventory of a pressurized water reactor. A reference model is developed as the basis of an automatic system ensuring detection and real-time diagnosis. The methods adapted to our application are statistical tests and a pattern recognition method. The estimation of the detected anomalies is treated by the least-squares fit method and by filtering. A new projected optimization method with superlinear convergence is developed in this framework, and a segmented linearization of the model is introduced with a view to multiple filtering. 46 refs [fr]
Wang, Z.; Kuzay, T.M.; Hahn, U.
1993-01-01
Synchrotron x-ray windows are vacuum separators and are usually made of thin beryllium metal. Filters are provided upstream to absorb the soft x-rays so that the window can be protected from overheating, which could result in failure. The filters are made of thin carbon products or sometimes beryllium, the same material as the window. When the synchrotron x-rays pass through a filter or window, part of the photons will be absorbed by the filter or window. The absorbed photons cause heat to build up within the filter or window. Successful filter and window designs should effectively dissipate the heat generated by the absorbed photons and guarantee the safety of the filter and window. The cooling methods typically used in a filter or window design are conduction and radiation cooling or a combination of the two. The different cooling methods were first examined with regard to efficiency and effectiveness in different temperature ranges. Analysis results are presented for temperature distribution and corresponding thermal stresses in the filter and window. Another important issue to be resolved in designing a filter/window assembly is how to select the thickness of the filters and windows. This paper focuses on the criteria for choosing the thickness of a filter: whether it is better to use a few thick filters or a series of thin ones; how to determine the minimum/maximum thickness; and the difference in thickness considerations for the window versus the filter. Numerical investigations are presented
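As a rough illustration of the thickness question raised above, a single-energy Beer-Lambert sketch shows that splitting one thick filter into several thin ones leaves the total absorbed power unchanged but spreads the heat load unevenly, with the upstream elements absorbing the most. The attenuation coefficient below is invented, and real designs integrate over the synchrotron spectrum:

```python
import numpy as np

def absorbed_fractions(mu, thicknesses):
    """Fraction of incident beam power absorbed by each filter in series.

    mu: linear attenuation coefficient (1/mm) at one photon energy;
    thicknesses: per-filter thickness (mm), listed in beam order.
    """
    t = np.asarray(thicknesses, dtype=float)
    depth = np.concatenate(([0.0], np.cumsum(t)))
    trans = np.exp(-mu * depth)          # transmission down to each interface
    return trans[:-1] - trans[1:]        # power absorbed in each slab

mu = 0.5                                 # invented effective coefficient, 1/mm
one_thick = absorbed_fractions(mu, [2.0])
four_thin = absorbed_fractions(mu, [0.5] * 4)
print(one_thick.sum(), four_thin.sum())  # equal total absorption
print(four_thin)                         # upstream filters absorb the most
```

This is one reason the first filter in a series often needs the most aggressive cooling.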
Wave field restoration using three-dimensional Fourier filtering method.
Kawasaki, T; Takai, Y; Ikuta, T; Shimizu, R
2001-11-01
A wave field restoration method in transmission electron microscopy (TEM) was mathematically derived based on a three-dimensional (3D) image formation theory. Wave field restoration using this method together with spherical aberration correction was experimentally confirmed in through-focus images of amorphous tungsten thin film, and the resolution of the reconstructed phase image was successfully improved from the Scherzer resolution limit to the information limit. In an application of this method to a crystalline sample, the surface structure of Au(110) was observed in a profile-imaging mode. The processed phase image showed quantitatively the atomic relaxation of the topmost layer.
Discrimination symbol applying method for sintered nuclear fuel product
Ishizaki, Jin
1998-01-01
The present invention provides a method for applying discrimination information, such as the enrichment degree, to the end face of a sintered nuclear fuel product. Namely, discrimination symbols conveying information on the powders are applied with a sintering aid to the end face of a member formed by molding nuclear fuel powders under pressure. Then, the molded product is sintered. The sintering aid comprises aluminum oxide, a mixture of aluminum oxide and silicon dioxide, aluminum hydride, or aluminum stearate, alone or in admixture. As means of applying the sintering aid, the discrimination symbols are drawn in isostearic acid on the end face of the molded product and the sintering aid is sprayed onto them, or the sintering aid is applied directly, or the sintering aid is suspended in isostearic acid and the suspension is applied with a brush. As a result, visible discrimination information can easily be applied to the sintered member. (N.H.)
Planetary gearbox fault feature enhancement based on combined adaptive filter method
Shuangshu Tian
2015-12-01
The reliability of vibration signals acquired from a planetary gear system (the indispensable part of a wind turbine gearbox) is directly related to the accuracy of fault diagnosis. The complex operating environment introduces many interference signals into the vibration signals. Furthermore, both the multiple gears meshing with each other and the differences in transmission routes produce strong nonlinearity in the vibration signals, which makes it difficult to eliminate the noise. This article presents a combined adaptive filter method: taking a delayed signal as the reference signal, the self-adaptive noise cancellation (SANC) method is adopted to eliminate the white noise; meanwhile, by applying a Gaussian function to transform the input signal into a high-dimensional feature-space signal, the kernel least mean square (KLMS) algorithm is used to cancel the nonlinear interference. The effectiveness of the method has been verified on simulated signals and test-rig signals. On the simulated signal, the signal-to-noise ratio is improved by around 30 dB (white noise) and the amplitude of the nonlinear interference signal is suppressed by up to 50%. Experimental results show remarkable improvements and enhanced gear fault features.
Ping, Jing
2017-05-19
Optimal management of subsurface processes requires the characterization of the uncertainty in reservoir description and reservoir performance prediction. For fractured reservoirs, the location and orientation of fractures are crucial for predicting production characteristics. With the help of accurate and comprehensive knowledge of fracture distributions, early water/CO2 breakthrough can be prevented and sweep efficiency can be improved. However, since the rock property fields are highly non-Gaussian in this case, it is a challenge to estimate fracture distributions by conventional history matching approaches. In this work, a method that combines vector-based level-set parameterization technique and ensemble Kalman filter (EnKF) for estimating fracture distributions is presented. Performing the necessary forward modeling is particularly challenging. In addition to the large number of forward models needed, each model is used for sampling of randomly located fractures. Conventional mesh generation for such systems would be time consuming if possible at all. For these reasons, we rely on a novel polyhedral mesh method using the mimetic finite difference (MFD) method. A discrete fracture model is adopted that maintains the full geometry of the fracture network. By using a cut-cell paradigm, a computational mesh for the matrix can be generated quickly and reliably. In this research, we apply this workflow on 2D two-phase fractured reservoirs. The combination of MFD approach, level-set parameterization, and EnKF provides an effective solution to address the challenges in the history matching problem of highly non-Gaussian fractured reservoirs.
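The EnKF analysis step at the core of such a workflow can be sketched as follows: a generic stochastic, perturbed-observation update on a toy two-parameter state. Nothing here is specific to the level-set fracture parameterization; it only shows the ensemble update mechanics:

```python
import numpy as np

def enkf_update(ensemble, H, y_obs, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_state, N) prior members; H: (n_obs, n_state) linear(ized)
    observation operator; y_obs: (n_obs,); R: observation error covariance.
    """
    N = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)
    Pxy = X @ HXp.T / (N - 1)                 # state-observation covariance
    Pyy = HXp @ HXp.T / (N - 1) + R           # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)              # Kalman gain
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(len(y_obs)), R, N).T
    return ensemble + K @ (Y - HX)

rng = np.random.default_rng(2)
truth = np.array([1.0, -2.0])
H = np.eye(2)
R = 0.1 * np.eye(2)
prior = rng.normal(0.0, 2.0, (2, 200))        # broad, uninformed prior
obs = truth + rng.multivariate_normal(np.zeros(2), R)
post = enkf_update(prior, H, obs, R, rng)
print(post.mean(axis=1))                      # pulled close to the truth
```

In the paper's setting the "state" would be the level-set coefficients and H would involve the MFD flow simulation, which is why so many forward runs are needed.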
Research of isolated resonances using the average energy shift method for filtered neutron beam
Gritzay, O.O.; Grymalo, A.K.; Kolotyi, V.V.; Mityushkin, O.O.; Venediktov, V.M.
2010-01-01
This work gives a detailed description of one of the research directions of the Neutron Physics Department (NPD): the study of the resonance parameters of isolated nuclear levels on the filtered neutron beam of horizontal experimental channel HEC-8 of the WWR-M reactor. Research on resonance parameters remains topical because, for many nuclei, there are essential differences between the resonance parameter values given in different evaluated nuclear data libraries (ENDLs). Such research is possible thanks to sets of neutron cross sections obtained with the same filter but with slightly shifted filter average energies. The shift of the filter average energy can be realized by several processes; in this work it is realized through the dependence of neutron energy on scattering angle, and the method is supported by the beam equipment.
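Isolated-resonance analyses of this kind conventionally fit the single-level Breit-Wigner shape (a standard formula quoted here for context, not taken from this abstract):

```latex
\sigma(E) \;=\; \sigma_0\,\frac{\Gamma^2/4}{(E - E_0)^2 + \Gamma^2/4}
```

where $E_0$ is the resonance energy, $\Gamma$ the total width, and $\sigma_0$ the peak cross section. Measuring the cross section at slightly shifted filter average energies samples $\sigma(E)$ at several points near $E_0$, which constrains both parameters.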
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically relies on quaternion star trackers (STs) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of the knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with a white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: (1) an extended Kalman filter (EKF) augmented with Markov states, and (2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
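A toy version of the Markov-state augmentation in option (1) can be sketched with a scalar angle and a first-order Gauss-Markov star-tracker bias carried as an extra filter state. All numbers are illustrative, not mission values:

```python
import numpy as np

dt, tau = 1.0, 100.0
phi_b = np.exp(-dt / tau)                   # Markov bias transition factor
F = np.array([[1.0, 0.0], [0.0, phi_b]])    # state: [angle, ST bias]
H = np.array([[1.0, 1.0]])                  # ST measures angle plus its bias
Q = np.diag([1e-6, (1 - phi_b**2) * 1e-4])  # process noise, bias driving noise
Rm = np.array([[1e-4]])                     # ST measurement noise

rng = np.random.default_rng(3)
x_true = np.array([0.0, 0.02])
x, P = np.zeros(2), np.eye(2) * 1e-2
for _ in range(500):
    # truth propagation and noisy ST measurement
    x_true = F @ x_true + rng.normal(0, np.sqrt(np.diag(Q)))
    z = H @ x_true + rng.normal(0, np.sqrt(Rm[0, 0]), 1)
    # KF predict
    x = F @ x
    P = F @ P @ F.T + Q
    # KF update
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
print(x, x_true)
```

Because the bias decays with its own time constant while the angle follows different dynamics, the filter can slowly separate the two; the directly measured sum of angle and bias is tracked tightly throughout.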
Building "Applied Linguistic Historiography": Rationale, Scope, and Methods
Smith, Richard
2016-01-01
In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…
A neural network-based optimal spatial filter design method for motor imagery classification.
Ayhan Yuksel
In this study, a novel spatial filter design method is introduced. Spatial filtering is an important processing step for feature extraction in motor imagery-based brain-computer interfaces. This paper introduces a new motor imagery signal classification method combined with spatial filter optimization, in which the spatial filter and the classifier are trained simultaneously using a neural network approach. The proposed spatial filter network (SFN) is composed of two layers, a spatial filtering layer and a classifier layer, linked to each other by non-linear mapping functions. The proposed method addresses two shortcomings of the common spatial patterns (CSP) algorithm. First, CSP aims to maximize the between-class variance while ignoring the minimization of within-class variances; consequently, the features obtained using the CSP method may have large within-class variances. Second, the maximizing optimization function of CSP increases the classification accuracy only indirectly, because an independent classifier is used after the CSP method. With SFN, we aimed to maximize the between-class variance while minimizing within-class variances and simultaneously optimizing the spatial filter and the classifier. To classify motor imagery EEG signals, we modified the well-known feed-forward structure and derived forward and backward equations that correspond to the proposed structure. We tested our algorithm on simple toy data. Then, we compared the SFN with conventional CSP and its multi-class version, called one-versus-rest CSP, on two data sets from BCI competition III. The evaluation results demonstrate that SFN is a good alternative for classifying motor imagery EEG signals, with increased classification accuracy.
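The two-layer structure can be sketched as a forward pass. The shapes and the log-variance link function below are plausible choices for motor imagery features, but the abstract does not specify the exact mapping functions, so treat this as an assumption-laden illustration:

```python
import numpy as np

def sfn_forward(X, W_sp, W_cl):
    """Forward pass of a two-layer spatial-filter network (illustrative).

    X: (channels, samples) one EEG trial; W_sp: (filters, channels) spatial
    layer; W_cl: (classes, filters) classifier on log-variance features.
    """
    S = W_sp @ X                              # spatially filtered signals
    feat = np.log(np.var(S, axis=1) + 1e-12)  # non-linear mapping (log-power)
    logits = W_cl @ feat
    p = np.exp(logits - logits.max())         # softmax class probabilities
    return p / p.sum()

rng = np.random.default_rng(4)
X = rng.normal(size=(8, 250))                 # 8 channels, 250 samples
W_sp = rng.normal(size=(4, 8)) * 0.1          # 4 spatial filters
W_cl = rng.normal(size=(2, 4))                # 2 motor imagery classes
print(sfn_forward(X, W_sp, W_cl))
```

Joint training then backpropagates the classification loss through both `W_cl` and `W_sp`, which is what distinguishes SFN from CSP followed by a separate classifier.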
Electromagnetic design methods in systems-on-chip: integrated filters for wireless CMOS RFICs
Contopanagos, Harry (Institute for Microelectronics, NCSR 'Demokritos', PO Box 60228, GR-153 10 Aghia Paraskevi, Athens, Greece)
2005-01-01
We present general methods for designing on-chip CMOS passives and utilizing these integrated elements to design on-chip CMOS filters for wireless communications. These methods rely on full-wave electromagnetic numerical calculations that capture all the physics of the underlying foundry technologies. This is especially crucial for deep sub-micron CMOS technologies as it is important to capture the physical effects of finite (and mediocre) Q-factors limited by material losses and constraints on expensive die area, low self-resonance frequencies and dual parasitics that are particularly prevalent in deep sub-micron CMOS processes (65 nm-0.18 μm). We use these integrated elements in an ideal synthesis of a Bluetooth/WLAN pass-band filter in single-ended or differential architectures, and show the significant deviations of the on-chip filter response from the ideal one. We identify which elements in the filter circuit need to maximize their Q-factors and which Q-factors do not affect the filter performance. This saves die area, and predicts the FET parameters (especially transconductances) and negative-resistance FET topologies that have to be integrated in the filter to restore its performance. (invited paper)
Satoh, K; Koizumi, D; Narahashi, S [NTT DoCoMo, Inc., 3-5 Hikari-no-oka, 239-8536 Yokosuka (Japan)
2006-06-01
This paper proposes a novel method to improve the power handling capability of a coplanar waveguide (CPW) high-temperature superconducting (HTS) filter. The noteworthy point of the proposed method is that the power handling capability is improved by reducing the maximum current density of the filter. Numerical investigations confirm that a CPW HTS filter using 66-Ω characteristic impedance resonators (66-Ω CPW HTSF) reduces the maximum current density compared to one using conventional 50-Ω resonators (50-Ω CPW HTSF). We fabricated 5-GHz band four-pole Chebyshev CPW HTSFs based on the proposed and conventional methods. The fabricated 66-Ω CPW HTSF exhibited a third-order intercept point (TOI) of +61 dBm, while the 50-Ω CPW HTSF exhibited a TOI of +54 dBm, both at 60 K. These results indicate the effectiveness of the proposed method.
Applying Mixed Methods Research at the Synthesis Level: An Overview
Heyvaert, Mieke; Maes, Bea; Onghena, Patrick
2011-01-01
Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…
A Method for Microalgae Proteomics Analysis Based on Modified Filter-Aided Sample Preparation.
Li, Song; Cao, Xupeng; Wang, Yan; Zhu, Zhen; Zhang, Haowei; Xue, Song; Tian, Jing
2017-11-01
With the fast development of microalgal biofuel research, proteomics studies of microalgae have increased quickly. The filter-aided sample preparation (FASP) method has been a widely used proteomics sample preparation method since 2009. Here, a method for microalgae proteomics analysis based on modified filter-aided sample preparation (mFASP) is described that suits the characteristics of microalgae cells and eliminates the error caused by over-alkylation. Using Chlamydomonas reinhardtii as the model, the prepared sample was tested by standard LC-MS/MS and compared with previous reports. The results showed that mFASP is suitable for most occasions of microalgae proteomics studies.
Joint Spatio-Temporal Filtering Methods for DOA and Fundamental Frequency Estimation
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Benesty, Jacob
2015-01-01
…some attention in the community and is quite promising for several applications. The proposed methods are based on optimal, adaptive filters that leave the desired signal, having a certain DOA and fundamental frequency, undistorted and suppress everything else. The filtering methods simultaneously operate in space and time, whereby it is possible to resolve cases that are otherwise problematic for pitch estimators or DOA estimators based on beamforming. Several special cases and improvements are considered, including a method for estimating the covariance matrix based on the recently proposed…
A dynamic load estimation method for nonlinear structures with unscented Kalman filter
Guo, L. N.; Ding, Y.; Wang, Z.; Xu, G. S.; Wu, B.
2018-02-01
A force estimation method is proposed for hysteretic nonlinear structures. The equation of motion for the nonlinear structure is represented in state space, and the state variable is augmented by the unknown time history of the external force. The unscented Kalman filter (UKF) is improved for force identification in state space, considering the ill-conditioning that arises in the computation of square roots of the covariance matrix. The proposed method is first validated in a numerical simulation study of a 3-storey nonlinear hysteretic frame excited by a periodic force, in which each storey is assumed to follow a nonlinear hysteretic model; the external force is identified with measurement noise taken into account. Then a seismically isolated building subjected to earthquake excitation and impact force is studied. The isolation layer performs nonlinearly during the earthquake excitation, and the impact force between the seismically isolated structure and the retaining wall is estimated with the proposed method. Uncertainties such as measurement noise, model error in storey stiffness, and unexpected environmental disturbances are considered. A real-time substructure test of an isolated structure is conducted to verify the proposed method: the linear main structure is taken as the numerical substructure, while one of the isolation bearings with additional mass is taken as the nonlinear physical substructure; the force applied by the actuator on the physical substructure is identified and compared with the value measured by the force transducer. The method is also validated by a shaking-table test of a seismically isolated steel frame, in which the ground motion acceleration, as the unknown, is identified. Results from both the numerical simulations and the experimental studies indicate that the UKF-based force identification method can effectively identify external excitations for nonlinear structures.
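The square-root step that such UKF variants must guard appears in the sigma-point construction. The sketch below generates scaled sigma points with a small diagonal jitter as one common safeguard against an ill-conditioned covariance (the jitter size is an assumption, not the paper's specific remedy), and verifies that the weighted points reproduce the mean and covariance:

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled UKF sigma points; jitter is added before the Cholesky factor to
    guard against ill-conditioned covariance square roots."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * (P + 1e-12 * np.eye(n)))
    pts = np.vstack([x, x + L.T, x - L.T])          # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1 / (2 * (n + lam)))    # mean weights
    wc = wm.copy()                                  # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1 - alpha**2 + beta
    return pts, wm, wc

x = np.array([1.0, -0.5])
P = np.array([[0.2, 0.05], [0.05, 0.1]])
pts, wm, wc = sigma_points(x, P)
mean = wm @ pts
cov = (pts - mean).T @ np.diag(wc) @ (pts - mean)
print(mean, cov)   # the weighted points reproduce x and P
```

Propagating these points through the hysteretic model and re-weighting is what replaces the EKF's Jacobians in the force identification above.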
Use of Kalman filter methods in analysis of in-pile LMFBR accident simulations
Meek, C.C.; Doerner, R.C.
1983-01-01
Kalman filter methodology has been applied to in-pile liquid-metal fast breeder reactor simulation experiments to obtain estimates of the fuel-clad thermal gap conductance. A transient lumped-parameter model of the experiment is developed. An optimal estimate of the state vector chosen to characterize the experiment is obtained through the use of the Kalman filter. From this estimate, the fuel-clad thermal gap conductance is calculated as a function of time into the test and of axial position along the length of the fuel pin.
Evaluation of sampling methods for Bacillus spore-contaminated HVAC filters.
Calfee, M Worth; Rose, Laura J; Tufts, Jenia; Morse, Stephen; Clayton, Matt; Touati, Abderrahmane; Griffin-Gatchalian, Nicole; Slone, Christina; McSweeney, Neal
2014-01-01
The objective of this study was to compare an extraction-based sampling method to two vacuum-based sampling methods (vacuum sock and 37-mm cassette filter) with regard to their ability to recover Bacillus atrophaeus spores (surrogate for Bacillus anthracis) from pleated heating, ventilation, and air conditioning (HVAC) filters that are typically found in commercial and residential buildings. Electrostatic and mechanical HVAC filters were tested, both without and after loading with dust to 50% of their total holding capacity. The results were analyzed by one-way ANOVA across material types, presence or absence of dust, and sampling device. The extraction method gave higher relative recoveries than the two vacuum methods evaluated (p≤0.001). On average, recoveries obtained by the vacuum methods were about 30% of those achieved by the extraction method. Relative recoveries between the two vacuum methods were not significantly different (p>0.05). Although extraction methods yielded higher recoveries than vacuum methods, either HVAC filter sampling approach may provide a rapid and inexpensive mechanism for understanding the extent of contamination following a wide-area biological release incident. Published by Elsevier B.V.
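The one-way ANOVA comparison can be reproduced generically with SciPy; the replicate recovery percentages below are invented for illustration and are not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical percent-recovery replicates for the three sampling methods
extraction  = [92, 88, 95, 90, 91]
vacuum_sock = [30, 27, 33, 29, 31]
cassette    = [28, 32, 26, 30, 29]

stat, p = f_oneway(extraction, vacuum_sock, cassette)
print(stat, p)   # a small p-value indicates the method means differ
```

A pairwise follow-up (e.g. on the two vacuum methods alone) would mirror the study's finding that those two do not differ significantly.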
Heeres, Paul; Setiawan, Rineksa; Krol, Maarten Cornelis; Adema, Eduard Hilbrand
2009-12-01
This paper describes two new methods for the determination of NO(2) in ambient air. The first method consists of free-hanging filters with a diameter of 2.5 cm used as passive samplers. The filters are impregnated with triethanolamine to bind NO(2), and the amount of NO(2) on the filters is determined by standard colorimetric analysis. The second method uses fritted bubblers filled with Saltzman reagent; with a special procedure, the absorption efficiencies of the bubblers are determined using ambient air, without the use of standard gases or electronic analytical instruments. The results of the bubblers are used to calibrate the free-hanging filters. The two methods were applied simultaneously in the city of Yogyakarta, Indonesia; both are inexpensive and very well suited for use in low-budget situations. A characteristic of the free filter is the Sampling Volume, SV: the ratio of the amount of NO(2) on the filter to the ambient concentration. For the filter used in this study, with its amount of triethanolamine and exposure time, the SV is 0.0166 m(3), and the sampling rate (SR) of the filter, 4.6 cm(3)/s, is high. Hourly averaged measurements were performed for 15 hours per day in four busy streets. The measured amounts of NO(2) on the filters varied between 0.57 and 2.02 microg NO(2), at ambient air concentrations of 32 to 141 microg/m(3) NO(2). During the experiments the wind velocity was between 0.2 and 2.0 m/s, the relative humidity between 24 and 83%, and the temperature between 295 and 311 K. These variations in weather conditions had no influence on the uptake of NO(2).
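The Sampling Volume turns a filter loading directly into an ambient concentration; a back-of-envelope check using the abstract's own extreme values:

```python
# Ambient NO2 concentration from the mass collected on a free-hanging filter,
# using the Sampling Volume quoted above (SV = 0.0166 m^3 per exposure).
SV = 0.0166  # m^3

def concentration(mass_ug):
    """Ambient concentration (ug/m^3) from the NO2 mass on the filter (ug)."""
    return mass_ug / SV

for m in (0.57, 2.02):  # the extreme filter loadings reported in the abstract
    print(f"{m} ug on filter -> {concentration(m):.0f} ug/m^3")
```

These rough values (about 34 and 122 µg/m³) are of the same order as the reported ambient range of 32-141 µg/m³; the bubbler calibration accounts for the residual differences.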
An assessment of particle filtering methods and nudging for climate state reconstructions
S. Dubinkina (Svetlana); H. Goosse
2013-01-01
Using the climate model of intermediate complexity LOVECLIM in an idealized framework, we assess three data-assimilation methods for reconstructing the climate state. The methods are nudging, a particle filter with sequential importance resampling, and a nudging proposal particle filter.
Hyperspectral microscope imaging (HMI) method, which provides both spatial and spectral characteristics of samples, can be effective for foodborne pathogen detection. The acousto-optic tunable filter (AOTF)-based HMI method can be used to characterize spectral properties of biofilms formed by Salmon...
A cognition-based method to ease the computational load for an extended Kalman filter.
Li, Yanpeng; Li, Xiang; Deng, Bin; Wang, Hongqiang; Qin, Yuliang
2014-12-03
The extended Kalman filter (EKF) is the nonlinear extension of the Kalman filter (KF). It is a useful parameter estimation method when the observation model and/or the state transition model is not a linear function; however, its computational requirements burden the system. With the help of cognition-based design and the Taylor expansion method, a novel algorithm is proposed to ease the computational load of the EKF in azimuth prediction and localization under a nonlinear observation model. Where nonlinear functions and matrix inversions are involved, the method retains only the major components of the Taylor expansion, selected according to the current performance and the performance requirements. As a result, the computational load is greatly lowered while performance is ensured. Simulation results show that the proposed measure delivers filtering output with precision similar to the regular EKF, while the computational load is substantially lowered.
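The flavor of such first-order truncation can be seen in a bearing-only EKF measurement update, where only the Jacobian (the first Taylor term) of the arctangent observation model is retained; the geometry and noise values below are invented, and the paper's adaptive component selection goes beyond this fixed first-order form:

```python
import numpy as np

def ekf_azimuth_update(x, P, z, R):
    """EKF update for an azimuth measurement. x = [px, py] position estimate."""
    px, py = x
    h = np.arctan2(py, px)                    # nonlinear observation model
    r2 = px**2 + py**2
    H = np.array([[-py / r2, px / r2]])       # first-order Jacobian only
    S = H @ P @ H.T + R
    K = P @ H.T / S                           # Kalman gain (scalar innovation)
    innov = np.arctan2(np.sin(z - h), np.cos(z - h))  # wrap to [-pi, pi]
    x_new = x + (K * innov).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x = np.array([100.0, 50.0])
P = np.eye(2) * 25.0
true_az = np.arctan2(60.0, 95.0)              # truth slightly off the estimate
x_new, P_new = ekf_azimuth_update(x, P, true_az, R=1e-4)
print(x_new, np.trace(P_new))
```

Higher-order Taylor terms would refine `H` at extra cost; dropping them when the accuracy budget allows is the trade-off the proposed method automates.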
Groeger, S; Dietzsch, M; Burkhardt, T
2011-01-01
For a specific manipulation of friction surfaces it is important to measure and calculate geometrical parameters to derive the tribological behavior. The new functional approach presented in this paper is the calculation of the characteristic lateral extension of the real contact surface as well as the representative contact radius by applying morphological filters to a 3D-set of data. All surface characteristics, including form, waviness, roughness as well as defined microstructures, are extracted holistically with a 3D Coordinate Measuring Instrument or a Form Measuring Instrument, but with the smallest available tip radius. The paper presents the benefit of this holistic extraction method and the application of morphological filtering for the description of the contact form (plateau or sphere), the real contact surface, number of contacts, the typical contact radius and the typical lateral extension of the micro contact plateaus.
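The morphological-filter step can be sketched with SciPy's grey-scale closing: a flat, disk-shaped structuring element rolled over the height map yields an upper envelope, and the points where the envelope touches the raw data approximate the contact plateaus. The disk radius and the synthetic surface are illustrative only:

```python
import numpy as np
from scipy.ndimage import grey_closing

# Disk-shaped flat structuring element (radius in pixels is an assumption)
r = 5
yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
disk = (xx**2 + yy**2 <= r**2)

# Synthetic rough surface: waviness plus fine roughness
rng = np.random.default_rng(5)
surface = (rng.normal(0, 0.1, (64, 64))
           + np.sin(np.linspace(0, 3 * np.pi, 64))[None, :])

closed = grey_closing(surface, footprint=disk)     # upper envelope over peaks
contact = np.isclose(closed, surface, atol=1e-12)  # envelope-touching points
print(contact.mean())                              # fraction of contact points
```

Counting connected regions in `contact` and measuring their extents would give the number of contacts and the typical plateau size discussed in the paper.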
A composite passive damping method of the LLCL-filter based grid-tied inverter
Wu, Weimin; Huang, Min; Sun, Yunjie
2012-01-01
This paper investigates the maximum and the minimum gain of the proportional resonant based grid current controller for a grid-tied inverter with a passively damped high-order power filter. It is found that the choice of the controller gain is limited by the local maximum amplitude determined by the Q-factor around the characteristic frequency of the filter and grid impedance. To obtain the Q-factor of a high-order system, an equivalent circuit analysis method is proposed and illustrated through several classical passive damped LCL- and LLCL-filters. It is shown that both the RC parallel damper, placed in parallel with the capacitor of the LCL-filter or with the Lf-Cf resonant circuit of the LLCL-filter, and the RL series damper, placed in series with the grid-side inductor, have their own application limits. Thus, a composite passive damped LLCL-filter for the grid-tied inverter is proposed, which can effectively…
SZOPOS, E.
2012-05-01
This paper presents an iterative method for designing FIR filters that implement arbitrary magnitude characteristics defined by the user through a set of frequency-magnitude points (frequency samples). The proposed method is based on the non-uniform frequency sampling algorithm. For each iteration a new set of frequency samples is generated by processing the set used in the previous run; this implies changing the samples' locations around the previous frequency values and adjusting their magnitudes through interpolation. If necessary, additional samples can be introduced as well. After each iteration the magnitude characteristic of the resulting filter is determined by using the non-uniform DFT and compared with the required one; if the errors are larger than the acceptable levels (set by the user), a new iteration is run; the length of the resulting filter and the values of its coefficients are also taken into consideration when deciding on a re-run. To demonstrate the efficiency of the proposed method, a tool for designing FIR filters that match human audiograms was implemented in LabVIEW. It was shown that the resulting filters have smaller coefficients than the standard ones and can also have lower order, while the errors remain relatively small.
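A single (non-iterated) pass of the frequency-sampling idea can be sketched with SciPy: specify arbitrary frequency-magnitude points, design the filter, and check the realized response against the targets. The audiogram-like target below is invented, and SciPy's `firwin2` uses uniform interpolation rather than the paper's non-uniform scheme:

```python
import numpy as np
from scipy.signal import firwin2, freqz

# Arbitrary target magnitudes at user-chosen frequency samples
freqs = [0.0, 0.125, 0.25, 0.5, 0.75, 1.0]   # normalized; 1.0 = Nyquist
gains = [1.0, 1.0, 0.5, 0.25, 0.1, 0.0]

taps = firwin2(101, freqs, gains)            # frequency-sampling FIR design
w, h = freqz(taps, worN=2048)

# realized magnitude at the specified points vs. the requested gains
realized = [abs(h[np.argmin(np.abs(w / np.pi - f))]) for f in freqs]
err = max(abs(r - g) for r, g in zip(realized, gains))
print(err)
```

The paper's contribution is the loop around such a step: moving and re-weighting the samples until the non-uniform-DFT error falls below the user's tolerance.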
An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation
Wuming Zhang
2016-06-01
Separating point clouds into ground and non-ground measurements is an essential step in generating digital terrain models (DTMs) from airborne LiDAR (light detection and ranging) data. However, most filtering algorithms need a number of complicated parameters to be carefully set to achieve high accuracy. In this paper, we present a new filtering method which only needs a few easy-to-set integer and Boolean parameters. Within the proposed approach, the LiDAR point cloud is inverted, and a rigid cloth is used to cover the inverted surface. By analyzing the interactions between the cloth nodes and the corresponding LiDAR points, the locations of the cloth nodes can be determined to generate an approximation of the ground surface. Finally, the ground points can be extracted from the LiDAR point cloud by comparing the original LiDAR points with the generated surface. Benchmark datasets provided by the ISPRS (International Society for Photogrammetry and Remote Sensing) Working Group III/3 are used to validate the proposed filtering method, and the experimental results yield an average total error of 4.58%, which is comparable with most state-of-the-art filtering algorithms. The proposed easy-to-use filtering method may help users without much experience to apply LiDAR data and related technology in their own applications more easily.
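The cloth idea can be caricatured in one dimension: invert the heights, let a row of cloth nodes fall under "gravity" until blocked by the inverted points, stiffen the row by pulling each node toward its neighbors, and label points that end up touching the cloth as ground. All parameters below are invented and the scheme is far simpler than the paper's 3D cloth simulation, but it shows the mechanism by which a rigid cloth refuses to sag into buildings:

```python
import numpy as np

def cloth_filter_1d(z, gravity=0.01, rigidity=0.5, n_iter=1500, threshold=0.2):
    """Toy 1-D cloth-simulation ground filter. Returns True for ground points."""
    inv = -np.asarray(z, dtype=float)            # invert the point heights
    cloth = np.full_like(inv, inv.max() + 1.0)   # cloth starts above everything
    for _ in range(n_iter):
        cloth -= gravity                         # gravity step
        cloth = np.maximum(cloth, inv)           # points block the cloth
        mean_nb = 0.5 * (cloth[:-2] + cloth[2:]) # rigidity: pull each interior
        cloth[1:-1] += rigidity * (mean_nb - cloth[1:-1])  # node to neighbors
        cloth = np.maximum(cloth, inv)
    return np.abs(cloth - inv) < threshold       # touching the cloth => ground

x = np.linspace(0.0, 10.0, 200)
z = 0.2 * x                                      # gently sloping terrain
z[80:90] += 3.0                                  # a "building" on top of it
is_ground = cloth_filter_1d(z)
print(is_ground.sum())
```

The cloth settles onto the inverted terrain but bridges over the narrow inverted "pit" left by the building, so the building points stay far from the cloth and are classified as non-ground.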
Sergey Yuryevich Astakhov
2013-03-01
Based on data obtained from the examination and subsequent follow-up of 47 patients (50 eyes) with refractory glaucoma, the efficacy of a new method of Ex-PRESS™ filtering device implantation was estimated. The data analysis showed that the proposed surgical procedure has a low level of intra- and post-operative complications, is technically simple, and provides long-term stabilization of the glaucomatous process. It can therefore be concluded that Ex-PRESS™ filtering device implantation is an effective method for the treatment of refractory glaucoma.
Quantitative EEG Applying the Statistical Recognition Pattern Method
Engedal, Knut; Snaedal, Jon; Hoegh, Peter
2015-01-01
BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...
Miéville, Frédéric A.; Bolard, Gregory; Benkreira, Mohamed; Ayestaran, Paul; Gudinchet, François; Bochud, François; Verdun, Francis R.
2011-03-01
The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64-slice and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process. Then 2D and 3D NPS were computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with lobular shape was observed while the magnitude of the NPS was kept constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed on 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak to the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was impacted by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were studied using the local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
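A common 2-D NPS estimator (not necessarily the exact pipeline of this study, which also detrends and windows the ROIs) averages the squared DFT magnitude over an ensemble of noise-only regions of interest:

```python
import numpy as np

def nps_2d(rois, px=0.5):
    """Common 2-D NPS estimator from an ensemble of noise-only ROIs:
    NPS(fx, fy) = (dx * dy / (Nx * Ny)) * <|DFT2(ROI - mean)|^2>.
    `px` is the pixel size (e.g. in mm); detrending here is a simple
    per-ROI mean subtraction."""
    rois = np.asarray(rois, float)
    _, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    ps = np.abs(np.fft.fft2(detrended)) ** 2
    return ps.mean(axis=0) * (px * px) / (nx * ny)

# sanity check: white noise of std 10 gives a flat NPS whose mean is
# approximately sigma^2 * px^2 = 100 * 0.25 = 25 (by Parseval's theorem)
rng = np.random.default_rng(0)
nps = nps_2d(rng.normal(0.0, 10.0, size=(200, 64, 64)), px=0.5)
print(abs(nps.mean() - 25.0) < 1.0)  # True
```

A 3-D NPS is the direct extension with a 3-D DFT over volumetric ROIs, which is what exposes the directional (lobular) structure described above.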
Wang, Jun; Zheng, Jiao; Lu, Hong; Yan, Qing; Wang, Li; Liu, Jingjing; Hua, Dengxin
2017-11-01
Atmospheric temperature is one of the important parameters for the description of the atmospheric state. Most of the detection approaches to atmospheric temperature monitoring are based on rotational Raman scattering for better understanding atmospheric dynamics, thermodynamics, atmospheric transmission, and radiation. In this paper, we present a fine-filter method based on wavelength division multiplexing, incorporating a fiber Bragg grating in the visible spectrum for the rotational Raman scattering spectrum. To achieve high-precision remote sensing, the strong background noise is filtered out by using the secondary cascaded light paths. Detection intensity and the signal-to-noise ratio are improved by increasing the utilization rate of return signal form atmosphere. Passive temperature compensation is employed to reduce the temperature sensitivity of fiber Bragg grating. In addition, the proposed method provides a feasible solution for the filter system with the merits of miniaturization, high anti-interference, and high stability in the space-based platform.
Analysis of moiré fringes by Wiener filtering: An extension to the Fourier method
Harasse, Sébastien; Yashiro, Wataru; Momose, Atsushi
2012-01-01
In X-ray Talbot interferometry, tilting the phase grating with respect to the absorption grating results in the formation of spatial fringes. The analysis of this moiré pattern, classically performed by the Fourier method, allows the extraction of the sample phase shift information from a single image. In this context, an extension to the Fourier method is proposed. The filter used to extract the fringe information is chosen optimally in the least-squares sense, given models for the zeroth and first order modes, noise and the modulation transfer function. The latter is obtained by measuring the detector response to moiré fringes with increasing frequencies. The obtained Wiener filter allows a better reconstruction of the phase information at all fringe frequencies, compared to the usual box or gaussian filters. This is demonstrated quantitatively by experiments using synchrotron radiation.
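The least-squares-optimal filter the abstract refers to has a simple frequency-domain form. The sketch below is a 1-D analogue with idealized, assumed-known power spectra (the paper instead builds its models from the measured modulation transfer function):

```python
import numpy as np

def wiener_filter_1d(y, signal_psd, noise_psd):
    """Frequency-domain Wiener filter W(f) = S(f) / (S(f) + N(f)): the
    linear filter that is optimal in the least-squares sense given power
    spectra for the signal and the noise."""
    W = signal_psd / (signal_psd + noise_psd)
    return np.real(np.fft.ifft(W * np.fft.fft(y)))

# fringe at 5 cycles per window, buried in white noise
n = 512
t = np.arange(n) / n
clean = np.cos(2 * np.pi * 5 * t)
rng = np.random.default_rng(1)
noisy = clean + rng.normal(0.0, 1.0, n)

# idealized spectral models, assumed known here: a line pair at +/-5
# cycles for the fringe, a flat spectrum for the noise
S = np.zeros(n)
S[5] = S[-5] = n / 2
N = np.ones(n)
den = wiener_filter_1d(noisy, S, N)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

Unlike a fixed box or Gaussian window, W adapts its pass-band to wherever the fringe power actually sits, which is why it behaves well at all fringe frequencies.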
FFT swept filtering: a bias-free method for processing fringe signals in absolute gravimeters
Křen, Petr; Pálinkáš, Vojtech; Mašika, Pavel; Val'ko, Miloš
2018-05-01
Absolute gravimeters, based on laser interferometry, are widely used for many applications in geoscience and metrology. Although currently the most accurate FG5 and FG5X gravimeters declare standard uncertainties at the level of 2-3 μGal, their inherent systematic errors affect the gravity reference determined by international key comparisons based predominantly on the use of FG5-type instruments. The measurement results for FG5-215 and FG5X-251 clearly showed that the measured g-values depend on the size of the fringe signal and that this effect might be approximated by a linear regression with a slope of up to 0.030 μGal/mV. However, these empirical results do not enable one to identify the source of the effect or to determine a reasonable reference fringe level for correcting g-values in an absolute sense. Therefore, both gravimeters were equipped with new measuring systems (according to Křen et al., Metrologia 53:27-40, 2016, https://doi.org/10.1088/0026-1394/53/1/27, applied to the FG5), running in parallel with the original systems. The new systems use an analogue-to-digital converter HS5 to digitize the fringe signal and a new method of fringe signal analysis based on FFT swept bandpass filtering. We demonstrate that the source of the fringe size effect is connected to a distortion of the fringe signal due to the electronic components used in the FG5(X) gravimeters. To obtain a bias-free g-value, the FFT swept method should be applied for the determination of zero-crossings. A comparison of g-values obtained from the new and the original systems clearly shows that the original system might be biased by approximately 3-5 μGal due to improper processing of the distorted fringe signal.
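The core of such a scheme — band-pass the fringe in the frequency domain so harmonic distortion cannot shift the zero-crossings, then interpolate the crossings — can be sketched as below. This is a simplified single-window toy with a fixed pass-band; the real fringe chirps during free fall, so the actual method sweeps the band along the record.

```python
import numpy as np

def fft_bandpass(sig, fs, f_lo, f_hi):
    """Zero-phase FFT band-pass: keep only the bins with f_lo <= f <= f_hi.
    A free-fall fringe chirps, so the real method sweeps this pass-band
    along the signal; one fixed band on a constant tone is shown here."""
    spec = np.fft.rfft(sig)
    f = np.fft.rfftfreq(len(sig), 1.0 / fs)
    spec[(f < f_lo) | (f > f_hi)] = 0.0
    return np.fft.irfft(spec, len(sig))

def zero_crossings(sig, fs):
    """Rising zero-crossing times via sign change + linear interpolation."""
    i = np.nonzero((sig[:-1] < 0) & (sig[1:] >= 0))[0]
    return (i - sig[i] / (sig[i + 1] - sig[i])) / fs

fs = 10000.0
t = np.arange(2000) / fs
# 100 Hz "fringe" plus a strong distortion harmonic at 2300 Hz
fringe = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 2300 * t)
tz = zero_crossings(fft_bandpass(fringe, fs, 50.0, 150.0), fs)
print(np.allclose(np.diff(tz), 0.01, atol=1e-4))  # crossings every 10 ms
```

Without the band-pass, the 2300 Hz component would pull individual zero-crossings off the fundamental's timing, which is exactly the kind of distortion-induced bias the abstract describes.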
Electronic-projecting Moire method applying CBR-technology
Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.
2018-01-01
An electronic-projecting method based on the Moire effect for examining surface topology is suggested. Conditions for forming Moire fringes, and the dependence of their parameters on the reference parameters of the object and virtual grids, are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem employs CBR technology, based on a case base. The approach analyses and forms a decision for each separate local area, with consequent formation of a common topology map.
Almeida, P. G. S. de; Taveres, F. v. F.; Chernicharo, C. A. I.
2009-07-01
Trickling filters are a very promising alternative for the post-treatment of effluents from UASB reactors treating domestic sewage, especially in developing countries. Although a fair amount of information is already available regarding organic matter removal in this combined system, very little is known about nitrogen and surfactant removal in trickling filters post-UASB reactors. Therefore, the purpose of this study was to evaluate and compare the effect of different application rates and packing media types on trickling filters applied to the post-treatment of effluents from UASB reactors, regarding the removal of ammonia nitrogen and surfactants. (Author)
Mundim, Evaldo Cesario
1999-02-01
In this dissertation, Factorial Kriging analysis for the filtering of seismic attributes applied to reservoir characterization is considered. Factorial Kriging works in the spatial domain in a way analogous to spectral analysis in the frequency domain. The incorporation of filtered attributes via External Drift Kriging and Collocated Cokriging in the estimation of reservoir characteristics is discussed. Its relevance for the reservoir porous volume calculation is also evaluated, based on a comparative analysis of the volume risk curves derived from stochastic conditional simulations with a collocated variable and stochastic conditional simulations with external drift. Results prove Factorial Kriging to be an efficient technique for the filtering of seismic attribute images, in which geologic features are enhanced. The attribute filtering improves the correlation between the attributes and the well data and the estimates of the reservoir properties. The differences between the estimates obtained by External Drift Kriging and Collocated Cokriging are also reduced. (author)
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method
Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.
1984-06-01
Forty-one patients (25 male, 16 female) were studied by Radionuclide Cineangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. Both preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior by the consensus of two independent observers in 65 of 66 studies (98%) using Fourier filtered data. The LVEF differed between the two processes by greater than .05 in 17 of 66 studies (26%), including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smoothing preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technologic advance in the development of the RNCA study.
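The two preprocessing steps being compared can be sketched as follows. Both functions are generic textbook forms, not the study's clinical software: temporal Fourier filtering of a gated time-activity curve keeps only the first few cardiac harmonics, while nine-point smoothing is a 3x3 spatial convolution.

```python
import numpy as np

def fourier_filter(curve, n_harmonics=2):
    """Temporal Fourier filtering of a gated time-activity curve: keep the
    DC term plus the first few harmonics of the cardiac cycle and discard
    the higher-frequency bins, which are mostly noise."""
    spec = np.fft.rfft(curve)
    spec[n_harmonics + 1:] = 0.0
    return np.fft.irfft(spec, len(curve))

def nine_point_smooth(img):
    """Standard 3x3 (nine-point) spatial smoothing, weights 1-2-1 / 16."""
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dy + 1, dx + 1] * np.roll(np.roll(img, dy, 0), dx, 1)
    return out

# Fourier filtering a noisy one-cycle volume curve never increases the
# error, because the true curve lies entirely in the retained harmonics
frames = 16
t = np.arange(frames)
curve = 100 + 30 * np.cos(2 * np.pi * t / frames)
rng = np.random.default_rng(2)
noisy = curve + rng.normal(0, 5, frames)
filt = fourier_filter(noisy, n_harmonics=2)
print(np.mean((filt - curve) ** 2) < np.mean((noisy - curve) ** 2))  # True
print(nine_point_smooth(np.eye(5)).sum())  # 5.0 (counts are conserved)
```

The contrast the study draws follows from this structure: spatial smoothing trades border sharpness for noise reduction, whereas temporal harmonic truncation suppresses noise without blurring edges within a frame.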
Zhao Ningbo; Fu Jin; Zhang Chuan; Liu Huan
2012-01-01
Traditional geochemical processing methods sometimes lose weak anomalies related to mineralization. With the subinterval-area median contrast filtering method, the influence of the geological background can be avoided and weak anomalies can be recognized in both low-background and high-background areas. In an area of Jiangxi Province, several new anomalies were identified by this method, and uranium mineralized prospects were found among them. (authors)
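The background-suppression idea can be illustrated with a toy sketch. Details such as the window size and the exact contrast definition are assumptions here, not taken from the paper; the point is only that dividing by a local median makes the contrast scale-free:

```python
import numpy as np

def median_contrast(values, window=5):
    """Contrast of each sample against the median of its local subinterval.
    Dividing by a moving median normalizes away the regional background,
    so a weak anomaly on a low background scores like a strong anomaly on
    a high background (window size and contrast form are assumed details)."""
    values = np.asarray(values, dtype=float)
    half = window // 2
    out = np.empty_like(values)
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out[i] = values[i] / np.median(values[lo:hi])
    return out

# the same +50% anomaly on a low and on a high background
low = np.array([10., 10., 15., 10., 10.])
high = np.array([100., 100., 150., 100., 100.])
print(median_contrast(low)[2], median_contrast(high)[2])  # 1.5 1.5
```

An absolute threshold on the raw values would flag only the high-background anomaly; the contrast ratio flags both equally.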
Ping, Jing; Al-Hinai, Omar; Wheeler, Mary F.
2017-01-01
Since fracture distributions are non-Gaussian in this case, it is a challenge to estimate them by conventional history matching approaches. In this work, a method that combines a vector-based level-set parameterization technique and the ensemble Kalman filter (EnKF) for estimating fracture distributions is presented.
Hu, Shan-Zhou; Chen, Fen-Fei; Zeng, Li-Bo; Wu, Qiong-Shui
2013-01-01
The imaging AOTF is an important optical filter component for the new spectral imaging instruments developed in recent years. The principle of the imaging AOTF component is demonstrated, and a set of testing methods for key performance parameters is presented, such as diffraction efficiency, wavelength shift with temperature, spatial homogeneity of diffraction efficiency, and image shift.
Park, Moon Kyu; Kim, Yong Hee; Cha, Kune Ho; Kim, Myung Ki
1998-01-01
A method is described for developing an H∞ filter for the dynamic compensation of self-powered neutron detectors normally used as fixed incore instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion in the worst-case estimation error sense. Filter modeling is performed for a discrete-time model. The filter gains are optimized in the sense of the noise attenuation level of the H∞ setting. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into Linear Matrix Inequalities (LMIs). Finally, the filter design problem is solved via the convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in terms of filter response time and filter design efficiency.
Park, M.G.; Kim, Y.H.; Cha, K.H.; Kim, M.K. [Korea Electric Power Research Institute, Taejon (Korea)
1999-07-01
A method is described for developing an H∞ filter for the dynamic compensation of self-powered neutron detectors normally used as fixed incore instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion in the worst-case estimation error sense. Filter modeling is performed for both continuous- and discrete-time models. The filter gains are optimized in the sense of the noise attenuation level of the H∞ setting. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into Linear Matrix Inequalities (LMIs). Finally, the filter design problem is solved via the convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in terms of filter response time and filter design efficiency. (author). 15 refs., 4 figs., 3 tabs.
A NEW METHOD OF CHANNEL FRICTION INVERSION BASED ON KALMAN FILTER WITH UNKNOWN PARAMETER VECTOR
CHENG Wei-ping; MAO Gen-hai; LIU Guo-hua
2005-01-01
Channel friction is an important parameter in hydraulic analysis. A channel friction parameter inversion method based on a Kalman Filter with an unknown parameter vector is proposed. Numerical simulations indicate that when the number of monitoring stations exceeds a critical value, the solution is hardly affected. In addition, the Kalman Filter with an unknown parameter vector is effective only in the unsteady state. For the nonlinear equations, computation of the sensitivity matrices is time-consuming. Two simplifying measures can reduce the computing time without influencing the results: one is to reduce the sensitivity matrix analysis time, the other is to substitute for the sensitivity matrix.
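The essence of a Kalman filter with an unknown parameter vector is that the parameter itself is carried as state and updated by the Kalman gain. The scalar sketch below is a toy analogue, not the authors' channel-flow model: a constant unknown coefficient c observed through known "inputs" u_k.

```python
import numpy as np

def kf_parameter(us, zs, r=0.01):
    """Kalman filter whose state IS the unknown parameter (here a scalar
    c): model c_{k+1} = c_k, measurement z_k = u_k * c + v_k.  The gain K
    folds each new measurement into the running parameter estimate, which
    converges to the recursive least-squares solution."""
    c, P = 0.0, 1e3                      # vague prior on the parameter
    for u, z in zip(us, zs):
        S = u * P * u + r                # innovation variance
        K = P * u / S                    # Kalman gain
        c += K * (z - u * c)             # parameter update
        P *= (1 - K * u)                 # covariance update
    return c

rng = np.random.default_rng(4)
us = rng.normal(0.0, 1.0, 200)               # known regressors
zs = 2.0 * us + rng.normal(0.0, 0.1, 200)    # true parameter c = 2.0
print(round(kf_parameter(us, zs), 1))        # 2.0
```

In the channel-friction setting the same machinery runs with the friction coefficients appended to the flow state, and the sensitivity matrices mentioned in the abstract play the role of u in the measurement update.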
A METHOD FOR RECORDING AND VIEWING STEREOSCOPIC IMAGES IN COLOUR USING MULTICHROME FILTERS
2000-01-01
The aim of the invention is to create techniques for the encoding, production and viewing of stereograms, supplemented by methods for selecting certain optical filters needed in these novel techniques, thus providing a human observer with stereograms each of which consists of a single image … in a conventional stereogram recorded of the scene. The invention makes use of a colour-based encoding technique and viewing filters selected so that the human observer receives, in one eye, an image of nearly full colour information, in the other eye, an essentially monochrome image supplying the parallactic …
Yuanqiang Ren
2017-05-01
Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures.
Comparison Study of Subspace Identification Methods Applied to Flexible Structures
Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.
1998-09-01
In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
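The common computational core of these algorithms — a Hankel matrix built from response data, truncated by SVD — can be illustrated with a minimal ERA sketch. This is a single-output toy without the observer/Kalman Markov-parameter step (OM) or the projection machinery of N4SID/MOESP; block sizes and the test signal are assumptions.

```python
import numpy as np

def era_poles(markov, order, rows=10, cols=10):
    """Minimal Eigensystem Realization Algorithm sketch: build Hankel
    matrices from impulse-response (Markov) parameters, truncate the SVD
    to the chosen model order, and recover the discrete-time poles of the
    realized state matrix A = S^{-1/2} U^T H1 V S^{-1/2}."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], np.sqrt(s[:order]), Vt[:order]
    A = np.diag(1.0 / s) @ U.T @ H1 @ Vt.T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A)

# impulse response of one damped vibration mode: h[k] = rho^k * cos(w*k)
rho, w = 0.95, 0.4
k = np.arange(40)
h = rho ** k * np.cos(w * k)
poles = sorted(era_poles(h, order=2), key=lambda p: p.imag)
print(np.allclose(abs(poles[1]), rho), np.allclose(np.angle(poles[1]), w))
```

The Hankel matrix of a sum of damped exponentials has rank equal to the state dimension, so the SVD both fixes the model order and yields the extended observability factor that all three algorithm families approximate from noisy data.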
Filtering Airborne LIDAR Data by AN Improved Morphological Method Based on Multi-Gradient Analysis
Li, Y.
2013-05-01
The technology of airborne Light Detection And Ranging (LIDAR) is capable of acquiring dense and accurate 3D geospatial data. Although many related efforts have been made by many researchers in the last few years, LIDAR data filtering is still a challenging task, especially for areas with high relief or hybrid geographic features. In order to address bare-ground extraction from LIDAR point clouds of complex landscapes, a novel morphological filtering algorithm based on multi-gradient analysis is proposed in this paper, in terms of the characteristics of the LIDAR data distribution. Firstly, the point clouds are organized by an index mesh. Then, the multi-gradient of each point is calculated using the morphological method, and objects are removed gradually by choosing points to carry out an improved opening operation, constrained by the multi-gradient, iteratively. Fifteen sample datasets provided by ISPRS Working Group III/3 are employed to test the proposed filtering algorithm. These sample data include environments that may cause filtering difficulty. Experimental results show that the proposed filtering algorithm is highly adaptable to various scenes, including urban and rural areas. Omission error, commission error and total error can be simultaneously kept within a relatively small interval. This algorithm efficiently removes object points while preserving ground points to a great degree.
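The building block of such filters is the grey-scale morphological opening. The sketch below is a plain 1-D opening with a fixed residual threshold, not the paper's iterative multi-gradient variant; the profile and threshold are illustrative assumptions.

```python
import numpy as np

def grey_open(z, size=3):
    """Grey-scale morphological opening (erosion, then dilation) with a
    flat structuring element: objects narrower than `size` cells are cut
    down to the surrounding ground level, while the terrain is kept."""
    half = size // 2
    pad = np.pad(z, half, mode='edge')
    eroded = np.array([pad[i:i + size].min() for i in range(len(z))])
    pad2 = np.pad(eroded, half, mode='edge')
    return np.array([pad2[i:i + size].max() for i in range(len(z))])

# 1-D height profile: gentle slope with one narrow 5 m object on top
z = np.array([0.0, 0.2, 0.4, 5.4, 0.8, 1.0, 1.2])
ground = (z - grey_open(z, size=3)) <= 0.5   # residual threshold
print(ground.tolist())  # only the 5.4 m point is non-ground
```

The classic difficulty — a fixed window either misses large buildings or truncates steep terrain — is what motivates adapting the operation with gradient information, as the paper does.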
О. О. Gritzay
2016-12-01
The development of a technique for determining total neutron cross sections from measurements of sample transmission using filtered neutrons scattered on hydrogen is described. One method of determining the transmission TH52Cr from measurements of a 52Cr sample, using the average energy shift method for a filtered neutron beam, is presented. Using two methods of experimental data processing, one of which is presented in this paper (the other in [1]), a set of transmissions obtained for different samples and different measurement angles is presented. The two methods are fundamentally different; therefore, the processing results obtained using them can be considered independent. In future, the obtained set of transmissions is planned to be used for determination of the parameters E0, Γn and R' of the 52Cr resonance at the energy of 50 keV.
The harmonics detection method based on neural network applied ...
Keywords: Artificial Neural Networks (ANN), p-q theory, shunt active power filter (SAPF), Harmonics, Total …
Applying the Taguchi method for optimized fabrication of bovine ...
2008-02-19
Nanobiotechnology Research Lab., School of Chemical Engineering, Babol University of Technology, P.O. Box 484 … nanoparticle by applying the Taguchi method with characterization of the … of BSA/ethanol and organic solvent adding rate. … Sodium azide and all other chemicals were purchased from …
Lee, Jae bong; Kim, Sung Il; Jung, Jaehoon; Ha, Kwang Soon; Kim, Hwan Yeol [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
Fission products would be released from molten corium pools that are relocated into the lower plenum of the reactor pressure vessel, onto the concrete pit and into the core catcher. In addition, steam, hydrogen and noncondensable gases such as CO and CO2 are generated during core damage progression due to loss of coolant and the molten core-concrete interaction. Consequently, the pressure inside the containment could increase continuously. Filtered containment venting is one action to prevent an uncontrolled release of radioactive fission products caused by an overpressure failure of the containment. After the Fukushima-Daiichi accident, which demonstrated containment failure, many countries are considering the implementation of a filtered containment venting system (FCVS) at nuclear power plants where one is not currently applied. In general, evaluation of an FCVS is conducted to determine the decontamination factor under several conditions (aerosol diameter, submergence depth, water temperature, gas flow, steam flow rate, pressure, operating time, ...). It is essential to quantify the mass concentration before and after the FCVS for the decontamination factor. This paper presents the development of the evaluation facility for a filtered containment venting system at KAERI and an experimental investigation of aerosol removal performance. The decontamination factor for the FCVS is determined by filter measurement. The result of the aerosol size distribution measurement shows the aerosol removal performance by aerosol size.
Pinson, Paul A.
1998-01-01
A container for hazardous waste materials that includes air or other gas carrying dangerous particulate matter has incorporated in barrier material, preferably in the form of a flexible sheet, one or more filters for the dangerous particulate matter sealably attached to such barrier material. The filter is preferably a HEPA type filter and is preferably chemically bonded to the barrier materials. The filter or filters are preferably flexibly bonded to the barrier material marginally and peripherally of the filter or marginally and peripherally of air or other gas outlet openings in the barrier material, which may be a plastic bag. The filter may be provided with a backing panel of barrier material having an opening or openings for the passage of air or other gas into the filter or filters. Such backing panel is bonded marginally and peripherally thereof to the barrier material or to both it and the filter or filters. A coupling or couplings for deflating and inflating the container may be incorporated. Confining a hazardous waste material in such a container, rapidly deflating the container and disposing of the container, constitutes one aspect of the method of the invention. The chemical bonding procedure for producing the container constitutes another aspect of the method of the invention.
FEHL, DAVID LEE; BIGGS, F.; CHANDLER, GORDON A.; STYGAR, WILLIAM A.
2000-01-01
The generalized method of Backus and Gilbert (BG) is described and applied to the inverse problem of obtaining spectra from a 5-channel, filtered array of x-ray detectors (XRD's). This diagnostic is routinely fielded on the Z facility at Sandia National Laboratories to study soft x-ray photons (≤2300 eV), emitted by high density Z-pinch plasmas. The BG method defines spectral resolution limits on the system of response functions that are in good agreement with the unfold method currently in use. The resolution so defined is independent of the source spectrum. For noise-free, simulated data the BG approximating function is also in reasonable agreement with the source spectrum (150 eV black-body) and the unfold. This function may be used as an initial trial function for iterative methods or a regularization model.
The attitude inversion method of geostationary satellites based on unscented particle filter
Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao
2018-04-01
The attitude information of geostationary satellites is difficult to obtain in space object surveillance, since they appear as unresolved images on ground observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly nonlinear character of photometric-data inversion for satellite attitude, and combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the idea of the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of the method in view of the shortcomings of UKF-based inversion and the particle degradation and dilution of PF-based inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by a simulation experiment and a scaling experiment. The results show that the proposed method effectively solves the problem of particle degradation and depletion in PF-based attitude inversion, and the problem that the UKF is not suitable for strongly nonlinear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, even with a large initial attitude error, the method can invert the attitude with few particles and high precision.
Fast multiview three-dimensional reconstruction method using cost volume filtering
Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.
2014-03-01
As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method that quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented that is suitable for the mobile environment, built on a cost volume of the 3-D height field. The method consists of two steps: construction of a reliable base surface and recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then filtered at multiple scales. The multiscale cost-volume filtering allows the 3-D reconstruction to maintain the overall shape while preserving shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.
An Automatic Parameter Identification Method for a PMSM Drive with LC-Filter
Bech, Michael Møller; Christensen, Jeppe Haals; Weber, Magnus L.
2016-01-01
… of the PMSM fed through an LC-filter. Based on the measured current response, model parameters for both the filter (L, R, C) and the PMSM (L and R) are estimated: first, the frequency response of the system is estimated using the Welch modified periodogram method, and then an optimization algorithm is used to find … the parameters in an analytical reference model that minimize the model error. To demonstrate the practical feasibility of the method, a fully functional drive including an embedded real-time controller has been built. In addition to modulation, data acquisition and control, the whole parameter identification … method is also implemented on the real-time controller. Based on laboratory experiments on a 22 kW drive, it is concluded that the embedded identification method can estimate the five parameters in less than ten seconds.
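The Welch modified-periodogram step can be sketched in pure Python (a toy averaged periodogram with a Hann window and no overlap; the segment length, sampling rate, and test signal are illustrative, not the drive's actual excitation):

```python
import math, cmath

def welch_psd(x, seg_len, fs):
    """Averaged periodogram (Welch's method): Hann window, no overlap, direct DFT."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * i / (seg_len - 1)) for i in range(seg_len)]
    U = sum(w * w for w in win)              # window power normalization
    nseg = len(x) // seg_len
    nbins = seg_len // 2 + 1
    psd = [0.0] * nbins
    for s in range(nseg):
        seg = [x[s * seg_len + i] * win[i] for i in range(seg_len)]
        for k in range(nbins):
            X = sum(seg[i] * cmath.exp(-2j * math.pi * k * i / seg_len)
                    for i in range(seg_len))
            psd[k] += abs(X) ** 2 / (fs * U)
    psd = [p / nseg for p in psd]            # average over segments
    freqs = [k * fs / seg_len for k in range(nbins)]
    return freqs, psd
```

Averaging over segments trades frequency resolution for a lower-variance spectrum estimate, which is what makes the subsequent model fit robust.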
Gillet, M.
1986-07-01
This thesis presents a study of the monitoring of the primary coolant circuit inventory of a pressurized water reactor. A reference model is developed with a view to an automatic system ensuring detection and diagnosis in real time. The methods used for the present application are statistical tests and a method related to pattern recognition. The estimation of detected failures, difficult owing to the non-linearity of the problem, is treated by least-squares methods of the predictor or corrector type, and by filtering. In this framework, a new optimized method with superlinear convergence is developed, and a segmented linearization of the model is introduced with a view to multiple filtering. [fr]
Aircraft operability methods applied to space launch vehicles
Young, Douglas
1997-01-01
The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications support the goal of achieving low cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.
Brito, Helio Glauco Ferreira
1996-12-31
This work introduces an analysis and comparative study of some of the techniques for digital filtering of the voltage and current waveforms from faulted transmission lines. This study is of fundamental importance for the development of algorithms applied to the digital protection of electric power systems. The techniques studied are based on Discrete Fourier Transform theory, Walsh functions and Kalman filter theory. Two aspects were emphasized in this study. First, non-recursive techniques were analyzed, with the implementation of filters based on Fourier theory and Walsh functions. Second, recursive techniques were analyzed, with the implementation of filters based on Kalman theory and, once more, on Fourier theory. (author) 56 refs., 25 figs., 16 tabs.
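The non-recursive Fourier approach can be illustrated with a full-cycle DFT phasor filter, a standard construction in digital protection (a generic textbook sketch, not the author's code; the sample count and test waveform are arbitrary):

```python
import math

def fourier_phasor(samples, n):
    """Full-cycle DFT filter: fundamental-frequency phasor from n samples per cycle.

    Returns (magnitude, phase in radians). Rejects DC offset and integer
    harmonics exactly when the window spans one full fundamental cycle.
    """
    re = (2 / n) * sum(samples[k] * math.cos(2 * math.pi * k / n) for k in range(n))
    im = (2 / n) * sum(samples[k] * math.sin(2 * math.pi * k / n) for k in range(n))
    return math.hypot(re, im), math.atan2(-im, re)
```

The harmonic rejection is why this filter is popular for extracting the 50/60 Hz phasor from fault waveforms contaminated by DC offset and harmonics.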
Magnetic stirring welding method applied to nuclear power plant
Hirano, Kenji; Watando, Masayuki; Morishige, Norio; Enoo, Kazuhide; Yasuda, Yuuji
2002-01-01
In the construction of a new nuclear power plant, carbon steel and stainless steel are used as base materials for the bottom liner plate of the Reinforced Concrete Containment Vessel (RCCV) to meet the maintenance-free requirement while securing sufficient structural strength. However, welding such dissimilar metals is difficult by ordinary methods. To overcome the difficulty, the automated Magnetic Stirring Welding (MSW) method, which can demonstrate good welding performance, was studied for practical use, and weldability tests showed good results. Based on the study, a new welding device for the MSW method was developed to apply it to weld joints of dissimilar materials, and it was put to practical use in part of a nuclear power plant. (author)
Linear algebraic methods applied to intensity modulated radiation therapy.
Crooks, S M; Xing, L
2001-10-01
Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
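A minimal numerical sketch of choosing beam weights by least squares follows (the dose matrix and prescription are toy values; clipping negative weights stands in for the constrained optimization a real planning system would use):

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def beam_weights(D, d):
    """Least-squares beam weights: solve the normal equations (D^T D) w = D^T d.

    D[i][j] is the dose to voxel i per unit weight of beam j; d is the
    prescribed dose. Negative weights are clipped (unphysical).
    """
    m, n = len(D), len(D[0])
    DtD = [[sum(D[i][a] * D[i][b] for i in range(m)) for b in range(n)] for a in range(n)]
    Dtd = [sum(D[i][a] * d[i] for i in range(m)) for a in range(n)]
    return [max(0.0, wi) for wi in solve_linear(DtD, Dtd)]
```

The quadratic form w^T (D^T D) w that appears here is exactly the kind of object the abstract interprets physically in terms of deposited energy.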
Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.
Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun
2018-01-01
Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. While the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM with reference to the traditional FEM is 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing accuracy. The proposed method also filters noise in the system state and measurement data.
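The filtering idea can be reduced to its scalar essence (a generic one-dimensional Kalman filter, not the paper's Newmark-discretized FEM state equation; the noise variances and measurements below are illustrative):

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state, direct noisy observation.

    q: process noise variance, r: measurement noise variance,
    (x0, p0): prior state estimate and its variance.
    """
    x, p = x0, p0
    for z in measurements:
        p += q                  # predict: state held, uncertainty grows
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with measurement innovation
        p *= (1 - k)            # posterior variance shrinks
    return x, p
```

With q = 0 and a diffuse prior this reduces to recursive averaging, which makes the noise-rejection role of the filter easy to see.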
Input shaping filter methods for the control of structurally flexible, long-reach manipulators
Kwon, Dong-Soo; Hwang, Dong-Hwan; Babcock, S.M.; Burks, B.L.
1993-01-01
Within the Environmental Restoration and Waste Management Program of the US Department of Energy, the remediation of single-shell radioactive waste storage tanks is one of the areas that challenge state-of-the-art equipment and methods. Concepts that utilize long-reach manipulators are being seriously considered for this task. Due to high payload capacity and high length-to-cross-section ratio requirements, these long-reach manipulator systems are expected to exhibit significant structural flexibility. To avoid structural vibrations during operation, various types of shaping filter methods have been investigated. A robust notch filtering method and an impulse shaping method were used as simulation benchmarks. In addition, two very different approaches have been developed and compared. One new approach, referred to as a "feedforward simulation filter," uses embedded simulation with complete knowledge of the system dynamics. The other approach, the "fuzzy shaping method," employs fuzzy logic to modify the joint trajectory from the desired end-position trajectory without precise knowledge of the system dynamics.
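The impulse-shaping benchmark mentioned above can be sketched as the classic Zero-Vibration (ZV) shaper (a standard textbook construction, not the report's specific filters; the frequency and damping values are illustrative):

```python
import math

def zv_shaper(wn, zeta):
    """Zero-Vibration impulse shaper for a mode with natural frequency wn
    [rad/s] and damping ratio zeta. Returns (time, amplitude) impulse pairs;
    convolving the command with these impulses cancels residual vibration."""
    wd = wn * math.sqrt(1 - zeta ** 2)                       # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    A1 = 1 / (1 + K)
    A2 = K / (1 + K)
    return [(0.0, A1), (math.pi / wd, A2)]                   # second impulse at half the damped period
```

The amplitudes sum to one so the shaped command reaches the same final setpoint as the unshaped one.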
Voxel-Based Spatial Filtering Method for Canopy Height Retrieval from Airborne Single-Photon Lidar
Hao Tang
2016-09-01
Airborne single-photon lidar (SPL) is a new technology that holds considerable potential for forest structure and carbon monitoring at large spatial scales, because it acquires 3D measurements of vegetation faster and more efficiently than conventional lidar instruments. However, SPL instruments use green-wavelength (532 nm) lasers, which are sensitive to background solar noise, and therefore SPL point clouds require more elaborate noise filtering than other lidar data to determine canopy heights, particularly in daytime acquisitions. Histogram-based aggregation is a commonly used approach for removing noise from photon-counting lidar data, but it reduces the resolution of the dataset. Here we present an alternative voxel-based spatial filtering method that filters noise points efficiently while largely preserving the spatial integrity of SPL data. We develop and test our algorithms on an experimental SPL dataset acquired over Garrett County in Maryland, USA. We then compare canopy attributes retrieved using our new algorithm with those obtained from the conventional histogram binning approach. Our results show that canopy heights derived using the new algorithm agree strongly with field-measured heights (r2 = 0.69, bias = 0.42 m, RMSE = 4.85 m) and discrete return lidar (DRL) heights (r2 = 0.94, bias = 1.07 m, RMSE = 2.42 m). Results are consistently better than height accuracies from the histogram method (field data: r2 = 0.59, bias = 0.00 m, RMSE = 6.25 m; DRL: r2 = 0.78, bias = −0.06 m, RMSE = 4.88 m). Furthermore, we find that the spatial filtering method retains fine-scale canopy structure detail and has lower errors over steep slopes. We therefore believe that automated spatial filtering algorithms such as the one presented here can support large-scale canopy structure mapping from airborne SPL data.
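The voxel idea can be sketched in a few lines (a generic density filter: photons in sparsely occupied voxels are treated as solar noise; the voxel size and count threshold are illustrative, not the values tuned for the Garrett County dataset):

```python
from collections import defaultdict

def voxel_filter(points, voxel_size, min_points):
    """Keep photons whose voxel contains at least `min_points` returns.

    points: iterable of (x, y, z) tuples; voxel_size in the same units.
    Signal photons cluster in canopy/ground voxels; isolated noise does not.
    """
    counts = defaultdict(int)

    def key(p):
        return tuple(int(c // voxel_size) for c in p)

    for p in points:
        counts[key(p)] += 1
    return [p for p in points if counts[key(p)] >= min_points]
```

Unlike histogram binning, the points themselves are kept at full resolution; only the occupancy test happens on the voxel grid.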
Drawer compacted sand filter: a new and innovative method for on-site grey water treatment.
Assayed, Almoayied; Chenoweth, Jonathan; Pedley, Steven
2014-01-01
In this paper, results of a new sand filter design are presented. The drawer compacted sand filter (DCSF) is a modified sand filter design in which the sand layer is broken down into several layers, each 10 cm high and placed in a movable drawer separated by a 10 cm space. A lab-scale DCSF was designed and operated for 330 days, fed by synthetic grey water. The response of drawer sand filters to variable hydraulic and organic loading rates (HLR and OLR), in terms of biological oxygen demand (BOD5), chemical oxygen demand (COD), total suspended solids (TSS), pH, electrical conductivity and Escherichia coli reductions, was evaluated. The HLR was studied by increasing it from 72 to 142 L m(-2) day(-1), and the OLR was studied by increasing it from 23 to 30 g BOD5 m(-2) day(-1) while keeping the HLR constant at 142 L m(-2) day(-1). Each loading regime was applied for 110 days. Results showed that the DCSF was able to remove >90% of organic matter and total suspended solids for all loads. No significant difference was noticed in overall filter efficiency between different loads for any parameter. A significant reduction in BOD5 and COD (P < 0.05) was found when water was drained through the third drawer in all tested loads. The paper concludes that the DCSF would be appropriate for use in dense urban areas, as its footprint is small, and for a wide range of users, because of its convenience and low maintenance requirements.
Methods of applied mathematics with a software overview
Davis, Jon H
2016-01-01
This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...
Passive ranging using a filter-based non-imaging method based on oxygen absorption.
Yu, Hao; Liu, Bingqi; Yan, Zongqun; Zhang, Yu
2017-10-01
To solve the problem of poor real-time performance caused by a hyperspectral imaging system, and to simplify the design of passive ranging technology based on the oxygen absorption spectrum, a filter-based non-imaging ranging method is proposed. In this method, three bandpass filters are used to obtain the source radiation intensities located in the oxygen absorption band near 762 nm and in the band's left and right non-absorption shoulders, and a photomultiplier tube is used as the non-imaging sensor of the passive ranging system. Range is estimated by comparing the calculated values of band-average transmission due to oxygen absorption, τ(O2), against the predicted curve of τ(O2) versus range. The method is tested under short-range conditions. An accuracy of 6.5% is achieved with the designed experimental ranging system at a range of 400 m.
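The ranging principle can be sketched with a deliberately simplified model (the real τ(O2)-versus-range curve is built from band models or calibration measurements, not a single exponential; the absorption coefficient below is an assumed value, not one from the paper):

```python
import math

K_O2 = 5.0e-4  # assumed effective band-average absorption coefficient [1/m] (illustrative)

def band_transmission(i_left, i_abs, i_right):
    """Transmission in the O2 band relative to the two shoulder bands.

    The shoulder-band average estimates what the in-band intensity would be
    without oxygen absorption."""
    return i_abs / (0.5 * (i_left + i_right))

def range_from_transmission(tau, k=K_O2):
    """Invert an assumed Beer-Lambert model tau = exp(-k * R) for range R."""
    return -math.log(tau) / k
```

In practice the inversion would interpolate a tabulated τ(O2)(R) curve rather than invert a closed form.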
Saito, Masatoshi
2007-11-01
Dual-energy contrast-agent-enhanced mammography is a technique for demonstrating breast cancers obscured by the cluttered background resulting from the contrast between soft tissues in the breast. The technique has usually been implemented by exploiting two exposures at different x-ray tube voltages. In this article, another dual-energy approach, using the balanced filter method without switching the tube voltages, is described. For the spectral optimization of dual-energy mammography using the balanced filters, we applied a theoretical framework reported by Lemacks et al. [Med. Phys. 29, 1739-1751 (2002)] to calculate the signal-to-noise ratio (SNR) in an iodinated contrast agent subtraction image. This permits the selection of beam parameters such as tube voltage and balanced filter material, and the optimization of the latter's thickness with respect to some critical quantity, in this case mean glandular dose. For an imaging system with a 0.1 mm thick CsI:Tl scintillator, we predict that the optimal tube voltage would be 45 kVp for a tungsten anode using zirconium, iodine, and neodymium balanced filters. A mean glandular dose of 1.0 mGy is required to obtain an SNR of 5 in order to detect 1.0 mg/cm2 iodine in the resulting clutter-free image of a 5 cm thick breast composed of 50% adipose and 50% glandular tissue. In addition to spectral optimization, we carried out phantom measurements to demonstrate the present dual-energy approach for obtaining a clutter-free image, which preferentially shows iodine, of a breast phantom comprising three major components: acrylic spheres, olive oil, and an iodinated contrast agent. The detection of iodine details against the cluttered background originating from the contrast between acrylic spheres and olive oil is analogous to the task of distinguishing contrast agents in a mixture of glandular and adipose tissues.
Wu, Weimin; Lin, Zhe; Sun, Yunjie
2013-01-01
Grid-tied inverters have been widely used to inject renewable energy into distributed power generation systems. However, large variations of the grid impedance challenge the stability of high-order power-filter-based grid-tied inverters. Many passive and active damping methods have … been proposed to overcome this issue. Recently, a composite passive damping method for a high-order power-filter-based grid-tied inverter, with an RC parallel damper and an RL series damper, was presented to eliminate this problem, but at the cost of more material and power losses. In this paper …
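The resonance that makes damping necessary is easy to quantify: for an LCL grid filter the undamped resonance frequency is f_res = (1/2π) sqrt((L_inv + L_g) / (L_inv L_g C_f)). A quick sketch (component values are illustrative, not from the paper):

```python
import math

def lcl_resonance_hz(l_inv, l_grid, c_f):
    """Undamped resonance frequency [Hz] of an LCL grid filter.

    l_inv: inverter-side inductance [H], l_grid: grid-side inductance [H]
    (grid impedance adds to this term, shifting the resonance), c_f: filter
    capacitance [F].
    """
    return math.sqrt((l_inv + l_grid) / (l_inv * l_grid * c_f)) / (2 * math.pi)
```

Because the grid inductance appears in the l_grid term, a large grid-impedance variation moves f_res, which is exactly why fixed passive dampers are challenged.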
Which DTW Method Applied to Marine Univariate Time Series Imputation
Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André
2017-01-01
Missing data are ubiquitous in all domains of the applied sciences. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). The aim of this paper is therefore to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
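The DTW similarity at the center of such comparisons can be sketched with the classic dynamic program (a generic implementation with absolute-difference cost, not the paper's specific variant):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW with |x - y| cost.

    D[i][j] holds the cheapest cumulative cost of aligning a[:i] with b[:j];
    each step may match, stretch a, or stretch b."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch b
                                 D[i][j - 1],      # stretch a
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike the Euclidean metric, DTW tolerates local time shifts, which is what makes it attractive for locating a similar sub-sequence to copy into a gap.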
Applying Qualitative Research Methods to Narrative Knowledge Engineering
O'Neill, Brian; Riedl, Mark
2014-01-01
We propose a methodology for knowledge engineering for narrative intelligence systems, based on techniques used to elicit themes in qualitative methods research. Our methodology uses coding techniques to identify actions in natural language corpora, and uses these actions to create planning operators and procedural knowledge, such as scripts. In an iterative process, coders create a taxonomy of codes relevant to the corpus, and apply those codes to each element of that corpus. These codes can...
APPLYING SPECTROSCOPIC METHODS ON ANALYSES OF HAZARDOUS WASTE
Dobrinić, Julijan; Kunić, Marija; Ciganj, Zlatko
2000-01-01
The paper presents results of measuring the content of heavy and other metals in waste samples from the hazardous waste disposal site of Sovjak near Rijeka. The preliminary design elaboration and the choice of the waste disposal remediation technology were preceded by sampling and physico-chemical analyses of the disposed waste, enabling its categorization. The following spectroscopic methods were applied to the metal content analysis: atomic absorption spectroscopy (AAS) and plas...
A new method of AHP applied to personal credit evaluation
JIANG Ming-hui; XIONG Qi; CAO Jing
2006-01-01
This paper presents a new negative judgment matrix that combines the advantages of the reciprocal judgment matrix and the fuzzy complementary judgment matrix, and then puts forth the properties of this new matrix. In view of these properties, this paper derives a clear sequencing formula for the new negative judgment matrix, which improves the sequencing principle of AHP. Finally, this new method is applied to personal credit evaluation to show its advantages of conciseness and swiftness.
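For context, the classical AHP prioritization that the paper's new judgment matrix refines can be sketched via row geometric means (a standard approximation to the principal-eigenvector priorities of a reciprocal judgment matrix; the weights below are illustrative):

```python
import math

def ahp_priorities(M):
    """Approximate AHP priority vector via row geometric means, normalized.

    M is a pairwise-comparison (reciprocal) judgment matrix: M[i][j] states
    how strongly criterion i is preferred to criterion j."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]
```

For a perfectly consistent matrix (M[i][j] = w_i / w_j) the geometric-mean method recovers the underlying weights exactly.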
An R-peak detection method that uses an SVD filter and a search back system.
Jung, Woo-Hyuk; Lee, Sang-Goog
2012-12-01
In this paper, we present a method for detecting the R-peak of an ECG signal by using a singular value decomposition (SVD) filter and a search-back system. The ECG signal was processed in two phases: the pre-processing phase and the decision phase. The pre-processing phase consisted of stages for the SVD filter, a Butterworth high-pass filter (HPF), a moving average (MA), and squaring, whereas the decision phase consisted of a single stage that detected the R-peak. In the pre-processing phase, the SVD filter removed noise while the Butterworth HPF eliminated baseline wander. The MA smoothed the signal by removing the noise remaining after the SVD filter, and squaring strengthened the signal. In the decision phase, a threshold was used to set the interval before detecting the R-peak. When the latest R-R interval (RRI), as suggested by Hamilton et al., was greater than 150% of the previous RRI, the method was modified to search for the R-peak in an interval of 150% or more of the smaller of the two most recent RRIs. When the modified search-back system was used, the error rate of peak detection decreased to 0.29%, compared to 1.34% without it. Consequently, the sensitivity was 99.47%, the positive predictivity was 99.47%, and the detection error was 1.05%. Furthermore, the quality of the signal in data with a substantial amount of noise was improved, and thus the R-peak was detected effectively. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
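The squaring, moving-average, and threshold portion of the pipeline can be sketched on a synthetic trace (a toy detector: the SVD and Butterworth stages are omitted, and the window length and threshold ratio are illustrative):

```python
def detect_r_peaks(sig, fs, window_ms=80, thresh_ratio=0.5):
    """Toy R-peak detector: square, causal moving-average, threshold, local maxima.

    Returns sample indices of detected peaks (offset by up to one MA window
    from the true R-wave location because the average is causal)."""
    sq = [s * s for s in sig]                       # squaring strengthens QRS energy
    w = max(1, int(fs * window_ms / 1000))
    # causal moving average (fixed divisor for simplicity, even near the start)
    ma = [sum(sq[max(0, i - w + 1): i + 1]) / w for i in range(len(sq))]
    thresh = thresh_ratio * max(ma)
    peaks = []
    for i in range(1, len(ma) - 1):
        if ma[i] > thresh and ma[i] >= ma[i - 1] and ma[i] > ma[i + 1]:
            peaks.append(i)
    return peaks
```

A search-back stage like the paper's would re-scan an interval with a lowered threshold whenever the gap since the last peak grows suspiciously long.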
Multiple HEPA filter test methods, July 1, 1974--March 31, 1975
Schuster, B.G.; Osetek, D.J.
1975-08-01
A laboratory apparatus has been constructed for testing two HEPA filters in a series configuration. The apparatus consists of an instrumented wind tunnel in which the HEPA filters are mounted, and an auxiliary wind tunnel for obtaining diluted samples of the challenge aerosol upstream of the first filter. Measurements performed with a single-particle aerosol spectrometer demonstrate the capability of measuring overall protection factors greater than 2.5 × 10⁸. The decay of penetration as a function of time in individual HEPA filters indicates no preferential size discrimination in the range of 0.1 μm to 1.0 μm, nor is penetration itself preferentially size-dependent in this range. A theoretical feasibility study has been performed on the use of an inhomogeneous electric field/induced aerosol electric dipole interaction as a potential air cleaning mechanism. Numerical evaluation of a coaxial cylinder geometry indicates that the method is feasible for the collection of particles down to 0.1 μm under typical airflow velocity conditions. Small modifications of the geometry may be incorporated to create an instrument capable of measuring particle size. Geometries other than coaxial cylinders are also under investigation.
Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.
2006-01-01
Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
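The empirical order-of-convergence check described above can be sketched for a single fourth-order stencil (a generic illustration of the procedure, not the paper's SAMR code; the test function is arbitrary):

```python
import math

def deriv4(f, x, h):
    """Fourth-order central difference approximation to f'(x)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

def observed_order(f, fprime, x, h):
    """Observed convergence order from errors at spacings h and h/2.

    If error ~ C*h^p, then log2(e(h) / e(h/2)) ~ p."""
    e1 = abs(deriv4(f, x, h) - fprime(x))
    e2 = abs(deriv4(f, x, h / 2) - fprime(x))
    return math.log(e1 / e2, 2)
```

The same halving-and-ratio procedure is what reveals, on refined meshes, whether interpolation or filtering has degraded the discretization's formal order.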
Sánchez-Úbeda, Juan Pedro; Calvache, María Luisa; Duque, Carlos; López-Chicano, Manuel
2016-11-01
A new methodology has been developed to obtain tidal-filtered time series of groundwater levels in coastal aquifers. Two methods used in oceanography for processing and forecasting sea level data were adapted for this purpose and compared: HA (Harmonic Analysis) and CWT (Continuous Wavelet Transform). The filtering process generally comprises two main steps: detection and fitting of the major tide constituents through decomposition of the original signal, and subsequent extraction of the complete tidal oscillations. The abilities of the alternative HA and CWT methods to decompose and extract the tidal oscillations were assessed by applying them to data from two piezometers at different depths close to the shoreline of a Mediterranean coastal aquifer (Motril-Salobreña, SE Spain). The methods were applied to three time series of different lengths (one month, one year, and 3.7 years of hourly data) to determine the range of detected frequencies. The different lengths of time series were also used to determine the fit accuracies of the tidal constituents for both the sea level and groundwater head measurements. The detected tidal constituents were better resolved with increasing depth in the aquifer. The application of these methods yielded a detailed resolution of the tidal components, which enabled the extraction of the major tidal constituents of the sea level measurements from the groundwater heads (e.g., semi-diurnal, diurnal, fortnightly, monthly, semi-annual and annual). In the two wells studied, the CWT method was shown to be more effective than HA at extracting the tidal constituents of highest and lowest frequencies from groundwater head measurements.
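The HA step, fitting and removing a known constituent, can be sketched by least squares at a single frequency (a toy with only the M2 period; a real harmonic analysis fits dozens of constituents simultaneously, and the synthetic series below is illustrative):

```python
import math

def fit_constituent(times, series, period):
    """Least-squares fit of one tidal constituent: y ~ a*cos(wt) + b*sin(wt).

    times and period share units (e.g. hours). Returns (a, b, amplitude);
    subtracting the fitted oscillation yields the tidal-filtered residual."""
    w = 2 * math.pi / period
    mean = sum(series) / len(series)
    y = [v - mean for v in series]
    c = [math.cos(w * t) for t in times]
    s = [math.sin(w * t) for t in times]
    # 2x2 normal equations
    scc = sum(ci * ci for ci in c)
    sss = sum(si * si for si in s)
    scs = sum(ci * si for ci, si in zip(c, s))
    syc = sum(yi * ci for yi, ci in zip(y, c))
    sys_ = sum(yi * si for yi, si in zip(y, s))
    det = scc * sss - scs * scs
    a = (syc * sss - sys_ * scs) / det
    b = (sys_ * scc - syc * scs) / det
    return a, b, math.hypot(a, b)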
Komeda, Masao; Kawasaki, Kozo; Obara, Toru
2013-04-01
We studied a new silicon irradiation holder with a neutron filter designed to make the vertical neutron flux profile uniform. Since an irradiation holder has to be made of a low-activation material, we applied aluminum blended with B4C as the holder material. Irradiation methods to achieve uniform flux with a filter are discussed using the Monte Carlo calculation code MVP. Validation of the use of the MVP code for the holder analyses is also discussed via characteristic experiments. Copyright © 2013 Elsevier Ltd. All rights reserved.
Novel biodosimetry methods applied to victims of the Goiania accident
Straume, T.; Langlois, R.G.; Lucas, J.; Jensen, R.H.; Bigbee, W.L.; Ramalho, A.T.; Brandao-Mello, C.E.
1991-01-01
Two biodosimetric methods under development at the Lawrence Livermore National Laboratory were applied to five persons accidentally exposed to a 137Cs source in Goiania, Brazil. The methods used were somatic null mutations at the glycophorin A locus detected as missing proteins on the surface of blood erythrocytes and chromosome translocations in blood lymphocytes detected using fluorescence in-situ hybridization. Biodosimetric results obtained approximately 1 y after the accident using these new and largely unvalidated methods are in general agreement with results obtained immediately after the accident using dicentric chromosome aberrations. Additional follow-up of Goiania accident victims will (1) help provide the information needed to validate these new methods for use in biodosimetry and (2) provide independent estimates of dose
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-01-01
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus eliminating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, compared to an operator-split approach where nonlinearities are not converged within a time step.
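The matrix-free ingredient, the Jacobian-vector products a Krylov solver needs, can be sketched with a first-order finite difference (a generic illustration; the step size and test function are arbitrary, and production codes scale eps by the norms of u and v):

```python
def jacvec(F, u, v, eps=1e-7):
    """Matrix-free Jacobian-vector product: J(u) v ~ (F(u + eps*v) - F(u)) / eps.

    F maps a list of floats to a list of floats; no Jacobian matrix is ever
    formed or stored, which is the point of the matrix-free approach."""
    Fu = F(u)
    up = [ui + eps * vi for ui, vi in zip(u, v)]
    return [(fp - f0) / eps for fp, f0 in zip(F(up), Fu)]
```

A Krylov method such as GMRES only ever queries the Jacobian through products like this, so the residual function F is all that must be coded.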
Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won
2014-08-01
As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rate (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to filtering based on genotype quality scores. Moreover, ensemble genotyping excluded >98% (105,080 of 107,167) of false positives while retaining >95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates. © 2014 WILEY PERIODICALS, INC.
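The LR filtering step can be sketched with a tiny hand-rolled logistic regression (features, labels, and hyperparameters below are invented for illustration; the study trained on real sequencing quality metrics):

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Stochastic-gradient logistic regression.

    X: feature rows (e.g. allele balance, scaled call quality; hypothetical),
    y: 1 for validated variants, 0 for artifacts. Returns (weights, bias)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1 / (1 + math.exp(-z))
            g = p - yi                       # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    """Probability that a candidate call is a true variant."""
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1 / (1 + math.exp(-z))
```

Calls scoring below a chosen probability threshold would be filtered out, trading the false-discovery rate against sensitivity just as the abstract describes.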
Development of evaluation method for hydraulic behavior in Venturi scrubber for filtered venting
Horiguchi, Naoki; Nakao, Yasuhiro; Kaneko, Akiko; Abe, Yutaka; Yoshida, Hiroyuki
2016-01-01
Filtered venting systems have been installed so that nuclear power plants in Japan can be restarted after the Fukushima Daiichi nuclear disaster. The Venturi scrubber is a main component of one such system. To evaluate the decontamination performance of the Venturi scrubber for filtered venting, a mechanistic evaluation method for its hydrodynamic behaviour is needed, and our objective in this paper is to develop that method. As approaches, we conducted experimental observation under adiabatic (air-water) conditions, developed a numerical simulation code with a one-dimensional two-fluid model, and performed verification and validation by comparing the two sets of results in terms of superficial gas velocity, static pressure, superficial liquid velocity, droplet ratio and droplet diameter in the Venturi scrubber. As a result, we observed the hydrodynamic behaviour, developed the code, and confirmed that it can evaluate the parameters with the following accuracy: superficial gas velocity within +30%, static pressure in the throat within ±10%, superficial liquid velocity within ±80%, droplet diameter within ±30% and droplet ratio within -50%. (author)
A simple method employed for the treatment of filters used in atmospheric pollution studies
Prendez B, M.M.; Ortiz C, J.L.; Garrido, J.I.; Huerta P, R.; Alvarez B, C.; Zolezzi C, S.R.
1983-01-01
A simple and rapid method for the multielement routine analysis of atmospheric particulate matter is described. The samples, collected on four different types of filters, were treated with HNO3 and HCl at 110-120 deg C in Pyrex glassware. The time required for the different stages of the treatment was determined using 60Co, 65Zn and 137Cs as radioactive tracers. Atomic absorption spectrophotometry was used to determine the concentration of the elements. The efficiency for 11 elements (Mg, Cr, Mn, Fe, Co, Ni, Cu, Zn, Cd, Hg and Pb) was determined. The method was successfully employed for the treatment of filters used in atmospheric pollution studies in both urban and rural areas. (author)
Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra
2015-10-01
The manner in which microtremor data are collected and filtered, as well as the processing method used, has a considerable effect on the accuracy of estimated dynamic soil parameters. In this paper, a running-variance method was used to improve the automatic detection of data sections affected by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window, and the resulting signal is used to remove the perturbation-affected ranges from the original data. Additionally, to determine the fundamental frequency of a site, this study proposes a method based on statistical characteristics: the probability density graph and the average and standard deviation of all frequencies corresponding to the maximum peaks in the H/V spectra of all data windows are used to differentiate real peaks from false peaks caused by perturbations. The methods were applied to data recorded for the city of Meybod in central Iran. Experimental results show that they successfully reduce the effects of extensive local perturbations on microtremor data and ultimately estimate the fundamental frequency more accurately than other common methods.
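The running-variance screen is simple enough to sketch directly. The synthetic record below (a quiet trace with a burst of local disturbance in the middle) is invented for illustration; windows whose variance jumps above a threshold mark the data ranges that would be excluded before H/V processing.

```python
def running_variance(x, w):
    """Variance of x over a sliding window of length w (one value per full window)."""
    out = []
    for i in range(len(x) - w + 1):
        win = x[i:i + w]
        m = sum(win) / w
        out.append(sum((v - m) ** 2 for v in win) / w)
    return out

# quiet microtremor-like record with a burst of local perturbation in the middle
signal = [0.0] * 50 + [5.0, -5.0] * 10 + [0.0] * 50
var = running_variance(signal, 10)
# windows whose variance exceeds the threshold are flagged; the corresponding
# data ranges would be removed from the record before spectral analysis
flagged = [i for i, v in enumerate(var) if v > 1.0]
```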
Turbulence-cascade interaction noise using an advanced digital filter method
Gea Aguilera, Fernando; Gill, James; Zhang, Xin; Nodé-Langlois, Thomas
2016-01-01
Fan wakes interacting with outlet guide vanes are a major source of noise in modern turbofan engines. To study this noise source, the current work presents two-dimensional simulations of turbulence-cascade interaction noise using a computational aeroacoustics methodology. An advanced digital filter method is used to generate isotropic synthetic turbulence in a linearised Euler equation solver. A parameter study is presented to assess the influence of airfoil thickness, mea...
Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems
Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon
2011-01-01
Recently, the range of available Radio Frequency Identification (RFID) tags has widened to include smart RFID tags that can monitor their changing surroundings. One of the most important factors in the performance of a smart RFID system is accurate measurement from its various sensors. In a multi-sensing environment, noisy signals are obtained because of the changing surroundings. In this paper we propose an improved Kalman filter method to reduce noise and obtain correct data. Perf...
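For context, a baseline scalar Kalman filter for one noisy sensor channel looks like the following. This is the standard textbook filter, not the paper's improved variant; the temperature scenario and the noise variances are assumptions for the demo.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter for a (nearly) constant quantity observed in noise.
    q: process-noise variance, r: measurement-noise variance (assumed known)."""
    x, p = measurements[0], 1.0          # initial estimate and its variance
    out = []
    for z in measurements:
        p += q                           # predict
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # correct with the innovation
        p *= 1.0 - k
        out.append(x)
    return out

random.seed(0)
true_value = 25.0                         # e.g. temperature at a smart RFID tag
noisy = [true_value + random.gauss(0.0, 0.5) for _ in range(200)]
smoothed = kalman_1d(noisy, r=0.25)
```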
Improvements in in-situ filter test methods using a total light-scattering detector
Marshall, M.; Stevens, D.C.
1986-01-01
This paper presents research aimed at providing useful data on a commonly used technique: a DOP (di-2-ethylhexyl phthalate) aerosol and a total light-scattering photometer. Methods of increasing the sensitivity of this technique are described, and alternative methods of in-situ filter testing are also considered. The sensitivity of a typical, modern, total light-scattering photometer, as a function of particle diameter, has a broad maximum in mass terms between 0.1 and 0.4 μm. At its maximum usable sensitivity the instrument can detect approximately 1 particle/cm^3. This response can be explained by light-scattering theory and particle loss in the instrument inlet. The mass median diameter of the aerosols produced by various DOP generators varies from 0.2 to 1.0 μm. Experiments with good-quality HEPA filters indicate a maximum penetration for particles of 0.15-0.2 μm. Details of the studies are given and the consequences discussed. It is shown that a filter penetration of 10^-3 % can be measured in-situ with existing equipment. Methods of extending the sensitivity to measure a penetration of approximately 10^-5 % are described. (author)
Bayesian target tracking based on particle filter
Anonymous
2005-01-01
To deal with nonlinear or non-Gaussian problems, particle filters have been studied by many researchers. Building on the particle filter, an extended Kalman filter (EKF) proposal function is applied to Bayesian target tracking. Novel techniques such as the Markov chain Monte Carlo (MCMC) method and a resampling step are also introduced into Bayesian target tracking. Simulation results confirm that the particle filter improved with these techniques outperforms the basic one.
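The basic (bootstrap) particle filter that such work improves on can be sketched as follows. The 1-D random-walk model, the noise levels, and the use of systematic resampling are illustrative assumptions, not the paper's setup; the EKF proposal and MCMC moves are omitted.

```python
import math
import random

def particle_filter(obs, n=500, proc_std=1.0, obs_std=1.0):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state observed
    in Gaussian noise: predict, weight by likelihood, resample."""
    particles = [random.gauss(0.0, 5.0) for _ in range(n)]
    estimates = []
    for z in obs:
        # predict: push each particle through the motion model
        particles = [p + random.gauss(0.0, proc_std) for p in particles]
        # weight: Gaussian observation likelihood
        w = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        s = sum(w)
        w = [wi / s for wi in w]
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # systematic resampling fights weight degeneracy
        cumsum, c = [], 0.0
        for wi in w:
            c += wi
            cumsum.append(c)
        positions = [(i + random.random()) / n for i in range(n)]
        new, j = [], 0
        for pos in positions:
            while j < n - 1 and cumsum[j] < pos:
                j += 1
            new.append(particles[j])
        particles = new
    return estimates

random.seed(1)
truth = [0.1 * t for t in range(60)]       # slowly drifting true state
observations = [x + random.gauss(0.0, 1.0) for x in truth]
est = particle_filter(observations)
```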
Methods for model selection in applied science and engineering.
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
A New Synchronous Reference Frame-Based Method for Single-Phase Shunt Active Power Filters
Monfared, Mohammad; Golestan, Saeed; Guerrero, Josep M.
2013-01-01
This paper deals with the design of a novel method in the synchronous reference frame (SRF) to extract the reference compensating current for single-phase shunt active power filters (APFs). Unlike previous works in the SRF, the proposed method has the innovative feature that it does not need ... the fictitious current signal. Frequency-independent operation, accurate reference current extraction and relatively fast transient response are other key features of the presented strategy. The effectiveness of the proposed method is investigated by means of detailed mathematical analysis. The results confirm ...
McKenney, Sarah E.; Nosratieh, Anita; Gelskey, Dale; Yang Kai; Huang Shinying; Chen Lin; Boone, John M.
2011-01-01
Purpose: Beam-shaping or "bow tie" (BT) filters are used to spatially modulate the x-ray beam in a CT scanner, but the conventional method of step-and-shoot measurement to characterize a beam's profile is tedious and time-consuming. The theory for characterization of bow tie relative attenuation (COBRA) method, which relies on a real-time dosimeter to address the issues of conventional measurement techniques, was previously demonstrated using computer simulations. In this study, the feasibility of the COBRA theory is further validated experimentally through the employment of a prototype real-time radiation meter and a known BT filter. Methods: The COBRA method consisted of four basic steps: (1) the probe was placed at the edge of a scanner's field of view; (2) a real-time signal train was collected as the scanner's gantry rotated with the x-ray beam on; (3) the signal train, without a BT filter, was modeled using peak values measured in the signal train of step 2; and (4) the relative attenuation of the BT filter was estimated from filtered and unfiltered data sets. The prototype probe was first verified to have an isotropic and linear response to incident x-rays. The COBRA method was then tested on a dedicated breast CT scanner with a custom-designed BT filter and compared to the conventional step-and-shoot characterization of the BT filter. Using basis decomposition of dual-energy signal data, the thickness of the filter was estimated and compared to the BT filter's manufacturing specifications. The COBRA method was also demonstrated with a clinical whole-body CT scanner using the body BT filter. The relative attenuation was calculated at four discrete x-ray tube potentials and used to estimate the thickness of the BT filter. Results: The prototype probe was found to have a linear and isotropic response to x-rays. The relative attenuation produced from the COBRA method fell within the error of the relative attenuation measured with the step-and-shoot method.
Phelps, Amanda C [Malibu, CA; Kirby, Kevin K [Calabasas Hills, CA; Gregoire, Daniel J [Thousand Oaks, CA
2012-02-14
A resistively heated diesel particulate filter (DPF). The resistively heated DPF includes a DPF having an inlet surface and at least one resistive coating on the inlet surface. The at least one resistive coating is configured to substantially maintain its resistance in an operating range of the DPF. The at least one resistive coating has a first terminal and a second terminal for applying electrical power to resistively heat up the at least one resistive coating in order to increase the temperature of the DPF to a regeneration temperature. The at least one resistive coating includes metal and semiconductor constituents.
Analysis of concrete beams using applied element method
Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen
2018-03-01
The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, as in FEM, the structure is analysed by dividing it into several elements; but in AEM the elements are connected by springs instead of nodes. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate its application, AEM is used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs has little influence on the results. AEM could predict deflections and reactions with a reasonable degree of accuracy.
The Lattice Boltzmann Method applied to neutron transport
Erasmus, B.; Van Heerden, F. A.
2013-01-01
In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport, and the results are compared to a reference solution calculated using MCNP. (authors)
Advanced methods for image registration applied to JET videos
Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)
2015-10-15
Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and coherent point drift points set registration technique. • Method able to deal with extremely noisy images and very low luminosity images. • Computation time compatible with the inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide-angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
MODEL-ORIENTED METHOD OF DESIGN IMPLEMENTATION WHEN CREATING DIGITAL FILTERS
V. Levinskyi
2016-12-01
This article discusses, by example, a model-oriented method for the design and development of digital low-pass filters (LPF) for automatic control systems (ACS). Typically, high-frequency noise and disturbance attenuation is carried out by an analogue LPF. However, the technical implementation of analogue filters above second order poses certain difficulties related to the need for precise selection of passive component ratings (resistors, capacitors). If the spectral composition of the noise and disturbances is known, it is possible to build a digital LPF with a Nyquist frequency greater than the maximum frequency in the noise spectrum. This possibility has appeared with the market entry of cheap, energy-efficient, high-speed 32-bit microcontrollers, which sample analogue signals at 30 kHz and above. The traditional approach of "manual" calculation of filter parameters, derivation of their recurrence expressions and subsequent program implementation demands high qualification and much time from the developer. An alternative is the model-oriented method of design (MOMD) in the MatLab environment, in which the design of the digital LPF, verification of its performance as part of the ACS, and generation and compilation of program code for the selected microcontroller family all take place in one environment. MOMD can also be used in the design of band-pass and band-stop filters for adaptive control systems or systems of technical diagnostics. If, during commissioning or operation of the ACS, the digital LPF parameters need to change, the change can be performed within half an hour. The MOMD technology significantly reduces the time to develop a specific product without loss of design quality, owing to the extensive capabilities of the MatLab development environment.
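The recurrence expressions such a workflow generates can be illustrated with the simplest case: a first-order RC low-pass discretized with the (pre-warped) bilinear transform. The cutoff and sampling frequencies below are arbitrary example values, not taken from the article.

```python
import math

def lpf_coeffs(fc, fs):
    """First-order analogue RC low-pass discretized with the bilinear
    transform (pre-warped): y[n] = b*(x[n] + x[n-1]) + a*y[n-1]."""
    wa = math.tan(math.pi * fc / fs)     # pre-warped analogue cutoff
    return wa / (1.0 + wa), (1.0 - wa) / (1.0 + wa)

def lpf(x, fc, fs):
    b, a = lpf_coeffs(fc, fs)
    y, xp, yp = [], 0.0, 0.0
    for v in x:
        yn = b * (v + xp) + a * yp
        y.append(yn)
        xp, yp = v, yn
    return y

# unit DC gain and strong attenuation at Nyquist, for fc = 100 Hz, fs = 10 kHz
step = lpf([1.0] * 500, 100.0, 10000.0)
nyq = lpf([1.0 if i % 2 == 0 else -1.0 for i in range(500)], 100.0, 10000.0)
```

Higher-order designs chain such sections (or are generated directly by the toolchain), which is exactly the tedium MOMD automates.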
LI Yang
2017-02-01
Aiming at the problem of high-frequency noise interference in the ECT data acquisition system, and based on an analysis of the ECT system's data acquisition and control principles, we designed an FIR low-pass digital filter with an improved distributed-arithmetic algorithm, combining FPGA technology with digital filtering principles. The sampling frequency of the filter is 1.5 MHz, the passband cutoff frequency is 20 MHz, and the design method is the window function. We used the FDATool toolbox in MATLAB to extract and quantize the filter coefficients, and Quartus for simulation. Experimental results showed that the FIR digital filter achieves the required filtering of the high-frequency signal in the data acquisition system. Compared with the traditional DA algorithm, it has the advantages of low resource consumption and high acquisition speed, among other characteristics.
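The window-function design method mentioned above can be sketched independently of the FPGA implementation: multiply the ideal sinc impulse response by a window (Hamming here, as an assumption; the abstract does not say which window was used) and normalize for unity DC gain. The tap count and cutoff below are illustrative.

```python
import math

def firwin_lowpass(numtaps, cutoff):
    """Window-method FIR design: ideal sinc response times a Hamming window.
    cutoff is normalized to the Nyquist frequency (1.0 = Nyquist)."""
    m = numtaps - 1
    h = []
    for k in range(numtaps):
        x = k - m / 2.0
        ideal = cutoff if x == 0 else math.sin(math.pi * cutoff * x) / (math.pi * x)
        hamming = 0.54 - 0.46 * math.cos(2.0 * math.pi * k / m)
        h.append(ideal * hamming)
    s = sum(h)
    return [c / s for c in h]            # unity gain at DC

def gain(h, f):
    """Magnitude response at normalized frequency f (Nyquist = 1)."""
    re = sum(c * math.cos(math.pi * f * k) for k, c in enumerate(h))
    im = sum(c * math.sin(math.pi * f * k) for k, c in enumerate(h))
    return math.hypot(re, im)

taps = firwin_lowpass(41, 0.2)           # 41-tap low-pass, cutoff at 0.2*Nyquist
```

On an FPGA these taps would be quantized to fixed point and evaluated with distributed arithmetic, replacing multipliers by table lookups.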
Classification of Specialized Farms Applying Multivariate Statistical Methods
Zuzana Hloušková
2017-01-01
The paper applies advanced multivariate statistical methods to classify cattle-breeding farming enterprises by their economic size. The advantage of the model is its ability to use a few selected indicators, compared with the complex methodology of the current classification model, which requires knowledge of the detailed structure of herd turnover and of cultivated crops. The output of the paper is intended to be applied within farm-structure research focused on the future development of Czech agriculture. As the data source, the farming-enterprises database for 2014 from the FADN CZ system has been used. The proposed predictive model exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis multifactor classification method supported the correct filing of farming enterprises in the group of Small farms (98% filed correctly) and of Large and Very Large enterprises (100% filed correctly). Medium-size farms were filed correctly at only 58.11%. Partial shortcomings of the presented process were found when discriminating between Medium and Small farms.
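The discriminant analysis can be illustrated with the two-class Fisher discriminant, a simplified stand-in for the paper's multi-class analysis. The two farm indicators and all data values below are invented for the demonstration.

```python
def fisher_lda(class_a, class_b):
    """Two-class Fisher discriminant on two features: w = S_w^-1 (m_a - m_b),
    with a midpoint threshold on the projected means."""
    def mean(rows):
        n = len(rows)
        return [sum(r[j] for r in rows) / n for j in (0, 1)]
    ma, mb = mean(class_a), mean(class_b)
    # pooled within-class scatter matrix
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((class_a, ma), (class_b, mb)):
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    thr = sum(w[j] * (ma[j] + mb[j]) / 2 for j in (0, 1))
    return w, thr

# hypothetical farm indicators: [herd size (hundreds), utilized area (hundred ha)]
small = [[0.5, 0.4], [0.7, 0.6], [0.6, 0.3], [0.4, 0.5]]
large = [[3.0, 2.5], [2.8, 3.1], [3.5, 2.9], [3.2, 2.6]]
w, thr = fisher_lda(large, small)
score = lambda r: w[0] * r[0] + w[1] * r[1]
```

A new farm is filed as "large" when its score exceeds the threshold, "small" otherwise.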
Ricchiuto, D.; Liserre, M.; Kerekes, Tamas
2011-01-01
Grid-connected converters usually employ an LCL filter to reduce PWM harmonics. To avoid the well-known stability problems, either passive or active damping methods must be used. Active damping methods avoid losses and preserve the filter effectiveness, but they are more sensitive ... to parameter variation. In this paper the robustness of active damping methods is investigated, considering those using only the same state variable (grid-side or converter-side current) normally used for current control (filter-based) and those methods using more state variables (multiloop). Simulation ...
A wavelet filtering method for cumulative gamma spectroscopy used in wear measurements
Bianchi, Davide; Lenauer, Claudia; Betz, Gerhard; Vernes, András
2017-01-01
Continuous ultra-mild wear quantification using radioactive isotopes involves measuring very low amounts of activity in limited time intervals. This results in gamma spectra with poor signal-to-noise ratio and hence very scattered wear data, especially during running-in, where wear is intrinsically low. Therefore, advanced filtering methods reducing the wear data scattering and making the calculation of the main peak area more accurate are mandatory. An energy-time dependent threshold for wavelet detail coefficients based on Poisson statistics and using a combined Barwell law for the estimation of the average photon counting rate is then introduced. In this manner, it was shown that the accuracy of running-in wear quantification is enhanced. - Highlights: • Time-dependent Poisson statistics. • Wavelet-based filtering of cumulative gamma spectra. • Improvement of low wear analysis.
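The wavelet-thresholding step can be sketched with a one-level Haar transform and a fixed hard threshold on the detail coefficients. This is a simplification: the paper's threshold is energy-time dependent and Poisson-derived, whereas here a constant threshold and an invented synthetic "gamma peak" are used.

```python
import math
import random

def haar_step(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def inv_haar_step(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
    return out

def haar_denoise(x, thr):
    """Hard-threshold the detail coefficients, then reconstruct."""
    a, d = haar_step(x)
    d = [0.0 if abs(c) < thr else c for c in d]
    return inv_haar_step(a, d)

random.seed(2)
clean = [100.0 * math.exp(-((i - 32) / 6.0) ** 2) for i in range(64)]  # a peak
noisy = [c + random.gauss(0.0, 3.0) for c in clean]
den = haar_denoise(noisy, thr=6.0)      # threshold ~ 2x the noise std
```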
Shkvarko Yuriy
2006-01-01
We address a new approach to solving the ill-posed nonlinear inverse problem of high-resolution numerical reconstruction of the spatial spectrum pattern (SSP) of the backscattered wavefield sources distributed over the remotely sensed scene. An array or synthesized-array radar (SAR) that employs digital data signal processing is considered. By exploiting the idea of combining the statistical minimum-risk estimation paradigm with numerical descriptive regularization techniques, we develop a new fused statistical descriptive regularization (SDR) strategy for enhanced radar imaging. Pursuing this approach, we establish a family of SDR-related SSP estimators that encompass a manifold of existing beamforming techniques, ranging from the traditional matched filter to robust and adaptive spatial filtering and minimum variance methods.
Metrological evaluation of characterization methods applied to nuclear fuels
Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho
2010-01-01
In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels: thermogravimetric analysis for the O/U ratio; the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity; the BET method for the specific surface; chemical analyses for relevant impurities; and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectroscopy for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel; sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study design basis accidents. The work focused on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) for UO2 samples. The thermal characterization of the UO2 samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of the
Nuclear and nuclear related analytical methods applied in environmental research
Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.
2010-01-01
Nuclear analytical methods can be used for research activities in environmental studies such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution and remediation. Heavy-metal pollution is a problem associated with areas of intensive industrial activity. In this work the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: Neutron Activation Analysis (NAA), Atomic Absorption Spectrometry (AAS) and Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). These high-sensitivity analysis methods were used to determine the chemical composition of samples of mosses placed in different areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all these methods, and the results agreed very well within statistical limits, which demonstrates that these analytical methods can be applied to a large spectrum of environmental samples with consistent results. (authors)
Applied systems ecology: models, data, and statistical methods
Eberhardt, L L
1976-01-01
In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.
Analysis of Brick Masonry Wall using Applied Element Method
Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen
2018-03-01
The Applied Element Method (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM), but in AEM the elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analysed in the framework of AEM: the composite nature of the wall is easily modelled using springs, with the brick springs and mortar springs assumed to be connected in series. The brick masonry wall is analysed and the failure load determined for different loading cases. The results were used to find the best aspect ratio of brick for strengthening a brick masonry wall.
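The series combination of brick and mortar springs mentioned above reduces to elementary stiffness algebra. The interface-spring formula and all material values below are illustrative assumptions (a common AEM form takes k = E·d·t/a for an interface strip of height d, wall thickness t and element size a), not figures from the paper.

```python
def series_stiffness(k_brick, k_mortar):
    """Equivalent stiffness of a brick spring and a mortar spring in series:
    1/k_eq = 1/k_brick + 1/k_mortar."""
    return 1.0 / (1.0 / k_brick + 1.0 / k_mortar)

def interface_spring_k(E, d, t, a):
    """Normal spring stiffness of an interface strip (hypothetical values):
    Young's modulus E, strip height d, wall thickness t, element size a."""
    return E * d * t / a

k_brick = interface_spring_k(2.0e10, 0.05, 0.10, 0.20)   # brick material
k_mortar = interface_spring_k(1.0e9, 0.05, 0.10, 0.02)   # mortar joint
k_joint = series_stiffness(k_brick, k_mortar)
```

Because the softer mortar spring dominates the series combination, the joint stiffness is always below the stiffness of either constituent.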
Thermally stimulated current method applied to highly irradiated silicon diodes
Pintilie, I; Pintilie, I; Moll, Michael; Fretwurst, E; Lindström, G
2002-01-01
We propose an improved method for the analysis of Thermally Stimulated Currents (TSC) measured on highly irradiated silicon diodes. The proposed TSC formula for the evaluation of a set of TSC spectra obtained at different reverse biases yields not only the concentrations of the electron and hole traps visible in the spectra but also an estimate of the concentration of defects that do not give rise to a peak in the 30-220 K TSC temperature range (very shallow or very deep levels). The method is applied to a diode irradiated with a neutron fluence of Φ_n = 1.82×10^13 n/cm^2.
Fingerprinting Localization Method Based on TOA and Particle Filtering for Mines
Boming Song
2017-01-01
Accurate target localization technology plays a very important role in ensuring safe mine production and higher production efficiency. The localization accuracy of a mine localization system is influenced by many factors, the most significant being the non-line-of-sight (NLOS) propagation error of the localization signal between the access point (AP) and the target node (Tag). To improve positioning accuracy, the NLOS error must be suppressed by an optimization algorithm; however, traditional optimization algorithms are complex and exhibit poor optimization performance. To solve this problem, this paper proposes a new method for mine time-of-arrival (TOA) localization based on the idea of comprehensive optimization. The proposed method utilizes particle filtering to reduce the TOA data error, and the positioning results are further optimized with fingerprinting based on the Manhattan distance. The method combines the advantages of particle filtering and fingerprinting localization: it reduces algorithm complexity and has better error-suppression performance. The experimental results demonstrate that, compared to the symmetric double-sided two-way ranging (SDS-TWR) method and the received signal strength indication (RSSI) based fingerprinting method, the proposed method significantly improves localization performance and enhances environmental adaptability.
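The fingerprinting step with a Manhattan metric reduces to a nearest-neighbour lookup in a reference map. The three-AP reference map and the measured vector below are invented for illustration (in the paper, the measured vector would first be cleaned by the particle filter).

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def fingerprint_locate(measured, database):
    """Nearest-fingerprint lookup: database maps grid positions to reference
    TOA-derived range vectors (one entry per access point)."""
    return min(database, key=lambda pos: manhattan(measured, database[pos]))

# hypothetical 3-AP reference map of TOA-derived ranges (metres)
db = {
    (0, 0): [5.0, 12.0, 9.0],
    (0, 5): [7.5, 9.0, 6.0],
    (5, 0): [3.0, 14.5, 11.0],
    (5, 5): [6.0, 10.5, 4.0],
}
pos = fingerprint_locate([7.2, 9.3, 6.4], db)
```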
A coupling method for a cardiovascular simulation model which includes the Kalman filter.
Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya
2012-01-01
Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. To simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of computation, since a system of nonlinear equations must be solved at each time step. We therefore propose a coupling method that decreases the amount of computation by using the Kalman filter. In our method, the Kalman filter calculates an approximation of the solution to the system of nonlinear equations at each time step, and this approximation is then used as the initial value for solving the system. The proposed method decreases the number of iterations required by 94.0% compared with the conventional strong coupling method, and requires 49.4% fewer iterations than a smoothing-spline predictor.
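Why a predicted initial value cuts the iteration count can be shown on a toy time-stepped root-finding problem. The cubic equation and drift below are invented, and the Kalman-filter predictor is replaced by the simplest possible one (reuse the previous step's solution); the mechanism, a warm start close to the new root, is the same.

```python
def newton_scalar(f, df, x0, tol=1e-12, max_it=100):
    """Newton iteration; returns (root, iteration count)."""
    x, it = x0, 0
    while abs(f(x)) > tol and it < max_it:
        x -= f(x) / df(x)
        it += 1
    return x, it

# a toy time-stepped nonlinear equation whose root drifts each step,
# standing in for the coupled cardiovascular system (hypothetical):
# f_t(x) = x**3 - c_t with slowly increasing c_t
iters_cold, iters_warm, prev = 0, 0, 1.0
for t in range(20):
    c = 1.0 + 0.05 * t
    f = lambda x, c=c: x ** 3 - c
    df = lambda x: 3.0 * x ** 2
    _, n_cold = newton_scalar(f, df, 1.0)      # restart from a fixed guess
    iters_cold += n_cold
    # warm start from the previous step's solution, in the spirit of the
    # paper's Kalman-filter approximation of the new solution
    prev, n_warm = newton_scalar(f, df, prev)
    iters_warm += n_warm
```

A Kalman filter improves on this by also tracking the trend of the solution, so its prediction stays close even when the state drifts faster.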
Hybrid electrokinetic method applied to mix contaminated soil
Mansour, H.; Maria, E. [Dept. of Building, Civil and Environmental Engineering, Concordia Univ., Montreal (Canada)]
2001-07-01
Several industrial and municipal areas in North America are contaminated with heavy metals and petroleum products. This mixed contamination presents a particularly difficult remediation task when it occurs in clayey soil. The objective of this research was to find a method to clean up mixed-contaminated clayey soils; to this end, a multifunctional hybrid electrokinetic method was investigated. Clayey soil was contaminated with the heavy metals lead and nickel at a level of 1000 ppm and with phenanthrene (a PAH) at 600 ppm. An electrokinetic surfactant supply system was applied for the mobilization, transport, and removal of phenanthrene. A chelating agent (EDTA) was also supplied electrokinetically to mobilize the heavy metals. The studies were performed on eight lab-scale electrokinetic cells. The mixed-contaminated clayey soil was subjected to a DC total voltage gradient of 0.3 V/cm. The supplied liquids (surfactant and EDTA) were introduced over different periods of time (22 days, 42 days) in order to maximize the removal of contaminants. The pH, electrical parameters, volume supplied, and volume discharged were monitored continuously during each experiment. At the end of these tests, the soil and catholyte were subjected to physico-chemical analysis. The paper discusses the results of the experiments, including optimal energy use, the removal efficiency of phenanthrene, and the transport and removal of heavy metals. The results of this study can be applied to in-situ hybrid electrokinetic technology for remediating clayey sites contaminated with petroleum products mixed with heavy metals (e.g., manufactured gas plant sites). (orig.)
Contribution to the improvement of the sodium chloride air filter test method
Delhaye, J.; Michel, J.
1977-01-01
The essential features of the test method initially developed by the Porton Down Chemical Defence Establishment, and modified subsequently by the Atomic Energy Research Establishment at Harwell, have been adopted for the testing of high-efficiency filters by the European Committee of Manufacturers of Equipment for Air Treatment (EUROVENT). The method has also been studied in the context of the ISO. The Heating and Ventilation Industries Technical Centre (CETIAT), which uses this method, has drawn attention to a number of imperfections which affect reproducibility. It proposes changes which should have the effect of making the method reproducible not only in a given laboratory but also from one laboratory to another. It will then be possible to carry out studies comparing this method with other similar ones, in particular the fluorescein method (Standard NF X 44 011). The work carried out by CETIAT was concerned mainly with the following: aerosol generation, the velocity spectra in sampling sections, and photometer calibration.
Matrix pencil method-based reference current generation for shunt active power filters
Terriche, Yacine; Golestan, Saeed; Guerrero, Josep M.
2018-01-01
… are using the discrete Fourier transform (DFT) in the frequency domain, or the instantaneous p–q theory and the synchronous reference frame in the time domain. The DFT, however, suffers from the picket-fence effect and spectral leakage, and takes at least one cycle of the nominal frequency. The time-domain methods show a weakness under voltage distortion, which requires prior filtering techniques. The aim of this study is to present a fast yet effective method for generating the RCC for SAPFs. The proposed method, which is based on the matrix pencil method, has a fast dynamic response and works well under distorted and unbalanced voltage. Moreover, the proposed method can estimate the voltage phase accurately; this property enables the algorithm to compensate for both power factor and current unbalance. The effectiveness of the proposed method is verified using simulation…
Du, Chuan; Wang, Jiadao; Chen, Zhifu; Chen, Darong
2014-01-01
Graphical abstract: - Highlights: • A method for fabricating durable superhydrophobic filter paper was developed. • Oil–water separation efficiency exceeds 99% using the as-prepared filter paper. • The as-prepared filter paper has good recyclability and durability. • The method is easy, low cost, and can be industrialized. - Abstract: A method for manufacturing durable superhydrophobic and superoleophilic filter paper for oil–water separation was developed via colloidal deposition. A porous film composed of PTFE nanoparticles was formed on filter paper, making it superhydrophobic with a water contact angle of 155.5° and superoleophilic with an oil contact angle of 0°. The obtained filter paper could separate a series of oil–water mixtures effectively, with separation efficiencies over 99%. The as-prepared filter paper retained stable superhydrophobicity and high separation efficiency even after 30 cycles and also worked well under harsh environmental conditions such as strongly acidic or alkaline solutions, high temperature, and ultraviolet irradiation. Compared with other approaches for fabricating oil–water separation materials, this approach can produce full-scale, durable, and practical oil–water separation materials easily and economically. The as-prepared filter paper is a promising candidate for oil–water separation.
A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy
Oktay Büyükaşık
2010-12-01
Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex, and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy in all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea, and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69 years old. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea, and anemia were found most commonly in cases with reconstruction without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly with omega esophagojejunostomy (EJ), and least with Roux-en-Y, Tooley, and Tanner 19 EJ. Conclusion: Reconstruction with a pouch performed after total gastrectomy remains a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Analytical methods applied to diverse types of Brazilian propolis
Marcucci Maria
2011-06-01
Propolis is a bee product composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the species of bee. Brazil is an important supplier of propolis on the world market and, although the green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for extraction; analysis of the percentage of resins, wax, and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid, and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification, and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant, and analgesic activities of diverse types of Brazilian propolis have been evaluated. The most common methods employed and overviews of their relative results are presented.
Teaching organization theory for healthcare management: three applied learning methods.
Olden, Peter C
2006-01-01
Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix problems of organization structure, design, and process. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so their missions, visions, and goals can be achieved, and in some cases so their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.
Six Sigma methods applied to cryogenic coolers assembly line
Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René
2009-05-01
Six Sigma methods have been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project followed the DMAIC guideline with its five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team was gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and the results of the R&R gage study show that measurement is one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process mapping analysis: the regenerator filling factor and the cleaning procedure. Causes of measurement variability were identified and eradicated, as shown by new R&R gage results. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process was established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt was reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. Improvements in process capability have enabled the introduction of a sample testing procedure before delivery.
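The process-capability tracking mentioned at the end is standard SPC arithmetic rather than anything specific to this paper; a minimal sketch of the usual Cp/Cpk indices:

```python
def cp(std, lsl, usl):
    """Potential capability: spec width over six process sigmas."""
    return (usl - lsl) / (6 * std)

def cpk(mean, std, lsl, usl):
    """Actual capability: distance from the process mean to the nearest
    spec limit, in 3-sigma units; equals cp only for a centered process."""
    return min(usl - mean, mean - lsl) / (3 * std)
```

A Cpk of 1.33 or higher is the conventional threshold for a capable process; sample testing before delivery becomes defensible once the control charts show the process stable at such a level.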
An Effective Gap Filtering Method for Landsat ETM+ SLC-Off Data
Seulki Lee
2016-01-01
The Landsat 7 Enhanced Thematic Mapper Plus (ETM+) scan line corrector (SLC) failed on 31 May 2003, and the SLC was turned off. Many gap-filled products were developed and deployed to combat this situation. The majority of these products used a primary image, taken while the SLC was functioning properly, in an attempt to correct SLC-off images. However, temporal atmospheric elements could not be reliably reflected using a primary image, and therefore the corrected image was not viable for use by monitoring systems. To bypass this limitation, this study developed the Gap Interpolation and Filtering (GIF) method, which relies on one-dimensional interpolation filtering to recover pixels within a single image at a high level of accuracy, without borrowing from images acquired at a different time or by another sensor. To determine its accuracy, the GIF method was compared with two other methods, the Global Linear Histogram Match (GLHM) and the Local Linear Histogram Match (LLHM), both developed by the National Aeronautics and Space Administration (NASA) and the United States Geological Survey (USGS). The GIF method's accuracy was found superior for land, sea, and cloud imagery. In particular, its sea and cloud images returned Root Mean Square Error (RMSE) values close to or less than 1. We expect the GIF method developed in this research to be of invaluable aid to monitoring systems that depend heavily on Landsat imagery.
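The published GIF algorithm is not reproduced in the abstract; the sketch below shows only the generic idea of one-dimensional interpolation across missing pixels within a single scan row (`None` marks an SLC-off gap; the function is an illustrative assumption, not the paper's method):

```python
def fill_gaps_1d(row, ):
    """Linearly interpolate missing pixels (None) along one scan row,
    using only valid pixels from the same image."""
    valid = [i for i, v in enumerate(row) if v is not None]
    filled = list(row)
    for i, v in enumerate(row):
        if v is not None:
            continue
        left = max((j for j in valid if j < i), default=None)
        right = min((j for j in valid if j > i), default=None)
        if left is not None and right is not None:
            t = (i - left) / (right - left)
            filled[i] = row[left] + t * (row[right] - row[left])
        elif left is not None:   # gap touches the row's end
            filled[i] = row[left]
        elif right is not None:  # gap touches the row's start
            filled[i] = row[right]
    return filled
```

For example, `fill_gaps_1d([10.0, None, None, 40.0])` yields approximately `[10, 20, 30, 40]`.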
da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues
2015-01-01
This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
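The weighted least squares calibration mentioned above can be sketched as follows (a generic implementation, with weights typically taken as the inverse variance at each concentration level; this is not the authors' code):

```python
import numpy as np

def wls_fit(x, y, w):
    """Weighted least squares fit of y ~ slope*x + intercept.

    Solves the normal equations (X'WX) beta = X'Wy; the weights w
    (e.g. 1/variance per point) down-weight the noisier points that
    violate homoscedasticity in ordinary least squares."""
    X = np.column_stack([np.asarray(x, float), np.ones(len(x))])
    W = np.diag(np.asarray(w, float))
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ np.asarray(y, float))
```

The returned pair is (slope, intercept); with equal weights this reduces to the ordinary non-weighted fit the paper argues against for heteroscedastic data.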
Metrological evaluation of characterization methods applied to nuclear fuels
Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho, E-mail: kellyfisica@gmail.co, E-mail: fernando.lameiras@pq.cnpq.b, E-mail: dmc@cdtn.b, E-mail: ranf@cdtn.b, E-mail: flmigliorini@hotmail.co, E-mail: lucsc@hotmail.co, E-mail: egonn@ufmg.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2010-07-01
In manufacturing nuclear fuel, characterizations are performed in order to ensure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio; the penetration-immersion method, helium pycnometry, and mercury porosimetry for density and porosity; the BET method for the specific surface; chemical analyses for relevant impurities; and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel, and sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO{sub 2} that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study design basis accidents. The focus was on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) for UO{sub 2} samples. The thermal characterization of UO{sub 2} samples was performed by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of
Diaz-Guerra, J.P.; Bayon, A.
1981-01-01
An X-ray fluorescence method is described for the determination of As, Ba, Co, Cr, Cu, Fe, Hg, Mn, Ni, Pb, Se, U, V, and Zn collected on PVC filters, in concentration ranges from 0.6 to 1000 μg depending on the element. A sequential automatic spectrometer with a chromium tube is used for the Ba determination, while As, Hg, Pb, Se, and U are better determined with a molybdenum one. For the rest of the elements a tungsten target is preferred. The interferences between the AsKα1,2–PbLα1,2 and CrKα1,2–VKβ1,3 lines are corrected by applying specific coefficients. The radial variation of the primary X-ray beam intensity over the irradiated surface has been specially studied with chromium, gold, molybdenum, and tungsten tubes. For that purpose, different X-ray wavelengths in the range 9.89 Å to 0.56 Å were selected. The curves obtained show a rather high heterogeneity of the excitation source. This conclusion implies the need for a homogeneous distribution of elements on the filter. (Author) 7 refs
COMPARISON OF ULTRASOUND IMAGE FILTERING METHODS BY MEANS OF MULTIVARIABLE KURTOSIS
Mariusz Nieniewski
2017-06-01
Comparison of the quality of despeckled US medical images is complicated because there is no image of a human body that is free of speckle and could serve as a reference. A number of image metrics are currently used for comparing filtering methods; however, they do not satisfactorily represent the visual quality of images or the medical expert's satisfaction with them. This paper proposes an innovative use of relative multivariate kurtosis for evaluating the most important edges in an image. Multivariate kurtosis allows one to introduce an order among the filtered images and can be used as one of the metrics for image quality evaluation. At present there is no method which jointly considers the individual metrics. Furthermore, these metrics are typically defined by comparing the noisy original and filtered images, which is incorrect since the noisy original cannot serve as a gold standard. In contrast, the proposed kurtosis is an absolute measure, calculated independently of any reference image, and it agrees with the medical expert's satisfaction to a large extent. The paper presents a numerical procedure for calculating kurtosis and describes the results of such calculations for a computer-generated noisy image, images of a general-purpose phantom and a cyst phantom, as well as real-life images of the thyroid and carotid artery obtained with a SonixTouch ultrasound machine. 16 different despeckling methods are compared via kurtosis. The paper shows that visually more satisfactory despeckling results are associated with higher kurtosis, and that to a certain degree kurtosis can be used as a single metric for evaluating image quality.
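The abstract does not define which multivariate kurtosis is used; Mardia's measure, the mean fourth power of the Mahalanobis distance to the sample mean, is the standard choice, and a sketch of it is given below (that the paper uses exactly this form is an assumption):

```python
import numpy as np

def mardia_kurtosis(X):
    """Mardia's multivariate kurtosis of an (n samples x p variables) array:
    the mean of squared Mahalanobis distances to the sample mean.
    For multivariate normal data it tends to p*(p + 2)."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(X)                              # biased sample covariance
    d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)
    return float(np.mean(d2 ** 2))
```

Because the statistic is computed from the filtered image's own samples, it needs no reference image, which is the property the paper exploits.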
Applying systems ergonomics methods in sport: A systematic review.
Hulme, Adam; Thompson, Jason; Plant, Katherine L; Read, Gemma J M; Mclean, Scott; Clacy, Amanda; Salmon, Paul M
2018-04-16
As sports systems become increasingly more complex, competitive, and technology-centric, there is a greater need for systems ergonomics methods to consider the performance, health, and safety of athletes in context with the wider settings in which they operate. Therefore, the purpose of this systematic review was to identify and critically evaluate studies which have applied a systems ergonomics research approach in the context of sports performance and injury management. Five databases (PubMed, Scopus, ScienceDirect, Web of Science, and SPORTDiscus) were searched for the dates 01 January 1990 to 01 August 2017, inclusive, for original peer-reviewed journal articles and conference papers. Reported analyses were underpinned by a recognised systems ergonomics method, and study aims were related to the optimisation of sports performance (e.g. communication, playing style, technique, tactics, or equipment), and/or the management of sports injury (i.e. identification, prevention, or treatment). A total of seven articles were identified. Two articles were focussed on understanding and optimising sports performance, whereas five examined sports injury management. The methods used were the Event Analysis of Systemic Teamwork, Cognitive Work Analysis (the Work Domain Analysis Abstraction Hierarchy), Rasmussen's Risk Management Framework, and the Systems Theoretic Accident Model and Processes method. The individual sport application was distance running, whereas the team sports contexts examined were cycling, football, Australian Football League, and rugby union. The included systems ergonomics applications were highly flexible, covering both amateur and elite sports contexts. The studies were rated as valuable, providing descriptions of injury controls and causation, the factors influencing injury management, the allocation of responsibilities for injury prevention, as well as the factors and their interactions underpinning sports performance. Implications and future
The virtual fields method applied to spalling tests on concrete
Forquin P.
2012-08-01
For one decade, spalling techniques based on the use of a metallic Hopkinson bar put in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s−1. However, the processing method, mainly based on the use of the velocity profile measured on the rear free surface of the sample (the Novikov formula), remains quite basic, and identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field is defined in the VFM equation so as to use the acceleration map as an alternative 'load cell'. This method, applied to three spalling tests, allowed Young's modulus to be identified during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage develops. It was also shown that in such a simple inertial test it is possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.
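The stress reconstruction mentioned above follows from the 1D equation of motion: integrating ρ·a over the material between a cross-section and the free end gives the average axial stress at that section. A minimal discretized sketch (uniform density and slice spacing are assumptions made here for illustration):

```python
def average_axial_stress(accelerations, rho, dx):
    """Average axial stress (Pa) at a cross-section of a 1D bar, from the
    accelerations (m/s^2) of the slices of width dx (m) between that
    section and the free end: sigma = rho * sum(a_i) * dx."""
    return rho * sum(accelerations) * dx
```

This is why the acceleration map can act as an "alternative load cell": no force transducer is needed, only the full-field kinematics.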
Multiple HEPA filter test methods. Progress report, January--December 1977
Schuster, B.; Kyle, T.; Osetek, D.
1978-09-01
Tandem high-efficiency particulate air (HEPA) filter efficiency measurements have been successfully performed on a large number of 20,000 CFM installations. The testing procedure relies on the use of a laser intracavity particle spectrometer and a very high-volume thermal dioctyl phthalate aerosol generator designed and constructed specifically for this purpose. For systems that cannot be tested in this fashion, work has been initiated on the generation and detection of a fluorescent self-identifying aerosol to eliminate the background problem. Several candidate aerosols, and methods to disperse them, have been identified. Two distinct detection concepts have evolved for the measurement of the size and concentration of these particles.
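The difficulty of tandem testing comes from simple series-filter arithmetic (standard HEPA theory, not taken from the report): penetrations multiply, so the concentration downstream of two stages is extremely low and demands a very sensitive detector.

```python
def tandem_efficiency(e1, e2):
    """Overall collection efficiency of two filters in series:
    penetrations (1 - e) multiply, so the combined penetration is tiny."""
    return 1.0 - (1.0 - e1) * (1.0 - e2)
```

Two nominal 99.97% HEPA stages, for instance, leave a combined penetration on the order of 1e-7, which motivates instruments like the laser intracavity particle spectrometer mentioned above.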
Magnetic filter apparatus and method for generating cold plasma in semiconductor processing
Vella, Michael C.
1996-01-01
Disclosed herein is a system and method for providing a plasma flood having a low electron temperature to a semiconductor target region during an ion implantation process. The plasma generator providing the plasma is coupled to a magnetic filter which allows ions and low energy electrons to pass therethrough while retaining captive the primary or high energy electrons. The ions and low energy electrons form a "cold plasma" which is diffused in the region of the process surface while the ion implantation process takes place.
Magnetic filter apparatus and method for generating cold plasma in semiconductor processing
Vella, M.C.
1996-08-13
Disclosed herein is a system and method for providing a plasma flood having a low electron temperature to a semiconductor target region during an ion implantation process. The plasma generator providing the plasma is coupled to a magnetic filter which allows ions and low energy electrons to pass therethrough while retaining captive the primary or high energy electrons. The ions and low energy electrons form a "cold plasma" which is diffused in the region of the process surface while the ion implantation process takes place. 15 figs.
Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun
2017-06-01
Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density fringe images, high-density fringe images, and variable-density fringe images. In this paper, we first present a general filtering method based on variational image decomposition that can filter speckle noise for ESPI fringe images with various densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise. A low-density fringe image is decomposed into low-density fringes and noise. A high-density fringe image is decomposed into high-density fringes and noise. We give some suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. Then we construct several models and numerical algorithms for ESPI fringe images with various densities. And we investigate the performance of these models via our extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and coherence enhancing diffusion partial differential equation filter. These two methods may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of the experimentally obtained ESPI fringe images with poor quality. The experimental results demonstrate the performance of our proposed method.
Analysis of Filter-Bank-Based Methods for Fast Serial Acquisition of BOC-Modulated Signals
Elena Simona Lohan
2007-09-01
Binary-offset-carrier (BOC) signals, selected for Galileo and modernized GPS systems, pose significant challenges for code acquisition due to the ambiguities (deep fades) which are present in the envelope of the correlation function (CF). This is different from BPSK-modulated CDMA signals, where the main correlation lobe spans a 2-chip interval, without any ambiguities or deep fades. To deal with the ambiguities due to BOC modulation, one solution is to use lower steps when scanning the code phases (i.e., lower than the traditional step of 0.5 chips used for BPSK-modulated CDMA signals). Lowering the time-bin steps entails an increase in the number of timing hypotheses and, thus, in the acquisition times. An alternative solution is to transform the ambiguous CF into an "unambiguous" CF via adequate filtering of the signal. A generalized class of frequency-based unambiguous acquisition methods is proposed here, namely the filter-bank-based (FBB) approaches. A detailed theoretical analysis of FBB methods is given for serial-search single-dwell acquisition in single-path static channels, and a comparison is made with other ambiguous and unambiguous BOC acquisition methods existing in the literature.
An optical method for characterizing carbon content in ceramic pot filters.
Goodwin, J Y; Elmore, A C; Salvinelli, C; Reidmeyer, Mary R
2017-08-01
Ceramic pot filter (CPF) technology is a relatively common means of household water treatment in developing areas, and the performance characteristics of CPFs have been characterized using production CPFs, experimental CPFs fabricated in research laboratories, and ceramic disks intended to be CPF surrogates. There is evidence that CPF manufacturers do not always fire their products according to best practices, and the result is incomplete combustion of the pore-forming material and the creation of a carbon core in the final CPFs. Researchers seldom acknowledge the potential existence of carbon cores, and at least one CPF producer has postulated that the carbon may be beneficial in terms of final water quality, given the presence of activated carbon in consumer filters marketed in the Western world. An initial step in characterizing the presence and impact of carbon cores is the characterization of those cores. An optical method, which may be more viable for producers than off-site laboratory analysis of carbon content, has been developed and verified. The use of the optical method is demonstrated via preliminary disinfection and flow-rate studies, and the results of these studies indicate that the method may be of use in studying production kiln operation.
Flood Hazard Mapping by Applying Fuzzy TOPSIS Method
Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.
2017-12-01
There are many technical methods for integrating the various factors involved in flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and each element unit is considered as an alternative. A scheme that finds the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of a large number of element units (alternatives) based on various flood indices. Therefore, TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty, since the simulation results vary with the flood scenario and topographical conditions. This ambiguity of the indices can cause uncertainty in the flood hazard map. To take the ambiguity and uncertainty of the criteria into account, fuzzy logic, which is able to handle ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map and compared them with those indicated in existing flood risk maps. We also expect that applying the flood hazard mapping methodology suggested in this paper to the production of current flood risk maps will make it possible to create new flood hazard maps that consider the priorities of hazard areas and include more varied and important information than before. Keywords: Flood hazard map; levee breach analysis; 2D analysis; MCDM; Fuzzy TOPSIS
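The classical (crisp) TOPSIS ranking underlying the paper's fuzzy variant can be sketched as below; the fuzzy extension replaces the crisp criterion values with fuzzy numbers, which this illustration omits:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix: alternatives x criteria; benefit[j] is True if a larger
    value of criterion j is better (here: more hazardous)."""
    M = np.asarray(matrix, dtype=float)
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)  # normalize, weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))  # best per criterion
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))   # worst per criterion
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)  # closeness in [0, 1]; higher = closer to ideal
```

For flood hazard, depth and velocity would be benefit-type criteria (larger means more hazardous), while a longer travel time reduces hazard and would be marked as the opposite type.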
Simultaneous pattern recognition and track fitting by the Kalman filtering method
Billoir, P.
1990-01-01
A progressive pattern recognition algorithm based on the Kalman filtering method has been tested. The algorithm starts from a small track segment or from a fitted track of a neighbouring detector, then extends the candidate tracks by adding measured points one by one. The fitted parameters and weight matrix of the candidate track are updated when a point is added, giving increasing precision in the prediction of the next point. Thus, pattern recognition and track fitting can be accomplished simultaneously. The method has been implemented and tested for track reconstruction in the vertex detector of the ZEUS experiment at DESY. Detailed procedures of the method and its performance are presented. Its flexibility is described as well. (orig.)
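The progressive fit described above can be sketched with the standard Kalman update equations. This is a hedged illustration, not the ZEUS implementation: the straight-line track model, detector planes, and noise level are assumptions.

```python
import numpy as np

def kalman_track_fit(zs, xs, sigma_meas=0.1):
    """Fit a straight track x = intercept + slope*z by adding points one by one."""
    state = np.zeros(2)                 # [intercept, slope]
    P = np.eye(2) * 1e6                 # large initial covariance (weak prior)
    R = sigma_meas ** 2
    for z, x in zip(zs, xs):
        H = np.array([[1.0, z]])        # measurement model at detector plane z
        S = H @ P @ H.T + R             # innovation covariance
        K = P @ H.T / S                 # Kalman gain
        state = state + (K * (x - H @ state)).ravel()
        P = (np.eye(2) - K @ H) @ P     # updated weight (covariance) matrix
    return state, P

zs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # assumed plane positions
rng = np.random.default_rng(0)
xs = 0.5 + 0.2 * zs + rng.normal(0.0, 0.1, zs.size)  # true track plus noise
state, P = kalman_track_fit(zs, xs)
print(state)  # close to [0.5, 0.2]
```

After each update, `state` and `P` predict the next point with shrinking uncertainty, which is what allows pattern recognition and fitting to proceed together.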
An Indoor Slam Method Based on Kinect and Multi-Feature Extended Information Filter
Chang, M.; Kang, Z.
2017-09-01
Based on the ORB-SLAM framework, in this paper the transformation parameters between adjacent Kinect image frames are computed from ORB keypoints, and from these the a-priori information matrix and information vector are calculated, realizing the motion update of a multi-feature extended information filter. From the point cloud formed by the depth image, the ICP algorithm is used to extract point features of the scene and build an observation model, while the a-posteriori information matrix and information vector are calculated, weakening the error accumulation in the positioning process. Furthermore, the ORB-SLAM framework is applied to achieve autonomous real-time positioning in an unknown indoor environment. Finally, Lidar data of the scene were collected to estimate the positioning accuracy of the proposed method.
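The information-form bookkeeping mentioned above (information matrix and information vector updated per observation) can be shown in a minimal sketch; the direct 2-D position observations and noise values are invented and stand in for the ORB/ICP observation models.

```python
import numpy as np

# Each observation is folded in by adding to the information matrix Y and
# information vector b; the state is recovered by solving Y x = b.
Y = np.eye(2) * 1e-6                  # information matrix (weak prior)
b = np.zeros(2)                       # information vector
R_inv = np.linalg.inv(np.diag([0.01, 0.01]))   # inverse measurement covariance
H = np.eye(2)                         # direct observation of 2-D position
for z in [np.array([1.02, 1.98]), np.array([0.97, 2.05]),
          np.array([1.01, 1.99])]:
    Y += H.T @ R_inv @ H              # measurement update, information form
    b += H.T @ R_inv @ z
x = np.linalg.solve(Y, b)             # posterior mean
print(x)  # close to [1.0, 2.0]
```

The additive updates are what make the information form convenient for fusing many features per frame.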
Implicit LES using adaptive filtering
Sun, Guangrui; Domaradzki, Julian A.
2018-04-01
In implicit large eddy simulations (ILES) numerical dissipation prevents buildup of small scale energy in a manner similar to the explicit subgrid scale (SGS) models. If spectral methods are used the numerical dissipation is negligible but it can be introduced by applying a low-pass filter in the physical space, resulting in an effective ILES. In the present work we provide a comprehensive analysis of the numerical dissipation produced by different filtering operations in a turbulent channel flow simulated using a non-dissipative, pseudo-spectral Navier-Stokes solver. The amount of numerical dissipation imparted by filtering can be easily adjusted by changing how often a filter is applied. We show that when the additional numerical dissipation is close to the subgrid-scale (SGS) dissipation of an explicit LES the overall accuracy of ILES is also comparable, indicating that periodic filtering can replace explicit SGS models. A new method is proposed, which does not require any prior knowledge of a flow, to determine the filtering period adaptively. Once an optimal filtering period is found, the accuracy of ILES is significantly improved at low implementation complexity and computational cost. The method is general, performing well for different Reynolds numbers, grid resolutions, and filter shapes.
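The filtering operation described above can be illustrated with a minimal sketch, not the authors' pseudo-spectral solver: a compact 3-point low-pass filter applied to a 1-D periodic field, with the imparted dissipation measured as the energy removed per application.

```python
import numpy as np

def tophat_filter(u):
    # 3-point filter with weights 1/4, 1/2, 1/4 (periodic boundaries);
    # its transfer function damps high wavenumbers far more than low ones
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(8 * x)      # large scale + small scale (illustrative)
e_before = 0.5 * np.mean(u ** 2)
u_f = tophat_filter(u)
e_after = 0.5 * np.mean(u_f ** 2)
print(e_before - e_after)  # energy removed, concentrated at small scales
```

In the adaptive scheme of the paper, the interval between such filter applications is what sets the effective SGS dissipation.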
Lai, Ting-Yu; Chen, Hsiao-I; Shih, Cho-Chiang; Kuo, Li-Chieh; Hsu, Hsiu-Yun; Huang, Chih-Chung
2016-01-01
Information about tendon displacement is important for allowing clinicians not only to quantify preoperative tendon injuries but also to identify any adhesive scarring between the tendon and adjacent tissue. The Fisher-Tippett (FT) similarity measure has recently been shown to be more accurate than the Laplacian sum of absolute differences (SAD) and Gaussian sum of squared differences (SSD) similarity measures for tracking tendon displacement in ultrasound B-mode images. However, all of these similarity measures can easily be influenced by the quality of the ultrasound image, particularly its signal-to-noise ratio. Ultrasound images of injured hands are unfortunately often of poor quality due to the presence of adhesive scars. The present study investigated a novel Kalman-filter scheme for overcoming this problem. Three state-of-the-art tracking methods (FT, SAD, and SSD) were used to track the displacements of phantom and cadaver tendons, while FT was used to track human tendons. These three tracking methods were combined individually with the proposed Kalman-filter (K1) scheme and another Kalman-filter scheme used in a previous study to optimize the displacement trajectories of the phantom and cadaver tendons. The motion of the human extensor digitorum communis tendon was measured in the present study using the FT-K1 scheme. The experimental results indicated that SSD exhibited better accuracy in the phantom experiments, whereas FT exhibited better performance for tracking real tendon motion in the cadaver experiments. All three tracking methods were influenced by the signal-to-noise ratio of the images. On the other hand, the K1 scheme was able to optimize the tracking trajectory of displacement in all experiments, even from locations with poor image quality. The human experimental data indicated that the normal tendons were displaced more than the injured tendons, and that the motion ability of the injured tendon was restored after appropriate rehabilitation.
Cryogenic filter method produces super-pure helium and helium isotopes
Hildebrandt, A. F.
1964-01-01
Helium is purified when cooled in a low pressure environment until it becomes superfluid. The liquid helium is then filtered through iron oxide particles. Heating, cooling and filtering processes continue until the purified liquid helium is heated to a gas.
Applying sociodramatic methods in teaching transition to palliative care.
Baile, Walter F; Walters, Rebecca
2013-03-01
We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Comparative Study of Different Methods for Soot Sensing and Filter Monitoring in Diesel Exhausts
Feulner, Markus; Hagen, Gunter; Hottner, Kathrin; Redel, Sabrina; Müller, Andreas; Moos, Ralf
2017-01-01
Due to increasingly tighter emission limits for diesel and gasoline engines, especially concerning particulate matter emissions, particulate filters are becoming indispensable devices for exhaust gas aftertreatment. Accordingly, for an efficient engine and filter control strategy and a cost-efficient filter design, reliable technologies to determine the soot load of the filters and to measure particulate matter concentrations in the exhaust gas during vehicle operation are highly needed. In this...
Development and testing of the detector for monitoring radon double-filter method
Sevcik, P.
2008-01-01
Applications of physics to the study of radon transport in the atmosphere and to the testing of atmospheric transport models require sensitive detection devices with low maintenance requirements. The most precise devices in the worldwide atmosphere monitoring programme (GAW) determine the volume activity of radon from the daughter products of 222Rn produced in the working volume of the detector (double-filter method). The purpose of this work was to explore, theoretically and experimentally, the possibilities and limits of a particularly simple implementation of this procedure. The tested apparatus consists of a 200 dm3 chamber (a metal drum), in which the transformation products of radon develop, and a semiconductor surface-barrier detector, which registers alpha particles from the decay of the 222Rn daughter products collected on a filter at the outlet of the chamber. The apparatus was tested in an atmosphere with elevated radon concentration. The measured variations of the 222Rn volume activity have the same character as the variations of the radon concentration in the laboratory air. The minimum detectable activity at the 95% significance level is 16.0 Bq/m3 at an air pumping rate of 20 dm3/min and 13.0 Bq/m3 at a pumping rate of 24 dm3/min. These values are still too high for using the apparatus for measurements in the outdoor atmosphere. The main limitation of the apparatus is the capture of transformation products on the inner walls of the chamber (plate-out effect). The efficiency of collecting 218Po from the chamber on the filter was only 2.8% in our measurements, but we managed to increase it to about 20% by adding an aerosol delivery system to the chamber. It turns out that a sensitive, continuously working radon monitor can be built on this principle. (author)
Ilin Dmitry
2017-01-01
Predictive learning services perform aggregation and homogenization of open data from public sources, in particular from online recruitment agencies. However, the sample of vacancies may contain a varying percentage of noise due to the frequent occurrence of homonyms. This article considers two approaches to noise reduction: the first is based on cosine similarity and the second on contextual words.
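The first approach can be sketched as follows; the vacancy texts, reference profile, and threshold are invented examples, and a plain bag-of-words representation stands in for whatever feature extraction the article uses.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Reference profile of the target profession (assumed vocabulary)
profile = Counter("python developer software testing code".split())
vacancies = [
    "senior python developer writing testing code",
    "python snake care in the city zoo",          # homonym noise
]
kept = [v for v in vacancies
        if cosine(Counter(v.split()), profile) > 0.4]
print(kept)  # the zoo vacancy is filtered out
```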
Electrically heated particulate filter regeneration methods and systems for hybrid vehicles
Gonze, Eugene V.; Paratore, Jr., Michael J.
2010-10-12
A control system for controlling regeneration of a particulate filter for a hybrid vehicle is provided. The system generally includes a regeneration module that controls current to the particulate filter to initiate regeneration. An engine control module controls operation of an engine of the hybrid vehicle based on the control of the current to the particulate filter.
Applying multi-resolution numerical methods to geodynamics
Davies, David Rhodri
Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled
Analytic methods in applied probability in memory of Fridrikh Karpelevich
Suhov, Yu M
2002-01-01
This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable
Reactor calculation in coarse mesh by finite element method applied to matrix response method
Nakata, H.
1982-01-01
The finite element method is applied to the solution of the modified formulation of the matrix-response method, aiming at reactor calculations in coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where heterogeneity is predominant and to burnup evolution problems in coarse meshes, where the burnup varies within a single coarse mesh, making the cross sections vary spatially with the evolution. (E.G.) [pt
Böning, G., E-mail: georg.boening@charite.de [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Schäfer, M.; Grupp, U. [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Kaul, D. [Department of Radiation Oncology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Kahn, J. [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Pavel, M. [Department of Gastroenterology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany); Maurer, M.; Denecke, T.; Hamm, B.; Streitparth, F. [Department of Radiology, Charité, Humboldt-University Medical School, Charitéplatz 1, 10117 Berlin (Germany)
2015-08-15
Highlights: • Iterative reconstruction (IR) in staging CT provides objective image quality equal to filtered back projection (FBP). • IR delivers excellent subjective quality and reduces effective dose compared to FBP. • In patients with neuroendocrine tumor (NET) or many other hypervascular abdominal tumors, IR can be used without sacrificing diagnostic confidence. - Abstract: Objective: To investigate whether dose reduction via adaptive statistical iterative reconstruction (ASIR) affects image quality and diagnostic accuracy in neuroendocrine tumor (NET) staging. Methods: A total of 28 NET patients were enrolled in the study. Inclusion criteria were histologically proven NET and visible tumor in abdominal computed tomography (CT). In an intraindividual study design, the patients underwent a baseline CT (filtered back projection, FBP) and follow-up CT (ASIR 40%) using matched scan parameters. Image quality was assessed subjectively using a 5-grade scoring system and objectively by determining signal-to-noise ratio (SNR) and contrast-to-noise ratios (CNRs). The applied volume computed tomography dose index (CTDIvol) of each scan was taken from the dose report. Results: ASIR 40% significantly reduced CTDIvol (10.17 ± 3.06 mGy [FBP], 6.34 ± 2.25 mGy [ASIR]) (p < 0.001) by 37.6% and significantly increased CNRs (complete tumor-to-liver, 2.76 ± 1.87 [FBP], 3.2 ± 2.32 [ASIR]) (p < 0.05) (complete tumor-to-muscle, 2.74 ± 2.67 [FBP], 4.31 ± 4.61 [ASIR]) (p < 0.05) compared to FBP. Subjective scoring revealed no significant changes for diagnostic confidence (5.0 ± 0 [FBP], 5.0 ± 0 [ASIR]), visibility of suspicious lesion (4.8 ± 0.5 [FBP], 4.8 ± 0.5 [ASIR]) and artifacts (5.0 ± 0 [FBP], 5.0 ± 0 [ASIR]). ASIR 40% significantly decreased scores for noise (4.3 ± 0.6 [FBP], 4.0 ± 0.8 [ASIR]) (p < 0.05), contrast (4.4 ± 0.6 [FBP], 4.1 ± 0.8 [ASIR]) (p < 0.001) and visibility of small structures (4.5 ± 0.7 [FBP], 4.3 ± 0.8 [ASIR]) (p < 0
A Generalized Autocovariance Least-Squares Method for Kalman Filter Tuning
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2008-01-01
This paper discusses a method for estimating noise covariances from process data. In linear stochastic state-space representations the true noise covariances are generally unknown in practical applications. Using estimated covariances, a Kalman filter can be tuned to increase the accuracy of the state estimates. There is a linear relationship between the covariances and the autocovariance. Therefore, the covariance estimation problem can be stated as a least-squares problem, which can be solved as a symmetric semidefinite least-squares problem. This problem is convex and can be solved efficiently by interior-point methods. A numerical algorithm for solving the symmetric semidefinite least-squares problem is able to handle systems with mutually correlated process noise and measurement noise. (c) 2007 Elsevier Ltd. All rights reserved.
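The linear relation between noise covariances and autocovariances can be seen in a scalar toy case (this is not the paper's general algorithm): for a random walk x[k+1] = x[k] + w observed as y[k] = x[k] + v, the differenced data d[k] = y[k+1] - y[k] satisfy Var(d) = Q + 2R and Cov(d[k], d[k+1]) = -R, two linear equations solved here by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
Q_true, R_true, n = 0.5, 2.0, 200_000
w = rng.normal(0, np.sqrt(Q_true), n)    # process noise
v = rng.normal(0, np.sqrt(R_true), n)    # measurement noise
y = np.cumsum(w) + v                     # observed random walk

d = np.diff(y)
c0 = np.mean(d * d)                      # lag-0 autocovariance estimate
c1 = np.mean(d[:-1] * d[1:])             # lag-1 autocovariance estimate

# [c0; c1] = A @ [Q; R] with A = [[1, 2], [0, -1]]
A = np.array([[1.0, 2.0], [0.0, -1.0]])
Q_est, R_est = np.linalg.lstsq(A, np.array([c0, c1]), rcond=None)[0]
print(Q_est, R_est)  # close to 0.5 and 2.0
```

The paper generalizes this to multivariable systems and enforces positive semidefiniteness of the estimated covariances.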
Characterization of filter cartridges from the IEA-R1 reactor by radiochemical method
Geraldo, Bianca; Vicente, Roberto; Ferreira, Robson J.; Goes, Marcos M.; Marumo, Julio T., E-mail: bgeraldo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2015-07-01
The filter cartridges used in the water purification system of the research nuclear reactor IEA-R1 are considered radioactive waste after their useful life. The characterization of these wastes is one of the stages of management, which aims to identify and quantify the radionuclides present, including those known as 'difficult to measure' (DTM) radionuclides. Establishing a radiochemical analysis methodology for this type of waste is a difficult job, not only because of the techniques involved, but also because of the number of radionuclides that should be analyzed. In the waste produced in a nuclear reactor, the most important radionuclides are fission products, activation products and transuranic elements. Since the DTM radionuclides emit no gamma radiation measurable in their decay process and consequently are difficult to measure, their concentrations can be estimated by indirect methods such as scale factors. This method evaluates the DTM concentration, represented by alpha- and beta-emitting nuclides, using the correlation between them and a key radionuclide, a gamma emitter. The objective of this work is to describe a radiochemical analysis methodology for the gamma-emitting nuclides present in the filter cartridges, evaluating the activities and concentrations by destructive assays. At the same time, two studies have been performed by non-destructive assays, the first based on dose rates and the point kernel method to correlate the results, and the second based on efficiency calibration with the Monte Carlo method. These studies belong to the radioactive waste characterization program that has been conducted at the Waste Management Laboratory of the Nuclear and Energy Research Institute, IPEN-CNEN/SP. (author)
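The scale-factor idea can be sketched in a few lines; the nuclide pair, the activity values, and the geometric-mean convention are assumptions for illustration, not data from this work.

```python
import math

# Activities from a few destructively analyzed samples (invented numbers)
key_activity = [120.0, 340.0, 80.0, 510.0]    # e.g. a gamma-emitting key nuclide, Bq
dtm_activity = [2.5, 6.8, 1.7, 10.1]          # e.g. a DTM beta emitter, Bq

# Geometric-mean scale factor (a common convention; an assumption here)
logs = [math.log(d / k) for d, k in zip(dtm_activity, key_activity)]
sf = math.exp(sum(logs) / len(logs))

# Apply to a new waste item where only the key nuclide was measured
estimated_dtm = sf * 260.0
print(sf, estimated_dtm)
```

Once the scale factor is established from destructive assays, routine items need only the non-destructive gamma measurement.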
Multilevel ensemble Kalman filtering
Hoel, Haakon; Chernov, Alexey; Law, Kody; Nobile, Fabio; Tempone, Raul
2016-01-01
The ensemble Kalman filter (EnKF) is a sequential filtering method that uses an ensemble of particle paths to estimate the means and covariances required by the Kalman filter by the use of sample moments, i.e., the Monte Carlo method. EnKF is often both robust and efficient, but its performance may suffer in settings where the computational cost of accurate simulations of particles is high. The multilevel Monte Carlo method (MLMC) is an extension of classical Monte Carlo methods which, by sampling stochastic realizations on a hierarchy of resolutions, may reduce the computational cost of moment approximations by orders of magnitude. In this work we have combined the ideas of MLMC and EnKF to construct the multilevel ensemble Kalman filter (MLEnKF) for the setting of finite dimensional state and observation spaces. The main idea of this method is to compute particle paths on a hierarchy of resolutions and to apply multilevel estimators on the ensemble hierarchy of particles to compute Kalman filter means and covariances. Theoretical results and a numerical study of the performance gains of MLEnKF over EnKF will be presented. Some ideas on the extension of MLEnKF to settings with infinite dimensional state spaces will also be presented.
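The plain EnKF building block that MLEnKF extends can be sketched for a scalar state: sample moments of the forecast ensemble replace the exact Kalman covariance. Ensemble size, prior, and observation below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500                                     # ensemble size
x_true = 1.0
ensemble = rng.normal(0.0, 1.0, N)          # forecast ensemble
R = 0.25                                    # measurement noise variance
y = x_true + rng.normal(0.0, np.sqrt(R))    # one noisy observation

C = np.var(ensemble, ddof=1)                # sample forecast covariance
K = C / (C + R)                             # ensemble Kalman gain
perturbed = y + rng.normal(0.0, np.sqrt(R), N)   # perturbed observations
analysis = ensemble + K * (perturbed - ensemble)
print(analysis.mean())  # pulled from the prior mean toward the observation
```

MLEnKF replaces the single-resolution sample moments `C` with multilevel estimators computed across a hierarchy of simulation resolutions.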
Adams-Bashforth-Moulton method with Savitzky-Golay filter to reduce reactivity fluctuations
Suescun-Diaz, Daniel; Rasero Causil, Diego A. [Univ. Surcolombiana, Neiva (Colombia). Dept. de Ciencias Exactas y Naturales; Figueroa-Jimenez, Jorge H. [Pontificia Universidad Javeriana Cali, Cali (Colombia). Dept. de Ciencias Naturales y Matematicas
2017-12-15
In this paper we present a new method of calculating reactivity with fluctuation reduction. First, we propose a generalized predictor-corrector scheme using the fourth-order Adams-Bashforth-Moulton (ABM) method for the numerical solution of the point kinetics equations for the calculation of reactivity, without using the history of nuclear power. Owing to the nature of the point kinetics equations, we use modifiers of the different predictors to increase the precision of the approximation obtained. Second, we use the filter known as Savitzky-Golay (SG), which permits the reduction of the fluctuation in reactivity. It is known that the SG filter smooths without diminishing the value of the nuclear power, irrespective of its form; this guarantees the reduction of Gaussian random noise levels, distributed around the average value of the nuclear power, of up to σ = 0.1 with a time step h = 0.01 s. This formulation uses the Gram polynomial approximation with degree d = 2; the results show better values for the maximum difference in reactivity in comparison with those reported in the literature.
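The smoothing step alone can be sketched as follows (the ABM predictor-corrector for the point kinetics equations is not reproduced): Savitzky-Golay coefficients of degree d = 2 are built from a local polynomial least-squares fit and applied to a noisy power signal. The window length and test signal are assumptions.

```python
import numpy as np

def savgol_coeffs(window, degree=2):
    m = window // 2
    t = np.arange(-m, m + 1)
    A = np.vander(t, degree + 1, increasing=True)   # basis [1, t, t^2]
    # Row 0 of the pseudo-inverse gives the smoothed value at the window center
    return np.linalg.pinv(A)[0]

def savgol_smooth(y, window=11, degree=2):
    c = savgol_coeffs(window, degree)
    return np.convolve(y, c[::-1], mode="same")

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 1001)                # h = 0.01 s as in the abstract
power = np.exp(0.1 * t)                     # smooth reference power (illustrative)
noisy = power + rng.normal(0, 0.1, t.size)  # sigma = 0.1 as in the abstract
smoothed = savgol_smooth(noisy)
print(np.std(smoothed[50:-50] - power[50:-50]))  # well below the raw noise level
```

Because a degree-2 polynomial fits smooth power shapes locally, the filter suppresses the noise without biasing the power value, which is the property the reactivity calculation relies on.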
Zilber, Nicolas A.; Katayama, Yoshinori; Iramina, Keiji; Erich, Wintermantel
2010-05-01
A new approach is proposed to test the efficiency of methods, such as the Kalman filter and the independent component analysis (ICA), when applied to remove the artifacts induced by transcranial magnetic stimulation (TMS) from electroencephalography (EEG). By using EEG recordings corrupted by TMS induction, the shape of the artifacts is approximately described with a model based on an equivalent circuit simulation. These modeled artifacts are subsequently added to other EEG signals—this time not influenced by TMS. The resulting signals prove of interest since we also know their form without the pseudo-TMS artifacts. Therefore, they enable us to use a fit test to compare the signals we obtain after removing the artifacts with the original signals. This efficiency test turned out very useful in comparing the methods between them, as well as in determining the parameters of the filtering that give satisfactory results with the automatic ICA.
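The efficiency-test idea above can be sketched generically: add a modeled artifact to a clean signal, remove it with any method, and score the result against the known clean signal. The exponential artifact shape and the subtraction-based "removal" below are stand-ins for the circuit-model artifact and the Kalman/ICA methods of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.normal(size=t.size)  # pseudo-EEG
artifact = 5.0 * np.exp(-t / 0.05)          # decaying pseudo-TMS transient
corrupted = clean + artifact

# Stand-in removal: subtract an exponential of known time constant,
# with amplitude estimated from the first corrupted sample
tau = 0.05
amp = corrupted[0] - clean.mean()
recovered = corrupted - amp * np.exp(-t / tau)

# Normalized fit measure: 1.0 means perfect recovery of the clean signal
fit = 1 - np.sum((recovered - clean) ** 2) / np.sum((clean - clean.mean()) ** 2)
print(round(fit, 3))
```

Because the clean signal is known by construction, the fit score compares artifact-removal methods objectively, which is the point of the proposed test.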
Zhong Wu
2017-04-01
Since AASHTO released the Mechanistic-Empirical Pavement Design Guide (MEPDG for public review in 2004, many highway research agencies have performed sensitivity analyses using the prototype MEPDG design software. The information provided by the sensitivity analysis is essential for design engineers to better understand the MEPDG design models and to identify important input parameters for pavement design. In literature, different studies have been carried out based on either local or global sensitivity analysis methods, and sensitivity indices have been proposed for ranking the importance of the input parameters. In this paper, a regional sensitivity analysis method, Monte Carlo filtering (MCF, is presented. The MCF method maintains many advantages of the global sensitivity analysis, while focusing on the regional sensitivity of the MEPDG model near the design criteria rather than the entire problem domain. It is shown that the information obtained from the MCF method is more helpful and accurate in guiding design engineers in pavement design practices. To demonstrate the proposed regional sensitivity method, a typical three-layer flexible pavement structure was analyzed at input level 3. A detailed procedure to generate Monte Carlo runs using the AASHTOWare Pavement ME Design software was provided. The results in the example show that the sensitivity ranking of the input parameters in this study reasonably matches with that in a previous study under a global sensitivity analysis. Based on the analysis results, the strengths, practical issues, and applications of the MCF method were further discussed.
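The Monte Carlo filtering step can be sketched generically (not with the AASHTOWare runs): sample the inputs, split the runs into "pass"/"fail" against a design criterion, and rank each input by the maximum distance between its conditional distributions (a Kolmogorov-Smirnov-type statistic). The toy response model and criterion are assumptions for illustration.

```python
import numpy as np

def ks_distance(a, b):
    """Max absolute difference between the empirical CDFs of two samples."""
    grid = np.sort(np.concatenate([a, b]))
    cdf = lambda s, g: np.searchsorted(np.sort(s), g, side="right") / s.size
    return np.max(np.abs(cdf(a, grid) - cdf(b, grid)))

rng = np.random.default_rng(4)
n = 5000
x1 = rng.uniform(0, 1, n)   # influential input
x2 = rng.uniform(0, 1, n)   # nearly irrelevant input
rutting = 2.0 * x1 + 0.1 * x2 + rng.normal(0, 0.05, n)  # toy pavement response
behavioral = rutting < 1.0                               # design criterion
for name, x in [("x1", x1), ("x2", x2)]:
    d = ks_distance(x[behavioral], x[~behavioral])
    print(name, round(d, 3))  # x1 separates the two groups far more than x2
```

A large distance means the pass/fail split is sensitive to that input near the criterion, which is exactly the regional sensitivity the MCF method targets.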
Tessaro, Ana Paula G.; Vicente, Roberto
2015-01-01
The acceptance of radioactive waste in a repository depends primarily on knowledge of the radioisotopic inventory of the material, according to regulations established by regulatory agencies. The primary characterization is also a fundamental step in determining further actions in the management of radioactive wastes. The aim of this work is to report the development of non-destructive methods for the primary characterization of filter cartridges discarded as radioactive waste. The filter cartridges are used in the water polishing system of the IEA-R1 reactor, retaining the particles in suspension in the reactor cooling water. The IEA-R1 is a pool-type reactor with a thermal power of 5 MW, moderated and cooled with light water. It is located at the Energy and Nuclear Research Institute (IPEN-CNEN) in São Paulo, Brazil. The cartridge filters become radioactive waste when they are saturated and no longer meet the flow required for proper operation of the water polishing system. The activities of gamma emitters present in the filters are determined using gamma spectrometry, dose rate measurements, and the point kernel method to correlate the results of both measurements. One alternative method for primary characterization is radiochemical analysis of slices taken from each filter, which has the disadvantage of higher personnel exposure and contamination risks. Another is calibration of the measurement geometry of a gamma spectrometer, which requires the production of a standard filter. Both methods are necessary but cannot be used in the operational routine of radioactive waste management owing to cost and complexity. The method described here can be used to determine routinely the radioactive inventory of these filters and other radioactive wastes, avoiding the need for destructive radiochemical analysis or for calibrating the measurement geometry. (author)
Meleiro L.A.C.
2000-01-01
Most advanced computer-aided control applications rely on good dynamic process models. The performance of the control system depends on the accuracy of the model used. Typically, such models are developed by conducting off-line identification experiments on the process. These identification experiments often result in input-output data with a small output signal-to-noise ratio, and using these data yields inaccurate model parameter estimates [1]. In this work, a multivariable adaptive self-tuning controller (STC) was developed for a biotechnological process application. Owing to the difficulties involved in the measurements, or to the excessive number of variables normally found in industrial processes, it is proposed to develop "soft sensors" based fundamentally on artificial neural networks (ANN). A second approach employed hybrid models, resulting from the association of deterministic models (which incorporate the available prior knowledge about the process being modeled) with artificial neural networks. In this case, kinetic parameters, which are very hard to determine accurately in real time during industrial plant operation, were obtained using ANN predictions. These methods are especially suitable for the identification of time-varying and nonlinear models. This advanced control strategy was applied to an industrial-scale fermentation process producing ethyl alcohol (ethanol). The reaction rates considered for substrate consumption and for cell and ethanol production were validated with industrial data for typical operating conditions. The results obtained show that the procedure proposed in this work has great potential for application.
Valuing national effects of digital health investments: an applied method.
Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad
2015-01-01
This paper describes an approach that has been applied to value national outcomes of investments in digital health by federal, provincial, and territorial governments, clinicians, and healthcare organizations. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.
Dose rate reduction method for NMCA applied BWR plants
Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas
2012-09-01
BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion products into BWR recirculation piping, which is known to be a significant contributor to the dose rate received by workers during refueling outages. In order to reduce the radiation exposure of workers during the outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem™) to enhance the hydrogen injection effect and suppress SCC. After NMCA, especially OLNC (On-Line NobleChem™), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, lower equilibrium level. These dose rate reduction trends suggest that a further dose reduction might be obtained by the combination of Pt and Zn injection. Laboratory experiments and in-plant tests were therefore carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. Firstly, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared and some of them were OLNC-treated in the test loop before the Co deposition test. Water chemistry conditions to simulate HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted up to 1500 hours at 553 K. Test
A Method of Effective Quarry Water Purifying Using Artificial Filtering Arrays
Tyulenev, M.; Garina, E.; Khoreshok, A.; Litvin, O.; Litvin, Y.; Maliukhina, E.
2017-01-01
The development of open-pit mining in the large coal basins of Russia and other countries increases its negative impact on the environment. Along with land damage and air pollution by dust and blasting combustion gases, coal pits have a significant negative impact on water resources. Polluted quarry water worsens the ecological situation over a much larger area than that covered by air pollution and land damage. This significantly worsens the living conditions of people in cities and towns located near the coal pits, and complicates the subsequent restoration of the environment, irreversibly destroying nature. The purification of quarry wastewater is therefore becoming an important research matter for scholars of technical colleges and universities in regions with developing open-pit mining. This paper describes a method of determining the basic parameters of the artificial filtering arrays formed in coal pits of Kuzbass (Western Siberia, Russia), and gives recommendations on its application.
A Decoupling Control Method for Shunt Hybrid Active Power Filter Based on Generalized Inverse System
Xin Li
2017-01-01
In this paper, a novel decoupling control method based on a generalized inverse system is presented to solve the problem of the SHAPF (Shunt Hybrid Active Power Filter) possessing the characteristics of 2-input-2-output nonlinearity and strong coupling. Based on an analysis of the operating principle, the mathematical model of the SHAPF is first built and verified to be invertible using the interactor algorithm; the generalized inverse system of the SHAPF is then obtained and connected in series with the original system, so that the composite system is decoupled under generalized inverse system theory. A PI additional controller is finally designed to control the decoupled first-order pseudolinear system, making it possible to adjust the performance of each subsystem. MATLAB simulation results show that the presented generalized inverse system strategy can realise the dynamic decoupling of the SHAPF, and that the control system has good dynamic and static performance.
A rapid and economic in-house DNA purification method using glass syringe filters.
Yun-Cheol Kim
BACKGROUND: Purity, yield, speed and cost are important considerations in plasmid purification, but it is difficult to achieve all of these at the same time. Currently, there are many protocols and kits for DNA purification; however, none maximizes all four considerations. METHODOLOGY/PRINCIPAL FINDINGS: We now describe a fast, efficient and economic in-house protocol for plasmid preparation using glass syringe filters. Plasmid yield and quality, as determined by enzyme digestion and transfection efficiency, were equivalent to those of the expensive commercial kits. Importantly, the time required for purification was much less than that required using a commercial kit. CONCLUSIONS/SIGNIFICANCE: This method provides DNA yield and quality similar to that obtained with commercial kits, but is more rapid and less costly.
Lay-Ekuakille, Aimé; Pariset, Carlo; Trotta, Amerigo
2010-01-01
The FDM (filter diagonalization method), an interesting technique used in nuclear magnetic resonance data processing to tackle FFT (fast Fourier transform) limitations, can be applied by considering pipelines, especially complex configurations, as a vascular apparatus with arteries, veins, capillaries, etc. Thrombosis, which might occur in humans, can be considered the analogue of a leak in the complex pipeline, the human vascular apparatus. The choice of eigenvalues in FDM or in spectrum-based techniques is a key issue in recovering the solution of the main equation (for FDM) or the frequency-domain transformation (for FFT), and hence determines the accuracy of leak detection in pipelines. This paper deals with the possibility of improving the leak detection accuracy of the FDM technique by means of a robust algorithm that addresses the eigenvalue problem, making it less experimental and more analytical through Tikhonov-based regularization techniques. The paper starts from the results of previous experimental procedures carried out by the authors.
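Tikhonov regularization, which the abstract invokes to stabilize the eigenvalue problem, replaces a direct inversion with a damped least-squares solve. A generic sketch; the test system is an arbitrary ill-conditioned matrix, not pipeline or NMR data:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam ||x||^2 via the regularized normal
    equations (A^T A + lam I) x = A^T b, the classical Tikhonov inverse."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# A moderately ill-conditioned Vandermonde system with small data noise,
# standing in for the ill-posed inversion behind the eigenvalue recovery.
A = np.vander(np.linspace(0.0, 1.0, 8), 6, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
b = A @ x_true + 1e-3 * np.random.default_rng(1).standard_normal(8)

x_reg = tikhonov_solve(A, b, lam=1e-6)
residual = np.linalg.norm(A @ x_reg - b)   # stays near the noise level
```

The damping parameter `lam` trades fidelity against stability; choosing it (e.g., by the discrepancy principle) is the analytical step that replaces the "experimental" eigenvalue selection the abstract criticizes.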
Advances in analytical methods and occurrence of organic UV-filters in the environment — A review
Ramos, Sara; Homem, Vera; Alves, Arminda; Santos, Lúcia
2015-01-01
UV-filters are a group of compounds designed mainly to protect skin against UVA and UVB radiation, but they are also included in plastics, furniture, etc., to protect products from light damage. Their massive use in sunscreens for skin protection has been increasing due to the awareness of the chronic and acute effects of UV radiation. Some organic UV-filters have raised significant concerns in the past few years for their continuous usage, persistent input and potential threat to ecological environment and human health. UV-filters end up in wastewater and because wastewater treatment plants are not efficient in removing them, lipophilic compounds tend to sorb onto sludge and hydrophilics end up in river water, contaminating the existing biota. To better understand the risk associated with UV-filters in the environment a thorough review regarding their physicochemical properties, toxicity and environmental degradation, analytical methods and their occurrence was conducted. Higher UV-filter concentrations were found in rivers, reaching 0.3 mg/L for the most studied family, the benzophenone derivatives. Concentrations in the ng to μg/L range were also detected for the p-aminobenzoic acid, cinnamate, crylene and benzoyl methane derivatives in lake and sea water. Although at lower levels (few ng/L), UV-filters were also found in tap and groundwater. Swimming pool water is also a sink for UV-filters and its chlorine by-products, at the μg/L range, highlighting the benzophenone and benzimidazole derivatives. Soils and sediments are not frequently studied, but concentrations in the μg/L range have already been found especially for the benzophenone and crylene derivatives. Aquatic biota is frequently studied and UV-filters are found in the ng/g-dw range with higher values for fish and mussels. It has been concluded that more information regarding UV-filter degradation studies both in water and sediments is necessary and environmental occurrences should be monitored more
Influence of Post-treatment Methods on Pressure Change of Filter Bag
Yihua Yin
2016-01-01
PPS needle-punched non-woven filters with different post-treatments were studied with a filter testing system. The pressure drop was measured at various filtration velocities, dust deposition times and temperatures, and the effect of dust cleaning on the pressure of the filter bag was measured. The results showed that the post-treatments transformed the surfaces of the filters, and dust formation differed greatly among them. Excessively high filtration velocity decreased the peak pressure in the dust-cleaning process. The pressure of the filter bag increased as the dust layer thickened. A higher filtration temperature raised the peak pressure of the filter bag but decreased its rate of rise.
Method and apparatus for a combination moving bed thermal treatment reactor and moving bed filter
Badger, Phillip C.; Dunn, Jr., Kenneth J.
2015-09-01
A moving bed gasification/thermal treatment reactor includes a geometry in which moving bed reactor particles serve as both a moving bed filter and a heat carrier to provide thermal energy for thermal treatment reactions, such that the moving bed filter and the heat carrier are one and the same to remove solid particulates or droplets generated by thermal treatment processes or injected into the moving bed filter from other sources.
Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed
2018-02-01
Five simple spectrophotometric methods were developed for the determination of simeprevir in the presence of its oxidative degradation product, namely ratio difference, mean centering, derivative ratio using Savitzky-Golay filters, second derivative, and continuous wavelet transform. These methods are linear in the range of 2.5-40 μg/mL and were validated according to the ICH guidelines. The obtained results of accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. Furthermore, these methods were statistically comparable to an RP-HPLC method, and good results were obtained, so they can be used for the routine analysis of simeprevir in quality-control laboratories.
Muon radiography method for fundamental and applied research
Alexandrov, A. B.; Vladymyrov, M. S.; Galkin, V. I.; Goncharova, L. A.; Grachev, V. M.; Vasina, S. G.; Konovalova, N. S.; Malovichko, A. A.; Managadze, A. K.; Okat'eva, N. M.; Polukhina, N. G.; Roganova, T. M.; Starkov, N. I.; Tioukov, V. E.; Chernyavsky, M. M.; Shchedrina, T. V.
2017-12-01
This paper focuses on the basic principles of the muon radiography method, reviews the major muon radiography experiments, and presents the first results in Russia obtained by the authors using this method based on emulsion track detectors.
Methodical Aspects of Applying Strategy Map in an Organization
Piotr Markiewicz
2013-01-01
One of the important aspects of strategic management is the instrumental aspect, embodied in a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, together with its instruments in the form of methods and techniques. A commonly used method in strategy implementation and progress measurement is the Balanced Scorecard (BSC). The method was c...
The research of radar target tracking observed information linear filter method
Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen
2018-05-01
To address the low precision, or even divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed distance and angle data separately, and the Kalman filter is then applied to the linearized data. After the filtered data are obtained, a mapping operation provides the a posteriori estimate of the target state. A large number of simulation results show that this algorithm solves the above problems effectively, and its performance is better than that of traditional filtering algorithms for nonlinear dynamic systems.
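The scheme, as described, amounts to converting each polar radar pair (range, bearing) into a Cartesian coordinate and then running an ordinary linear Kalman filter on the linearized data. A one-axis, constant-velocity sketch; the trajectory, noise levels, and tuning are illustrative assumptions:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state model
H = np.array([[1.0, 0.0]])                # position is observed directly
Q = 0.01 * np.eye(2)                      # process noise
R = np.array([[2.25]])                    # variance of linearized measurement

def kf_track(z):
    """Standard linear Kalman filter on one Cartesian axis, applied after
    the polar pairs (r, theta) are linearized to x = r*cos(theta)."""
    x = np.array([z[0], 0.0])
    P = 10.0 * np.eye(2)
    out = []
    for zk in z:
        x = F @ x                          # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ (np.array([zk]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
r = np.linspace(100.0, 50.0, 60)           # radar range samples
theta = np.linspace(0.5, 0.7, 60)          # radar bearing samples
x_truth = r * np.cos(theta)                # true Cartesian x coordinate
x_meas = x_truth + rng.normal(0.0, 1.5, 60)  # linearized noisy observable
x_filt = kf_track(x_meas)

err_raw = np.std(x_meas[10:] - x_truth[10:])
err_kf = np.std(x_filt[10:] - x_truth[10:])  # clearly smaller after settling
```

Because the filter runs on the linearized observable, no Jacobian is needed in the update, which is the practical appeal of linearizing the data rather than the filter.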
Celaya, Jose R.; Saxena, Abhinav; Goebel, Kai
2012-01-01
This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies based on our experience with Kalman Filters when applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process and how it relates to uncertainty representation, management, and the role of prognostics in decision-making. A distinction between the interpretations of estimated remaining useful life probability density function and the true remaining useful life probability density function is explained and a cautionary argument is provided against mixing interpretations for the two while considering prognostics in making critical decisions.
Xixiang Liu
2014-01-01
In the initial alignment process of a strapdown inertial navigation system (SINS), large initial misalignment angles introduce a nonlinear problem, which causes alignment failure when the classical linear error model and standard Kalman filter are used. In this paper, the problem of large misalignment angles in SINS initial alignment is investigated, and the key reason for alignment failure is identified: the state covariance from the Kalman filter cannot represent the true covariance during the steady filtering process. Based on this analysis, an alignment method for SINS is designed that handles large initial misalignment angles by multiresetting the state covariance matrix of the Kalman filter. The classical linear error model and standard Kalman filter are retained, but the state covariance matrix is multireset before the steady process until the large misalignment angles are reduced to small ones. The performance of the proposed method is evaluated by simulation and a car test, and the results indicate that the proposed method can accomplish initial alignment with large misalignment angles effectively, with an alignment accuracy as precise as that of alignment with small misalignment angles.
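The multireset idea can be caricatured with a scalar toy: a constant misalignment angle observed in noise by a standard Kalman filter, where the covariance (and hence the gain) is periodically restored to its initial value until the estimated angle is small. This is a structural sketch under invented numbers, not the SINS error model:

```python
import numpy as np

def align_with_multireset(initial_angle, steps=300, reset_period=50,
                          small_angle=0.05, seed=3):
    """Scalar stand-in for the alignment filter (H = 1, Q = 0). Every
    `reset_period` steps, if the estimated angle is still large, the
    estimated correction is applied and the state covariance P is reset
    to its initial value, so the filter gain re-opens instead of
    collapsing during the steady filtering process."""
    rng = np.random.default_rng(seed)
    R, P0 = 0.04, 4.0
    angle = float(initial_angle)   # true residual misalignment (hidden)
    x, P = 0.0, P0                 # filter estimate and covariance
    for k in range(steps):
        z = angle + rng.normal(0.0, np.sqrt(R))
        K = P / (P + R)            # scalar Kalman gain
        x += K * (z - x)
        P *= (1.0 - K)
        if (k + 1) % reset_period == 0 and abs(x) >= small_angle:
            angle -= x             # apply the estimated correction
            x, P = 0.0, P0         # multireset: restore the covariance
    return abs(angle)

residual = align_with_multireset(2.0)   # large initial misalignment (toy)
```

Without the reset, P (and the gain) would have shrunk long before the large angle was corrected; with it, later measurements still carry weight.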
Classical and modular methods applied to Diophantine equations
Dahmen, S.R.
2008-01-01
Deep methods from the theory of elliptic curves and modular forms have been used to prove Fermat's last theorem and solve other Diophantine equations. These so-called modular methods can often benefit from information obtained by other, classical, methods from number theory; and vice versa. In our
Hemming, S.; Kempkes, F.; Braak, van der N.; Dueck, T.A.; Marissen, A.
2006-01-01
Wageningen UR investigated the potential of several NIR-filtering methods to be applied in Dutch horticulture. NIR-filtering can be done by the greenhouse covering or by internal or external moveable screens. The objective of this investigation was to quantify the effect of different NIR-filtering
The pseudo-harmonics method applied to depletion calculation
Silva, F.C. da; Amaral, J.A.C.; Thome, Z.D.
1989-01-01
In this paper, a new method for performing depletion calculations, based on the pseudo-harmonics perturbation method, was developed. The fuel burnup was considered as a global perturbation, and the multigroup diffusion equations were rewritten in such a way as to treat the soluble boron concentration as the eigenvalue. By doing this, the critical boron concentration can be obtained by a perturbation method. A test of the new method was performed for an H2O-cooled, D2O-moderated reactor. Comparison with direct calculation showed that this method is very accurate and efficient. (author)
Garcia, S.; Perez, R. M.
2014-01-01
A study on the comparison and evaluation of a miniaturized extraction method for the determination of selected PACs in sample filters is presented. The main objective was the optimization and development of simple, rapid and low cost methods, minimizing the use of extracting solvent volume. The work also includes a study on the intermediate precision. (Author)
Ahunbay, Ergun E; Ates, O; Li, X A
2016-08-01
In a situation where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, the SAM method leads to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted plans (PSPs) obtained by a so-called "warm start" optimization (starting optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP, or by linearly interpolating the MLC positions and monitor units of the closest PSPs, for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. The two-step interpolation method can account for the adverse dose effects of FFF beams, while SAM corrects for the target deformation. The plan interpolation method is effective in diminishing the unflat-beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5-10 min). The new two-step method plus SAM can address both the translational effects of FFF beams and target deformation, and can be executed in full automation except for the delineation of the target contour required by the SAM process.
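The online step is a plain linear interpolation, which works only because the warm-start PSPs share segment count and shape. A sketch of that step; the data layout, leaf values, and shift grid are illustrative assumptions, not clinical plan data:

```python
import numpy as np

def interpolate_plan(shifts, mlc_by_shift, mu_by_shift, shift_today):
    """Pick the two preshifted plans (PSPs) bracketing today's measured
    isocenter shift and linearly interpolate their MLC leaf positions and
    monitor units. Valid because warm-start optimization keeps the same
    number of segments with very similar shapes across all PSPs."""
    shifts = np.asarray(shifts, dtype=float)
    i = int(np.clip(np.searchsorted(shifts, shift_today), 1, len(shifts) - 1))
    w = (shift_today - shifts[i - 1]) / (shifts[i] - shifts[i - 1])
    mlc = (1.0 - w) * mlc_by_shift[i - 1] + w * mlc_by_shift[i]
    mu = (1.0 - w) * mu_by_shift[i - 1] + w * mu_by_shift[i]
    return mlc, mu

# PSPs computed offline at -10, 0, +10 mm shifts (toy bank of 2 leaves).
shifts = [-10.0, 0.0, 10.0]
mlc_by_shift = np.array([[[-52.0, 48.0]], [[-50.0, 50.0]], [[-48.0, 52.0]]])
mu_by_shift = np.array([100.0, 110.0, 120.0])

# Today's imaging shows a +5 mm shift: interpolate halfway.
mlc, mu = interpolate_plan(shifts, mlc_by_shift, mu_by_shift, 5.0)
```

No optimization or dose calculation is invoked online, which is why the step is effectively instantaneous.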
Drews, Martin; Lauritzen, Bent; Madsen, Henrik
2005-01-01
A Kalman filter method is discussed for on-line estimation of radioactive release and atmospheric dispersion from a time series of off-site radiation monitoring data. The method is based on a state space approach, where a stochastic system equation describes the dynamics of the plume model parameters, and the observables are linked to the state variables through a static measurement equation. The method is analysed for three simple state space models using experimental data obtained at a nuclear research reactor. Compared to direct measurements of the atmospheric dispersion, the Kalman filter estimates are found to agree well with the measured parameters, provided that the radiation measurements are spread out in the cross-wind direction. For less optimal detector placement it proves difficult to distinguish variations in the source term and plume height; yet the Kalman filter yields consistent
Li N.
2017-01-01
Affected by unstable pulse radiation and pulsar directional errors, the statistical characteristics of the pulsar measurement noise may vary slowly with time and cannot be accurately determined, which causes the filtering accuracy of the extended Kalman filter (EKF) in a pulsar navigation positioning system to decline sharply or even diverge. To solve this problem, an adaptive extended Kalman filtering algorithm based on empirical mode decomposition (EMD) is proposed. In this method, the high-frequency noise is separated from the pulsar measurement information by EMD, and the noise variance is estimated to update the parameters of the EKF. The simulation results demonstrate that, compared with the conventional EKF, the proposed method can adaptively track changes in the measurement noise and keeps high estimation accuracy under unknown measurement noise, simultaneously improving the positioning accuracy of pulsar navigation.
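The adaptive step amounts to re-estimating the measurement-noise variance online and feeding it back into the EKF's R matrix. The sketch below substitutes a simple first-difference detail for the paper's EMD separation, an assumption made to keep the example library-free; the resulting sigma-hat would be used as R = sigma_hat**2 in the EKF update:

```python
import numpy as np

def estimate_noise_std(z):
    """Robust estimate of the measurement-noise standard deviation from
    the high-frequency content of the signal. First differencing stands
    in for the EMD high-frequency IMFs; for white noise, std(diff) =
    sigma * sqrt(2), and the scaled median absolute deviation (x1.4826)
    gives a robust std estimate insensitive to the slow signal trend."""
    d = np.diff(z)
    return 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2.0)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 2000)
ranging = np.sin(t)                            # slowly varying observable (toy)
z_low = ranging + rng.normal(0.0, 0.1, t.size)
z_high = ranging + rng.normal(0.0, 0.5, t.size)

sigma_low = estimate_noise_std(z_low)    # tracks the true sigma = 0.1
sigma_high = estimate_noise_std(z_high)  # tracks the true sigma = 0.5
```

Because the estimate is robust and trend-insensitive, it can be recomputed over a sliding window to follow the slow drift in noise statistics the abstract describes.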
Pan, L.; Lewicki, J.L.; Oldenburg, C.M.; Fischer, M.L.
2010-02-28
We use process-based modeling techniques to characterize the temporal features of natural, biologically controlled surface CO2 fluxes and the relationships between the assimilation and respiration fluxes. Based on these analyses, we develop a signal-enhancing technique that combines a novel time-window splitting scheme, a simple median filtering, and an appropriate scaling method to detect potential signals of CO2 leakage from geologic carbon sequestration sites within datasets of net near-surface CO2 flux measurements. The technique can be applied directly to measured data and does not require subjective gap-filling or data-smoothing preprocessing. Preliminary application of the new method to flux measurements from a shallow CO2 release experiment appears promising for detecting a leakage signal relative to background variability. The leakage index of ?2 was found to span the range of biological variability for various ecosystems, as determined by observing CO2 flux data at various control sites over a number of years.
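A minimal version of the median-filter-plus-scaling idea: smooth the flux series with a running median and express it in units of the robust spread of a leak-free baseline, so a leak appears as an index excursion beyond the background's own range. The synthetic fluxes, window length, and index definition below are illustrative stand-ins for the paper's window-splitting scheme:

```python
import numpy as np

def leakage_index(flux, baseline, window=11):
    """Median-smoothed flux, centered on the leak-free baseline median and
    scaled by the baseline interquartile range. Values well outside the
    baseline's own index range suggest a leakage signal on top of the
    biological variability."""
    def med_smooth(x):
        pad = window // 2
        xp = np.pad(x, pad, mode="edge")
        return np.array([np.median(xp[i:i + window]) for i in range(x.size)])
    center = np.median(baseline)
    q75, q25 = np.percentile(baseline, [75, 25])
    scale = max(q75 - q25, 1e-9)
    return (med_smooth(flux) - center) / scale

rng = np.random.default_rng(5)
n = 400
biology = 2.0 + np.sin(np.linspace(0.0, 8.0 * np.pi, n))  # diurnal-like flux
background = biology + 0.3 * rng.standard_normal(n)       # leak-free series
leak = background + np.where(np.arange(n) > 300, 4.0, 0.0)  # step leak onset

idx_bg = leakage_index(background, background)
idx_leak = leakage_index(leak, background)
# Post-onset, the index sits clear of the background's entire range.
```

The median filter suppresses spiky measurement noise without the subjective gap-filling the abstract says the method avoids.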
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-08
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a diagnostic Bayesian network (DBN) is presented in this paper. The ASTF is proposed to obtain weak fault features under background noise; it is based on statistical hypothesis testing in the frequency domain, evaluating the similarity between a reference signal (noise signal) and the original signal and removing the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined. In addition, a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to evaluate the sensitivity of symptom parameters (SPs) for condition diagnosis, so that SPs with high sensitivity can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian belief network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method.
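The frequency-domain test at the heart of the ASTF can be caricatured with a crude magnitude comparison against a reference noise spectrum: bins indistinguishable from the reference are removed, and what survives is the weak periodic fault feature. The fixed threshold factor below is a stand-in for the paper's hypothesis test with PSO-tuned significance level:

```python
import numpy as np

def spectral_test_filter(signal, noise_ref, factor=3.0):
    """Zero out FFT bins whose magnitude does not clearly exceed the
    reference noise spectrum (a crude stand-in for the ASTF's frequency-
    domain hypothesis test); the surviving bins are the components
    unlikely to be noise, i.e. the weak fault features."""
    S = np.fft.rfft(signal)
    N = np.abs(np.fft.rfft(noise_ref))
    keep = np.abs(S) > factor * N
    return np.fft.irfft(S * keep, n=signal.size)

rng = np.random.default_rng(11)
n = 4096
k = np.arange(n)
fault = 0.5 * np.sin(2.0 * np.pi * 37.0 * k / n)   # weak fault tone at bin 37
cleaned = spectral_test_filter(fault + rng.standard_normal(n),
                               rng.standard_normal(n))
peak_bin = int(np.argmax(np.abs(np.fft.rfft(cleaned))))  # fault tone dominates
```

In the paper the per-bin decision is a proper statistical test whose significance level α is optimized by PSO, rather than the fixed factor used here.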
Waste classification and methods applied to specific disposal sites
Rogers, V.C.
1979-01-01
An adequate definition of the classes of radioactive wastes is necessary for regulating the disposal of radioactive wastes. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with the methodology in order to gain insights into the classification of radioactive wastes. Also presented is an analysis of ocean dumping as it applies to waste classification. 5 refs
Fehl, D. L.; Chandler, G. A.; Stygar, W. A.; Olson, R. E.; Ruiz, C. L.; Hohlfelder, J. J.; Mix, L. P.; Biggs, F.; Berninger, M.; Frederickson, P. O.; Frederickson, R.
2010-12-01
An algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by a five-channel, filtered x-ray-detector array (XRD) is described in detail and characterized. This diagnostic is a broad-channel spectrometer, used primarily to measure time-dependent soft x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA), and serves as both a plasma probe and a gauge of accelerator performance. The unfold method, suitable for online analysis, arises naturally from general assumptions about the x-ray source and spectral properties of the channel responses; a priori constraints control the ill-posed nature of the inversion. The unfolded spectrum is not assumed to be Planckian. This study is divided into two consecutive papers. This paper considers three major issues: (a) Formulation of the unfold method.—The mathematical background, assumptions, and procedures leading to the algorithm are described: the spectral reconstruction S_unfold(E,t)—five histogram x-ray bins j over the x-ray interval 137 ≤ E ≤ 2300 eV at each time step t—depends on the shape and overlap of the calibrated channel responses and on the maximum electrical power delivered to the plasma. The x-ray flux F_unfold is estimated as ∫S_unfold(E,t)dE. (b) Validation with simulations.—Tests of the unfold algorithm with known static and time-varying spectra are described. These spectra included—but were not limited to—Planckian spectra S_bb(E,T) (25 ≤ T ≤ 250 eV), from which noise-free channel data were simulated and unfolded. For Planckian simulations with 125 ≤ T ≤ 250 eV and typical responses, the binwise unfold values S_j and the corresponding binwise averages ⟨S_bb⟩_j agreed to ∼20%, except where S_bb ≪ max{S_bb}. Occasionally, unfold values S_j ≲ 0 (artifacts) were encountered. The algorithm recovered ≳90% of the x-ray flux over the wider range, 75 ≤ T ≤ 250 eV. For lower T, the
D. L. Fehl
2010-12-01
An algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by a five-channel, filtered x-ray-detector array (XRD) is described in detail and characterized. This diagnostic is a broad-channel spectrometer, used primarily to measure time-dependent soft x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA), and serves as both a plasma probe and a gauge of accelerator performance. The unfold method, suitable for online analysis, arises naturally from general assumptions about the x-ray source and spectral properties of the channel responses; a priori constraints control the ill-posed nature of the inversion. The unfolded spectrum is not assumed to be Planckian. This study is divided into two consecutive papers. This paper considers three major issues: (a) Formulation of the unfold method.—The mathematical background, assumptions, and procedures leading to the algorithm are described: the spectral reconstruction S_unfold(E,t)—five histogram x-ray bins j over the x-ray interval 137 ≤ E ≤ 2300 eV at each time step t—depends on the shape and overlap of the calibrated channel responses and on the maximum electrical power delivered to the plasma. The x-ray flux F_unfold is estimated as ∫S_unfold(E,t)dE. (b) Validation with simulations.—Tests of the unfold algorithm with known static and time-varying spectra are described. These spectra included—but were not limited to—Planckian spectra S_bb(E,T) (25 ≤ T ≤ 250 eV), from which noise-free channel data were simulated and unfolded. For Planckian simulations with 125 ≤ T ≤ 250 eV and typical responses, the binwise unfold values S_j and the corresponding binwise averages ⟨S_bb⟩_j agreed to ∼20%, except where S_bb ≪ max{S_bb}. Occasionally, unfold values S_j ≲ 0 (artifacts) were encountered. The algorithm recovered ≳90% of the x
nuclear and atomic methods applied in the determination of some
NAA is a quantitative and qualitative method for the precise determination of a number of major, minor and trace elements in different types of geological, environmental and biological samples. It is based on nuclear reactions between neutrons and the target nuclei of a sample material. It is a useful method for the simultaneous.
Instructions for applying inverse method for reactivity measurement
Milosevic, M.
1988-11-01
This report is a brief description of the completed method for reactivity measurement. It contains a description of the experimental procedure, the needed instrumentation, and the computer code IM for determining reactivity. The objective of this instruction manual is to enable experiments and reactivity measurements on any critical system according to the methods adopted at the RB reactor
Wu, Weimin; Liu, Yuan; He, Yuanbin
2017-01-01
Grid-tied voltage source inverters using LCL filter have been widely adopted in distributed power generation systems (DPGSs). As high-order LCL filters contain multiple resonant frequencies, switching harmonics generated by the inverter and current harmonics generated by the active/passive loads...... innovative damping methods have been proposed. A comprehensive overview on those contributions and their classification on the inverter- and grid-side damping measures are presented. Based on the concept of the impedance-based stability analysis, all damping methods can ensure the system stability...
Grigoriev, Yu A.; Proletarskaya, V. A.; Ermakov, E. Yu; Ermakov, O. Yu
2017-10-01
A new method was developed with a cascading Bloom filter (CBF) for executing SQL queries in the Apache Spark parallel computing environment. It includes the representation of the original query in the form of several subqueries, the development of a connection graph and the transformation of subqueries, the definition of connections where it is necessary to use Bloom filters, and the representation of the graph in terms of Spark. Using query Q3 of the TPC-H benchmark as an example, full-scale experiments were carried out that confirmed the effectiveness of the developed method.
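The core idea of using a Bloom filter at a join can be sketched in pure Python: build a filter on the small side's join keys, then prune the large side before the join. This is only an illustration of the data structure (sizes, keys, and the cascading/Spark machinery of the paper are not modeled; all names here are invented):

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=8192, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        # derive k bit positions from independent 4-byte slices of a SHA-256 digest
        h = hashlib.sha256(str(key).encode()).digest()
        for i in range(self.k):
            yield int.from_bytes(h[4 * i:4 * i + 4], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # no false negatives; false positives possible with small probability
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

# build the filter on the small side's join keys, then prune the big side
small_keys = {"c17", "c42", "c99"}
bf = BloomFilter()
for key in small_keys:
    bf.add(key)

big_side = [("c%d" % i, i) for i in range(1000)]
pruned = [row for row in big_side if bf.might_contain(row[0])]
```

In a Spark setting the serialized bit array would be broadcast to executors so each partition can prune rows before the shuffle.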
Amanda L. Fox; Dean E. Eisenhauer; Michael G. Dosskey
2005-01-01
Vegetated filters (buffers) are used to intercept overland runoff and reduce sediment and other contaminant loads to streams (Dosskey, 2001). Filters function by reducing runoff velocity and volume, thus enhancing sedimentation and infiltration. Infiltration is the main mechanism for soluble contaminant removal, but it also plays a role in suspended particle removal....
Comparative Study of Different Methods for Soot Sensing and Filter Monitoring in Diesel Exhausts
Feulner, Markus; Hagen, Gunter; Hottner, Kathrin; Redel, Sabrina; Müller, Andreas; Moos, Ralf
2017-01-01
Due to increasingly tighter emission limits for diesel and gasoline engines, especially concerning particulate matter emissions, particulate filters are becoming indispensable devices for exhaust gas after treatment. Thereby, for an efficient engine and filter control strategy and a cost-efficient filter design, reliable technologies to determine the soot load of the filters and to measure particulate matter concentrations in the exhaust gas during vehicle operation are highly needed. In this study, different approaches for soot sensing are compared. Measurements were conducted on a dynamometer diesel engine test bench with a diesel particulate filter (DPF). The DPF was monitored by a relatively new microwave-based approach. Simultaneously, a resistive type soot sensor and a Pegasor soot sensing device as a reference system measured the soot concentration exhaust upstream of the DPF. By changing engine parameters, different engine out soot emission rates were set. It was found that the microwave-based signal may not only indicate directly the filter loading, but by a time derivative, the engine out soot emission rate can be deduced. Furthermore, by integrating the measured particulate mass in the exhaust, the soot load of the filter can be determined. In summary, all systems coincide well within certain boundaries and the filter itself can act as a soot sensor. PMID:28218700
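The load/rate relationship used above (integrating the measured particulate mass gives the filter load; differentiating the load-related signal gives the emission rate) is plain calculus on the measured time series. A toy numeric check with an invented emission profile:

```python
import numpy as np

t = np.linspace(0, 600, 601)                      # time, s (1 s steps)
rate = 0.02 + 0.01 * np.sin(2 * np.pi * t / 300)  # engine-out soot rate, g/s (invented)

load = np.cumsum(rate) * (t[1] - t[0])            # filter soot load by integration, g
rate_back = np.gradient(load, t)                  # differentiating recovers the rate
```

The recovered rate matches the original profile except for a half-sample smoothing at the interior points and one-sided differences at the ends.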
Comparative Study of Different Methods for Soot Sensing and Filter Monitoring in Diesel Exhausts.
Feulner, Markus; Hagen, Gunter; Hottner, Kathrin; Redel, Sabrina; Müller, Andreas; Moos, Ralf
2017-02-18
Due to increasingly tighter emission limits for diesel and gasoline engines, especially concerning particulate matter emissions, particulate filters are becoming indispensable devices for exhaust gas after treatment. Thereby, for an efficient engine and filter control strategy and a cost-efficient filter design, reliable technologies to determine the soot load of the filters and to measure particulate matter concentrations in the exhaust gas during vehicle operation are highly needed. In this study, different approaches for soot sensing are compared. Measurements were conducted on a dynamometer diesel engine test bench with a diesel particulate filter (DPF). The DPF was monitored by a relatively new microwave-based approach. Simultaneously, a resistive type soot sensor and a Pegasor soot sensing device as a reference system measured the soot concentration exhaust upstream of the DPF. By changing engine parameters, different engine out soot emission rates were set. It was found that the microwave-based signal may not only indicate directly the filter loading, but by a time derivative, the engine out soot emission rate can be deduced. Furthermore, by integrating the measured particulate mass in the exhaust, the soot load of the filter can be determined. In summary, all systems coincide well within certain boundaries and the filter itself can act as a soot sensor.
Comparative Study of Different Methods for Soot Sensing and Filter Monitoring in Diesel Exhausts
Markus Feulner
2017-02-01
Due to increasingly tighter emission limits for diesel and gasoline engines, especially concerning particulate matter emissions, particulate filters are becoming indispensable devices for exhaust gas after treatment. Thereby, for an efficient engine and filter control strategy and a cost-efficient filter design, reliable technologies to determine the soot load of the filters and to measure particulate matter concentrations in the exhaust gas during vehicle operation are highly needed. In this study, different approaches for soot sensing are compared. Measurements were conducted on a dynamometer diesel engine test bench with a diesel particulate filter (DPF). The DPF was monitored by a relatively new microwave-based approach. Simultaneously, a resistive type soot sensor and a Pegasor soot sensing device as a reference system measured the soot concentration exhaust upstream of the DPF. By changing engine parameters, different engine out soot emission rates were set. It was found that the microwave-based signal may not only indicate directly the filter loading, but by a time derivative, the engine out soot emission rate can be deduced. Furthermore, by integrating the measured particulate mass in the exhaust, the soot load of the filter can be determined. In summary, all systems coincide well within certain boundaries and the filter itself can act as a soot sensor.
Wang, Lutao; Xiao, Jun; Chai, Hua
2015-08-01
The successful suppression of clutter arising from stationary or slowly moving tissue is one of the key issues in medical ultrasound color blood imaging. Remaining clutter may cause bias in the mean blood frequency estimation and result in a potentially misleading description of blood flow. In this paper, based on the principle of the general wall filter, the design process of three classes of filters, infinite impulse response with projection initialization (Prj-IIR), polynomial regression (Pol-Reg), and eigen-based filters, is reviewed and analyzed. The performance of the filters was assessed by calculating the bias and variance of the mean blood velocity using a standard autocorrelation estimator. Simulation results show that the performance of the Pol-Reg filter is similar to that of Prj-IIR filters. Both can offer accurate estimation of mean blood flow speed under steady clutter conditions, and the clutter rejection ability can be enhanced by increasing the ensemble size of the Doppler vector. Eigen-based filters can effectively remove the non-stationary clutter component and further improve the estimation accuracy for low speed blood flow signals. There is also no significant increase in computation complexity for eigen-based filters when the ensemble size is less than 10.
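A polynomial-regression wall filter of the kind compared above can be written as a projection that removes the low-order polynomial (clutter) component from each slow-time ensemble. A minimal numpy sketch with invented dimensions and an invented clutter/blood model:

```python
import numpy as np

def polyreg_wall_filter(ensemble, order=2):
    """Subtract the least-squares polynomial fit along the slow-time axis.
    ensemble: array of shape (n_slow, n_depth)."""
    n = ensemble.shape[0]
    t = np.linspace(-1, 1, n)
    V = np.vander(t, order + 1)          # clutter basis: low-order polynomials
    proj = V @ np.linalg.pinv(V)         # projection onto the clutter subspace
    return ensemble - proj @ ensemble    # keep the orthogonal (blood) component

# toy ensemble: strong, slowly varying clutter plus a weak fast Doppler signal
n_slow, n_depth = 12, 4
t = np.linspace(-1, 1, n_slow)[:, None]
clutter = 50 * (1 + 0.3 * t + 0.1 * t**2) * np.ones((1, n_depth))
blood = np.cos(2 * np.pi * 4 * np.linspace(0, 1, n_slow))[:, None] * np.ones((1, n_depth))
out = polyreg_wall_filter(clutter + blood, order=2)
```

Because the toy clutter lies exactly in the order-2 polynomial subspace, the filter removes it completely while passing most of the oscillatory component.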
The spectral volume method as applied to transport problems
McClarren, Ryan G.
2011-01-01
We present a new spatial discretization for transport problems: the spectral volume method. This method, first developed by Wang for computational fluid dynamics, divides each computational cell into several sub-cells and enforces particle balance on each of these sub-cells. Also, these sub-cells are used to build a polynomial reconstruction in the cell. The idea of dividing cells into many cells is a generalization of the simple corner balance and other similar schemes. The spectral volume method preserves particle conservation and preserves the asymptotic diffusion limit. We present results from the method on two transport problems in slab geometry using discrete ordinates and second through sixth order spectral volume schemes. The numerical results demonstrate the accuracy and preservation of the diffusion limit of the spectral volume method. Future work will explore possible benefits of the scheme for high-performance computing and for resolving diffusive boundary layers. (author)
Literature Review of Applying Visual Method to Understand Mathematics
Yu Xiaojuan
2015-01-01
As a new method to understand mathematics, visualization offers a new way of understanding mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method in the understanding of mathematics. It also makes a literature review of the existing research, especially with a visual demonstration of Euler’s formula, introduces the application of this method in solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention for its application.
Wang Li
Biomarkers in exhaled breath are useful for respiratory disease diagnosis in human volunteers. Conventional methods that collect non-volatile biomarkers, however, necessitate extensive dilution and sanitation processes that lower collection efficiency and convenience of use. Electret filters have emerged in the recent decade as a way to collect virus biomarkers in exhaled breath, given their simplicity and effectiveness. To investigate the capability of electret filters to collect protein biomarkers, a model is developed that consists of an atomizer producing protein aerosol and an electret filter collecting albumin and carcinoembryonic antigen (a typical biomarker in lung cancer development) from the atomizer. A device using an electret filter as the collecting medium is designed to collect human albumin from the exhaled breath of 6 volunteers. A comparison of the collecting ability of the electret filter method and 2 other reported methods is finally performed based on the amounts of albumin collected from human exhaled breath. In conclusion, a decreasing collection efficiency ranging from 17.6% to 2.3% for atomized albumin aerosol and from 42% to 12.5% for atomized carcinoembryonic antigen particles is found; moreover, an optimum volume of sampled human exhaled breath ranging from 100 L to 200 L is also observed; finally, the self-designed collecting device shows a significantly better performance in collecting albumin from human exhaled breath than the exhaled breath condensate method (p < 0.05). In summary, electret filters show potential for collecting non-volatile biomarkers in human exhaled breath, not only because they are simpler, cheaper, and easier to use than traditional methods, but also for their better collecting performance.
Methodical Aspects of Applying Strategy Map in an Organization
Piotr Markiewicz
2013-06-01
One of the important aspects of strategic management is the instrumental aspect, comprising a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, together with instruments in the form of methods and techniques. The method commonly used in strategy implementation and in measuring progress is the Balanced Scorecard (BSC). The method was created as a result of the project “Measuring performance in the Organization of the future” of 1990, completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The developed method was used first of all to evaluate performance by decomposition of a strategy into four perspectives and identification of measures of achievement. In the mid-1990s the method was improved by enriching it, first of all, with a strategy map, in which the process of transition of intangible assets into tangible financial effects is reflected (Kaplan, Norton 2001). A strategy map enables illustration of the cause and effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different nature.
Applying a life cycle approach to project management methods
Biggins, David; Trollsund, F.; Høiby, A.L.
2016-01-01
Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2, the Project Management Institute’s and the Association for Project Management’s Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...
Method for curing alkyd resin compositions by applying ionizing radiation
Watanabe, T.; Murata, K.; Maruyama, T.
1975-01-01
An alkyd resin composition is prepared by dissolving a polymerizable alkyd resin having an oil length of 10 to 50 percent in a vinyl monomer. The polymerizable alkyd resin is obtained by a half-esterification reaction of an acid anhydride having a polymerizable unsaturated group with an alkyd resin modified with conjugated unsaturated oil having at least one reactive hydroxyl group per molecule. The alkyd resin composition thus obtained is coated on an article, and ionizing radiation is applied to the article to cure the coated film thereon. (U.S.)
Wille, M-L; Langton, C M; Zapf, M; Ruiter, N V; Gemmeke, H
2015-01-01
The quality of ultrasound computed tomography imaging is primarily determined by the accuracy of ultrasound transit time measurement. A major problem in the analysis is the overlap of signals, making it difficult to detect the correct transit time. The current standard is to apply a matched-filtering approach to the input and output signals. This study compares the matched-filtering technique with active set deconvolution to derive a transit time spectrum from a coded excitation chirp signal and the measured output signal. The ultrasound wave travels in a direct and a reflected path to the receiver, resulting in an overlap in the recorded output signal. The matched-filtering and deconvolution techniques were applied to determine the transit times associated with the two signal paths. Both techniques were able to detect the two different transit times; while matched-filtering has better accuracy (0.13 μs versus 0.18 μs standard deviation), deconvolution has a 3.5 times improved side-lobe to main-lobe ratio. A higher side-lobe suppression is important to further improve image fidelity. These results suggest that a future combination of both techniques would provide improved signal detection and hence improved image fidelity. (note)
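The matched-filtering side of the comparison amounts to cross-correlating the received trace with the known chirp; the correlation peaks mark the two transit times. A self-contained sketch with invented numbers (sampling rate, chirp band, path delays and amplitudes are all illustrative, not the study's parameters):

```python
import numpy as np

fs = 1_000_000                                   # 1 MHz sampling (illustrative)
t = np.arange(0, 100e-6, 1 / fs)                 # 100 us coded-excitation chirp
chirp = np.sin(2 * np.pi * (100e3 * t + 1e9 * t**2))  # 100 -> 300 kHz linear chirp

rx = np.zeros(2000)                              # received trace: two path arrivals
d1, d2 = 400, 700                                # direct / reflected delays (samples)
rx[d1:d1 + chirp.size] += chirp
rx[d2:d2 + chirp.size] += 0.6 * chirp            # weaker reflected path

mf = np.correlate(rx, chirp, mode="valid")       # matched filter = cross-correlation
env = np.abs(mf)
p1 = int(np.argmax(env))                         # strongest arrival
env[max(0, p1 - 120):p1 + 120] = 0               # suppress it, then find the second
p2 = int(np.argmax(env))
transit_samples = sorted([p1, p2])
```

Dividing the detected sample indices by `fs` converts them to transit times; deconvolution approaches instead invert the convolution model directly to sharpen closely spaced arrivals.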
The integral equation method applied to eddy currents
Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.
1976-04-01
An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)
Study of spectral response of a neutron filter. Design of a method to adjust spectra
Colomb-Dolci, F.
1999-02-01
The first part of this thesis describes an experimental method intended to determine a neutron spectrum in the epithermal range [1 eV - 10 keV]. Based on measurements of reaction rates provided by activation foils, it gives the flux level in each energy range corresponding to each probe. This method can be used in any reactor location or in a neutron beam. It can determine spectra over eight energy groups, five of them in the epithermal range. The second part of this thesis presents the design study of an epithermal neutron beam, in the framework of Neutron Capture Therapy. A beam tube was specially built to test filters made up of different materials. Its geometry was designed to favour the crossing of epithermal neutrons and to cut thermal and fast neutrons. A code scheme was validated to simulate the device response with a Monte Carlo code. Measurements were made at the ISIS reactor and experimental spectra were compared to calculated ones. This validated code scheme was used to simulate different materials usable as shields in the tube. A study of these shields is presented at the end of this thesis. (author)
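The foil-activation step described above reduces to a small linear system: the measured reaction rates are the response matrix applied to the group fluxes. A least-squares sketch with an invented, well-conditioned 5-foil × 4-group response matrix (real foil cross-section data would replace these numbers, and practical unfolds add positivity and prior-spectrum constraints):

```python
import numpy as np

# hypothetical response matrix: rows = activation foils, columns = energy groups
R = np.array([
    [1.00, 0.20, 0.05, 0.01],
    [0.30, 1.10, 0.20, 0.05],
    [0.10, 0.40, 1.20, 0.30],
    [0.02, 0.10, 0.50, 1.30],
    [0.50, 0.50, 0.50, 0.50],
])
true_phi = np.array([2.0, 1.0, 0.5, 0.2])        # invented group fluxes
rates = R @ true_phi                             # simulated reaction rates

# unfold: least-squares solution of R @ phi = rates
phi, *_ = np.linalg.lstsq(R, rates, rcond=None)
```

With noise-free, consistent data and a full-column-rank response matrix, the least-squares unfold recovers the group fluxes exactly.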
Papaya Tree Detection with UAV Images Using a GPU-Accelerated Scale-Space Filtering Method
Hao Jiang
2017-07-01
The use of unmanned aerial vehicles (UAVs) can allow individual tree detection for forest inventories in a cost-effective way. The scale-space filtering (SSF) algorithm is commonly used and has the capability of detecting trees of different crown sizes. In this study, we made two improvements with regard to the existing method and implementations. First, we incorporated SSF with a Lab color transformation to reduce over-detection problems associated with the original luminance image. Second, we ported four of the most time-consuming processes to the graphics processing unit (GPU) to improve computational efficiency. The proposed method was implemented using PyCUDA, which enabled access to NVIDIA’s compute unified device architecture (CUDA) through high-level scripting of the Python language. Our experiments were conducted using two images captured by the DJI Phantom 3 Professional and a recent NVIDIA GTX 1080 GPU. The resulting accuracy was high, with an F-measure larger than 0.94. The speedup achieved by our parallel implementation was 44.77 and 28.54 for the first and second test image, respectively. For each 4000 × 3000 image, the total runtime was less than 1 s, which was sufficient for real-time performance and interactive application.
Computational Methods to Work as First-Pass Filter in Deleterious SNP Analysis of Alkaptonuria
R. Magesh
2012-01-01
A major challenge in the analysis of human genetic variation is to distinguish functional from nonfunctional SNPs. Discovering these functional SNPs is one of the main goals of modern genetics and genomics studies. There is a need to effectively and efficiently identify functionally important nsSNPs which may be deleterious or disease causing and to identify their molecular effects. The prediction of phenotype of nsSNPs by computational analysis may provide a good way to explore the function of nsSNPs and its relationship with susceptibility to disease. In this context, we surveyed and compared variation databases along with in silico prediction programs to assess the effects of deleterious functional variants on protein functions. In other respects, we attempted these methods to work as first-pass filter to identify the deleterious substitutions worth pursuing for further experimental research. In this analysis, we used the existing computational methods to explore the mutation-structure-function relationship in HGD gene causing alkaptonuria.
Computational Methods to Work as First-Pass Filter in Deleterious SNP Analysis of Alkaptonuria
Magesh, R.; George Priya Doss, C.
2012-01-01
A major challenge in the analysis of human genetic variation is to distinguish functional from nonfunctional SNPs. Discovering these functional SNPs is one of the main goals of modern genetics and genomics studies. There is a need to effectively and efficiently identify functionally important nsSNPs which may be deleterious or disease causing and to identify their molecular effects. The prediction of phenotype of nsSNPs by computational analysis may provide a good way to explore the function of nsSNPs and its relationship with susceptibility to disease. In this context, we surveyed and compared variation databases along with in silico prediction programs to assess the effects of deleterious functional variants on protein functions. In other respects, we attempted these methods to work as first-pass filter to identify the deleterious substitutions worth pursuing for further experimental research. In this analysis, we used the existing computational methods to explore the mutation-structure-function relationship in HGD gene causing alkaptonuria. PMID:22606059
Signal Enhancement with Variable Span Linear Filters
Benesty, Jacob; Christensen, Mads Græsbøll; Jensen, Jesper Rindom
This book introduces readers to the novel concept of variable span speech enhancement filters, and demonstrates how it can be used for effective noise reduction in various ways. Further, the book provides the accompanying Matlab code, allowing readers to easily implement the main ideas discussed. Variable span filters combine the ideas of optimal linear filters with those of subspace methods, as they involve the joint diagonalization of the correlation matrices of the desired signal and the noise. The book shows how some well-known filter designs, e.g. the minimum distortion, maximum signal-to-noise ratio, Wiener, and tradeoff filters (including their new generalizations) can be obtained using the variable span filter framework. It then illustrates how the variable span filters can be applied in various contexts, namely in single-channel STFT-based enhancement, in multichannel enhancement in both...
Analog Electronic Filters Theory, Design and Synthesis
Dimopoulos, Hercules G
2012-01-01
Filters are essential subsystems in a huge variety of electronic systems. Filter applications are innumerable; they are used for noise reduction, demodulation, signal detection, multiplexing, sampling, sound and speech processing, transmission line equalization and image processing, to name just a few. In practice, no electronic system can exist without filters. They can be found in everything from power supplies to mobile phones and hard disk drives and from loudspeakers and MP3 players to home cinema systems and broadband Internet connections. This textbook introduces basic concepts and methods and the associated mathematical and computational tools employed in electronic filter theory, synthesis and design. This book can be used as an integral part of undergraduate courses on analog electronic filters. Includes numerous, solved examples, applied examples and exercises for each chapter. Includes detailed coverage of active and passive filters in an independent but correlated manner. Emphasizes real filter...
Signal enhancement with variable span linear filters
Benesty, Jacob; Jensen, Jesper R
2016-01-01
This book introduces readers to the novel concept of variable span speech enhancement filters, and demonstrates how it can be used for effective noise reduction in various ways. Further, the book provides the accompanying Matlab code, allowing readers to easily implement the main ideas discussed. Variable span filters combine the ideas of optimal linear filters with those of subspace methods, as they involve the joint diagonalization of the correlation matrices of the desired signal and the noise. The book shows how some well-known filter designs, e.g. the minimum distortion, maximum signal-to-noise ratio, Wiener, and tradeoff filters (including their new generalizations) can be obtained using the variable span filter framework. It then illustrates how the variable span filters can be applied in various contexts, namely in single-channel STFT-based enhancement, in multichannel enhancement in both the time and STFT domains, and, lastly, in time-domain binaural enhancement. In these contexts, the properties of ...
Glangeaud F.
2006-11-01
The spectral matrix computed from VSP-trace transfer functions contains information about each wave making up the VSP data set. Using a filter based on the eigenvectors of the spectral matrix leads to a decomposition of the input traces into eigensections. The eigensections associated with the largest eigenvalues contain the contribution of the correlated seismic events. Signal space is defined as the sum of these eigensections. The other eigensections represent noise. When the different waves making up the VSP have very different amplitudes, decomposition of the input traces into eigensections leads to wave separation without any required knowledge about the apparent velocities of the waves. Limitations of wave separation by the multichannel filtering are a function of the scalar product values of the waves (in the frequency domain) and of the relative wave amplitudes. Spectral matrix filtering can always be used to enhance the signal-to-noise ratio on VSP data. The eigenvalues of the spectral matrix can be used to estimate the signal-to-noise ratio as a function of frequency. It is possible to qualify the behavior of a VSP tool in a well and to detect some resonant frequencies, probably generated by poor coupling. Field data examples are shown. The first example shows data recorded in a vertical well, whose converted shear waves are separated from upgoing and downgoing compressional waves using a spectral matrix filter. This field case shows the efficiency of the spectral matrix filter in extracting weak events. The second example shows data recorded in a highly deviated well, where events with very close apparent velocities are successfully separated by use of spectral matrix filtering. The matrix filtering technique, whatever the type of data to which it is applied, makes it possible to improve the signal-to-noise ratio, to quantify the evolution of the signal-to-noise ratio as a function of frequency, and to identify the different signals composing the
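The eigensection idea can be illustrated in a simplified time-domain form: with a coherent arrival present on all traces, the component associated with the largest singular value of the trace matrix captures the correlated event, and the remainder estimates the noise. The paper works with the frequency-domain spectral matrix; this numpy toy, with an invented wavelet and noise level, only shows the subspace split:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 8, 512
u = np.linspace(-3, 3, n_t)
wavelet = np.exp(-u**2) * np.cos(6 * u)          # coherent arrival (invented)
X = np.outer(np.ones(n_ch), wavelet) + 0.2 * rng.normal(size=(n_ch, n_t))

U, s, Vt = np.linalg.svd(X, full_matrices=False)
eigensection = s[0] * np.outer(U[:, 0], Vt[0])   # largest-eigenvalue eigensection
noise_est = X - eigensection                     # remaining eigensections = noise
```

Because the coherent wave dominates the largest singular value, each row of `eigensection` closely follows the wavelet, with no knowledge of apparent velocity required.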
Optimization of filter loading
Turney, J.H.; Gardiner, D.E. (Sacramento Municipal Utility District, Herald, CA)
1985-01-01
The introduction of 10 CFR Part 61 has created potential difficulties in the disposal of spent cartridge filters. When this report was prepared, Rancho Seco had no method of packaging and disposing of class B or C filters. This work examined methods to minimize the total operating cost of cartridge filters while maintaining them below the class A limit. It was found that by encapsulating filters in cement, the filter operating costs could be minimized.
Baiao, D.; Medina, F.; Ochando, M.; Varandas, C.
2009-01-01
The TJ-II plasma soft X-ray emission was studied in order to establish an adequate setup for an electron temperature diagnostic suitable for high density, with spatial and temporal resolution, based on the two-filters method. The preliminary experimental results reported were obtained with two diagnostics (an X-ray PHA based on a Ge detector and a tomography system) already installed in the TJ-II stellarator. These results led to the conclusion that the two-filters method was a suitable option for an electron temperature diagnostic for high-density plasmas in TJ-II. We present the design and first results obtained with a prototype for the measurement of electron temperature in TJ-II plasmas heated with energetic neutral beams. This system consists of two AXUV20A detectors which measure the soft X-ray plasma emissivity through beryllium filters of different thicknesses. From the two-filters technique it is possible to estimate the electron temperature. The analyses carried out made it possible to conclude which filter thicknesses are best suited for TJ-II plasmas, and highlighted the need for a computer code to simulate signals and plasma compositions. (Author) 7 refs.
Iwamoto, Hiroyuki; Tanaka, Nobuo; Hill, Simon G
2010-01-01
This paper concerns the active vibration control of a rectangular panel using smart sensors from the viewpoint of an active wave control theory. The objective of this paper is to present a new type of filter which enables the measurement of the wave amplitude of a rectangular panel in real time for the application of an adaptive feedforward control system which inactivates vibration modes. Firstly, a novel wave filtering method using smart PVDF sensors is proposed. It is found that the shaping function of smart sensors is a complex function. To realize the smart sensor in a practical situation, a Hilbert transformer is utilized to implement a phase shifter of 90° for broadband frequencies. Then, from the viewpoint of a numerical analysis, the characteristics of the proposed wave filter and the performance of the adaptive feedforward control system using the wave filter are discussed. Finally, experiments implementing the active wave control theory which uses the proposed wave filter are conducted, demonstrating the validity of the proposed method in suppressing the vibration of a rectangular panel
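The broadband 90° phase shift that the Hilbert transformer realizes can be sketched in a few lines (an idealized FFT-based version with an assumed function name, not the causal real-time implementation an adaptive controller would need):

```python
import numpy as np

def phase_shift_90(x):
    """Apply a broadband -90 degree phase shift (Hilbert transform) to a
    real signal via the FFT: every positive-frequency component is
    multiplied by -j, so cos(wt) becomes sin(wt) at all frequencies."""
    X = np.fft.rfft(x)
    X[1:] *= -1j                 # -90 degrees for all positive frequencies
    X[0] = 0.0                   # DC has no quadrature component
    if len(x) % 2 == 0:
        X[-1] = 0.0              # likewise the Nyquist bin
    return np.fft.irfft(X, n=len(x))
```

Applied to a cosine with an integer number of cycles, the output is the corresponding sine at every frequency simultaneously, which is the property the wave filter relies on.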
Apply of torque method at rationalization of work
Bandurová Miriam
2001-03-01
The aim of the study was to analyse, by the torque method, the time consumption of the cylinder-grinder profession. Torque observation is used to detect the kinds and magnitudes of time losses, the share of the individual kinds of time consumption, and the causes of the losses. In this way it is possible to determine the coefficient of employment and the workload of the workers in an organizational unit. The advantages of a torque survey are the low cost of acquiring the information and the small burden placed on the worker and on the observer, who is easily trained; it is also a method that is mentally acceptable to the subjects of the survey. The torque surveys found and quantified reserves in the activity of the cylinder grinders: lost time amounts to up to 8% of working time. With a 5-shift service and an average shift staffing of 4.4 grinders (from the statistical records of the service), the losses in cylinder grinding amount to 1.48 workers for the whole centre. On the basis of this information it was recommended to cancel one job position (cylinder grinder) and reduce the staff by one grinder. A further position cannot be cancelled, because the cylinder grindery must adapt to the grinding line in the number of polished cylinders per shift, and the stock of semi-finished polished cylinders cannot be large owing to frequent changes of grinding area and assortment. This contribution confirms the suitability of the torque method as one of the methods to be used in job rationalization.
Thermoluminescence as a dating method applied to the Morocco Neolithic
Ousmoi, M.
1989-09-01
Thermoluminescence is an absolute dating method which is well adapted to the study of burnt clays and thus of the prehistoric ceramics belonging to the Neolithic period. The purpose of this study is to establish a first absolute chronology of the northern Moroccan Neolithic between 3000 and 7000 years before present, together with some improvements of TL dating. The first part of the thesis presents some hypotheses about the Moroccan Neolithic and some problems to be solved. We then study the TL dating method, along with new procedures to improve the quality of the results, such as the shift of quartz TL peaks or the crushing of samples. The methods employed, using 24 samples belonging to various civilisations, are the quartz inclusion method and the fine grain technique. For the dosimetry, several methods were used: determination of the K₂O content, alpha counting, site dosimetry using TL dosimeters, and a scintillation counter. The results provide interesting answers to the archaeological questions and improve the chronological scheme of the northern Moroccan Neolithic: development of the old Cardial Neolithic in the north, and perhaps in the centre of Morocco (the region of Rabat), between 5500 and 7000 before present; development of the recent middle Neolithic around 4000-5000 before present, with a proto-Campaniform (Skhirat) slightly older than the Campaniform recognized in the south of Spain; and development of the Bronze Age around 2000-4000 before present [fr
Perez-Garcia, H; Barquero, R
The correct determination and delineation of tumor/organ size is crucial in 2-D imaging in ¹³¹I therapy. These images are usually obtained using a system composed of a Gamma camera and a high-energy collimator, although the system can produce artifacts in the image. This article analyses these artifacts and describes a correction filter that can eliminate them. Using the free software ImageJ, a central profile of the image is obtained and analyzed. Two components can be seen in the fluctuation of the profile: one associated with the stochastic nature of the radiation plus electronic noise, and another that is periodic in spatial position and due to the collimator. These frequencies are obtained analytically and compared with the frequencies in the Fourier transform of the profile. A specially developed filter removes the artifacts in the 2-D Fourier transform of the DICOM image. This filter is tested using the image of a 15-cm-diameter Petri dish with ¹³¹I radioactive water (large object), the image of a ¹³¹I clinical pill (small object), and images of the remainder of the lesion of two patients treated with 3.7 GBq (100 mCi) and 4.44 GBq (120 mCi) of ¹³¹I, respectively, after thyroidectomy. The artifact is due to the hexagonal periodic structure of the collimator. The use of the filter on large-sized images reduces the fluctuation from 5.8% to 3.5%. In small-sized images, the FWHM can be determined in the filtered image, while this is impossible in the unfiltered image. The definition of the tumor boundary and the visualization of the activity distribution inside patient lesions improve drastically when the filter is applied to the corresponding images obtained with the HE gamma camera. The HURRA filter removes high-energy collimator artifacts in planar images obtained with a Gamma camera without reducing the image resolution. It can be applied in any patient quantification study because the number of counts remains invariant. The filter makes
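The removal of a periodic artifact in the Fourier domain can be sketched generically (a simplified stand-in with an assumed function name and a single annular notch, rather than the hexagonal peak pattern the HURRA filter targets); note that because the DC term carries the image sum, the total counts are preserved automatically.

```python
import numpy as np

def notch_periodic_artifact(img, f0, width=0.01):
    """Suppress a periodic (collimator-like) pattern by zeroing a narrow
    annulus of spatial frequencies around radius f0 (cycles/pixel) in the
    2-D Fourier transform. The DC term is kept, so the total number of
    counts in the image is invariant."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.hypot(fx, fy)
    mask = np.abs(r - f0) > width        # True where frequencies are kept
    mask[0, 0] = True                    # always keep DC (total counts)
    return np.fft.ifft2(F * mask).real
```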
Modal method for crack identification applied to reactor recirculation pump
Miller, W.H.; Brook, R.
1991-01-01
Nuclear reactors have been operating and producing useful electricity for many years. Within the last few years, several plants have found cracks in the reactor coolant pump shaft near the thermal barrier. The modal method and results described herein show the analytical results of using a Modal Analysis test method to determine the presence, size, and location of a shaft crack. The authors have previously demonstrated that the test method can analytically and experimentally identify shaft cracks as small as five percent (5%) of the shaft diameter. Due to small differences in material property distribution, the attempt to identify cracks smaller than 3% of the shaft diameter has been shown to be impractical. The rotor dynamics model includes a detailed motor rotor, external weights and inertias, and realistic total support stiffness. Results of the rotor dynamics model have been verified through a comparison with on-site vibration test data
Boron autoradiography method applied to the study of steels
Gugelmeier, R.; Barcelo, G.N.; Boado, J.H.; Fernandez, C.
1986-01-01
The state of the boron contained in the steel microstructure is determined. Neutron autoradiography is used, permitting boron distribution images to be obtained and providing information that is difficult to acquire by other methods. The application of the method is described: it is based on the neutron irradiation of a polished steel sample, over which a cellulose nitrate sheet (or other appropriate material) is fixed to constitute the detector. The particles generated by the neutron-boron interaction affect the detector sheet, which is subsequently developed with a chemical treatment and can be observed under the optical microscope. In the case of materials used for the construction of nuclear reactors, special attention must be given to the presence of boron, since, owing to its exceptionally high capacity for neutron absorption, even the smallest quantities of boron become important. The adaptation of the method to metallurgical problems allows a correlation to be obtained between the boron distribution images and the material's microstructure. (M.E.L.) [es
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Oluwaseun Egbelowo
2017-05-01
We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusion) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process, and compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validation of the efficiency of the nonstandard finite difference scheme as the method of choice.
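For the simplest PK building block, one-compartment elimination dC/dt = -kC, an NSFD scheme replaces the step size h in the denominator of the difference quotient by a function φ(h). The sketch below (function name assumed for illustration) uses the classical choice φ(h) = (1 - e^(-kh))/k, for which the scheme reproduces the exact exponential decay at every step, for any step size:

```python
import math

def nsfd_decay(c0, k, h, n_steps):
    """Nonstandard finite difference scheme for dC/dt = -k C with
    denominator function phi(h) = (1 - exp(-k h)) / k, so that
    (C_{n+1} - C_n) / phi = -k C_n gives C_{n+1} = C_n * exp(-k h):
    the exact solution at the grid points, regardless of h."""
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    out = [c]
    for _ in range(n_steps):
        c = c - k * phi * c
        out.append(c)
    return out
```

Unlike the explicit Euler scheme, which overshoots and can go negative for kh > 1, this scheme is positivity-preserving for all step sizes, which is the dynamical-consistency property the abstract refers to.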
Applying Nyquist's method for stability determination to solar wind observations
Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.
2017-10-01
The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
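The core of the Nyquist method is the argument principle: the number of zeros of an analytic dispersion function inside a closed contour equals the winding number of its image about the origin. A generic numerical sketch (function name assumed; a circular contour for simplicity, rather than the contour enclosing the unstable Im ω > 0 half-plane used for the Vlasov dispersion relation) is:

```python
import cmath
import math

def winding_number(f, center=0.0, radius=1.0, n=4096):
    """Argument-principle zero count: the number of zeros of the analytic
    function f inside the circle |z - center| = radius equals the winding
    number of f(z) about the origin as z traverses the circle."""
    total = 0.0
    prev = cmath.phase(f(center + radius))
    for k in range(1, n + 1):
        z = center + radius * cmath.exp(2j * math.pi * k / n)
        cur = cmath.phase(f(z))
        d = cur - prev
        if d > math.pi:          # unwrap phase jumps across the branch cut
            d -= 2 * math.pi
        if d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))
```

A nonzero count over a contour covering the upper half ω-plane signals at least one growing root, which is what makes the test automatable over large observational data sets.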
Efficient electronic structure methods applied to metal nanoparticles
Larsen, Ask Hjorth
… of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared … The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps … and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic …
Variance reduction methods applied to deep-penetration problems
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward use of the computer codes which are presented in segments of this Monte Carlo course
Yiin, L.-M.; Lu, S.-E.; Sannoh, Sulaiman; Lim, B.S.; Rhoads, G.G.
2004-01-01
We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R and R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD), in the 1995 'Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing', using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate solution (TSP); and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R and R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate the efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P<0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P<0.001) and marginally on windowsills (P=0.077). Such relations were different between the two cleaning methods significantly on floors (P<0.001) and marginally on windowsills (P=0.066), with the TSP/HEPA method being favored for higher baseline levels and the non-TSP/non-HEPA method for lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend that
Automated microaneurysm detection method based on double ring filter in retinal fundus images
Mizutani, Atsushi; Muramatsu, Chisako; Hatanaka, Yuji; Suemori, Shinsuke; Hara, Takeshi; Fujita, Hiroshi
2009-02-01
The presence of microaneurysms in the eye is one of the early signs of diabetic retinopathy, which is one of the leading causes of vision loss. We have been investigating a computerized method for the detection of microaneurysms on retinal fundus images, which were obtained from the Retinopathy Online Challenge (ROC) database. The ROC provides 50 training cases, in which "gold standard" locations of microaneurysms are provided, and 50 test cases without the gold standard locations. In this study, the computerized scheme was developed by using the training cases. Although the results for the test cases are also included, this paper mainly discusses the results for the training cases because the "gold standard" for the test cases is not known. After image preprocessing, candidate regions for microaneurysms were detected using a double-ring filter. Any potential false positives located in the regions corresponding to blood vessels were removed by automatic extraction of blood vessels from the images. Twelve image features were determined, and the candidate lesions were classified into microaneurysms or false positives using the rule-based method and an artificial neural network. The true positive fraction of the proposed method was 0.45 at 27 false positives per image. Forty-two percent of microaneurysms in the 50 training cases were considered invisible by the consensus of two co-investigators. When the method was evaluated for visible microaneurysms, the sensitivity for detecting microaneurysms was 65% at 27 false positives per image. Our computerized detection scheme could be improved for helping ophthalmologists in the early diagnosis of diabetic retinopathy.
Non-perturbative methods applied to multiphoton ionization
Brandi, H.S.; Davidovich, L.; Zagury, N.
1982-09-01
The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, in which multiphoton ionization and tunnel autoionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rates of atoms in the presence of linearly and circularly polarized light is presented. (Author) [pt
On second quantization methods applied to classical statistical mechanics
Matos Neto, A.; Vianna, J.D.M.
1984-01-01
A method of expressing classical statistical results in terms of mathematical entities usually associated with the quantum field theoretical treatment of many-particle systems (Fock space, commutators, field operators, state vector) is discussed. A linear response theory is developed using the 'second quantized' Liouville equation introduced by Schonberg. The relationship of this method to that of Prigogine et al. is briefly analyzed. The chain of equations and the spectral representations for the new classical Green's functions are presented. Generalized operators defined on Fock space are discussed. It is shown that the correlation functions can be obtained from Green's functions defined with generalized operators. (Author) [pt
Development of gel-filter method for high enrichment of low-molecular weight proteins from serum.
Lingsheng Chen
The human serum proteome has been extensively screened for biomarkers. However, the large dynamic range of protein concentrations in serum and the presence of highly abundant and large-molecular-weight proteins make it difficult to identify and detect changes in the amount of low-molecular-weight proteins (LMW; molecular weight ≤ 30 kDa). Here, we developed a gel-filter method comprising four layers of tricine SDS-PAGE-based gels of different concentrations to block high-molecular-weight proteins and enrich LMW proteins. By utilizing this method, we identified 1,576 proteins (n = 2) from 10 μL of serum. Among them, 559 proteins (n = 2) belonged to LMW proteins. Furthermore, this gel-filter method could identify 67.4% and 39.8% more LMW proteins than the representative methods of glycine SDS-PAGE and optimized-DS, respectively. By utilizing a SILAC-AQUA approach with labeled recombinant protein as internal standard, the recovery rate for GST spiked into serum during the treatment with gel-filter, optimized-DS, and ProteoMiner was 33.1 ± 0.01%, 18.7 ± 0.01%, and 9.6 ± 0.03%, respectively. These results demonstrate that the gel-filter method offers a rapid, highly reproducible, and efficient approach for screening biomarkers from serum through proteomic analyses.
Yong Wang
2013-06-01
Solar radiation is an important input for various land-surface energy balance models. Global solar radiation data retrieved from the Japanese Geostationary Meteorological Satellite 5 (GMS-5) Visible and Infrared Spin Scan Radiometer (VISSR) have been widely used in recent years. However, due to the impact of clouds, aerosols, solar elevation angle, and bidirectional reflection, spatial or temporal deficiencies often exist in solar radiation datasets derived from satellite remote sensing, which can seriously affect the accuracy of land-surface energy balance models. The goal of reconstructing radiation data is to simulate the seasonal variation patterns of solar radiation, using various statistical and numerical analysis methods to interpolate the missing observations and optimize the whole time-series dataset. In the current study, a reconstruction method based on data assimilation is proposed. Using a Kalman filter as the assimilation algorithm, the retrieved radiation values are corrected through the continuous introduction of local in-situ global solar radiation (GSR) provided by the China Meteorological Data Sharing Service System (Daily radiation dataset, Version 3), collected from 122 radiation data collection stations over China. A complete and optimal set of time-series data is ultimately obtained. This method is applied and verified in China's northern agricultural areas (humid, semi-humid, and semi-arid regions in a warm temperate zone). The results show that the mean value and standard deviation of the reconstructed solar radiation data series are significantly improved, with greater consistency with ground-based observations than the series before reconstruction. The method implemented in this study provides a new solution for the time-series reconstruction of surface energy parameters, which can provide more reliable data for scientific research and regional renewable-energy planning.
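The assimilation step can be illustrated with a scalar Kalman filter that fuses the satellite retrieval with the ground GSR observation on any day where either is available (a deliberately simplified sketch with assumed names and noise variances, not the operational algorithm):

```python
def kalman_reconstruct(satellite, ground, q=1.0, r_sat=4.0, r_gnd=1.0):
    """Scalar Kalman filter sketch: a random-walk state (daily GSR) is
    propagated with process noise q and updated with whichever observations
    are available that day, the satellite retrieval and/or the ground
    measurement. None marks a missing value; on days with no observation
    the prediction alone bridges the gap."""
    x, p = 0.0, 1e6              # diffuse initial state and variance
    out = []
    for s, g in zip(satellite, ground):
        p += q                   # predict: random walk
        for z, r in ((s, r_sat), (g, r_gnd)):
            if z is not None:
                k = p / (p + r)          # Kalman gain
                x += k * (z - x)         # measurement update
                p *= (1.0 - k)
        out.append(x)
    return out
```

The ground observation, having the smaller assumed error variance, pulls the estimate harder than the satellite value, which is the mechanism by which the in-situ data correct the retrieved series.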
Review of PCMs and heat transfer enhancement methods applied ...
Most available PCMs have low thermal conductivity, making heat transfer enhancement necessary for power applications. The various methods of heat transfer enhancement in latent heat storage systems were also reviewed systematically. The review showed that three commercially available PCMs are suitable in the ...
E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS
GOANTA Adrian Mihai
2011-11-01
The paper presents some of the author's endeavors in creating video courses for the students of the Faculty of Engineering in Braila, related to subjects involving technical graphics. The steps taken in completing the method are also described, along with how feedback on the rate of access to these types of courses by the students is obtained.
Selection vector filter framework
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques outputting the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, the basic vector directional filter, the directional distance filter, weighted vector median filters, and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure, and the simplicity of filter representation, analysis, design, and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
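The special case recovered when the weights are uniform is the classical vector median filter, which can be sketched directly from its definition (function name assumed for illustration):

```python
import numpy as np

def vector_median(window):
    """Vector median filter: return the input vector that minimizes the
    sum of Euclidean distances to all other vectors in the window, i.e.
    the lowest-ranked sample in the aggregated-distance ordering."""
    w = np.asarray(window, dtype=float)          # (n_samples, n_channels)
    dists = np.linalg.norm(w[:, None, :] - w[None, :, :], axis=2).sum(axis=1)
    return w[np.argmin(dists)]
```

Because the output is always one of the input samples, impulsive outliers (e.g. a corrupted RGB pixel) are rejected rather than averaged in, which is the selection property the framework generalizes.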
Reservoir History Matching Using Ensemble Kalman Filters with Anamorphosis Transforms
Aman, Beshir M.
2012-12-01
This work aims to enhance the performance of the Ensemble Kalman Filter by transforming the non-Gaussian state variables into Gaussian variables, as a step closer to optimality. This is done by using univariate and multivariate Box-Cox transformations. Some history matching methods, such as the Kalman filter, the particle filter, and the ensemble Kalman filter, are reviewed and applied to a test case in the reservoir application. The key idea is to apply the transformation before the update step and then transform back after applying the Kalman correction. In general, the results of the multivariate method were promising, despite the fact that it over-estimated some variables.
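The transform-update-backtransform idea can be sketched for a scalar state with a direct observation (names, the stochastic perturbed-observation update, and the fixed λ are assumptions for illustration, not the thesis implementation):

```python
import numpy as np

def boxcox(x, lam):
    return (x**lam - 1.0) / lam if lam != 0 else np.log(x)

def inv_boxcox(y, lam):
    return (lam * y + 1.0)**(1.0 / lam) if lam != 0 else np.exp(y)

def enkf_update_transformed(ens, obs, obs_err, lam=0.5):
    """EnKF analysis step with a univariate Box-Cox anamorphosis:
    transform the (positive, skewed) ensemble toward Gaussianity, apply a
    scalar Kalman correction in transformed space with perturbed
    observations, then transform back. Direct-observation sketch."""
    rng = np.random.default_rng(0)
    z = boxcox(ens, lam)                               # to Gaussian space
    zo = boxcox(obs + obs_err * rng.standard_normal(ens.size), lam)
    var_z = z.var(ddof=1)
    var_o = zo.var(ddof=1)
    k = var_z / (var_z + var_o)                        # scalar Kalman gain
    za = z + k * (zo - z)                              # analysis ensemble
    return inv_boxcox(za, lam)                         # back to physical space
```

A practical benefit of updating in transformed space is that the back-transform keeps physically positive quantities (permeabilities, saturado-like variables) positive, which a raw Gaussian update does not guarantee.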
Bernardin, B.; Le Guillou, G.; Parcy, JP.
1981-04-01
The usual spectral methods for thermocouple time constant identification, based on temperature fluctuation analysis, use equipment that is too sophisticated for on-line application. It is shown that numerical filtering is optimal for this application: the equipment is simpler than for spectral methods, and fewer signal samples are needed for the same accuracy. The method is described, and a parametric study was performed using a temperature noise simulator [fr
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Probabilistic methods applied to electric source problems in nuclear safety
Carnino, A.; Llory, M.
1979-01-01
Nuclear safety is frequently asked to quantify safety margins and evaluate hazards. In order to do so, probabilistic methods have proved to be the most promising. Without completely replacing deterministic safety, they are now commonly used at the reliability or availability stages of systems, as well as for determining the likely accident sequences. In this paper an application linked to the problem of electric sources is described, while at the same time indicating the methods used. This is the calculation of the probable loss of all the electric sources of a pressurized water nuclear power station, the evaluation of the reliability of diesels by event trees of failures, and the determination of accident sequences which could be brought about by the 'total electric source loss' initiator and affect the installation or the environment [fr
Theoretical and applied aerodynamics and related numerical methods
Chattot, J J
2015-01-01
This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: - The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...
Applying probabilistic methods for assessments and calculations for accident prevention
Anon.
1984-01-01
The guidelines for the prevention of accidents require plant design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of conservativity. (DG) [de
Applying flow chemistry: methods, materials, and multistep synthesis.
McQuade, D Tyler; Seeberger, Peter H
2013-07-05
The synthesis of complex molecules requires control over both chemical reactivity and reaction conditions. While reactivity drives the majority of chemical discovery, advances in reaction condition control have accelerated method development/discovery. Recent tools include automated synthesizers and flow reactors. In this Synopsis, we describe how flow reactors have enabled chemical advances in our groups in the areas of single-stage reactions, materials synthesis, and multistep reactions. In each section, we detail the lessons learned and propose future directions.
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
The colour analysis method applied to homogeneous rocks
Halász Amadé
2015-12-01
Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation which is the most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between these; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation
Marlen Promann
2015-03-01
Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over the traditional federated search. The informal site- and context-specific usability tests have offered little to test the rigor of the discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or a journal article) method for evaluating discovery layers. Purdue University’s Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen’s Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users’ physical interactions (i.e. clicks), and (b) users’ cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.
Evaluation of Slow Release Fertilizer Applying Chemical and Spectroscopic methods
AbdEl-Kader, A.A.; Al-Ashkar, E.A.
2005-01-01
Controlled-release fertilizer offers a number of advantages for crop production in newly reclaimed soils. Butadiene styrene latex emulsion is one of the promising polymers for different purposes. In this work, a laboratory evaluation of butadiene styrene latex emulsion 24/76 polymer loaded with a mixed fertilizer was carried out. Macro-nutrients (N, P and K) and micro-nutrients (Zn, Fe and Cu) were extracted by basic extract from the polymer-fertilizer mixtures. A micro-sampling technique was investigated and applied to measure Zn, Fe and Cu using flame atomic absorption spectrometry in order to overcome the nebulization difficulties caused by the high salt content of the samples. The cumulative releases of macro- and micro-nutrients were assessed. From the results obtained, it is clear that the release depends on both the nutrients and the polymer concentration in the mixture. Macro-nutrients are released more efficiently than micro-nutrients as a fraction of the total added. Therefore the polymer can be used for minimizing micro-nutrient hazards in soils
Kulchitskii, Yu A; Vinogradov, V B
2005-01-01
The ATLAS detector constructed at the LHC will have great physics discovery potential, in particular for the detection of a heavy Higgs boson. Calorimeters will play a crucial role in this, and it is necessary to have confidence that they will perform as expected. With the aim of understanding the response of the ATLAS Tile hadronic calorimeter to electrons, 12\% of the modules have been exposed to electron beams of various energies in three possible ways: a cell scan at $\theta =20^o$ at the centers of the front-face cells, an $\eta$-scan, and a tilerow scan at $\theta = 90^o$ for the module side cells. We have extracted the electron energy resolutions of the $EBM-$ (ANL-44), $EBM+$ (IFA-42) and $BM$ (JINR-55) modules of the ATLAS Tile Calorimeter at energies E = 10, 20, 50, 100 and 180 GeV and $\theta = 20^o$ and $90^o$ and $\eta$ scan from the July 2002 testbeam run data, using the flat filter method of PMT signal reconstruction. We have determined the statistical and constant terms for the electron en...
Electron Energy Resolution of the ATLAS TILECAL Modules with Fit Filter Method (July 2002 test beam)
Kulchitskii, Yu A; Vinogradov, V B
2006-01-01
The ATLAS detector constructed at the LHC will have great physics discovery potential, in particular for the detection of a heavy Higgs boson. Calorimeters will play a crucial role in this, and it is necessary to have confidence that they will perform as expected. With the aim of understanding the response of the ATLAS Tile hadronic calorimeter to electrons, 12\% of the modules have been exposed to electron beams of various energies in three possible ways: a cell scan at $\theta =20^o$ at the centers of the front-face cells, an $\eta$-scan, and a tilerow scan at $\theta = 90^o$ for the module side cells. We have extracted the electron energy resolutions of the $EBM-$ (ANL-44), $EBM+$ (IFA-42) and $BM$ (JINR-55) modules of the ATLAS Tile Calorimeter at energies E = 10, 20, 50, 100 and 180 GeV and $\theta = 20^o$ and $90^o$ and $\eta$ scan from the July 2002 testbeam run data, using the fit filter method of PMT signal reconstruction. We have determined the statistical and constant terms for the electron ene...
Phase Coordinate System and p-q Theory Based Methods in Active Filtering Implementation
POPESCU, M.
2013-02-01
This paper is oriented towards implementation of the main theories of powers in the compensating-current generation stage of a three-phase three-wire shunt active power system. The system control is achieved through a dSPACE 1103 platform which is programmed under the Matlab/Simulink environment. Four calculation blocks included in a specifically designed Simulink library are successively implemented in the experimental setup. The first two approaches, namely those based on the Fryze-Buchholz-Depenbrock theory and the generalized instantaneous reactive power theory, make use of phase quantities without any transformation of the coordinate system and provide the basis for calculating the compensating current when total compensation is desired. The others are based on the p-q theory concepts and require the direct and inverse transformation to/from the two-phase stationary reference frame. They are used for total compensation and for partial compensation of the current harmonic distortion. The experimental results, in terms of active filtering performance, validate the implementation of the control strategies and provide arguments for choosing the most appropriate method.
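The p-q theory pipeline the abstract names (Clarke transform, instantaneous real and imaginary powers, compensating-current synthesis) can be sketched in a few lines. This is an illustrative NumPy rendering of the standard textbook formulas for total compensation, not the paper's dSPACE/Simulink blocks; the power-invariant Clarke transform and the sign convention for q are assumptions.

```python
import numpy as np

def clarke(x_a, x_b, x_c):
    """Power-invariant Clarke transform of three-phase quantities."""
    T = np.sqrt(2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                       [0.0, np.sqrt(3.0) / 2.0, -np.sqrt(3.0) / 2.0]])
    return T @ np.array([x_a, x_b, x_c])

def pq_compensating_current(v_abc, i_abc):
    """p-q theory compensating currents (alpha-beta frame) for total
    compensation: the source is left supplying only the mean real power.
    Assumes balanced voltages so the denominator never vanishes."""
    v = clarke(*v_abc)                      # rows: v_alpha, v_beta
    i = clarke(*i_abc)
    p = v[0] * i[0] + v[1] * i[1]           # instantaneous real power
    q = v[0] * i[1] - v[1] * i[0]           # instantaneous imaginary power
    p_osc = p - np.mean(p)                  # oscillating part to compensate
    den = v[0] ** 2 + v[1] ** 2
    ic_alpha = (v[0] * p_osc - v[1] * q) / den
    ic_beta = (v[1] * p_osc + v[0] * q) / den
    return ic_alpha, ic_beta
```

Subtracting these compensating currents from the load currents leaves the source with a constant instantaneous real power and zero instantaneous imaginary power, which is what "total compensation" means in this context.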
Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J
2009-01-01
High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.
A center-median filtering method for detection of temporal variation in coronal images
Plowman Joseph
2016-01-01
Events in the solar corona are often widely separated in their timescales, which can allow them to be identified when they would otherwise be confused with emission from other sources in the corona. Methods for cleanly separating such events based on their timescales are thus desirable for research in the field. This paper develops a technique for identifying time-varying signals in solar coronal image sequences which is based on a per-pixel running median filter and an understanding of photon-counting statistics. Example applications to “EIT waves” (named after EIT, the EUV Imaging Telescope on the Solar and Heliospheric Observatory) and to small-scale dynamics are shown, both using 193 Å data from the Atmospheric Imaging Assembly (AIA) on the Solar Dynamics Observatory. The technique is found to discriminate EIT waves more cleanly than the running-difference and base-difference techniques most commonly used. It is also demonstrated that there is more signal in the data than is commonly appreciated, finding that the waves can be traced to the edge of the AIA field of view when the data are rebinned to increase the signal-to-noise ratio.
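The per-pixel running-median detrending with a photon-counting noise estimate can be sketched as follows. This is a minimal NumPy reconstruction of the general idea, not the paper's exact recipe: the window length, edge padding and the Poisson sigma floor are assumptions.

```python
import numpy as np

def center_median_detrend(cube, window=11):
    """Center-median filtering of an image sequence.

    cube: array of shape (nt, ny, nx) of photon counts.
    Returns (cube - per-pixel centered running median) normalized by the
    Poisson noise estimate sqrt(median), i.e. residuals in sigma units.
    """
    assert window % 2 == 1, "window must be odd for a centered median"
    pad = window // 2
    # pad in time with edge values so the output has the same length
    padded = np.pad(cube, ((pad, pad), (0, 0), (0, 0)), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, window, axis=0)
    med = np.median(windows, axis=-1)        # (nt, ny, nx) running median
    noise = np.sqrt(np.maximum(med, 1.0))    # photon-counting sigma, floored at 1
    return (cube - med) / noise
```

A short-lived brightening then stands out as a large positive residual at its pixel and time, while slowly varying background emission is absorbed into the running median.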
The lumped heat capacity method applied to target heating
Rickards, J.
2013-01-01
The temperature of metal samples was measured while they were bombarded by the ion beam of a particle accelerator (the Pelletron accelerator of the Instituto de Física). The evolution of the temperature with time can be explained using the lumped heat capacity method of heat transfer. A strong dependence on the type of mounting was found.
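The lumped heat capacity model invoked above has a closed-form solution for a beam-heated target; a sketch under the usual assumptions (uniform sample temperature, constant absorbed power, a single mount-dependent heat-transfer coefficient) is shown below. All parameter values in the test are illustrative, not the paper's measurements.

```python
import numpy as np

def lumped_temperature(t, P, h, A, m, c, T_env=300.0):
    """Lumped-heat-capacity temperature of a beam-heated target.

    Energy balance (valid when the Biot number << 1, so the sample is
    effectively isothermal):
        m * c * dT/dt = P - h * A * (T - T_env)
    P: absorbed beam power [W]; h: effective heat-transfer coefficient
    [W/m^2/K], which depends on the mounting; A: surface area [m^2];
    m: mass [kg]; c: specific heat [J/kg/K]; t: time [s] (scalar or array).
    """
    tau = m * c / (h * A)           # thermal time constant [s]
    T_inf = T_env + P / (h * A)     # steady-state temperature [K]
    return T_inf + (T_env - T_inf) * np.exp(-t / tau)
```

The strong dependence on the mounting enters through h: a well-clamped sample has a large effective h, hence a low steady-state temperature and a short time constant.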
Comparison of Endotoxin Exposure Assessment by Bioaerosol Impinger and Filter-Sampling Methods
Duchaine, Caroline; Thorne, Peter S.; Mériaux, Anne; Grimard, Yan; Whitten, Paul; Cormier, Yvon
2001-01-01
Environmental assessment data collected in two prior occupational hygiene studies of swine barns and sawmills allowed the comparison of concurrent, triplicate, side-by-side endotoxin measurements using air sampling filters and bioaerosol impingers. Endotoxin concentrations in impinger solutions and filter eluates were assayed using the Limulus amebocyte lysate assay. In sawmills, impinger sampling yielded significantly higher endotoxin concentration measurements and lower variances than filte...
Gil-Cacho, Jose M.; van Waterschoot, Toon; Moonen, Marc
2014-01-01
In this paper, we propose a new framework to tackle the double-talk (DT) problem in acoustic echo cancellation (AEC). It is based on a frequency-domain adaptive filter (FDAF) implementation of the so-called prediction error method adaptive filtering using row operations (PEM-AFROW), leading to the FDAF-PEM-AFROW algorithm. We show that FDAF-PEM-AFROW is by construction related to the best linear unbiased estimate (BLUE) of the echo path. We depart from this framework to show an improvement in performance with respect to other adaptive filters minimizing the BLUE criterion, namely the PEM […] regularization (VR) algorithms. The FDAF-PEM-AFROW versions significantly outperform the original versions in every simulation. In terms of computational complexity, the FDAF-PEM-AFROW versions are themselves about two orders of magnitude cheaper than the original versions.
Modern analytic methods applied to the art and archaeology
Tenorio C, M. D.; Longoria G, L. C.
2010-01-01
The interaction of diverse areas such as analytical chemistry, art history and archaeology has allowed the development of a variety of techniques used in archaeology, conservation and restoration. These methods have been used to date objects, to determine the origin of ancient materials, to reconstruct their use and to identify the degradation processes that affect the integrity of art works. The objective of this chapter is to offer a general view of the research carried out at the Instituto Nacional de Investigaciones Nucleares (ININ) in the field of cultural goods. A series of studies carried out in collaboration with national and foreign researchers, and with the great support of undergraduate and master's students in archaeology from the National School of Anthropology and History, is briefly described; one of the goals is to spread knowledge of these techniques among young archaeologists, so that they have a wider vision of what they could use in the immediate future and can test hypotheses with scientific methods. (Author)
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.
2016-12-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation
Arotaritei, D.; Rotariu, C.
2015-09-01
In this paper we present a novel method to develop an atrial fibrillation (AF) detector based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference of the system produces if-then-else rules that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The window length is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the PhysioNet MIT-BIH Atrial Fibrillation Database, and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
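Three of the named descriptors (RMSSD, Shannon entropy and the Turning Point Ratio) are simple enough to sketch directly. The implementations below follow common textbook definitions applied to an RR-interval series; the histogram bin count and base-2 logarithm are assumptions, not necessarily the exact variants used in the paper.

```python
import numpy as np

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    d = np.diff(np.asarray(rr, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

def shannon_entropy(rr, bins=16):
    """Shannon entropy (bits) of the RR-interval histogram."""
    counts, _ = np.histogram(np.asarray(rr, dtype=float), bins=bins)
    p = counts[counts > 0] / len(rr)
    return float(-np.sum(p * np.log2(p)))

def turning_point_ratio(rr):
    """Fraction of interior points that are local extrema (turning points).
    For an i.i.d. random sequence the expected value is 2/3."""
    x = np.asarray(rr, dtype=float)
    mid, prev, nxt = x[1:-1], x[:-2], x[2:]
    tp = ((mid > prev) & (mid > nxt)) | ((mid < prev) & (mid < nxt))
    return float(tp.sum() / (len(x) - 2))
```

In an AF detector these would be evaluated over each sliding window of the RR series to form the feature vectors fed to the classifier.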
Frequency domain methods applied to forecasting electricity markets
Trapero, Juan R.; Pedregal, Diego J.
2009-01-01
The changes taking place in electricity markets during the last two decades have produced an increased interest in the problem of forecasting, either load demand or prices. Many forecasting methodologies are available in the literature nowadays with mixed conclusions about which method is most convenient. This paper focuses on the modeling of electricity market time series sampled hourly in order to produce short-term (1 to 24 h ahead) forecasts. The main features of the system are that (1) models are of an Unobserved Component class that allow for signal extraction of trend, diurnal, weekly and irregular components; (2) its application is automatic, in the sense that there is no need for human intervention via any sort of identification stage; (3) the models are estimated in the frequency domain; and (4) the robustness of the method makes possible its direct use on both load demand and price time series. The approach is thoroughly tested on the PJM interconnection market and the results improve on classical ARIMA models. (author)
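As a rough illustration of forecasting an hourly market series with trend plus diurnal and weekly components, a harmonic least-squares sketch is shown below. It is a deliberately simplified stand-in for the paper's Unobserved Components models estimated in the frequency domain; the period choices (24 h and 168 h) and the number of harmonics are assumptions.

```python
import numpy as np

def harmonic_forecast(y, horizon=24, periods=(24.0, 168.0), n_harm=3):
    """Fit level + linear trend + diurnal and weekly harmonics by least
    squares, then extrapolate `horizon` steps ahead.

    y: hourly series (load or price); returns the forecast array.
    """
    t = np.arange(len(y), dtype=float)
    tf = np.arange(len(y), len(y) + horizon, dtype=float)

    def design(tt):
        cols = [np.ones_like(tt), tt]                  # level + trend
        for P in periods:                              # seasonal harmonics
            for k in range(1, n_harm + 1):
                w = 2.0 * np.pi * k / P
                cols += [np.cos(w * tt), np.sin(w * tt)]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(design(t), np.asarray(y, dtype=float), rcond=None)
    return design(tf) @ beta
```

Unlike an ARIMA identification exercise, nothing here has to be tuned per series beyond the assumed periodicities, which loosely mirrors the "automatic, no identification stage" property claimed for the Unobserved Components approach.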
Interesting Developments in Testing Methods Applied to Foundation Piles
Sobala, Dariusz; Tkaczyński, Grzegorz
2017-10-01
Both piling technologies and pile-testing methods are subjects of ongoing development. New technologies, providing larger diameters or using in-situ materials, are very demanding in terms of ensuring the proper quality of execution of works. That concerns the material quality and continuity, which define the structural strength of the pile. On the other side there is the capacity of the ground around the pile and its ability to carry the loads transferred by the shaft and the pile base. The inhomogeneous nature of soils and the relatively small number of tested piles impose the need for a very good understanding of a small number of results. In some special cases the capacity test itself forms a significant cost in the piling contract. This work presents a brief description of selected testing methods and the authors' remarks based on cooperation with universities constantly developing new ideas. The paper presents some experience-based remarks on integrity testing by means of low-energy impact (low strain) and introduces selected (Polish) developments in the field of testing closed-end pipe piles based on bi-directional loading, similar to the Osterberg idea but without a sacrificial hydraulic jack. Such a test is suitable especially when steel piles are used for temporary support in rivers, where constructing a conventional testing appliance with anchor piles or kentledge meets technical problems. According to the authors' experience, such tests have not yet been used on a building site, but they have real potential, especially when displacement control can be provided from the river bank using surveying techniques.
A Method for Cobalt and Cesium Leaching from Glass Fiber in HEPA Filter
Kim, Gye Nam; Lee, Suk Chol; Yang, Hee Chul; Yoon, In Ho; Choi, Wang Kyu; Moon, Jei Kwon
2011-01-01
A great amount of radioactive waste has been generated during the operation of nuclear facilities. Recently, the storage space of the radioactive waste storage facility at the Korea Atomic Energy Research Institute (KAERI) has become almost saturated, so a volume reduction of the stored wastes is now needed. About 2,226 sets of spent HEPA filter wastes are held in the radioactive waste storage facility at KAERI. All these spent filter wastes have been stored in their original form without any treatment. Up to now, compression treatment of these spent HEPA filters has been carried out to repack the compressed filters into 200-liter drums for volume reduction. The frame and separator are contaminated with a low concentration of nuclides, while the glass fiber is contaminated with a high concentration of nuclides. So, for disposal of the glass fiber to the environment, it should first be leached to lower its radioactive concentration and then stabilized by solidification or similar means. Therefore, it is necessary to develop a leaching process for the glass fiber in a HEPA filter. Leaching is a separation technology which is often used to remove a metal or a nuclide from a solid mixture with the help of a liquid solvent
Moon, Bo Mi; Choi, Myung-Jin; Sultan, Md Tipu; Yang, Jae Won; Ju, Hyung Woo; Lee, Jung Min; Park, Hyun Jung; Park, Ye Ri; Kim, Soo Hyeon; Kim, Dong Wook; Lee, Min Chae; Jeong, Ju Yeon; Lee, Ok Joo; Sung, Gun Yong; Park, Chan Hum
2017-10-01
During the last decade, there has been great progress in kidney dialysis through wearable artificial kidney (WAK) systems for end-stage renal disease patients. Uremic solute removal and a water regeneration system are the most important prerequisites for a WAK to work properly. In this study, we designed a filtering membrane system using an immobilized-urease silk fibroin filter and evaluated its effectiveness against a PVDF filtering system for peritoneal dialysate regeneration, measured by urea removal efficacy. We evaluated the membrane's characteristics and performance by SEM-EDX analysis, water-binding ability and porosity tests, urea removal tests, a cytotoxicity assay and an enzyme activity assay. Under conditions optimized for urease, the percentage removal of urea from a 50 mg/dL urea solution was about 40% and 60% for the urease-immobilized PVDF and silk fibroin scaffolds, respectively. The batch experiment showed that the immobilized filter removed more than 50% of the urea in a 50 mg/dL urea solution. In addition, the silk fibroin with urease filter removed 90 percent of the urea in the peritoneal dialysate after 24 h of filtration. We suggest that a silk fibroin filter with fixed urease, which has hydrophilic properties and prolonged enzyme activity, can be used effectively in a peritoneal dialysate regeneration system. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 2136-2144, 2017.
Gas stream clean-up filter and method for forming same
Mei, J.S.; DeVault, J.; Halow, J.S.
1993-01-01
A gas cleaning filter is formed in-situ within a vessel containing a fluidizable bed of granular material of a relatively large size fraction. A filter membrane provided by a porous metal or ceramic body, or such a body supported by a perforated screen on one side thereof, is coated in-situ with a layer of the granular material from the fluidized bed by serially passing a bed-fluidizing gas stream through the bed of granular material and the membrane. The layer of granular material provides the filtering medium for the combined membrane-granular layer filter. The filter is not blinded by the granular material and provides for the removal of virtually all of the particulates from a process gas stream. The granular material can be at least partially provided by a material capable of chemically reacting with and removing sulfur compounds from the process gas stream. Low-level radioactive waste containing organic material may be incinerated in a fluidized bed in communication with the described filter for removing particulates from the gaseous combustion products
Applying Simulation Method in Formulation of Gluten-Free Cookies
Nikitina Marina
2017-01-01
At present, a priority direction in the development of new food products is the development of technologies for special-purpose products. Among these are gluten-free confectionery products, intended for people with celiac disease. Gluten-free products are in demand among consumers, so the assortment needs to be expanded and quality indicators improved. This article presents the results of studies on the development of pastry products based on amaranth flour, which does not contain gluten. The study is based on a method of simulating recipes for gluten-free confectionery with a functional orientation in order to optimize their chemical composition. The resulting products will diversify the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet, and supplement it with the necessary nutrients.
Nuclear method applied in archaeological sites at the Amazon basin
Nicoli, Ieda Gomes; Bernedo, Alfredo Victor Bellido; Latini, Rose Mary
2002-01-01
The aim of this work was to use nuclear methods to characterize pottery discovered at archaeological sites with circular earth structures in Acre State, Brazil, which may contribute to research on the reconstruction of part of the pre-history of the Amazon Basin. The sites are located mainly in the hydrographic basin of the Upper Purus River. Three of them were strategically chosen for collecting the ceramics: Lobao, in Sena Madureira County, to the north; Alto Alegre, in Rio Branco County, to the east; and Xipamanu I, in Xapuri County, to the south. Neutron activation analysis in conjunction with multivariate statistical methods was used for the ceramic characterization and classification. All the sherds collected from Alto Alegre formed a homogeneous group, distinct from the other two groups analyzed. Some of the sherds collected from Xipamanu I appeared in Lobao's urns, probably because they had the same fabrication process. (author)
Applying Multi-Criteria Analysis Methods for Fire Risk Assessment
Pushkina Julia
2015-11-01
The aim of this paper is to demonstrate the application of multi-criteria analysis methods for the optimisation of fire risk identification and assessment. The object of this research is fire risk and risk assessment. The subject of the research is the application of the analytic hierarchy process for modelling and assessing the influence of various fire risk factors. The results of the research conducted by the authors can be used by insurance companies to perform detailed assessment of fire risks at an object and to calculate a risk extra charge on an insurance premium; by state supervisory institutions to determine the compliance of the condition of an object with the requirements of regulations; and by real estate owners and investors to carry out actions to decrease the degree of fire risk and minimise possible losses.
Applied statistical methods in agriculture, health and life sciences
Lawal, Bayo
2014-01-01
This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. This textbook is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it to be a useful way to refresh their statistics skills and to reference software options. The unique combination of examples is approached using MINITAB and R for their individual strengths. Subjects covered include among others data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also includes outliers, influential observations in regression and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.
A new deconvolution method applied to ultrasonic images
Sallard, J.
1999-01-01
This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The adopted point of view consists in taking the physical properties into account in the signal processing, in order to develop an algorithm which gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, and a priori information must be taken into account to solve it. This a priori information translates the physical properties of the ultrasonic signals. The defect impulse response is modeled as a double Bernoulli-Gaussian sequence. Deconvolution then becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process. An adapted initialization procedure and an iterative algorithm enable a huge amount of data to be processed quickly. Many experimental ultrasonic data sets that reflect usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm enables not only the removal of the waveform emitted by the transducer but also the estimation of the phase, a parameter useful for defect characterization. Finally, the algorithm makes data interpretation easier by concentrating information, so automatic characterization should be possible in the future. (author)
Applying Human-Centered Design Methods to Scientific Communication Products
Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.
2016-12-01
Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.
D.D. Lestiani
2011-08-01
Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively, by elemental characterization, in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between the INAA and PIXE results, in which the values obtained by PIXE were lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.
Lestiani, D.D.; Santoso, M.
2011-01-01
Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively, by elemental characterization, in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are badly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM 2.5 and PM 2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between the INAA and PIXE results, in which the values obtained by PIXE were lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, so the INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment. (author)
Filter replacement lifetime prediction
Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.
2017-10-25
Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.
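The rate-based remaining-lifetime claim can be sketched as a simple calculation. This is only an illustrative reading of the abstract, not the patented implementation; the effectiveness history and end-of-life threshold below are hypothetical:

```python
def filter_consumption_rate(effectiveness_history):
    # Average drop in filter effectiveness per time step (finite difference
    # over the sensor-derived effectiveness history).
    drops = [a - b for a, b in zip(effectiveness_history, effectiveness_history[1:])]
    return sum(drops) / len(drops)

def remaining_lifetime(effectiveness_history, end_of_life=0.2):
    # Remaining lifetime = headroom above end-of-life divided by the
    # estimated rate of filter consumption.
    rate = filter_consumption_rate(effectiveness_history)
    if rate <= 0:
        return float("inf")  # no measurable consumption yet
    return (effectiveness_history[-1] - end_of_life) / rate  # in time steps

history = [1.0, 0.95, 0.91, 0.86, 0.82]  # hypothetical sensor-derived values
print(remaining_lifetime(history))
```

The second claim (trading corrosion cost against fan-pressure cost) then reduces to comparing two cost estimates before deciding whether to increase unfiltered intake.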
Chisvert, Alberto, E-mail: alberto.chisvert@uv.es [Departamento de Quimica Analitica, Facultad de Quimica, Universitat de Valencia, Doctor Moliner St. 50, 46100 Burjassot, Valencia (Spain); Leon-Gonzalez, Zacarias [Unidad Analitica, Instituto de Investigacion Sanitaria Fundacion Hospital La Fe, 46009 Valencia (Spain); Tarazona, Isuha; Salvador, Amparo [Departamento de Quimica Analitica, Facultad de Quimica, Universitat de Valencia, Doctor Moliner St. 50, 46100 Burjassot, Valencia (Spain); Giokas, Dimosthenis [Laboratory of Analytical Chemistry, Department of Chemistry, University of Ioannina, 45110 Ioannina (Greece)
2012-11-08
Highlights: ► Papers describing the determination of UV filters in fluids and tissues are reviewed. ► Matrix complexity and low amounts of analytes require effective sample treatments. ► The published papers do not cover the study of all the substances allowed as UV filters. ► New analytical methods for UV filters determination in these matrices are encouraged. - Abstract: Organic UV filters are chemical compounds added to cosmetic sunscreen products in order to protect users from UV solar radiation. The need of broad-spectrum protection to avoid the deleterious effects of solar radiation has triggered a trend in the cosmetic market of including these compounds not only in those exclusively designed for sun protection but also in all types of cosmetic products. Different studies have shown that organic UV filters can be absorbed through the skin after topical application, further metabolized in the body and eventually excreted or bioaccumulated. These percutaneous absorption processes may result in various adverse health effects, such as genotoxicity caused by the generation of free radicals, which can even lead to mutagenic or carcinogenic effects, and estrogenicity, which is associated with the endocrine disruption activity caused by some of these compounds. Due to the absence of official monitoring protocols, there is a demand for analytical methods that enable the determination of UV filters in biological fluids and tissues in order to retrieve more information regarding their behavior in the human body and thus encourage the development of safer cosmetic formulations. In view of this demand, there has recently been a noticeable increase in the development of sensitive and selective analytical methods for the determination of UV filters and their metabolites in biological fluids (i.e., urine, plasma, breast milk and semen) and tissues. The complexity of
Chisvert, Alberto; León-González, Zacarías; Tarazona, Isuha; Salvador, Amparo; Giokas, Dimosthenis
2012-01-01
Highlights: ► Papers describing the determination of UV filters in fluids and tissues are reviewed. ► Matrix complexity and low amounts of analytes require effective sample treatments. ► The published papers do not cover the study of all the substances allowed as UV filters. ► New analytical methods for UV filters determination in these matrices are encouraged. - Abstract: Organic UV filters are chemical compounds added to cosmetic sunscreen products in order to protect users from UV solar radiation. The need of broad-spectrum protection to avoid the deleterious effects of solar radiation has triggered a trend in the cosmetic market of including these compounds not only in those exclusively designed for sun protection but also in all types of cosmetic products. Different studies have shown that organic UV filters can be absorbed through the skin after topical application, further metabolized in the body and eventually excreted or bioaccumulated. These percutaneous absorption processes may result in various adverse health effects, such as genotoxicity caused by the generation of free radicals, which can even lead to mutagenic or carcinogenic effects, and estrogenicity, which is associated with the endocrine disruption activity caused by some of these compounds. Due to the absence of official monitoring protocols, there is a demand for analytical methods that enable the determination of UV filters in biological fluids and tissues in order to retrieve more information regarding their behavior in the human body and thus encourage the development of safer cosmetic formulations. In view of this demand, there has recently been a noticeable increase in the development of sensitive and selective analytical methods for the determination of UV filters and their metabolites in biological fluids (i.e., urine, plasma, breast milk and semen) and tissues. The complexity of the biological matrix and the low concentration levels of these compounds inevitably impose sample
Souza, Carla; Maia Campos, Patrícia M B G
2017-12-01
This study describes the development, validation and application of a high-performance liquid chromatography (HPLC) method for the simultaneous determination of the in vitro skin penetration profile of four UV filters on porcine skin. Experiments were carried out on a gel-cream formulation containing the following UV filters: diethylamino hydroxybenzoyl hexyl benzoate (DHHB), bis-ethylhexyloxyphenol methoxyphenyl triazine (BEMT), methylene bis-benzotriazolyl tetramethylbutylphenol (MBBT) and ethylhexyl triazone (EHT). The HPLC method demonstrated suitable selectivity, linearity (10.0-50.0 μg/mL), precision, accuracy and recovery from porcine skin and sunscreen formulation. The in vitro skin penetration profile was evaluated using Franz vertical diffusion cells for 24 h after application on porcine ear skin. None of the UV filters penetrated the porcine skin. Most of them stayed on the skin surface (>90%) and only BEMT, EHT and DHHB reached the dermis plus epidermis layer. These results are in agreement with previous results in the literature. Therefore, the analytical method was useful to evaluate the in vitro skin penetration of the UV filters and may help the development of safer and effective sunscreen products. Copyright © 2017 John Wiley & Sons, Ltd.
Advances in analytical methods and occurrence of organic UV-filters in the environment--A review.
Ramos, Sara; Homem, Vera; Alves, Arminda; Santos, Lúcia
2015-09-01
UV-filters are a group of compounds designed mainly to protect skin against UVA and UVB radiation, but they are also included in plastics, furniture, etc., to protect products from light damage. Their massive use in sunscreens for skin protection has been increasing due to awareness of the chronic and acute effects of UV radiation. Some organic UV-filters have raised significant concerns in the past few years because of their continuous usage, persistent input and potential threat to the ecological environment and human health. UV-filters end up in wastewater, and because wastewater treatment plants are not efficient in removing them, lipophilic compounds tend to sorb onto sludge while hydrophilic ones end up in river water, contaminating the existing biota. To better understand the risk associated with UV-filters in the environment, a thorough review was conducted of their physicochemical properties, toxicity and environmental degradation, analytical methods, and occurrence. The highest UV-filter concentrations were found in rivers, reaching 0.3 mg/L for the most studied family, the benzophenone derivatives. Concentrations in the ng/L to μg/L range were also detected for the p-aminobenzoic acid, cinnamate, crylene and benzoyl methane derivatives in lake and sea water. Although at lower levels (a few ng/L), UV-filters were also found in tap and groundwater. Swimming pool water is also a sink for UV-filters and their chlorine by-products, at the μg/L range, most notably the benzophenone and benzimidazole derivatives. Soils and sediments are not frequently studied, but concentrations in the μg/L range have already been found, especially for the benzophenone and crylene derivatives. Aquatic biota is frequently studied, and UV-filters are found in the ng/g-dw range, with higher values for fish and mussels. It has been concluded that more information regarding UV-filter degradation studies both in water and sediments is necessary and environmental occurrences should be monitored more
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz a 1990 Nobel Prize in Economics. A typical approach to measuring a portfolio's expected return is based on the historical returns of the assets included in the portfolio. Portfolio risk, on the other hand, is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility has been a major measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes, such as that of October 19, 1987 ("Black Monday"), and catastrophic events, such as the terrorist attack of September 11, 2001 that led to a four-day suspension of trading on the New York Stock Exchange (NYSE), are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of extreme portfolio risk, denoted by f(4), is defined as the conditional expectation over a lower-tail region of the distribution of possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model
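A tail-conditional measure in the spirit of f(4) can be illustrated numerically. The sketch below uses a hypothetical return sample and a simple empirical partitioning point; it is not the authors' PMRM/Evolver model, only the generic idea of a conditional expectation over the lower tail:

```python
import numpy as np

def lower_tail_conditional_expectation(returns, alpha=0.05):
    # f(4)-style extreme-risk measure: mean return, conditional on the
    # return falling in the lower alpha tail of its empirical distribution.
    r = np.sort(np.asarray(returns))
    k = max(1, int(alpha * len(r)))
    return r[:k].mean()

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.01, scale=0.02, size=10_000)  # hypothetical returns
print(lower_tail_conditional_expectation(sample, alpha=0.05))
```

Because it averages only the worst outcomes, this measure responds to crash-like scenarios that overall volatility smooths away, which is the motivation stated in the abstract.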
Simplified Methods Applied to Nonlinear Motion of Spar Platforms
Haslum, Herbjoern Alf
2000-07-01
Simplified methods for prediction of the motion response of spar platforms are presented. The methods are based on first- and second-order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large-amplitude pitch motions coupled to extreme-amplitude heave motions may arise when spar platforms are exposed to long-period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability. It is caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher-order pitch/heave coupling excites resonant heave response. This mutual interaction largely amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions for this instability to occur and also how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a phenomenon with a low probability of occurrence. Extreme wave periods are needed for the instability to be triggered, about 20 seconds for a typical 200 m draft spar. However, it may be important to consider the phenomenon in design since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200 m draft
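The parametric mechanism described above can be written schematically as a Mathieu equation. This is a generic textbook form, not an equation taken from the thesis: here θ is the pitch angle, ω_θ the natural pitch frequency, ω the modulation (wave or heave-envelope) frequency, and ε a heave-amplitude-dependent coupling parameter — all symbol choices are assumptions:

```latex
\ddot{\theta}(t) + \omega_\theta^2\,\bigl[1 + \varepsilon \cos(\omega t)\bigr]\,\theta(t) = 0
```

Parametric resonance occurs when ω is close to 2ω_θ/n for integer n; the classical pitch instability mentioned in the abstract corresponds to the strongest tongue, a wave (or natural heave) period equal to half the natural pitch period.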
Variational methods applied to problems of diffusion and reaction
Strieder, William
1973-01-01
This monograph is an account of some problems involving diffusion, or diffusion with simultaneous reaction, that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W.S.) at the University of Minnesota and the other (R.A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A.M. Arthurs of the University of York and from the counsel of Dr. B.D. Coleman, the general editor of this series. Table of Contents: Chapter 1. Introduction and Preliminaries. 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...
Nondestructive methods of analysis applied to oriental swords
Edge, David
2015-12-01
Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.
Perturbation Method of Analysis Applied to Substitution Measurements of Buckling
Persson, Rolf
1966-11-15
Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/√(1/τ + 1/L²) for the test and reference regions. Consequently a region where L² ≫ τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ ≫ L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = −1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
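The thickness estimate condenses to a single expression per region. As a numerical illustration (the τ and L² values below are hypothetical, chosen only to exhibit the limiting case L² ≫ τ, where the contribution approaches √τ):

```python
import math

def transition_thickness(tau_test, L2_test, tau_ref, L2_ref):
    # Thickness ≈ sum over the test and reference regions of 1/sqrt(1/τ + 1/L²).
    return sum(1.0 / math.sqrt(1.0 / tau + 1.0 / L2)
               for tau, L2 in ((tau_test, L2_test), (tau_ref, L2_ref)))

# Limiting case L² >> τ (D2O-like): the contribution tends to sqrt(τ).
d2o_like = 1.0 / math.sqrt(1.0 / 120.0 + 1.0 / 1.0e4)  # close to sqrt(120) ≈ 10.95
print(d2o_like, transition_thickness(120.0, 1.0e4, 50.0, 200.0))
```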
Complexity methods applied to turbulence in plasma astrophysics
Vlahos, L.; Isliker, H.
2016-09-01
In this review many of the well-known tools for the analysis of complex systems are used to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the active regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organized characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a nonlinear force-free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and unstable current sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a cellular automaton and following the simple rules of self-organized criticality (SOC), we were able to reproduce the statistical characteristics of the
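The avalanche rules invoked in step (c) can be illustrated with the classic Bak–Tang–Wiesenfeld sandpile automaton — a generic SOC toy model, not the authors' solar cellular automaton; grid size, threshold, and drive are all arbitrary choices:

```python
import numpy as np

def avalanche(grid, i, j, threshold=4):
    # Drop one grain at (i, j), then relax until all sites are stable.
    # Returns the total number of topplings (the avalanche size).
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= threshold)
        if len(unstable) == 0:
            return size
        for r, c in unstable:
            grid[r, c] -= threshold
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                    grid[rr, cc] += 1  # grains leaving the edge are lost

rng = np.random.default_rng(1)
grid = rng.integers(0, 4, size=(20, 20))
sizes = [avalanche(grid, *rng.integers(0, 20, size=2)) for _ in range(500)]
```

Driven slowly like this, the lattice self-organizes toward a critical state whose avalanche-size statistics follow heavy-tailed distributions — the property the review uses to reproduce flare statistics.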
Ahunbay, Ergun E.; Ates, O.; Li, X. A.
2016-01-01
Purpose: In a situation where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, the SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted plans (PSPs) obtained by a so-called “warm start” optimization (starting the optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP or by linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects from FFF beams, while SAM corrects for the target deformation. The plan interpolation method is effective in diminishing the unflat-beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5–10 min). Conclusions: The new two-step method plus SAM can address both the translational effects of FFF beams and the target deformation, and can be executed in full automation except for the delineation of the target contour
Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Ates, O.; Li, X. A. [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin 53226 (United States)
2016-08-15
Purpose: In a situation where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, the SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted plans (PSPs) obtained by a so-called “warm start” optimization (starting the optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP or by linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects from FFF beams, while SAM corrects for the target deformation. The plan interpolation method is effective in diminishing the unflat-beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5–10 min). Conclusions: The new two-step method plus SAM can address both the translational effects of FFF beams and the target deformation, and can be executed in full automation except for the delineation of the target contour
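The online interpolation step amounts to linear blending of the bracketing PSPs' leaf positions and monitor units. The sketch below is illustrative only — the data structures and numbers are hypothetical, not taken from the paper:

```python
def interpolate_plan(psp_a, psp_b, shift_a, shift_b, shift):
    # Linear interpolation between two preshifted plans (PSPs) whose
    # isocenter shifts bracket the shift measured from the image of the day.
    # Each psp = {"mlc": [per-segment leaf positions], "mu": [per-segment MUs]}.
    w = (shift - shift_a) / (shift_b - shift_a)
    mlc = [[(1 - w) * a + w * b for a, b in zip(seg_a, seg_b)]
           for seg_a, seg_b in zip(psp_a["mlc"], psp_b["mlc"])]
    mu = [(1 - w) * a + w * b for a, b in zip(psp_a["mu"], psp_b["mu"])]
    return {"mlc": mlc, "mu": mu}

plan_lo = {"mlc": [[-10.0, 12.0], [-8.0, 9.0]], "mu": [50.0, 40.0]}  # PSP at shift 0
plan_hi = {"mlc": [[-8.0, 14.0], [-6.0, 11.0]], "mu": [54.0, 44.0]}  # PSP at shift 1
plan = interpolate_plan(plan_lo, plan_hi, shift_a=0.0, shift_b=1.0, shift=0.5)
```

This works only because warm-start optimization keeps the segment count and shapes consistent across PSPs, so leaf positions correspond one-to-one between plans.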
Mi-Kyeong Kim
2017-11-01
Landslides are one of the critical natural hazards that cause human, infrastructure, and economic losses. The risk of catastrophic losses due to landslides is significant, given sprawling urban development near steep slopes and the increasing proximity of large populations to hilly areas. To reduce these losses, a high-resolution digital terrain model (DTM) is an essential piece of data for a qualitative or quantitative investigation of slopes that may lead to landslides. Data acquired by terrestrial laser scanning (TLS), called a point cloud, have been widely used to generate DTMs, since TLS is appropriate for detecting small- to large-scale ground features on steep slopes. For an accurate DTM, TLS data should be filtered to remove non-ground points, but most current algorithms for extracting ground points from a point cloud have been developed for airborne laser scanning (ALS) data rather than TLS data. Moreover, generating an accurate DTM of a steep-slope area using existing algorithms is a challenging task. For these reasons, we developed an algorithm to automatically extract only ground points from the point clouds of steep terrains. Our methodology is focused on TLS datasets and utilizes the adaptive principal component analysis–triangular irregular network (PCA-TIN) approach. Our method was applied to two test areas, and the results showed that the algorithm can cope well with steep slopes, giving an accurate surface model compared to conventional algorithms. The total accuracy values of the generated DTMs, in the form of root mean squared errors, are 1.84 cm and 2.13 cm over areas of 5252 m² and 1378 m², respectively. The slope-based adaptive PCA-TIN method demonstrates great potential for TLS-derived DTM construction in steep-slope landscapes.
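The PCA ingredient of such ground filtering can be illustrated with a planarity score: the smallest eigenvalue of a neighbourhood's covariance matrix measures scatter normal to the best-fit plane, so ground-like patches score near zero. This is a generic sketch of local PCA, not the paper's adaptive PCA-TIN algorithm; the synthetic "ground" and "bush" point sets are hypothetical:

```python
import numpy as np

def planarity(points):
    # PCA of a local neighbourhood (N x 3 array): the smallest eigenvalue
    # of the covariance matrix, normalized by the total variance.
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))  # ascending order
    return eigvals[0] / eigvals.sum()  # near 0 for a planar (ground-like) patch

rng = np.random.default_rng(2)
xy = rng.uniform(0, 10, size=(200, 2))
# Sloped planar "ground" patch with millimetre-level roughness:
ground = np.column_stack([xy, 0.5 * xy[:, 0] + 0.01 * rng.standard_normal(200)])
# Volumetric "vegetation" patch with heights scattered over 0-5 m:
bush = np.column_stack([xy, rng.uniform(0, 5, size=200)])
print(planarity(ground), planarity(bush))
```

Thresholding such a score per neighbourhood separates candidate ground points from vegetation before TIN densification refines the surface.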
Linear filters as a method of real-time prediction of geomagnetic activity
McPherron, R.L.; Baker, D.N.; Bargatze, L.F.
1985-01-01
Important factors controlling geomagnetic activity include the solar wind velocity, the strength of the interplanetary magnetic field (IMF), and the field orientation. Because these quantities change so much in transit through the solar wind, real-time monitoring immediately upstream of the earth provides the best input for any technique of real-time prediction. One such technique is linear prediction filtering which utilizes past histories of the input and output of a linear system to create a time-invariant filter characterizing the system. Problems of nonlinearity or temporal changes of the system can be handled by appropriate choice of input parameters and piecewise approximation in various ranges of the input. We have created prediction filters for all the standard magnetic indices and tested their efficiency. The filters show that the initial response of the magnetosphere to a southward turning of the IMF peaks in 20 minutes and then again in 55 minutes. After a northward turning, auroral zone indices and the midlatitude ASYM index return to background within 2 hours, while Dst decays exponentially with a time constant of about 8 hours. This paper describes a simple, real-time system utilizing these filters which could predict a substantial fraction of the variation in magnetic activity indices 20 to 50 minutes in advance
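The core of linear prediction filtering is a least-squares fit of a finite impulse response mapping the input history to the output. The sketch below uses synthetic data (the 3-tap "true" filter and noise level are hypothetical), standing in for a solar-wind coupling parameter driving a magnetic index:

```python
import numpy as np

def fit_prediction_filter(x, y, n_taps):
    # Least-squares FIR filter h such that y[t] ≈ sum_k h[k] * x[t - k].
    rows = [x[t - n_taps + 1:t + 1][::-1] for t in range(n_taps - 1, len(x))]
    X = np.array(rows)
    h, *_ = np.linalg.lstsq(X, y[n_taps - 1:], rcond=None)
    return h

# Synthetic linear system: output is a known 3-tap filtering of the input plus noise.
rng = np.random.default_rng(3)
x = rng.standard_normal(5000)                      # "input" time series
true_h = np.array([0.5, 0.3, 0.1])                 # hypothetical impulse response
y = np.convolve(x, true_h)[:len(x)] + 0.01 * rng.standard_normal(5000)
h = fit_prediction_filter(x, y, n_taps=3)
```

With upstream solar-wind measurements as input, a fitted filter like this is what allows the index variation to be predicted 20 to 50 minutes ahead; nonlinearity is handled, as the abstract notes, by fitting separate filters over different input ranges.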
Mikhaylov, V. E.; Khomenok, L. A.; Sherapov, V. V.
2016-08-01
The main problems in the creation and operation of modern air inlet paths of gas turbine plants installed as part of combined-cycle plants in Russia are presented. It is noted that the design features of air inlet filters should be defined at the technical-specification stage, taking into account not only the requirements of the gas turbine plant manufacturer but also climatic conditions, local atmospheric dust levels, and a number of other factors. Recommendations are given on configuring the filtration system of the air inlet filter of power gas turbine plants depending on the facility location; typical defects in the design of imported air inlet paths and the experience of operating them are analyzed; and the influence of the quality of cycle-air preparation on the operating expenses and repair costs of the gas turbine plant is noted. Air treatment equipment of various manufacturers, the influence of aerodynamic characteristics on the operation of air inlet filters, features of filtration system operation, anti-icing systems, weather canopies, and other elements of air inlet paths are considered. It is shown that nonuniformity of the air flow velocity field in the clean-air chamber has a negative effect on the capacity and aerodynamic resistance of the air inlet filter. The article also notes the need to install a sufficient number of differential-pressure transmitters, so that the state of each treatment stage can be monitored rather than relying on a single measurement of the total differential pressure across the filtration system. Based on the analysis, trends and methods for modernizing existing air inlet path equipment are identified, the importance of creating and implementing new technologies for manufacturing filter elements at Russian sites as part of import substitution is emphasized, and measures to improve the reliability and energy efficiency of air inlet filters are considered.